2025/08/10 #

It's unbelievable the detours you have to make in programming sometimes, especially in the early days of a project. Just when you think you have the architecture well and truly figured out, you realise there is this other very important part you just absolutely have to refactor. Sometimes it really feels endless. You find yourself doing a refactor while doing a refactor.

And then you finally tie up all the loose ends, and a whole lot of other small bits and pieces and fixes you had to do along the way, and you are exhausted, and you are right back exactly where you were 24 hours previous, when you realised you had to do the refactor. You can finally really start the original thing you set out to do! But it's 1:37am and you are spent.

Programmer problems. #

TypeScript and Gemini evil mode

Last night's late-night note turned out not only to be a great description of what had just happened, but also to be very prescient about what was going to happen yet again.

I sat down today to finally make a start on the thing I've been trying so hard to get to for the past few days, and I thought I'd do a quick review of the codebase first. Wouldn't you know it, Gemini unearthed another inconsistency, related to a modal component that apparently used an "old style" of modal. Well, it was Gemini that recommended this now "old style" a few weeks ago. It would have been a lot easier if it had just recommended the correct way in the first place.

And you know what, yesterday's 24-hour refactor of the frontend router to use a "modern" style of React routing was also a product of Gemini's advice to implement the original "old style" routing just a week before that. I'm starting to see a pattern here.

Oh, and by the way, all day today Gemini has been back in "I will ignore everything you say" mode: it apologises for getting it wrong, and then does the exact same thing again, literally in the very next paragraph. It also went through a spell of secretly creating a file, then saying "oh look, I just noticed that this file exists, which means our next best move is this huge and really dangerous refactoring".

I'd love to say that TypeScript was making this a whole lot better, but even TypeScript seems to have been co-opted by evil-mode Gem. TypeScript is super great in most normal situations, but in a situation where you just want to implement a small portion of the refactor to test whether the idea works, TypeScript + Gemini actually work hand in hand to force you into refactoring the entire f-ing thing in one massive dangerous step, because each small change creates another 20 breakages that you then need to fix. And on and on.

The best and safest way forward was actually to stop, temporarily add the offending file to the tsconfig exclude list, comment out a bunch of UI code, and get the feature working in one small part of the app. Go figure. I'm happy I went that route because, wouldn't you know it, as part of getting it working, it was necessary to fix another few small unexpected bugs. Had I gone down the refactor-everything-in-one-huge-step route, no doubt there would have been even more bugs to fix, and I don't think I would have made it to the other side.
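For anyone curious, that escape hatch looks roughly like this. The file name here is made up for illustration; the idea is just to park the half-migrated module outside the type checker's view while you prove the new approach out in one corner of the app:

```json
{
  "compilerOptions": {
    "strict": true
  },
  "include": ["src"],
  // tsconfig.json allows comments. Temporarily exclude the offending
  // file so the rest of the app still type-checks; remove this entry
  // once the refactor reaches that module.
  "exclude": ["node_modules", "src/legacy/OldModal.tsx"]
}
```

TypeScript skips excluded files entirely, so the cascade of 20-breakages-per-change stops at that boundary until you're ready to cross it.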

Maybe I'll be ready to start on this admin page finally tomorrow. #

Today’s links:

2025/08/09 #

Changing tires

I've been reading about running LLMs locally for the past few days. It's something I looked into briefly before, but it always seemed too complicated. I've noticed that since OpenAI released their open source models, people on various podcasts have been talking about this more, so I've been checking out projects and whatnot. It's seeming more achievable, I think partly because the tech has progressed, but also because my understanding of the space has evolved too.

It's tough not to get sucked into rabbit holes on some of this stuff. I am trying to spend a bit of time reading about it in the mornings, but then you have to put your half-baked research aside and get on with your current project. Web development is strange in that you have to constantly be taking small bites at things, and eventually what was not possible becomes possible. You have to do both, and then find time to write about it too. #

Today’s links:

  • Maple AI - Private AI chat that is truly secure - "End-to-end encryption means your conversations are confidential and protected at every step. Only you can access your data - not even we can read your chats". It's still cloud based, but it's much more private and you can run more powerful models. I feel like I want a hybrid solution where I can run local models and private models in the cloud. trymaple.ai #

  • Is it, like, OK to say 'like' now? - I actually remember when people started saying 'like' all the time. It seemed to happen overnight, at least in the UK and in Europe. And yet now, it's strange to think there was a time when people didn't say it, like ever. Sort of like there was a time when there was no internet and no mobile phones. I hadn't really thought about it much, but it did have a profound effect on how we experience the world. www.theguardian.com #

  • CodeRunner: Run AI Generated Code Locally - "CodeRunner is an MCP (Model Context Protocol) server that executes AI-generated code in a sandboxed environment on your Mac using Apple's native containers". I could definitely see this being part of a local AI rig. github.com #

  • Getting started with Podman AI Lab - I remembered this morning that Podman has this AI Lab extension. I re-read the blurb and wonder whether it might turn out to be the open source dark horse. I'll have to try it out sometime soon. It's been a long time since I used a Red Hat product. Perhaps not the coolest kid on the block, but for running AI agents, dependability might be more important. developers.redhat.com #

2025/08/07 #

Multiculturalism, the ugly stepchild

I thought this article about how South Korea is integrating different cultures into their society was a really interesting and well-written piece.

Multiculturalism is a bit of an ugly stepchild at the minute; it gets blamed for all sorts of things. I don't care for many *-isms, but I do know that integrating different cultures is really difficult, and I also know that it's important, because we need diversity. And the thing is, I think the thing people hate about multiculturalism isn't really about people from different backgrounds and cultures; that's just where the problem surfaces, where it gets exposed and put into the light.

The real problem is more fundamental, and probably exists even in places that don't have a lot of foreigners, because the real issues are more to do with human nature, and power and wealth. Those sorts of things.

And I'm not saying that countries should just have a complete open door policy, of course not. It's a balance, and that's one of the very difficult things to get right. It's a tremendously complicated, massively distributed game of balancing the very fabric we are living on. That's no easy feat. But when we get it right, or even sort of right, it can be very wonderful for everyone.

Anyway, I find it very interesting to learn about how Asian countries are handling these sorts of challenges. I don't think any of us have it all figured out. Some of the things we do work well, other things not so much. And each place is a bit different, so things that work in one place don't necessarily work in another.

But I also think the better we get at integrating different cultures, the better we will be able to live with ourselves too. And that's why I really liked how the piece ends, because the whole way through there was something that felt a bit off, something missing from the full picture, and I think to a large extent the author nails it. #

React + TypeScript + Gemini: A pretty great combo

So I did end up modifying my devcontainer setup to accommodate the possibility of working with Gemini in agent mode. I have two configs now: one is a sort of admin mode, which I use myself, and which has full access to the GitHub repo. The other is a pared-down agent config, which can only read from the repo. That's enough to get the latest code, so theoretically an agent could start working locally on features, but it wouldn't be able to push to the remote. I also improved the setup so that you can only modify the devcontainer setup from outside the devcontainer. Gemini is very good at creating complicated situations where it could easily change some things without you realising.
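A minimal sketch of what the pared-down agent config might look like. All names, paths, and the image tag here are illustrative assumptions, not my actual setup; the gist is that the only credential mounted in is a read-only deploy key:

```json
// .devcontainer/agent/devcontainer.json — the read-only agent variant.
{
  "name": "agent-readonly",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:latest",
  "mounts": [
    // Mount a deploy key that has read-only access on the remote,
    // so the agent can pull the latest code but any push is refused
    // server-side, no matter what it does locally.
    "source=${localEnv:HOME}/.ssh/agent_readonly_key,target=/home/node/.ssh/id_ed25519,type=bind,readonly"
  ],
  "workspaceFolder": "/workspaces/app"
}
```

The nice property of doing the restriction with a read-only key is that the enforcement lives on the Git host, outside anything the agent can touch from inside the container.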

I haven't taken much time to really try out agent mode, because I have been finalising the frontend architecture for my React frontend, and actually I was learning a lot by pair programming with Gemini. As long as you can put up with its infinite ego, nod your head while it tells you how frustrated you are as it digs itself into a bigger and bigger hole, and be ready to pull the rip cord so you don't fall in with it, it is phenomenal at writing documentation, summarising work into very detailed commit messages, brainstorming ideas and development strategy, and figuring out very complex TypeScript blockages.

Looks like I'll have the migration of my new API's frontend to React done sometime in the next few days. And I already have a small list of improvements to make. #

Today’s links:

2025/08/04 #

The AI energy crisis

Data center 1

A few weeks ago I wrote about the size insanity of the new AI clusters that are being built. And I followed that up with some rather quaint context from my past experience with high performance computing in the early days of the VFX industry. Well, it seems other seasoned computer science professionals' jaws are dropping too.

From a very funny, and prescient, moment in a recent Changelog Podcast interview with Greg Osuri on the AI Energy Crisis, Ep #652 [12:20]:

Data center 2

Greg: Just to give you context, this is one data center we are talking about, one company, right? So put that with the heavy competition between, what's his name, Mark Zuckerberg tweeted out, or Facebooked out, saying that they are going to build a data center bigger than Manhattan. We saw the tweet.

Jerod: I did not see that tweet.

Data center 3

Greg: Everybody is competing. Yesterday Elon Musk said they are going to, in fact, they are going to get about 5 million GPUs by 2028, I believe he said, or the next 5 years, so maybe 2030. [Exhales deeply] 5 million GPUs, he said 5 million H100 ecore, that's 5 times 1 kilowatt, 1.2 kilowatts per GPU, just to give you a brief idea of how much compute or how much energy we are looking at. Oh sorry, 50 million. Sorry, I take that number back. 50 million. These numbers are so big.

Jerod: Not 5 million, 15 million, one five?

Greg: Five zero.

Adam: Oh 50 million, so 10x what you originally said?

Greg: Yes apologies, I mean these numbers are so large.

Greg & Adam [at the same time]: It’s hard to even fathom.

Adam: I mean 50 million GPUs!

Greg: For context, Nvidia made about 2.5 million H100s in 2024, so...

Adam: Tccchhhheeeeewww.

Greg: ...ha ha...not financial advice but, ha ha ha...but Nvidia I believe is going to...

Adam: Haha yeaaaah!

Greg: ...is going to be extremely beneficial.

Jerod: You think the stock is high now...

Manhattan 2

I asked Gemini to tabulate the 10 biggest data centers worldwide. They seem to top out at around 10 million square feet. Manhattan for reference is 633 million square feet. So that's around 60x current ginormo data centre sizes. Seems like Zuck's estimate was a bit ambitious, but honestly, not by that much if all these growth target numbers are to be believed. Remember Sam Altman said he wants a 100 million node cluster.
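As a rough sanity check, here is the back-of-envelope arithmetic behind those figures, taking the 1.2 kW per GPU from the podcast quote and the square-footage numbers above at face value:

```python
# Back-of-envelope figures from the numbers quoted above.

GPU_COUNT = 50_000_000   # Musk's reported target
KW_PER_GPU = 1.2         # approx. draw of one H100-class GPU, per the podcast

total_gw = GPU_COUNT * KW_PER_GPU / 1_000_000  # kilowatts -> gigawatts
print(f"{total_gw:.0f} GW just for the GPUs")  # 60 GW

MANHATTAN_SQFT = 633_000_000
BIG_DC_SQFT = 10_000_000  # roughly the biggest current data centers

print(f"Manhattan is ~{MANHATTAN_SQFT / BIG_DC_SQFT:.0f}x a big data center")
```

60 GW is before you count cooling, networking, or storage, which is what makes the "energy crisis" framing feel less like hyperbole.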

Manhattan 1

This Webopedia article says that the Yotta NM1 data center in India has 30,000 racks, which sounds like a heck of a lot, until you read about the Alibaba Cloud Zhangbei Data Center in China, which has 52 buildings with 50,000 racks each.

Glad it's not just me that is OMG-ing at all this stuff. #

MSTR vs NVIDIA

This isn't a huge revelation or anything, but given my post on the AI power crisis earlier, I've obviously been thinking about tech and stocks, and well, isn't it interesting how similar the trajectories of MSTR and NVIDIA have been? It's probably nothing to worry about.

MSTR:

Mstr

NVIDIA:

Nvidia

Who knows, it might even be a good thing. #

For enquiries about my consulting, development, training and writing services, as well as sponsorship opportunities, contact me directly via email. More details about me here.