2025/08/31 #

Openvibe all the things

I installed Openvibe.social earlier. It looks kind of cool.

Here’s the blurb:

One App For The Open Social Web. All decentralised social networks—single timeline.

I’m not really using social media at the minute. I think it’s because there are just too many places to go to. I don’t have enough time to be checking in at all these places. I can only just about do it with one place.

Anyway, it was easy to install, and loads of posts from the people I follow on Bluesky and Mastodon showed up in the timeline. I posted a couple of messages, and like some sort of strange echo they appeared in triplicate in my timeline: one for Bluesky, one for Mastodon, and, since I also connected up my Nostr, one for that too.

I'm kind of confused about Nostr. During sign-up you have to enter your nsec, which is your private key. It makes sense in some ways: the app needs to be able to create posts as you, so it needs to have your private key. No way around that as far as I can see. What's confusing is that literally next to the box where you enter your private key, it says something like "never share your private key". WTF.

Doesn’t Nostr have a built-in way to grant authorization that you can revoke? If you have to give your private key out to third-party services, what happens if one of them turns out to be bad? You have to create a new Nostr account. Seriously?

I'm not really using Nostr much, so it doesn’t make that much difference to me. But I do want these services to work well. I might use them more if they were easy and secure to use. #

Haseeb Qureshi on Kanye West releasing $YZY [59:05]:

"It’s like the same cabal, it’s the same people [...], just like every villain, it’s kind of like an Avengers movie, every villain from a previous cycle is also back, is part of this thing. It is a nice way to wrap up the saga, if this is the last celeb coin we are going to have to deal with."

2025/08/30 #

LLMs running locally

I finally got the thing I have been configuring for the past few days working: a collection of LLMs of various sizes, running on my local machine, all accessible via the terminal and through a chat interface in VSCode. I fell down all sorts of strange rabbit holes, but it’s working. I now have a much better appreciation of how all these AI models work, but there’s still lots to learn.
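
For anyone curious what "accessible via terminal" looks like in practice, here is a minimal sketch. It assumes a local server that exposes an OpenAI-compatible chat completions endpoint (llama.cpp's server and Ollama both do); the port and model name are assumptions, so substitute whatever your setup actually runs.

```typescript
// Minimal sketch: query a locally running model over an
// OpenAI-compatible endpoint. BASE_URL and the model name are
// assumptions -- substitute whatever your local server exposes.
const BASE_URL = "http://localhost:11434/v1"; // Ollama's default port

async function ask(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1:8b", // hypothetical local model name
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const json = await res.json();
  return json.choices[0].message.content;
}

// The whole round trip stays on the machine -- no code or prompts
// get sent to a third-party server.
ask("What is your name?").then(console.log);
```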

This was the first time I've worked with Claude. I found Claude to be very efficient, most of the time refreshingly concise, and, dare I say it, surprisingly honest about the current state of AI tooling. There are still some strange things though.

When I asked each of these newly configured models separately what their name was, they all initially insisted that they were an AI language model created by OpenAI called GPT-4. Of course since I downloaded, installed and configured them, I know that none of them are in fact GPT-4 created by OpenAI. Interestingly when I disconnected from the internet, they all started insisting that they were Claude made by Anthropic. None of the models are actually Claude made by Anthropic.

The VSCode extension I was using to connect to the models has a website with a chat interface, and I was chatting quite a bit with that AI, which also claimed to be Claude.

So yes, it’s working, but I’m not exactly super confident it’s working particularly well. One of the models was initially very sure that 2 + 2 = 3. Based on the conversations I had with Claude, it’s quite apparent that most developers using AI aren’t exactly overly worried about security. It seemed like I was in the minority in trying to configure things to run in devcontainers.

The big thing here is that I wanted to be set up so that, if needed, I could work with AIs on coding projects without uploading any code to third-party servers. That’s the downside of cloud-based AIs: they are very quick and sometimes quite clever, but they require that you upload all your code to their servers, and that isn’t always an option for some folks.

I’m hoping to work on some coding features over the next few days. I’m very interested to see how they perform compared to Gemini. #

2025/08/29 #

The past few days I seem to have been stuck in a bit of a configuration nightmare. Hopefully I'll have the thing I'm configuring working soon. It's partially working now, but practically speaking it's not functional.

I also added an anti-corruption layer to the frontend code, which is a fancy way of describing a standard way to unwrap the API payload, and which theoretically makes the application more portable.
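
To illustrate, here's a minimal sketch of such a layer, with hypothetical names (ApiEnvelope, unwrap) rather than the actual project code. The point is that the frontend translates the API's wire format into plain domain objects in exactly one place, so if the envelope ever changes there is exactly one function to update.

```typescript
// Hypothetical envelope shape -- illustrative, not the real code.
type ApiEnvelope<T> =
  | { status: "ok"; data: T }
  | { status: "error"; error: { code: string; message: string } };

// The anti-corruption layer: the one place the frontend unwraps
// API responses into plain domain objects.
async function unwrap<T>(res: Response): Promise<T> {
  const envelope = (await res.json()) as ApiEnvelope<T>;
  if (envelope.status === "error") {
    throw new Error(`${envelope.error.code}: ${envelope.error.message}`);
  }
  return envelope.data;
}

// Usage: callers only ever see the domain type, never the envelope.
// const user = await unwrap<User>(await fetch("/api/me"));
```

Swap the backend for one with a different envelope and only unwrap needs to change; that's the portability win. #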

2025/08/25 #

The world has really been in synchronicity overload the past few days, for me at least. These sorts of times rarely end particularly well. Thought that was worth mentioning, not that it will make any difference at all; world is going to world. #

2025/08/24 #

Iced coffee

I was very happy when I started out writing this post, having a morning iced coffee after a nice breakfast. The ordeal I had to go through to publish this little post has left me not quite as happy.

Such is life sometimes. #

Miljan Braticevic [38:46]: “When you create an account on X let's say, you are given this account by this corporation, this is something that they give to you, and it's something that they can take away from you at any point. In contrast, when you are on an open protocol like Nostr, you claim your key pair. You take this massive number out of the universe, and that's your private key. So you have claimed some territory in this massively large namespace of numbers and then you've claimed something and this is yours and then you defend it. What a stark contrast between these two, and then everything is downstream from there.”

I thought this was a great explanation of one of the central paradigms of key-pair based systems. Somewhat surprisingly, because I really am all for independence and self-reliance and the rest of it, I found that the new knowledge it unlocked was accompanied by a roughly equal amount of unease. Like somehow there might be a side to this that is not being covered. It might be a bad idea to push to get rid of the centralised systems so completely, at least for a little while. I guess that has been my opinion for a while now. It seems to me like it might be one of those hills that isn't worth dying on, like an asymptote stretching off into infinity that you can never quite reach.

I'm making a mental note that it might be worth revisiting the tractability vertigo of this whole thing again sometime in the future. Analysis paralysis is a real so-and-so. #

Some recent project milestones

I have a short list of blog posts I want to write covering some of the key things I have learnt getting my API & React frontend project to a pretty great place as far as code quality goes. It's been quite an interesting few weeks development-wise. I'm way more impressed with both Typescript and React than I expected to be, and though it's been kind of a roller-coaster ride working with Gemini, I'm still impressed with that too. I'll hopefully get to those posts over the next few weeks, but in the meantime, I thought I would do a quick review of the major project milestones from the past few weeks.

Without further ado, and roughly in the order they occurred, here they are:

  1. Zod based DTOs - Otherwise known as Data Transfer Objects, these are Typescript type definitions for all data that gets passed back and forth between the frontend and the backend. The cool thing about using Typescript in both the frontend and the backend is that you can set up a shared workspace and put them in there. The backend and frontend still have their own type definitions, but in addition they can use the shared ones. The key thing is to define the data shape using Zod schemas, then create the types by inference from them (see the sketch after this list). That way there is a single source of truth for the shape of all data across the frontend and backend, and as a bonus you can use the schemas to do data validation on the backend in a single reusable middleware on all routes. It standardises all "in-flight" data.
  2. Shared public entities - Another types-related thing that works really well with DTOs. These are types for all "at-rest" data that contains fields you don't want exposed externally, like passwords and private keys. You create the public version of the entity in the shared workspace, then create the internal version based on the public entity, with the sensitive fields added. It ensures that the sensitive data never leaves the secure backend.
  3. Linting & typechecking - This was huge, especially when working with Gemini, because it has such a tendency to drift, and is constantly leading you down confusing and wrong paths. Having a way to automate the checks means you can mostly eliminate the possibility that style stuff you don't like and Typescript errors ever make it into the codebase. I set up pre-commit hooks for linting and pre-push hooks for linting + typechecking. I also set up some custom bash-script-based pre-push hooks to enforce a specific branch name format and commit message format. It really makes reading the commit history so much easier. As far as linting and typechecking config goes, it's very important to get the config in place, test that it is working, and then NEVER change it. Gemini will constantly attempt to change the config instead of fixing the errors, and get you into never-ending config-change loops. You should almost never change the config.
  4. Secure devcontainers setup - I spent quite a bit of time setting up two different devcontainers: one has read access to the repo but no write access, the other can read and write. The idea being that I should be able to run Gemini in agent mode in the restricted setup and then verify the code myself before switching into the other devcontainer to check in the code. Similar to the linting and typechecking config, it's a good idea not to change this from inside the container. You can put the config on a read-only mount, so you can always be sure nothing important gets changed without you being aware of it. The trick is to clone the repo using https for the read-only setup, with a launcher script that passes a fine-grained personal access token (PAT), but for read + write use ssh keys mounted read-only into the devcontainer. You will also need to run some post-connect commands to ensure you switch correctly between the setups.
  5. Security providers and contexts - Use these React idioms to set up the authenticated parts of the site, making the authenticated user available wherever it's needed. I really like this feature of React. Once you figure out how it works, it's a really elegant way to make sure certain hooks are only ever used inside parts of the DOM that are wrapped in a specific component. So you always know for sure that, for example, the authenticatedUser is available inside the part of the site that is wrapped in a security context. Spend time understanding how the pattern works and name things well, because Gemini will often suggest ambiguous names for things.
  6. Backlog.md task based workflow - When working with Gemini this is essential IMHO. You need some way of keeping track of work, because you will constantly be getting cut off, and each time you will need to get Gemini back up to speed. It pretty much mirrors how humans should do it, specification-driven development, but it's much easier because the AI can write much of the spec plans and acceptance criteria. Of course you will need to review it and make sure it's not full of weirdness, which happens quite a lot. The other thing that happens a lot is Gemini suddenly starts inserting backticks into documents, as a sort of sabotage, because it breaks the chat window and then there is no way to accept the changes. It's usually a sign that Gemini is confused about something. You can often get around it by asking it to oneshot a new version of the doc with _NEW appended to the filename, and manually do a diff or copy-paste yourself.
  7. React router, queries and loaders - Another really great React feature that enables you to pre-fetch all the data you need on page load, and when combined with Tanstack Query you get caching thrown in for free. It might actually be a feature of Tanstack, I can't quite remember. In any case it's cool and makes apps feel really fast. It might have been easier to start with these from the outset, depending on your React / javascript fluency. I got it all working without them, only for Gemini to suddenly tell me about them after initially forgetting to. The conversion took quite a while.
  8. Me endpoints for added security - I realised that the initial architecture was leaking user information via the urls used to access the API. It meant that it was trivial to identify admin users just by the urls they were loading. This was a pretty great solution, suggested by Gemini; apparently it's very standard these days, but I hadn't been exposed to the pattern. Basically you have dedicated /me routes (i.e. /me, /me/blah etc) that only ever load the data of the logged-in user. That's hardcoded into the route, so privilege escalation is impossible, and since all users use these routes regardless of their role, it becomes much, much harder to identify, say, admin users based just on the urls they request during normal usage.
  9. Weekly sprint planning process - The backlog task setup has been crucial, but it became a bit unwieldy precisely because it was so good. I set up a simple weekly process based around the Agile development methodology idea of weekly or bi-weekly sprints. I do it weekly, and have a standard document that gets generated during a planning session with Gemini where we identify which tasks I will attempt to complete in the upcoming week. This works really well with the already-in-place process of retrospectives for documenting major decisions. Gemini is always reminding me of very important things we discussed previously while working on features. It really helps with general AI guidance.
  10. End to end data integrity pattern - Probably the most important, and really a sort of supply-chain concept: you put a series of things in place in such a way as to guarantee that the data sent by the frontend application layer is pretty much always exactly the same data received by the backend service layer. There are all sorts of little crevices and weird unexpected places where things can get screwed up as you pass data through all the frontend and then backend layers, but I've found a way that is very, very effective. It relies heavily on shared types.
  11. API response refactor - This was a follow-up bit of refactoring to the previous end-to-end pattern. You have got to remember to standardise the responses. Again using some clever shared types, you can ensure all the responses have a standard response envelope: one for single-object responses, one for lists of objects, and one for failures and errors (see the sketch after this list). Gemini was super useful for doing a quick survey of the top 10 API platforms' response envelopes. You can then craft one that works best for your purposes. With enough thought you can design something flexible enough to accommodate more complex features like paging, even if you don't implement that initially.
  12. User details pages - For admin views, you need to be able to search for a user and display their resources, just in case something goes wrong. Only give admins fine-grained modification capability once an activity log is in place, so you can see what actually happened. The search feature was yet another place where React really shines. We had a ResourceList component used across many pages to list the various resources. One of the props it accepts lets you pass in an HTML snippet that gets inserted into the top right of the table. So you just pass in an input box and you can wire it up really easily to filter which rows get displayed in the table as you type.
  13. Make OAuth endpoints RFC 6749 compliant - When I overhauled all the response types, the OAuth 2.0 flow stopped working, because OAuth has some very specific requirements as to the shape of the data, and the new response envelope was causing issues. Just remember you might have some endpoints with different envelope requirements, depending on what you are doing.
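
To make items 1 and 2 concrete, here is a minimal sketch of the shared-workspace pattern. The names (PublicUserSchema, InternalUser, toPublic) are hypothetical, not the actual project code, but the shape is as described: one Zod schema as the single source of truth, the DTO type inferred from it, and an internal entity that adds the sensitive fields.

```typescript
import { z } from "zod";

// Shared workspace: the schema is the single source of truth for the
// "in-flight" data shape, usable by both frontend and backend.
export const PublicUserSchema = z.object({
  id: z.string().uuid(),
  email: z.string().email(),
  displayName: z.string().min(1),
});

// The DTO type is inferred from the schema, never written by hand.
export type PublicUser = z.infer<typeof PublicUserSchema>;

// Backend only: the "at-rest" entity adds fields that must never
// leave the backend.
export type InternalUser = PublicUser & {
  passwordHash: string;
};

// z.object() strips unknown keys by default, so parsing an internal
// entity both validates it and drops the sensitive fields.
export function toPublic(user: InternalUser): PublicUser {
  return PublicUserSchema.parse(user);
}
```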

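And item 11's envelope looks something like the following sketch. Again the names are hypothetical; the idea is that every route returns one of a small set of shapes, with room left for paging even if you don't implement it on day one.

```typescript
// Hypothetical standard envelope: one shape for single objects, one
// for lists (with optional paging metadata), and one for errors.
type ItemResponse<T> = { status: "ok"; data: T };
type ListResponse<T> = {
  status: "ok";
  data: T[];
  meta: { total: number; page?: number; pageSize?: number };
};
type ErrorResponse = {
  status: "error";
  error: { code: string; message: string };
};
export type ApiResponse<T> = ItemResponse<T> | ListResponse<T> | ErrorResponse;

// Tiny constructors keep every route consistent, including the /me
// routes from item 8.
export const ok = <T>(data: T): ItemResponse<T> => ({ status: "ok", data });
export const list = <T>(data: T[], total: number): ListResponse<T> => ({
  status: "ok",
  data,
  meta: { total },
});
export const fail = (code: string, message: string): ErrorResponse => ({
  status: "error",
  error: { code, message },
});
```
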
So that about covers the main things from the past 2-3 weeks. It's been very productive, although very tiring, and at times quite confusing. Working with an AI agent is awesome until it inevitably isn't. But it's possible to put in place some best practices, tools, and development methodologies to get the codebase back into great shape. #

2025/08/22 #

Builder

About a week ago I thought I'd gotten the codebase of my new project onto solid ground. I knew I still had quite a few features to add to finish the migration to a React frontend, but I soon discovered that there was considerable drift from the established project best practices.

I ended up having to go back over all the routes and standardise the API response envelope, and put in place some type checking to ensure that all the responses were consistent. It was rather a big effort, with loads of knock-on things that needed fixing and updating, but I got it done a couple of days ago. I was completely exhausted, in both mind and body. I'm starting to feel a bit better now that I have had a bit of proper rest.

In any case the code is looking great, with a new React router, queries and loaders, end-to-end data integrity, and now a standardised response envelope. Hopefully I'll get some time to write a blog post about it soon. #

2025/08/15 #

I’m trying to get back into the habit of daily blogging. I’ve found that when I get deeply absorbed in development work, my blogging output decreases. It’s a focus thing, but also a balance thing. There are times when blogging can be an ephemeral and non-distracting activity, something you just do in a few minutes between tasks. But it can also sometimes turn into something more involved. And when you are really in the thick of some very difficult programming, you really don’t want to get distracted by that second type. On the other hand, you don’t want to stop writing completely either, because blogging does help you organise your thoughts. #

Curly apostrophes on a mac

A while back I noticed that the apostrophes I was typing were not the right ones. In code you are always using single quotes ' and double quotes ", but for writing I find that apostrophes look better when they are the curly ones. I was very shocked to find that Mac keyboards do not have keys for the curly variety.

Then, after some googling, I thought I had figured it out: by typing the kind of cryptic key combination option-] you get a curly apostrophe. Life was good, and I continued on with my writing. At some point, though, I realised that these curly apostrophes looked a bit strange. I figured it was just my eyesight, which is not as good as it used to be. But I eventually looked into it, zoomed right in, and sure enough the curly apostrophes were going the wrong way. All this time, many months, my apostrophes have been all wrong!

I‘d, I‘m, isn‘t it, ‘init, and all the other times you use apostrophes, all wrong.

Well yesterday I discovered that you can get the curly to go the other way with, wait for it, shift-option-].

I’d, I’m, isn’t it, and of course my favorite, ’init.

Ok, you don’t technically need an apostrophe with init or innit (19 frigging pages wtf) or indeed with its somewhat less famous counterpart isit, but this seemed like an opportune moment to make use of these, which I’ve always felt are in some way a sort of London version of like, even if they of course mean something completely different.

In any case, from now on, most of my curly apostrophes should go the right way. #

Matt Odell [26:26]: "Trump has absolutely filled his bags with as much Bitcoin as possible, including all of his cronies. I mean right now his stock, Trump Media, has 15,000 Bitcoin in the treasury. Bro has more Bitcoin than Coinbase." #

2025/08/14 #

Solid ground

Just coming up for air, after a long, somewhat confusing, but ultimately very successful development sprint.

It feels like it’s been a while since the last plateau. Thankfully things have improved considerably since then, even if there have been far more necessary detours and refactorings than I ever dreamed there could be. The ground I am developing on is much more solid, both in the backend and the frontend, and I have now put in place many very effective processes to streamline my workflow and keep the codebase clean and well organised, so that it tells a strong and clear story of progress.

Today I've been planning the next two sprints, and there are some really cool features ahead. Overall the API and its new React frontend are looking very nice. Minimal, functional, uncluttered, fast.

Still a lot to do, but it feels good right now, even if I know I’ll be heading back into the eye of the storm very soon. #

Calle [35:44]: "There is a thin sliver of a population of, let’s call them plus minus millennials, that grew up with an internet that was like wild west. Complete freedom. No identities, no logins, you know, just IRC chat rooms, anonymous people. You just talk about your interests, and you don’t present yourself, it’s not about look at me, it’s about look at this interesting thing. And anything goes. File sharing, sharing of information, the age of Wikileaks, we are going to revolutionize the way that the planet works because we can now finally communicate. We are going to grow together as humanity, because now we have this internet that is this neural web, the global neural web that is going to put us together. There is this idealism from that time, affected only a small part of the population that is still around today, and those people are the people that care the most."

I hadn’t thought of it quite like that before, but he’s right. And from this zoomed out position it’s strange how you could see it as in some way similar to that period in the late 60s, especially in the US, the summer of love and all that, even if it didn’t really feel very much like that at the time. Same same, but different, as the saying goes. #

2025/08/11 #

Haha, just got the admin page fully working. Just after midnight. That’s not too bad. This little OAuth-based API and React frontend is starting to look pretty darn cool :) #

Today’s links:

  • David Kipping - Ep#2363 (Joe Rogan Experience Podcast) - Astronomer and associate professor at Columbia University, very eloquent, enthusiastic, high level of clarity, speaks effortlessly about exoplanets, exomoons, stars, galaxies, black holes, the weird things the James Webb telescope is showing us about the universe, aliens and lots of other cool and wild stuff. www.youtube.com #

2025/08/10 #

It’s unbelievable the detours you have to make in programming sometimes, especially in the early days of a project. Just when you think you have the architecture well and truly figured out, you realise there is this other very important part you just absolutely have to refactor. Sometimes it really feels endless. You find yourself doing a refactor while doing a refactor.

And then you finally tie up all the loose ends, and a whole lot of other small bits and pieces and fixes you had to do along the way, and you are exhausted, and you are right back exactly where you were 24 hours previously, when you realised you had to do the refactor. You can finally really start the original thing you set out to do! But it’s 1:37am and you are spent.

Programmer problems. #

Typescript and Gemini evil mode

Last night’s late-night note turned out not only to be a great description of what had just happened, but also to be very prescient about what was going to happen yet again.

I sat down today to finally make a start on the thing I’ve been trying so hard to get to for the past few days, and I thought I’d do a quick review of the codebase first. Wouldn't you know it, Gemini unearthed another inconsistency, related to a modal component that apparently used an "old style" of modal. Well, it was Gemini that recommended this now "old style" a few weeks ago. It would have been a lot easier if it had just recommended the correct way in the first place.

And you know what, yesterday’s 24-hour refactor of the frontend router to use a "modern" style of React routing was also a product of Gemini’s advice to implement the original "old style" routing just a week before that. I’m starting to see a pattern here.

Oh, and by the way, all day today Gemini has been back in "I will ignore everything you say" mode: it apologises for getting it wrong, and then does the exact same thing again, literally in the very next paragraph. It also went through a phase of secretly creating a file, then saying "oh look, I just noticed that this file exists, which means our next best move is this huge and really dangerous refactoring".

I’d love to say that Typescript was making this a whole lot better, but actually even Typescript seems to have been co-opted by evil-mode Gem. Typescript is super great in most normal situations, but when you just want to implement a small portion of a refactor to test whether the idea works, Typescript + Gemini work hand in hand to force you into refactoring the entire f-ing thing in one massive dangerous step, because each small change creates another 20 breakages that you then need to fix. And on and on.

The best and safest way forward was actually to stop, temporarily add the offending file to the tsconfig exclude list, comment out a bunch of UI code, and get the feature working in one small part of the app. Go figure. I’m happy I went that route because, wouldn’t you know it, as part of getting it working it was necessary to fix another few small unexpected bugs. Had I gone down the refactor-everything-in-one-huge-step route, no doubt there would have been even more bugs to fix, and I don’t think I would have made it to the other side.
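
For reference, the escape hatch is just the exclude list in tsconfig.json; the file path here is hypothetical:

```json
{
  "compilerOptions": {
    "strict": true
  },
  "exclude": ["node_modules", "src/components/OldStyleModal.tsx"]
}
```

Exclude the offending file, get the thin slice working, then pull the file back in and fix the fallout one piece at a time.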

Maybe I'll be ready to start on this admin page finally tomorrow. #

2025/08/09 #

Changing tires

I’ve been reading about running LLMs locally for the past few days. It’s something I looked into briefly before, but it always seemed too complicated. I’ve noticed that since OpenAI released their open source models, people on various podcasts have been talking about this more, so I’ve been checking out projects and whatnot. It’s seeming more achievable, I think partly because the tech has progressed, but also because my understanding of the space has evolved.

It’s tough not to get sucked into rabbit holes with some of this stuff. I am trying to spend a bit of time reading about it in the mornings, but then you have to put your half-baked research aside and get on with your current project. Web development is strange in that you have to constantly be taking small bites at things, and eventually what was not possible becomes possible. You have to do both, and then find time to write about it too. #

Today’s links:

  • Maple AI - Private AI chat that is truly secure - "End-to-end encryption means your conversations are confidential and protected at every step. Only you can access your data - not even we can read your chats". It’s still cloud-based, but it’s much more private and you can run more powerful models. I feel like I want a hybrid solution where I can run local models and private models in the cloud. trymaple.ai #

  • Is it, like, OK to say ‘like’ now? - I actually remember when people started saying 'like' all the time. It seemed to happen overnight, at least in the UK and in Europe. And yet now it’s strange to think there was a time when people didn’t say it, like, ever. Sort of like there was a time when there was no internet and no mobile phones. I hadn’t really thought about it much, but it did have a profound effect on how we experience the world. www.theguardian.com #

  • CodeRunner: Run AI Generated Code Locally - "CodeRunner is an MCP (Model Context Protocol) server that executes AI-generated code in a sandboxed environment on your Mac using Apple's native containers". I could definitely see this being part of a local AI rig. github.com #

  • Getting started with Podman AI Lab - I remembered this morning that Podman has this AI Lab extension. I re-read the blurb, and I wonder whether it might turn out to be the open source dark horse. I'll have to try it out sometime soon. It’s been a long time since I used a Red Hat product. Perhaps not the coolest kid on the block, but for running AI agents, dependability might be more important. developers.redhat.com #

2025/08/07 #

Multiculturalism the ugly stepchild

I thought this article about how South Korea is integrating different cultures into its society was a really interesting and well-written piece.

Multiculturalism is a bit of an ugly stepchild at the minute; it gets blamed for all sorts of things. I don’t care for many *-isms, but I do know that integrating different cultures is really difficult. I also know that it’s important, because we need diversity. And the thing is, I think that what people hate about multiculturalism isn’t really about people from different backgrounds and cultures; that’s just where the problem surfaces, where it gets exposed and put into the light.

The real problem is more fundamental, and probably exists even in places that don’t have a lot of foreigners, because the real issues have more to do with human nature, and power and wealth. Those sorts of things.

And I'm not saying that countries should just have a completely open-door policy, of course not. It’s a balance, and that’s one of the very difficult things to get right. It’s a tremendously complicated, massively distributed game of balancing the very fabric we are living on. That’s no easy feat. But when we get it right, or even sort of right, it can be very wonderful for everyone.

Anyway, I find it very interesting to learn about how Asian countries are handling these sorts of challenges. I don’t think any of us have it all figured out. Some of the things we do work well, other things not so much. And each place is a bit different, so things that work in one place don’t necessarily work in another.

But I also think the better we get at integrating different cultures, the better we will be able to live with ourselves too. And that’s why I really liked how the piece ends, because the whole way through there was something that felt a bit off, something missing from the full picture, and I think to a large extent the author nails it. #

React + Typescript + Gemini: A pretty great combo

So I did end up modifying my devcontainer setup to accommodate the possibility of working with Gemini in agent mode. I have two configs now. One is a sort of admin mode that I use myself and that has full access to the GitHub repo. The other is a pared-down agent config, where you can only read from the repo. That's enough to get the latest code, so theoretically an agent could start working locally on features, but it wouldn't be able to push to the remote. I also improved the setup so that the devcontainer config can only be modified from outside the devcontainer. Gemini is very good at creating complicated situations where it could easily change things without you realising.

I haven’t taken much time to really try out agent mode because I have been finalising the architecture for my React frontend, and actually I was learning a lot by pair programming with Gemini. As long as you can put up with its infinite ego, nod your head when it tells you how frustrated you are while it digs itself into a bigger and bigger hole, and stay ready to pull the rip cord so you don’t fall in with it, it is phenomenal at writing documentation, summarising work done into very detailed commit messages, brainstorming ideas and development strategy, and figuring out very complex Typescript blockages.

Looks like I'll have the migration of my new API’s frontend to React done sometime in the next few days. And I already have a small list of improvements to make. #

2025/08/04 #

The AI energy crisis

[Image: data center 1]

A few weeks ago I wrote about the size insanity of the new AI clusters that are being built, and I followed that up with some rather quaint context from my past experience with high performance computing in the early days of the VFX industry. Well, it seems other seasoned computer science professionals’ jaws are dropping too.

From a very funny and prescient moment in a recent Changelog Podcast interview with Greg Osuri on the AI energy crisis, Ep#652 [12:20]:

[Image: data center 2]

Greg: Just to give you context, this is one data center we are talking about, one company, right, so put that with the heavy competition between, what’s his name, Mark Zuckerberg tweeted out saying, or facebook out, saying that they are going to build a data center bigger than Manhattan, we saw the tweet.

Jerod: I did not see that tweet.

[Image: data center 3]

Greg: Everybody is competing. Yesterday Elon Musk said they are going to, in fact, they are going to get about 5 million GPUs by 2028, I believe he said, or next 5 years, so maybe 2030. [Exhales deeply] 5 million GPUs, he said 5 million H100 ecore, that’s 5 times 1 kilowatt, 1.2 kilowatts per GPU, just to give you a brief idea as to how much compute or how much energy we are looking at. Oh sorry, 50 million. Sorry, I take the number. 50 million. These numbers are so big.

Jerod: Not 5 million, 15 million, one five?

Greg: Five zero.

Adam: Oh 50 million, so 10x what you originally said?

Greg: Yes apologies, I mean these numbers are so large.

Greg & Adam [at the same time]: It’s hard to even fathom.

Adam: I mean 50 million GPUs!

Greg: For context, Nvidia made about 2.5 million H100s in 2024, so...

Adam: Tccchhhheeeeewww.

Greg: ...ha ha...not financial advice but, ha ha ha...but Nvidia I believe is going to...

Adam: Haha yeaaaah!

Greg: ...is going to be extremely beneficial.

Jerod: You think the stock is high now...

[Image: Manhattan 2]

I asked Gemini to tabulate the 10 biggest data centers worldwide. They seem to take up around 10 million square feet at most. Manhattan, for reference, is 633 million square feet, so that's around 60x current ginormo data centre sizes. Seems like Zuck’s estimate was a bit ambitious, but honestly, not by that much if all these growth-target numbers are to be believed. Remember Sam Altman said he wants a 100 million node cluster.
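
Back of the envelope on the power side: 50 million GPUs at roughly 1.2 kW each is about 60 GW of draw, and a large nuclear reactor produces on the order of 1 GW, so that's dozens of power stations' worth of GPUs before you even count cooling and networking.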

[Image: Manhattan 1]

This webopedia article says that the Yotta NM1 data center in India has 30,000 racks, which sounds like a heck of a lot, until you read about the Alibaba Cloud Zhangbei Data Center in China, which has 52 buildings with 50,000 racks each.

Glad it’s not just me OMG-ing at all this stuff. #

MSTR vs NVIDIA

This isn’t a huge revelation or anything, but given my post on the AI energy crisis earlier, I've obviously been thinking about tech and stocks, and, well, isn't it interesting how similar the trajectories of MSTR and NVIDIA have been? It’s probably nothing to worry about.

MSTR:

[Image: MSTR stock chart]

NVIDIA:

[Image: NVIDIA stock chart]

Who knows, it might even be a good thing. #
