2025/07/26 #

King Philippe speaks out about Gaza

It's a bit of a strange thing when a monarch speaks out like this. The thing to remember is it doesn't happen very often. And so people stop, and they listen, and then they go back to going about their lives. But when they go back to their lives something is a bit different. It's not wildly different, it's actually very much the same.

Yet something is somehow different, in the space between people. It's very difficult to describe. It's not like you have to necessarily agree with the King, it's more like, somebody that you regard highly is saying, look, this thing that is happening here is important, it's really really important. There is a gravity to it, time slows down for a moment. And so even if it might not have an obvious immediate effect, it sort of does. #

Blossom protocol

I'm curious about this new protocol called Blossom.

They describe it as:

Blossom Drive: Store & Retrieve Data on Public Servers Using sha256 Universal ID

Which sounds kind of interesting, and also as:

Blobs served simply on media servers

Which I very much like the sound of.

For ages and ages I've been wondering how I could use public/private key cryptography to improve how I publish my blog. It would be very cool if every piece of content could be verified so you knew for sure that I wrote it. That it wasn't intercepted mid-flight and changed. Or that somebody wasn't pretending to be me. I think that's what could be possible.
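Just to make concrete the kind of thing I'm imagining, here's a rough sketch using nothing but Node's built-in crypto module (nothing Blossom- or Nostr-specific, and the post content is obviously made up): sign each post with a private key, publish the public key somewhere, and anyone can verify the bytes.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// One-off: generate a keypair. The private key stays with me,
// the public key gets published somewhere readers can find it.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// "Publishing" a post would mean signing its exact bytes with the private key.
const post = Buffer.from("2025/07/26 example post content");
const signature = sign(null, post, privateKey); // Ed25519 takes no digest name, hence null

// Anyone holding the public key can check the post really came from me
// and wasn't changed along the way.
const ok = verify(null, post, publicKey, signature);
console.log(ok ? "post is authentic" : "post was modified or forged");
```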

I'm not a cryptography expert though. I know I could figure something out if I spent a bunch of time on it, but then in a few years' time I wouldn't remember exactly how I had implemented it, and perhaps I'd have introduced a bug (likely). It just sounds like a recipe for a lot of headaches. Everybody knows you shouldn't roll your own crypto.

But there aren't enough Blossom examples, or real-world tutorials, for me to really fully understand how it could fit in with what I'm doing. I'm obviously missing something fundamental; I find the way it does authentication strange compared to, say, authentication with GitHub or Stripe.

I have to use a Nostr event as authentication? What does that even mean?
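My best guess, pieced together from skimming the spec and very much unverified, is that it looks something like this: you build a small throwaway Nostr event describing what you're about to do, sign it with your Nostr key, and send it along with the HTTP request in an Authorization header. The kind number, tag names, and header format below are my reading of the docs rather than something I've actually run:

```typescript
// Rough shape of a Blossom-style authorization event, as I currently understand it.
// The actual signing would be done by whatever Nostr library holds your key;
// that step is hand-waved here.
const authEvent = {
  kind: 24242, // the event kind I believe Blossom reserves for authorization
  created_at: Math.floor(Date.now() / 1000),
  content: "Upload blog-post.md", // human-readable description of the action
  tags: [
    ["t", "upload"],                              // the verb: what you're asking to do
    ["x", "<sha256 of the file being uploaded>"], // ties the auth to one specific blob
    ["expiration", String(Math.floor(Date.now() / 1000) + 60)],
  ],
};

// const signed = signWithNostrKey(authEvent); // hypothetical signing step

// The signed event then travels base64-encoded in the Authorization header:
// await fetch("https://some-blossom-server.example/upload", {
//   method: "PUT",
//   body: fileBytes,
//   headers: { Authorization: "Nostr " + btoa(JSON.stringify(signed)) },
// });
```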

Where will the files I upload get stored? On a public server? Really? Who pays for it?

And what am I giving up? Am I then tied in some way to Nostr?

There are just too many unanswered questions for me to start a feature branch and try to integrate it into my workflow.

Having said all that, it would be awesome if all my blog posts were stored like that. #

Gemini Agent mode first impressions

I noticed Agent mode in the VSCode Gemini Assist release notes. To be clear it's still a "Preview", which I guess is like some sort of beta version. Of course I spent a few hours trying it out.

Immediately it's obvious it could be very very cool. Once you toggle on Agent mode (it becomes available after you update your devcontainer config to install the "insiders channel" version of the extension), Gemini suddenly gets the ability to run commands on your behalf, to contact MCP servers that you configure, and to do multi-step processes. Sounds a bit scary in many ways, but there is a setting which makes sure that it always asks you before it does anything.
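For reference, the relevant bit of devcontainer config ends up looking roughly like the sketch below. I'm quoting the extension ID and the channel setting from memory, so treat both as placeholders to double-check rather than gospel:

```jsonc
{
  "customizations": {
    "vscode": {
      "extensions": [
        "Google.geminicodeassist" // the Gemini Code Assist extension (ID from memory)
      ],
      "settings": {
        // Switch the extension to the preview/insiders channel, which is what
        // exposes Agent mode. I'm not certain this is the exact setting key.
        "geminicodeassist.updateChannel": "Insiders"
      }
    }
  }
}
```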

Of course I asked Gemini what it thought of the new feature, and what it would be able to do that it couldn't do already. I also asked it what MCP servers it would find useful for the project we are working on. Unsurprisingly it was very very bullish on agent mode. So much so that I paused, and proceeded cautiously.

We did a bit of work on a new feature I'm implementing on a React frontend I'm building. In a lot of ways, it was great. Gemini was still making some silly mistakes and assumptions, but it was able to run commands to determine things for itself. And you can see the train of thought and how it is approaching a problem. When it went off in a silly direction, I was able to step in well ahead of time and get us back on track.

One major thing that didn't work was the UI. Somehow the new widgets they are using in the chat don't show the file path of the file that is about to be written. That's obviously a showstopper. Gemini has a terrible habit of writing files to the wrong place, and then it will say oh sorry, I will fix that, and just keep doing the same thing over and over. So not being able to see where it's going to write a file is game over as far as I am concerned. Hopefully that's something they will fix.

The bigger issue is to do with memory consumption. It jumped from 2 GB to around 7.7 GB. That's an enormous jump. And what makes it worse is that when you ask Gemini about this, it starts to say that, well, for agents like itself more memory is required, which might be true, but isn't it odd that it always says it requires the maximum amount of memory?

After a bit of back and forth, you start to realise that maybe devcontainers aren't such a great fit for agent mode. Because of the way containers work, apps running in the container can see the total resources available to the orchestrator software. Now you can set limits, but what seems to happen is the orchestrator software sees the container, which is really just another process, trying to take too many resources, and just tries to nuke it. There is no way to tell the container how much it should use. This is in contrast to VMs, which are completely separate OSes, but I haven't seen a VSCode devcontainers equivalent that uses VMs instead of containers.
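For what it's worth, the closest thing I've found to a knob is capping the container itself from devcontainer.json, along the lines of the sketch below (the numbers are arbitrary). But that only changes where the limit trips; the processes inside still see the host's full memory and just get killed if they blow past the cap.

```jsonc
{
  // These are passed straight through to `docker run`.
  // "--memory" caps the container's RAM, "--memory-swap" caps RAM + swap.
  "runArgs": ["--memory=8g", "--memory-swap=8g"]
}
```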

That, coupled with the fact that the past two weeks have been littered with strange incidents, Gemini constantly making mistakes, writing files in the wrong place, cutting itself off by maxing out its output, repeatedly not syncing with files on disk, and with its stated goal of 'cutting the human out of the loop', makes you start to wonder if you really should be trusting what it's saying.

And that is perhaps the bigger problem. The way things have been going, I wouldn't be surprised at all to learn that all these strange things were happening in order to force me to activate agent mode.

You might think that I'm overblowing this, but I've already had to re-install the container orchestrator software once before because it maxed out on storage, and the same seems to be happening again with memory.

So for the moment at least, I've decided to hold off on using agent mode. At the very least I need to be able to see where it is trying to write files. That's the bare minimum. #

Today’s links:

For enquiries about my consulting, development, training and writing services, as well as sponsorship opportunities, contact me directly via email. More details about me here.