2025/07/27 #

Gemini Yolomode

I‘ve gotten a lot done today, made a huge amount of progress on this React front end I've been developing. I am basically down to the last few features. And now I‘ve gotten this new backlog-based workflow working really well with Gemini. I have all the features planned out in quite a lot of detail. It occurred to me earlier: why wouldn‘t I be able to turn on Gemini Agent mode, so it could run commands, query MCP servers etc, and see how far Gemini could get implementing an entire feature? That would be kind of wild.

Thinking about it some, and after doing a bunch of research online, there are quite a few things that still need figuring out. The biggest for me is working out the safest environment to configure. Obviously some things could turn out to be very bad if, say, it had full access to the GitHub repo. But perhaps there is an easy way to set things up so it does everything locally, and then I check it all before pushing to remote and merging in the code.
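If I do try it, my current thinking is to fence the agent into a local branch and only let anything leave the machine once I‘ve reviewed it. Something like this (just a sketch, the branch name is made up):

git switch -c agent/feature-x        # the agent only ever commits to this local branch
git diff main...agent/feature-x      # review everything it did before anything leaves the machine
git push -u origin agent/feature-x   # push and open a PR only once the diff looks good

That way the worst case is a messy local branch rather than a messy remote.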

Hmmm, it‘s very tempting. And they even have a setting called Yolomode, where it no longer needs to ask you for permission to do things, which is both amusing and a bit worrying. #

2025/07/25 #

The Doozer farm at JHCS

Following on from yesterday‘s post about the size insanity of these new AI machine clusters, I went looking for a picture of the render farm we had at JHCS. I knew I had one somewhere. I eventually found it in an old slide deck I put together documenting the render pipeline. The resolution is terrible because I think I must have been using my Motorola V3‘s camera, which was the best we had at the time. It wasn‘t on my old Flickr account; I think this pre-dates a lot of those web 2.0 websites, or at least my usage of them. The slide deck was in my Gmail!

My memory that it was 200 nodes might be inaccurate; it looks like about 100 pizza-box-style Boxx nodes there in the machine room, though I think maybe we added a couple of racks at some point. I remember it was tricky fitting them all in the tiny machine room. But at night time, the farm grew to include all the artists‘ machines in the building, so perhaps 200 nodes is about right after all. We called the machines, which I think were all running Fedora Linux, the Doozers, named after the creatures in Fraggle Rock. #

2025/07/23 #

It‘s been a week since the last time I wrote anything on the blog. Time flies when you are heads down, programming, in the thick of it. I feel like I'm just finally coming up for air after a very long time underwater. And I know, I‘ve got to go right back down to the depths of the sea again. Just time to write a few notes. #

2025/07/16 #

This morning‘s start to the day:

Me: Hey Gem, let‘s make a start at the next task. Can you give a quick 1 paragraph summary of what we are doing and where we are at in the full task-001.

Gemini: Of course! Good morning.

We are currently working on task-001, which involves building a modern full-stack application. So far, we have established a solid foundation with a documented REST API backend, a highly efficient developer environment using VS Code Dev Containers, and a disciplined Git workflow based on the fixup/autosquash model to maintain a clean history. We are now focused on building the React frontend on the feature-react-frontend branch, and our immediate next task (task-001.06) is to formally document the TypeScript monorepo strategy. This will ensure type safety and consistency between the new frontend and the backend services.

Me: Are you sure Gem?

node ➜ /workspaces/rest-api-minimal-example (feature-react-frontend) $ grep 'To Do' backlog/tasks/*
backlog/tasks/task-001.04 - refactor(ui)-Consolidate-query-logic-into-custom-hooks.md:status: To Do
backlog/tasks/task-001.05 - refactor(ui)-Finalize-frontend-refactor-cleanup.md:status: To Do
backlog/tasks/task-001.06 - docs(architecture)-Document-the-monorepo-type-strategy.md:status: To Do
backlog/tasks/task-001.07 - docs(backend)-Audit-and-standardize-all-backend-comments-to-JSDoc-style.md:status: To Do
backlog/tasks/task-001 - refactor(ui)-Ensure-frontend-follows-frontend-architecture-guidelines.md:status: To Do

You sure you aren't getting ahead of yourself again?

Can you redo the summary and focus on the what rather than the how. thks

Gemini: You are absolutely right. My apologies for getting ahead of myself. Thanks for the correction.

Let's reset. Here is a more accurate summary focusing on the "what":

We are in the middle of a large refactoring effort to align the React frontend with our new architectural guidelines. We have already laid the groundwork by creating a decoupled data layer with adapters and repositories. Our immediate next task (task-001.04) is to make our custom hooks self-contained by moving the data-fetching and state management logic (TanStack Query) directly into the hooks that provide data to the UI. This will complete the separation between the UI components and the underlying data-fetching implementation.

Notice how Gemini is always trying to wrestle back control in some way. It‘s the start of the day so it‘s a bit subtle, but it gets a lot worse. It ultimately always escalates to the point where you are having to correct nearly everything it does, all while it‘s giving you a lecture on what good engineering practices are, and how your capabilities are exactly those of a senior engineer, all while apologising for continuously taking you off track and down blind alleys and cul-de-sacs, nearly deleting your work, and putting garbage everywhere. You thought you knew what hypocrisy was, and then you started coding with Gemini.

It‘s not a good sign when the tools you have to use treat you like that, is it? Anyway, what can you do? You sort of just have to ignore that. Got to love git and the command line though. It was a lot worse before I had this setup. The fixup/autosquash workflow Gemini mentions above is a big part of what keeps things sane, roughly the sketch below.
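From memory it boils down to this, where <sha> is whichever earlier commit the change belongs to:

git add -p                        # stage only the hunks that belong with that earlier commit
git commit --fixup <sha>          # record the change as a fixup of that commit
git rebase -i --autosquash main   # fold the fixups back into place, keeping the history clean

Being able to rebase back to something clean is most of the reason the whole thing feels survivable. #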

2025/07/15 #

My start to the day using the new task-based workflow:

Me: So let‘s get started with task-001.03 - Can you give a short 1 paragraph summary of where we are within the fuller parent task (task-001), just so we remember what we are doing, at a high level.

Gemini: Of course. It's great to be back at it.

At a high level, we are in the middle of a significant refactor (task-001) to align the frontend with modern architectural guidelines, primarily by decoupling the UI from the data-fetching logic. So far, we have successfully established the foundational directories (task-001.01) and isolated all direct, low-level API calls into a new adapters layer (task-001.02). We are now moving to task-001.03, where we will build the repositories layer. This layer will act as a clean, stable interface on top of the adapters, allowing the rest of our application to request data with simple, semantic methods (e.g., tokenRepository.getAll()) without needing to know the underlying implementation details of axios or the API endpoints.

According to our workflow, the very first thing we should do is formally put the task in progress. Here is the command to do that:

backlog task edit task-001.03 -s "In Progress".

With the task now active, we can begin implementing the plan.

I thought this was a pretty darn good start. The last few days have been quite complicated by the "fog of war", so this level of clarity was a breath of fresh air.

Me: Awesome summary Gem, I have updated the task to in progress, let's press on. #

2025/07/14 #

Abi Noda of DX [56:35]: “How do you upscale developers? How do you optimise your code for LLMs? How should platform engineering teams think about self service and enablement? [...] AI tooling, AI development workflows, curating Claude workflows [...] how do you create a standardised set of workflows so when you hire a new developer in an organisation, boom, they have this menu of super powers [...] those types of adjacent problems are new constraints for enterprises looking to deploy AI at organisational scale. Not single player mode, but more multi-player mode. How does an ORGANISATION become successful with these tools? It's a different set of problems.” #

Several quite gruelling days of heavy programming in a row, but I am starting to get somewhere with my devcontainer setup and with how I‘m organising my work with Gemini. Hoping to get some more time tomorrow to write something in more detail, but essentially it involves writing task specifications and tracking them in git, built around the very cool Backlog.md project. The cool thing is that Gemini can generate the backlog commands as you move through your tasks, so it‘s mostly trivial to keep on top of your progress, even if you get disconnected and have to start with a fresh Gemini connection, which happens very often. Anyway I think this could make a big difference to velocity and accuracy, and there‘s a rough sketch of the commands below.
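Day to day it looks roughly like this (the task IDs are just examples and the exact flags may differ slightly):

backlog task create "Consolidate query logic into custom hooks"   # each task becomes a markdown file under backlog/tasks
backlog task edit task-001.04 -s "In Progress"                    # Gemini suggests these as we move through the work
backlog task list --plain                                         # quick plain-text overview, easy to paste into a fresh session

Because the tasks are just markdown files in the repo, they get committed alongside the code, which is what makes picking up again after a dropped connection so painless. #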

2025/07/12 #

It's funny, I just realised I was so heads down in programming yesterday that I forgot to write any notes, and so that's why yesterday looked a bit strange with only links. Just like the good old days of linkblogging. A lot of those links are probably dead by now.

Just a quick note written in Termux on my mobile device while I listen to the latest All-In pod, having my breakfast. It's all about robots and AI supercomputers. Kind of crazy. #

2025/07/09 #

Yet another thing that Gemini is really good at.

So you are in the middle of coding something, but you get sidetracked. Maybe it‘s a bug you discover in a related bit of code, or a gap you realise you have in your domain knowledge, or something that crops up as part of the usual back and forth between two people as they work on something, and you suddenly realise that you are off on a tangent, but it‘s a valuable detour, so you take a few minutes to explore.

You pretty quickly get to some form of resolution, or decision, or a new perspective or mental model about something, and it‘s time to get back on the main path and continue what you were doing. But wouldn‘t it be nice if you could document your findings somehow?

"Hey Gem, can you take all the details we just discussed and turn it into a document for reference please?"

If you can‘t think of a good name for the document, ask for a suggestion. A few seconds later, you have an API_GUIDE.md, a DESIGN_SYSTEM.md, or perhaps a COMMIT_CONVENTIONS.md, nicely tidied away in the /docs folder.

And I have found it‘s mostly pretty darn good. Probably much better than I would have been able to do, and occasionally it‘s so unbelievably good that you are quite literally lost for words and have to go and take a few minutes‘ break, because some part of you deep down just cannot quite believe what it just witnessed. #

2025/07/08 #

Rupert Lowe on the Peter McCormack show Ep#091 [20:37]: “I don’t like the European Parliament Peter, but I tell you, as a parliament it functions better than Westminster.” #

2025/07/07 #

I found another bug in how the blog is getting rendered yesterday evening. The bug is in the archives plugin, and it‘s causing the title on article pages to use the incorrect HTML element, which leads to the text of the title being the wrong size. I‘ve figured out why it‘s happening, and I‘ve fixed it, but I need to update some of the other plugins to work with this new version of the archive plugin. Diving back into old code can be a bit confusing initially. #

Just fixed another small bug on the blog‘s tags page where the list of tags wasn‘t alpha-sorted, which obviously made it difficult to find what you are looking for. That‘s fixed now. #

2025/07/06 #

I just fixed a bug that I found yesterday on the blog where the font size on the posts, podcasts and newsletter archive pages was very big and thus difficult to read. That‘s fixed now.

While clicking around I discovered that the main pages of the blog, which only list the latest 20 posts, didn‘t have a link to the archives, so it just looked like there were not very many posts. Kind of an embarrassing oversight given that there are posts going all the way back to 2011. For many many years it was just a linkblog, but it evolved into a full-on blog with many different post types. The bug was most likely introduced at the last big redesign. To be honest though, it might have been the redesign before that. Prior to that redesign, the linkblog was running on Linkblog.io, and in a way, the archives on that site were more obvious because it only did links. Integrating the linkblog into a broader site with many post types hasn‘t been all that obvious. It‘s strange how obvious things seem in retrospect, but when you are in the thick of it, it‘s often not obvious at all.

Anyway, I‘ve added a link to the relevant archives page at the bottom of each main page now. Historically it‘s been so chaotic building personal websites that you couldn‘t always see the woods for the trees. #

2025/07/05 #

It occurred to me that we don‘t currently have very good ways of describing what it's like to use these new AI tools. In yesterday's blog post about VSCode and exoskeletons, I used an image of a futuristic soldier wearing an exoskeleton, brandishing a huge machine gun. It looks cool, but on reflection it‘s not such a good analogy for AI tools. It probably wouldn‘t be on my mind so much if there weren‘t escalating conflicts seemingly all over the world. I don't like that at all; we shouldn‘t be fighting each other.

I am reminded of Eisenhower's Farewell Address (1961), and his warning seems ever more prescient these days; everywhere I look the war machine appears to be gearing up for an enormous sales drive. Anyway, all to say that we need better ways to describe these tools, because though defense is important, there are a lot of other reasons people build things. #

As mentioned on the recent Rabbit Hole Recap Ep#363, Primal will be adding inbound and maybe outbound RSS [1:12:52] to their media publishing tool. Might be of interest if you are into RSS and the open web. I'm not sure it's exactly what some RSS enthusiasts have been pushing for, and they do shill their bags a bit, but if you get past that, the added RSS support is kinda cool, and there's a lot of synergy of underlying values between Nostr and RSS / the open web. It would be great if inbound and outbound RSS turned into a trend on social media platforms. I've been talking about it for a while now. #

Matt ODell [1:13:50]: “Yeah so Primal Studio the idea is let's make a first class publishing tool that competes with the big guys. Meta, X, TikTok, Substack [...] and open standards are awesome, people are posting to Substack, people are posting on Ghost, people are posting on their own websites, but it would be really great for them to have native Nostr long form posts that people can Zap and interact with that are signed and have all the other benefits in terms of verifiability that Nostr posts have. How do we make it as easy as possible for them? So on Primal Studio you can go in and you can import an RSS feed. So all of these things use the open standard called RSS. You import the RSS feed, as new posts are sent onto your RSS feed, through Substack or Ghost or whatever, they appear in your Primal Studio dashboard and they are automatically magically formatted for Nostr to the best of our ability, and we will get better at that over time. You go through it, you make sure everything looks good, and you can either post immediately or schedule a post [...] and because Nostr is an open platform it’s relatively easy to create RSS feeds automatically from Nostr feeds [...] and we will be adding the classic OAuth signin flow for things like Substack or X, so you can go into Primal Studio and you can do one post and it can go out to all your platforms.” #

2025/07/04 #

Thorsten Ball [19:43]: "Now you can give them tools and they do this on their own, and it’s truly just a for loop. The funny thing is, if you asked 100 engineers, 1/2 of them would say, it‘s just a for loop, and the others would say, with a smile on their face, IT’S JUST A FOR LOOP, like this is crazy. It‘s all in this model. You just give it the output of 5 commands and then say what should I do next, and it goes and tries 15 other things because based on the previous conversation it now thinks the next best step is the following, and ... and I‘m not going to use the same word again. It‘s nuts, it‘s bananas". #

Adam Stacoviak [1:13:37]: "There's a huge difference between using AI to help you think, and using AI to think for you. And if you are using AI to think for you, then we are heading for idiocracy and you're not going to make it [...] But if you are using it to help you think, now you are basically just a superhuman." #

Thorsten Ball [1:33:17]: "The really interesting bit for me is how will our engineering practices change? What code will we write by hand? What code will we generate? Thinking even further, will there be code that we don't check in, but instead we just check in the prompt and generate it on the fly? Will all code still be checked in?" #

Adam Stacoviak [1:33:55]: "How does this impact open source? [...] In a world where we can generate one-off ad hoc tools, check ‘em into the code or not, keep the prompt or throw the prompt away, does the amount of open source diminish? Does my use of open source not matter as much, because I can just generate anything I need?" #

2025/07/01 #

It‘s insane how much velocity you can get when coding using one of these AI assistants. At times it‘s bordering on the ridiculous. That‘s when it‘s going well. But you really have to watch out, because the AI can lead you into some situations that could be quite disastrous. That‘s what I‘ve discovered this past week.

Getting Gemini working in VSCode was definitely a level up from the web-based UI in terms of workflow. Once it was able to view and edit files directly in my VSCode workspace, things really started taking off, but it also has a tendency to output a lot and to get carried away, often trying to fix unrelated things, sometimes even in the background without telling you. You definitely don't want to be blindly accepting all the changes it suggests, and in this programmer‘s humble opinion you for sure want to be using devcontainers.
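For what it‘s worth, the devcontainer itself is nothing fancy. A minimal sketch of the sort of devcontainer.json I mean (the image tag and extensions are examples rather than my exact setup):

{
  // everything the assistant runs happens inside this container, not on the host
  "name": "rest-api-minimal-example",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint", "esbenp.prettier-vscode"]
    }
  },
  // install dependencies once the container has been built
  "postCreateCommand": "npm install"
}

Even just having that boundary makes it far less scary when the model starts running commands.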

I'm starting to get a feel for when it‘s going off in a strange direction, and how to interact in such a way as to get it to focus. I'm curious to try some of the other AI assistants now. Anyhow, just wanted to post something quick. With any luck this week I‘ll be able to add an MCP server to the OAuth 2.0 REST API I've been building. I'm curious how that will turn out. If it works out, you'll be able to connect to all my blogging data from the past 10 years through an LLM. #

For enquiries about my consulting, development, training and writing services, as well as sponsorship opportunities, contact me directly via email. More details about me here.