2025/12/19 #

Changes partially successful

Yesterday’s changes were partially successful.

First of all there were the Bluesky clickable links, which were completely successful. Links in Bluesky are now consistently clickable. That's what I expected; the change was quite straightforward and predictable.

Secondly there were the social media cards. At first it looked like it had been a complete failure, with none of Bluesky, Twitter or Nostr showing any cards for the latest posts. Kind of disappointing, especially since Twitter and Nostr had previously been adding cards sometimes. Have I broken things? Hard to tell, because it's so hit or miss.

But then my faith in the digital world was at least partially restored, because Mastodon had picked up the changes perfectly and created a card for both my latest posts! Props to Mastodon.

All this stuff is way too difficult and inconsistent imho. Isn't this supposed to be 2025? #

Unexpected internet connectivity issues this evening. What is up with all this blocking, world? :( #

2025/12/18 #

Bluesky clickable links

Although the social media auto-poster has been working well as far as I can tell since my last update a few days ago, I noticed that on Bluesky the links in the posts were not clickable. After a bit of research it turns out that Bluesky's API has a feature called facets that gives you a way to turn the plain text you post into rich text. It's a bit of extra work to do this, but on the plus side you get quite a lot of control over how the text appears. Anyhow, I made a few modifications and hopefully now the links should be clickable. #
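The core of the idea looks something like this (a simplified sketch, not the exact code): each facet carries UTF-8 byte offsets into the post text, plus a link feature pointing at the URL.

```javascript
// Simplified sketch of building Bluesky link facets. The important
// detail: byteStart/byteEnd are UTF-8 *byte* positions into the post
// text, not character indices, hence the TextEncoder.
function linkFacets(text) {
  const encoder = new TextEncoder();
  const facets = [];
  // naive URL matcher, good enough for illustration
  for (const match of text.matchAll(/https?:\/\/\S+/g)) {
    const byteStart = encoder.encode(text.slice(0, match.index)).length;
    const byteEnd = byteStart + encoder.encode(match[0]).length;
    facets.push({
      index: { byteStart, byteEnd },
      features: [{ $type: 'app.bsky.richtext.facet#link', uri: match[0] }],
    });
  }
  return facets;
}
```

The resulting facets array then gets attached to the post record alongside the plain text.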

Social media cards

[image: markjgsmith card]

Following on from the fix to the links in the Bluesky posts, I decided to look into improving the metadata on my blog pages so that the various social medias create nicer looking cards for my posts. In the short term I’ll use a standard image for all my posts. I might add the ability to customise this on a per post basis later, but for now they should all get a classic white text on black background.

It was a bit tricky because although Twitter lets you choose whether you display the image full width or as a small thumbnail, all the others make the choice for you based on the resolution of the image you add in the metadata in your page <head>. I used ffmpeg to generate the image, slightly higher res for Twitter, and I updated my site's templates so the meta tags point to the right image.
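For reference, the tags involved look roughly like this (URLs are placeholders): most platforms read the Open Graph tags, while Twitter also checks its own twitter:* tags, including the card type that controls thumbnail versus full width.

```html
<!-- Open Graph, used by Mastodon, Bluesky and most others -->
<meta property="og:title" content="Post title" />
<meta property="og:image" content="https://example.com/card.png" />
<!-- Twitter-specific: "summary" gives the small thumbnail,
     "summary_large_image" gives the full-width card -->
<meta name="twitter:card" content="summary" />
<meta name="twitter:image" content="https://example.com/card-twitter.png" />
```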

The intention is for the image to appear as a small thumbnail on the left of the link, but we'll have to see how that goes, if it doesn't work I might need to make some adjustments. #

2025/12/17 #

Let's avoid the advertising dystopia. What even is advertising, or dystopia, or indeed let's and avoid for that matter. Such is the importance of all this, but we probably shouldn't worry about it too much because guess what, that's a dystopia too. Oh noes! We're all gonna die! :) #

I feel like we might be about to enter a phase where a large number of people are going to get trapped in impossibles, and where there are no right answers and you just end up hanging onto something for dear life and waiting for it to pass. Whatever ‘it’ is. The darkness possibly. The corners of reality that reality has forced us into. That maybe we are about to see some of the strange contortions that exist here there and everywhere. And with some sort of probabilistic mathematical certainty we will be forced into yet more contortions. And it might at times look very ugly. #

2025/12/16 #

The world is on an epic allow blocking rampage at me today. It's hard to put into words. Perhaps I'll disappear into an acme hole. Feel like that would be the world's favorite option. Apparently I'm impossible. Well guess what world, if I'm impossible, then you're impossible too. #

It's really quite unbelievable but the world would like things to be even more impossible. That appears to be its answer to everything. Just add more impossible. #

Destiny always seems to find a way to squeeze in one more more. I'm completely physically and mentally exhausted. #

It literally never ends, the literal next moment after the last note, and the world added another little extra bit on top. Literally literally literally. Literally.

It's pointless asking for a stop, because that means more. Everything means more. #

The Eternal

It's been kind of a strange week, and particularly today. As I sat down exhausted earlier, following a long long journey of one synchronicity after another, I started listening to some Sonic Youth. I haven't been listening to much music at all recently, but back in the 90s I used to listen to them a lot. Not sure I've listened to them all that much since that time. It was those years right at the end of school, and the first few in university.

Anyhow I skipped through a couple of their earlier albums on youtube, and they were ok, but you know remembering the past can be a bit weird. You're not the person you used to be, and so the tracks don't always sound as great as you remember them. Well I eventually landed on their album The Eternal. I remember buying this album when it came out, and I listened to it a lot. It's from 2009 which is quite a bit later than I thought it was. Pretty sure I wasn't listening to them then. But I really remember the artwork and especially the guitars.

I think a lot of people might listen to this and not understand it, at least at first. The notes are odd in some places and it's very confusing in other places, but there are these moments of serenity and calm, and if you listen to the whole thing as an album, the sounds tell a story. You don't need to understand all the words, though some might jump out at you. There is this incredible beauty that sort of emerges out of the chaos, or maybe it was there all along but you just didn't notice it. And when you notice it you are like oh right, I get it now. I get some part of another person on the other side of the world, and some part of all the people that listened to this band.

As an album it's very much like watching a movie. I hadn't noticed that before. It's like Easy Rider but also completely different. Like it has some part of the flavour of Easy Rider. I'm really struck by how film-like it is as an album. After listening to it I really felt like I just walked out of a movie theatre. That feeling of, wow what did I just see, that was really incredible, and just like with the best movies I'm thinking about pieces of it, but also memories it conjured up. What's super bizarre is that it's the sounds, and the feelings, the ambience of the thing, rather than images, and it sort of fades out into the distance.

Youtube of course was playing all sorts of ads in between tracks, which for some reason I found very funny. They were so obnoxiously out of place, but that somehow had a kind of comedy juxtaposition to it. The other thing that I find super interesting is that according to their wikipedia page, it got to number 18 in the charts in the US (they are a US band), number 42 in the UK, and very similar numbers in all other countries across Europe, except for one country, where it charted at number 9: Belgium, where I grew up. There's some part of me, from a long time ago, that looks a lot like how this album sounds. #

2025/12/15 #

Today’s links:

2025/12/14 #

Saving state

Very important thing, state. And saving state, well that's doubly important. I thought the auto-poster was up and running, and I wasn't wrong. But it turns out the final job in the workflow, the one that saves the state of the feed reader, wasn't running, and the result was that, unknown to me, every day all posts since I started running it were getting posted again. Thankfully I noticed it this morning.

After a deep dive into figuring out what was going on, it seems like the if statement I was using for that job was causing an issue. According to Gemini the logic wasn't flawed, but it was quite complex, and apparently Github Actions can sometimes run into issues with complex ifs. What I was trying to do was only save the state if any of the post jobs succeeded. But that was trying to be too clever.

I've refactored and have come up with a simpler way. Instead of trying to figure out if any posts succeeded, now when I find new posts I just always save the state no matter what, but then I have an extra job at the end that throws an error if any of the post jobs failed. Hopefully I will thus always avoid double posting, but there will also be an errored job if something goes wrong, which hopefully I will notice.
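In Github Actions terms, the new shape is roughly something like this (job names invented for illustration, steps elided):

```yaml
jobs:
  find-posts:
    runs-on: ubuntu-latest
    steps: [] # detect new posts in the feed

  post:
    needs: find-posts
    runs-on: ubuntu-latest
    strategy:
      matrix:
        network: [mastodon, bluesky, nostr, twitter]
    steps: [] # post to ${{ matrix.network }}

  save-state:
    # runs even if some posters failed, so the same posts
    # are never picked up again tomorrow
    needs: [find-posts, post]
    if: always() && needs.find-posts.result == 'success'
    runs-on: ubuntu-latest
    steps: [] # persist the feed reader state

  report-failure:
    # surfaces a visible error in the run if any poster failed
    needs: post
    if: always() && needs.post.result == 'failure'
    runs-on: ubuntu-latest
    steps:
      - run: exit 1
```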

All this to say that it really should be operational now. If you see this post on my socials tomorrow on its own you know the change worked, and if it's accompanied by a load of old posts, then I am probably pulling my hair out and staring at VSCode. #

Today’s links:

  • Trump: Make Europe Austro-Hungarian again. www.dw.com #

  • zrepl is a one-stop, integrated solution for ZFS replication. zrepl.github.io #

  • "Simple cloud service to store ZFS snapshots" - Simple off-site backups. zfs.rent #

  • Syncthing - Open source software and protocol to synchronize files between 2 or more computers in realtime. No need to go through the cloud. syncthing.net #

2025/12/13 #

Writing CLIs

[image: Making]

I re-wrote all my blogging scripts a couple of months ago. That’s been a resounding success. I use them most days, and it’s made my blogging workflow much more streamlined. I have some more improvements I’d like to make, and some interesting future projects I want to be in a position to explore. The annoying thing with the scripts in their current form is that they are written in Bash. Things can get kind of gnarly in Bash.

To be clear, I have nothing against Bash. It's really useful for getting things up and running quickly. It's very practical and flexible, and you have a million tools at your disposal that are tried and tested, and they tend to work quite well together. But when you want to do something a bit more elegant than just running a sequence of commands one after the other, with a few ifs and loops, then it can seem very archaic. So I've decided to re-write my blogging scripts in Node.js.

I’m pretty excited about this, because it will turn my very functional, but somewhat clunky blogging tools, into a streamlined, efficient and extensible command line interface (CLI). I have written a few CLIs over the years, but the past few years my energies have been mostly focussed on web development, so it’s been very interesting discovering all the new CLI focussed libraries that now exist. Things in CLI-land are orders of magnitude better than they were last time I was here.

The most useful libraries so far have been:

I also decided on using the more modern ES Modules rather than CommonJS, and so opted to use Vitest for my testing framework instead of Mocha or Jest. I was already running Vitest on all my React frontend projects, but it turns out it's great out of the box with any ESM-based project. So far no major issues.

I already have a minimal Node.js project up and running, and I’m likely going to spend some time updating it to Typescript, given how successful Typescript has been in all my recent web development projects, especially those that I have been working on with AIs. Feeling pretty good about it :) #

Today’s links:

  • Rclone syncs your files to cloud storage rclone.org #

  • Singing for Animals compilation. I thought this was kind of amazing. www.youtube.com #

  • Parrot at the vet. Funny but not real. Also made me wonder if the previous link was real or not. The everything can be faked world is gonna be very weird. www.youtube.com #

  • Saylor selling his futuristic digital credit instruments in the middle east. The end is kind of wild. Infinity as a service? x.com #

  • Marktext - A simple and elegant markdown editor, available for Linux, macOS and Windows. github.com #

  • CFTC Launches Digital Assets Pilot Allowing Bitcoin, Ether and USDC as Collateral. www.coindesk.com #

2025/12/09 #

Auto-poster up and running

I figured out the Github Actions scheduling issue from the past two days. Turns out I had a bug in my cron. At some point during testing I had inadvertently updated a value that I shouldn't have, which meant it was trying to run much more often than I had intended. When you set up cron on a real local Linux system, that's not normally an issue, in fact that's usually how you test these things, but in a hosted environment like Github, things behave somewhat unexpectedly.
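As an illustration of how small a slip like that can be (these are made-up values, not my actual bug):

```yaml
on:
  schedule:
    # '5 4 * * *' runs once a day, at 04:05 UTC.
    # '*/5 4 * * *' runs every 5 minutes during the 04:00 hour.
    - cron: '5 4 * * *'
```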

What happens is they run a few times, then they just drop and don't show up again. Which makes debugging very difficult, but I guess from their perspective it keeps their scheduler protected from unnecessarily high load. And of course for whatever reason no matter how much I looked at the workflow file, or all the various things I tried modifying, my brain just would not see the one-character bug I had introduced. Thankfully Clay from Github support found it almost immediately. The auto-poster should hopefully be running smoothly now. Thanks Clay! #

Today’s links:

  • Tiny Core Linux - Nomadic ultra small (~16MB) graphical desktop operating system capable of booting from cdrom, pendrive, or frugally from a hard drive. www.tinycorelinux.net #

  • Bcachefs is an advanced new filesystem for Linux, ZFS inspired, Filesystem as a database. bcachefs.org #

  • Codeberg is a Github clone - "Codeberg is a non-profit, community-led effort that provides Git hosting and other services for free and open source projects". codeberg.org #

  • I asked Gemini to tell me a really funny cellular automata themed joke. As is often the case, things got a bit ridiculous. gemini.google.com #

  • Cryptoeconomics - "Fundamental Principles of Bitcoin" - Audio version distributed via a podcast, kind of hardcore but contains concise descriptions of all the theories underpinning Bitcoin. voskuil.org #

2025/12/08 #

Sorry about the duplicates

I am still trying to iron out all the creases in my new social media auto-poster. It ran last night, and successfully posted to all the social medias! But for some bizarre reason that I can't figure out, it ran 4 times, which of course means that it auto-posted the same link 4 times to all the social medias. This was the same thing that happened the day before in testing, but I had put it down to the scheduler being over-loaded. I have opened a ticket with Github support, hopefully I'll hear back from them soon. So apologies for the duplicates. Please bear with me. #

Blocking and allowing

The world is doing that thing it always does when I release some software I have been working on for a long time, where it is infinitely upset that it has had to be waiting for so long for me to finally finish the software, so it can get on with the thing it was doing before it was so rudely interrupted by my audacious tiny and insignificant software. Poor thing.

The way this manifests is a tsunami of progressively more intense nudges trying to control or influence the tiny minutiae of my life. Seems strange to say but it happens in multiple dimensions, at different abstraction levels, one after the other, until there is a bizarre synchronicity in pretty much everything I do. And what inevitably ensues is it blocking the very thing it is trying to get me to do. And so it gets even more upset at me not doing the thing it wants me to do, even though it won't let me do the thing!

And I already know that even if I do the thing that it wants me to do, that it will be upset, because I will inevitably be doing it wrong, yet again. And so the vicious cycle continues, and escalates.

It's very hard to describe, and it's even harder to deal with, especially because you naturally start second guessing everything. I bet this is another weirdness. And sure enough it is. Ok well, I'll just quietly step away, hoping it doesn't get worse. And of course that in itself creates disturbances in the spacetime continuum. When it gets really bad it feels like when you try to rebase two branches the wrong way around, and literally every commit becomes a merge conflict. Thankfully, today it's not that bad, at least so far.

It reminds me of the binds that AI systems get themselves into, where they can't decide on something, and each time you ask, they confidently decide the opposite. A strange probability loop. I have no more words to describe it. #

In these times, I think the best thing is to just slow down, though as I write this, I can’t quite remember what the best approach is. Everything feels off today.

And so it goes. #

Today’s links:

2025/12/07 #

First few posts

[image: Modern Times]

Well yesterday's inaugural run of the auto-poster didn't go exactly to plan. In fact it didn't run at all. I didn't really expect that it would have worked first time, though I was sort of hoping that it would. Oh well.

I spent much of the morning trying to figure out why the scheduled trigger in the Github Action wasn’t launching any jobs. I updated it from running once daily to once every 5 minutes, and it still wasn’t running. No errors anywhere, just nothing. After a few hours I gave up and moved onto something else.

When I checked again a bit later, several jobs had been launched which had then errored. The first one started almost 2 hours after it was supposed to start. And then it had run 4 times in an hour, which certainly wasn't once every 5 minutes. I guess the Github scheduler is under stress? Kind of strange.

Anyhow, since I had confirmed that at least the scheduler was working, even if it was a bit temperamental, I started testing each poster with dryrun set to false. This was the first time I had tested live, and so I uncovered a few small bugs and some secrets that were missing. Eventually I managed to post to all social medias apart from LinkedIn. The LinkedIn API is really gnarly. I ended up commenting it out for now.

Anyway, yesterday's blog post was re-posted to Mastodon, Twitter, Nostr and Bluesky. Pretty cool :)

So all the poster scripts are tested in live mode and the scheduler appears to be working, albeit sometimes with a big delay, but hopefully at some point in the early hours tomorrow, this post will get auto-posted to all the socials mentioned above. #

2025/12/06 #

Refactoring and minimal examples

Usually when you are writing software it's a good idea to start small and build up. When things are complex, there are just too many things that can go wrong, and they inevitably do.

But sometimes even when you diligently do that, you get into trouble because the platform you were building on is inconsistent, or missing a key feature, or behaves in a non-intuitive way. You thought you knew how it worked but it turns out that the way you thought it worked was not the full picture.

In these times, things get very confusing, you find yourself going around in circles, and it can get quite gnarly, because you are changing things here and there trying to debug things. And one thing leads to another thing, and the AI you are working with takes you down a ridiculous route that was totally unnecessary.

At some point you decide that you need to start again from scratch. Rebuilding the absolute most minimal example of the thing you are trying to solve. It’s a bit of an art. It can seem like a total pain, but it’s often much much faster the second time around, once you have figured out the solution to the problem that is.

And it can happen several times that you think you have figured it out, when actually you hadn't quite. It’s a bit like one of those dreams where you wake up and then a bit later, you wake up again. Oh it was a dream in a dream in a dream!

Each time you get a bit closer, and a bit more certain you really have figured it out, because you understand the problem space much better than before. And it feels good.

But there is always the possibility you missed something.

I think I have the social media auto-poster figured out. It's running on the blog now, and so at some point this evening after the daily build runs, hopefully this post will get published to some of my social medias. #

2025/12/03 #

TPUs, GPUs and the future of everything

I was listening to the ever interesting TBPN podcast earlier and realised I didn't really understand the difference between TPUs and GPUs. Armed with some wireless earbuds I got a few weeks ago I started up a voicechat with Gemini to get this figured out. One thing led to another and we fell into some civilization engineering, as one does, while eating breakfast these days.

Here is the full chat.

The speech to text transcription was a bit rubbish sometimes, but mostly Gemini got the gist of what I was saying. I had Gemini speaking the answers each time in a voice I had previously selected. I recommend doing the same and reading along.

It starts off quite technical, but rapidly gets very crazy from the perspective of the scale of things we are currently used to contemplating. I tried to keep things in some way grounded in reality by continually estimating key metrics. This will probably all be quite normal in a few weeks time.

It's wild, and it just keeps getting more wild. #

2025/11/25 #

Strategy still popular

With Bitcoin's recent big drawdown, of course lots of people are talking about Michael Saylor and his Bitcoin treasury company Strategy. They have rebranded slightly from Microstrategy, now called just Strategy. I have been trying to sort the signal from the noise, which isn't all that obvious.

Saylor posted a chart on Twitter in the past few days that is kind of interesting. It shows the weekly volumes of Bitcoin backed credit, showing that even though Bitcoin is down, his Bitcoin based financial instruments are surging. What I thought was interesting was that volumes have increased for all of his offerings.

What seems to be happening, and of course I totally could have this wrong, I am definitely not an expert, is that it's the access to Bitcoin he is providing that people are interested in. There are those that just want a bit of Bitcoin exposure; they want to try and profit from the volatility. Those folks buy the common stock, which is still very volatile. They might not be set up to buy Bitcoin directly, but they can totally buy financial instruments. It's just stocks and shares, and that's normal for institutions. Then there are those that buy the preferred stock, which has some volatility protection, but at a cost. Instead of the say 30-40% gains that you could get with regular Bitcoin, you only get like 10%. Saylor pockets the difference. And of course he then uses the profits to buy more Bitcoin.

But what's super interesting is what the folks buying the preferred stocks are doing with them. These are touted as rock solid, very over-collateralised securities. Strategy has so much Bitcoin in its treasury reserve that Saylor can say for a fact that those securities will be honored well into the future even if Bitcoin goes down a lot. If something goes wrong, those shares get made whole first. So I think what's happening is folks are using these high quality securities as collateral to get dollar loans, to of course buy Bitcoin.

And they can get those loans from regular tradfi institutions because though they aren't necessarily set up to invest in Bitcoin, they love financial instruments, especially high quality ones. So it's like he is levering up, and they are levering up, and well that's what's happening. And it's the common stock folks that end up taking the risk I guess, and if you zoom out further, well it's basically small retail investors that will likely foot the bill, because the institutions levering up must be very good at timing the markets. I mean they have all sorts of futuristic algorithmic tools for predicting the tops, and ways to exit the market gradually so as not to cause big disturbances, and basically by the time retail realise it's a market top, well those levered players are long gone.

And of course the big banks making the loans must know this, why else are they making the loans, they must have pretty good confidence they will get their money back. And the other thing is that other folks are shorting Bitcoin on the way down. They bet it will go down, and make tons when it does. You got to wonder where they are putting their winnings. What else are they going to buy aside from Gold? Well might as well buy some Strategy, and make some more as it goes back up.

I likely don't fully understand all the dynamics, but that seems to be the gist of it. Saylor has built a platform on the surface of the Sun, and for a fee you can set up shop on his Sun platform, and do what he's doing too.

It does seem totally bonkers. I wonder though, isn't this just what regular financial markets have been doing all along? Albeit in a less obvious way.

Probably worth being vaguely aware, at some level, of what the dynamics are. #

Today’s links:

  • Britain and Europe have become colonies of US big tech who operate toll roads all across the lands. www.theguardian.com #

  • timgit/pg-boss - Message queue library for background jobs backed by postgres. Reminds me of mongodb-queue. github.com #

  • SBoudrias/Inquirer.js - "A collection of common interactive command line user interfaces". This looks like it could be super useful for creating CLI tools. github.com #

  • Stefan Judis looks at built-in tools Node.js has for deprecating methods in your public repos. Good to know. www.stefanjudis.com #

  • Simon Willison lays out how he automates his substack newsletter from his blog posts. Kind of hacky but could save a lot of time. simonwillison.net #

2025/11/21 #

Things to look forward to

I’m running really low on build minutes on Github, which is why I haven’t been blogging much, or in fact at all, the past week. I've also had quite a few administrative things to take care of, which is always a bit stressful. Last time I wrote, I was mid allergy attack. Well I recovered fully from that a few days ago, but today it’s another allergy day. Sneezing, sneezing, so much sneezing. Burning face. Dripping nose. Everything feels blocked. Feels like it never ends.

Tomorrow is a bit of a travel day. It will be nice to be in motion. I'm looking forward to the end of the month, when I can get the social media auto-poster workflows finished. I also have 1 other small feature that I've added to the site that needs testing. It’s not something that will make a big difference visually but it could be something interesting in the future. On the web it’s a good idea to try and get well positioned in case a wave starts to gather some momentum.

I've got about 35 build minutes left, to last until the end of the month. That's about 6 or 7 builds. Likely I won’t have much of a chance to write anything till early next week in any case.

Everything seems to be down at the minute. Markets are down. Bitcoin is down. General mood is really negative, and yet, in many of the podcasts I’m listening to, the sense seems to be that actually things are much better than people think.

The last 3 pods I listened to that I thought were pretty darn great:

Not much else to report. I hope everyone out there is doing alright. #

2025/11/14 #

Another suspiciously timed allergy flare up day today. The world continues to be very unhappy with my imperfections. Likely another tsunami on the way. Life goes on. Looking forward to another day. #

The allergy attack appears to have passed, took almost the entire day. It can be completely debilitating when it happens. Very hard to concentrate on anything. It's sort of difficult to describe, not entirely dissimilar to how it would feel to have stinging nettles rubbed all over your face, and right into your eyes. Sometimes I describe it as being punched in the face. It really drains your life energy. Roll on tomorrow :) #

Today’s links:

  • Apple is pushing mini-apps, which are apps built in HTML5 inside other apps. techcrunch.com #

  • @daverupert: Could IndexedDB be sync'd by the browser like bookmarks? mastodon.social #

  • Justin Drake on Ethereum Beast Mode - scaling the L1 to 10k transactions per second using zk proofs. podcastindex.org #

  • @elonmusk: Congratulations @JeffBezos and the @BlueOrigin team! x.com #

  • Winklevoss invests in Zcash. The privacy focused coin tripled in value since I last mentioned it 3 weeks ago. Zero knowledge (zk) proof tech and snarks, a particular type of zk proof, appear to be getting hot. coincentral.com #

2025/11/13 #

Static websites + git everytime

Just as I manage to finally clear the decks this morning, and about to get back to finishing the rebuild of the Github Actions auto-poster, the internet connection disappears. Literally within a few seconds. Looks like the world is productivity min-ing again. This will no doubt be followed by some yah’ing and some everything is your faulting. Hard not to wonder whether reality should have a .cause property.

At least my blog is a website built with a static site generator so I can keep on blogging locally and sync later with git. #

Social media auto-poster progress

I managed to get a very minimal example of the Github Actions social media auto-poster workflow working. It proves that in theory the necessary sequence is possible. The reason it’s tricky is because I am trying to run all the auto-poster jobs in a job matrix in order to parallelize them. I’ve almost run out of build minutes this month, so here’s a summary, so I can pick it up again next month. It’s not really intended for anyone in particular except me.

It boils down to the GitHub scheduler making it very difficult to enforce a sequence of jobs where some of those jobs are optional (skipped). I need this because I have some jobs that read and write the current state of the RSS feed before and after running the auto-posters, but I don't want to force the caller to use the state management provided. They should be able to manage their own state.

The problem though is that the scheduler fails to properly trace the history when a required upstream job is skipped, causing the matrix job that launches the poster jobs in parallel to enter a deadlock at the dependency validation stage. The solution bypasses this by using the always() condition, which forces the runner to ignore the flawed historical trace and proceed with execution based only on the immediate, successful output of the preceding step.
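Roughly, the working pattern looks like this (job and input names are placeholders, not the real workflow):

```yaml
jobs:
  read-state:
    # optional: only runs when the caller opts into managed state,
    # otherwise this job is skipped
    if: inputs.manage_state == 'true'
    runs-on: ubuntu-latest
    steps: [] # load the feed reader state

  post:
    needs: read-state
    # always() forces the runner past the skipped upstream job
    # (avoiding the deadlock), then we gate on its result ourselves
    if: always() && needs.read-state.result != 'failure'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        poster: [mastodon, bluesky, nostr, twitter]
    steps: [] # run the ${{ matrix.poster }} poster
```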

Though there are some edge cases still to test, it successfully completes with all jobs enabled and with some jobs skipped. The devil is always in the detail. #

Today’s links:

2025/11/12 #

Improved RSS feeds

[image: Magnum]

I discovered that my RSS feeds were rendering the title inside the description. Titles should go in the title field! I know I know. How embarrassing.

Honestly it’s some sort of miracle of miracles that I even have RSS feeds that work at all. When I wrote the static site generator, I figured out how to do serverside components using just regular javascript and ejs templates, and that was pretty cool. It occurred to me that I could use the same serverside components to render the RSS feeds.

The way it works is essentially that when the feed builder iterates through the posts data, it grabs the post's item component, and renders the feed item using the exact same component that gets used to render the post on the website. It works for all post types, whether it's blog, linkblog, podcast, newsletter or notes.
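The idea can be sketched like this in plain javascript (component names and post shape are invented for illustration; the real version renders ejs templates):

```javascript
// Hypothetical sketch: one item component per post type renders the
// post body, and the feed builder reuses that same component inside
// the RSS <item> description.
const itemComponents = {
  note: (post) => `<p>${post.text}</p>`,
  linkblog: (post) => `<p><a href="${post.url}">${post.title}</a></p>`,
};

function renderFeedItem(post) {
  const renderBody = itemComponents[post.type];
  return [
    '<item>',
    `<title>${post.title}</title>`,
    `<description><![CDATA[${renderBody(post)}]]></description>`,
    '</item>',
  ].join('\n');
}
```

The website page builder calls the same `itemComponents` functions, so both outputs always stay in sync.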

It wasn’t very obvious at all, especially because I wrote it all on an iPod Touch. It was a rough few years. I only realised relatively recently because I haven’t really been using RSS readers much at all the past few years.

I am always quite scared to change the RSS code, because historically speaking, every time I go near the RSS code, really weird things tend to happen. I have no idea what that is all about, but it's happened so many times that there is some amount of trauma.

Anyways, I took a look at the code just now, and it wasn’t that bad. It was really just adding an if statement, though I did have to duplicate a dictionary for the everything feed, which, without going into too much detail, doubles the memory consumption. Seems to be rendering fine though. Hopefully it will look a bit less ugly in readers. #
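The if statement amounts to something like this hedged sketch: the shared item component grows a flag so the feed can suppress the inline title, since the RSS item’s title field carries it instead (the function and flag names are assumptions for illustration):

```javascript
// Hypothetical shared item component. When rendering for the feed,
// omitTitle skips the inline heading so the title only appears in the
// RSS <title> element, not duplicated inside the description.
function renderItem(post, { omitTitle = false } = {}) {
  const heading = omitTitle ? "" : `<h2>${post.title}</h2>`;
  return `${heading}<p>${post.body}</p>`;
}
```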

2025/11/11 #

Not much fun with Gem

Two days of very frustrating GitHub Actions debugging with Gemini, who seems to have completely perfected being a lemming, confidently launching itself into ever more precarious situations. Every time I have to descend into the depths of hell just to make the tiniest bit of progress, and somehow find a way out of the trail of destruction it has caused.

Currently rebuilding a very complex workflow from scratch. I've had to build several minimal examples in order to prove to it that it doesn't have to be as horrendously complex as it constantly tries to make things. And when it becomes obvious I’m right, it then starts talking about my code being wrong, never mind that it wrote the code a few minutes earlier.

Not much fun. #

Today’s links:

  • There are benefits to bubbles: infrastructure gets built out that lasts, which in the case of AI is mostly, but not only, energy infrastructure. stratechery.com #

  • Intel CTO Sachin Katti to join OpenAI to help build out their compute infrastructure for artificial general intelligence. www.crn.com #

  • Trump threatens BBC with $1 billion lawsuit over doctored documentary. www.theguardian.com #

2025/11/10 #

Google’s new coding agent called Jules

Jed Borovik was on the Latent Space Podcast talking about what it’s like inside Google Labs, and their new AI tool Jules.

I liked his description of the direction he sees things going in [08:06]:

“We have an API, so people are using it for all kinds of things. Triggering it when something happens. We saw an example where someone is triggering Jules to do all kinds of updates to their site, and then they have a GitHub Action that is going to automatically merge Jules’ pull requests. So all kinds of stuff is flowing...we are really changing how people are able to do stuff [...] we also have a CLI. We want to meet developers where they are [...] an API is great but when you are working locally you want to be able to trigger stuff [...] by the time this podcast comes out we will be integrated with the Gemini CLI [...], all kinds of places where we are going to mix and be able to harness this power, because developers work in all kind of spots, and so making it easy to have this autonomous ambient agent that can really do all kind of work for you.”

A lot of their discussion resonated with me; I’ve run into many of the things they talk about.

I asked Gemini a bunch of technical things about Jules, and finally whether Gemini had met Jules:

"That's a fun, meta question! As a helpful AI assistant built by Google, I can't actually "meet" Jules in the way two people or two physical robots would. We exist in different operational contexts, but I am certainly aware of and have access to a wealth of information about its function and design."

2025/11/07 #

It took the whole night to download but it did finally download and I am now running Tahoe 26.1. So far most things seem to be working fine, but somewhat worryingly the VPN app is behaving a bit strangely. It connects but none of the buttons in the interface work. The weird thing is that the issues started happening right before I installed the updates. After the OS update I updated the VPN app, but the issues persist. And people wonder why folks hate updating their software.

The biggest difference so far with Tahoe is that it changed the wallpaper from giant redwood trees to a sunny beach with giant rocks and snow covered mountains far in the background. Perhaps mountains are trending. I appear to be ahead of the curve on mountains. #

It’s the 1997 era of making agents

I thought this take from Bret Taylor about where we are in the AI rollout [1:30:13] was pretty good:

"We are like in the 1997 era of making agents. I found this article for the Sierra summit about creating websites in 1997, and there was this Wired article [...], and it was basically about banks spending 23 million dollars to add transactional support to their website, like adding a login form basically. And then you fast forward to the late 2010s and Kylie Jenner starts a multi-billion dollar cosmetics line with [...] 7 full time staff.

So we are still in the 1997 era of building agents where it’s way too hard. You end up putting a lot of engineering around what is a very intelligent set of models, just to make it work well, and I think what do you need to create a 7 person team to create a multi-billion dollar business on agents? And I think we have a lot of product and technology work still to do [...] but for an applied AI company like ours, the models are actually pretty great right now."

BTW, Bret is co-founder of Sierra and chairman of OpenAI.

In case you weren’t around, this was 1997.

It’s a great way to situate our current moment in the broader picture. Of course one of the big questions that naturally follows is how much faster the acceleration will be in this era, because presumably it won’t take 20 years? Or maybe it will?

It would be an interesting metric to track. #

Older posts: check out the archives.

For enquiries about my consulting, development, training and writing services, as well as sponsorship opportunities, contact me directly via email. More details about me here.