Rebuilding your container based dev environment
2025-09-19 13:26:41 +01:00 by Mark Smith
If you have been reading the blog you might be aware that I've been going through some turbulence with the dev environment. There has been significant progress in the past day or so, but I'm still not quite out of the woods. Just to give you an idea of how bad it's been, here's the commit message where I realised that the dev environment was hosed:
Author: Mark Smith [email protected] Date: Wed Sep 10 18:42:31 2025 +0700
chore(backlog): Emergency re-prioritisation of current sprint's tasks
The end of the 2025-08-29 sprint saw us reach a point where we had a great foundation for our new dev environment that supported local LLMs running in Ollama. One big downside was performance: most queries on the larger models would typically take 30-45 seconds. Combined with the noticeably poorer response quality, the solution was not very practical. It was discovered that GPU acceleration for these local models might be possible, so we did a spike on getting that operational. It was long and drawn out, but success was reached with a minimal example project demonstrating a container running in Podman that could run vulkaninfo and get back output computed by the host GPU.
Getting this working took significant reconfiguration and trial and error, in the course of which something seems to have broken in the main Podman / devcontainer setup, causing devcontainer builds to fail. This happened even outside the GPU spike feature branch, with the main branch code, which had previously worked fine.
We now need to get the devcontainer setup back onto solid ground, so all tasks in the current sprint have been pushed into stretch goals while focus is concentrated on remedying the situation.
The plan is the plan until the plan changes as the old saying goes.
Hold on to your butts.
Refs: task-037
That was just over a week ago, and since then I have been deep in the depths of the nightmare. And as is always the case, the world immediately around me and the wider world have gone absolutely bat shit crazy, insane in the membrane. Horrendous. I've had to rebuild the whole environment from scratch, starting with the simplest possible configuration, getting all aspects of a usable development environment working with that, and then slowly building it up into something that has the same structure as my API + React app.
I have a few minor things to add to the fresh environment, and then I need to backport it into the original project. So not quite on stable ground, but I can see the light at the end of the mixed metaphors abstraction apocalypse tunnel.
On re-reading this commit message, I noticed that I started referring to 'we' and 'us', even though it's just me solo dev'ing this thing. Working with LLMs is very odd.
In any case, I think the thing to learn from all this is that when working with AIs, and especially if you are running everything in containers, it's a good idea to have a very minimal working project that you can spin up if something goes weird in the main project. There are just so many layers where things can get crossed the wrong way.
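To make that "minimal working project" idea concrete: a devcontainer can be boiled down to a single devcontainer.json pointing at a stock base image. The name below is a placeholder of my choosing; the image is one of the official devcontainer base images, but any image you trust would do. If even this fails to build, you know the problem is in the Podman/devcontainer tooling itself rather than in your project's configuration.

```jsonc
// .devcontainer/devcontainer.json — a deliberately minimal sanity check.
// devcontainer.json is JSONC, so comments like these are allowed.
{
  "name": "minimal-sanity-check",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}
```

With the devcontainer CLI installed, `devcontainer up --workspace-folder .` from the project root should be enough to build and start it.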