The new container-based project structure

2025-10-02 23:16:33 +01:00 by Mark Smith

The past few days, I have been writing about the recent project rebuild I carried out, as well as some thoughts on the modern web based stack as it relates to AI/LLMs. I wanted to spend a bit of time reviewing my new project structure and highlighting its major features.

  • Develop in a single devcontainer, deploy to service containers - Initially I made the mistake of trying to containerise everything into service containers, in both development and production. What I didn't realise was that the vscode devcontainer does much more than just pass your settings to the container management tool. It does a whole load of things to prepare the file system so you can edit the files on your host directly in the devcontainer. I thought it would be great to have a complete replica of production in development, with each workspace running in a separate service container. What ended up happening was a never ending cascade of impossible to debug file permission errors, because, especially on a mac, your host file system has to go through the podman vm's filesystem and then to the container filesystem. Just go with a much simpler strategy: in development, use a single devcontainer that contains all your project's code, and don't mess around with service containers and podman-compose. It's not worth it. For production, though, do spend the time to create separate service containers for each workspace. In production you don't need to worry about file permissions in the same way, because you don't need to edit the files. I have a top level project folder called containers where all the Containerfiles (dev and prod) live.
  • Self contained npm workspaces - I was already using npm workspaces for the frontend, backend and shared modules of the app, but they were not very self contained, and they were scattered around the project root. I moved all the modules under a single packages directory, and I made sure that each module was designed so that it could function independently if it were pulled out into a separate project. This is great for future proofing, but also for being sure that your modules can run in separate service containers. That means each module should have its own linting and typescript configurations, as well as npm scripts to typecheck, lint, build, start, etc. Use the same script names across the modules so you can run them via npm run from the root of the project, using the -w flag to specify the workspace.
  • Well thought out and modern typecheck and linting configuration strategy - Unfortunately the typescript configuration had evolved along with the project, which meant it suffered from a lack of a cohesive strategy, because the AI/LLMs will give you different advice depending on when you ask them. I had just added various configurations as I went along. It's really important to get both of these configurations right in your minimal project, because once the full project structure and files are there, it becomes much harder to reason about and troubleshoot. One important thing to remember if you are using Typescript in both the frontend and backend is that you will likely need a shared workspace to house the types used by both. This is where the typescript configuration comes unstuck, because the compiler will refuse to see the shared workspace, and you will end up going around in circles with the AI suggesting the same set of changes, each one confidently touted as the "final industry standard best practice that will fix everything", and each time it will not work. This is another reason why a single devcontainer is better than service containers in development: minimise the number of things that could go wrong. Get it working for a super simple frontend and backend workspace that both use a shared workspace. I just used the example Express.js and Vite React apps.
  • Linting - The base config lives in the shared module; the other workspaces import it in their own configs, where they can override bits if needed. Style rules are separated out into @stylistic, and project-wide import ordering is enforced with perfectionist. I like all imports to look the same in every file, with the same ordering; it significantly reduces cognitive load. The ordering I use is vendor modules, vendor types, then project modules and project types, each group separated by a single empty line.
  • Typescript - Unified and modern, with ES Modules (ESM) as standard and configuration symmetry between backend and frontend. Every workspace has a tsconfig that references its app tsconfigs and, if needed, a tools tsconfig (frontend). There is a conscious separation between typechecking rules and build optimisation, which is handled by specialised tools. I based this on the config from the Vite typescript example app, which had the most modern looking config and uses ESM, which is best for future proofing your project.
  • Githooks using husky - Run lint via lint-staged on pre-commit, and lint + build + typecheck on pre-push. I also have a custom commit linter script, run on pre-push, that enforces the Conventional Commits standard for commit titles and a related naming convention for branch names. It makes a huge difference to cognitive load during development, because at any time you can run git log --oneline and get a really great summary of what has been happening on the project, one that is super easy to parse at a glance.
  • Bind mounted volume for the node_modules folder in the devcontainer - The devcontainer will automatically bind mount your project files into the container, but you can also create a separate volume mount for your node_modules folder. This is a necessity if your host OS is different to your devcontainer OS, because native modules are compiled per platform. You can easily add this in the mounts section of devcontainer.json. You can also add mounts for things like ssh keys, gitconfig and even your .devcontainer folder, which you can mount readonly. Typically you will want to install your project's modules (i.e. npm install) in your devcontainer's post create command. One gotcha is that if you are installing any global modules via npm -g, you will likely get permission errors, as the global directories are owned by root. What you can do is reconfigure npm to install its global packages into a directory that the non root user can write to (e.g. $HOME/.npm-globals). You can do that using npm config set prefix "$HOME/.npm-globals" in the post create command, before you npm install.
  • Building production service containers from workspaces, multi-stage builds - The big idea here is that each workspace should run in its own service container, with only the files it needs to run. I have a development Containerfile (podman's equivalent of a Dockerfile) for the devcontainer, and production Containerfiles for the backend and frontend service containers. It's a bit tricky to get this right, but the general concept is for each service container (i.e. frontend and backend) to do the build in two stages. In the first stage you copy across all the files you need to build your code to a builder image, and you do whatever is necessary to generate your dist folder. In the second stage you copy the dist folder from the builder to the service container. The backend is a bit more complex than the frontend, because you have to build your code using the shared workspace, and then make sure to package up the shared module into a tgz file, which you can then reference in your package.json. The frontend also needs to be built with the shared workspace, but since the build outputs only static files, there is no node_modules folder to worry about. I use a typical slim node image for the backend, and for the frontend I use the unprivileged nginx image, so you can run it as a non root user. You need to re-configure nginx slightly, so it proxies any API requests to the backend.
  • Easily test at each stage of development - I have npm scripts that make it possible to run each workspace in dev mode (automatic restarts using vite / nodemon), run lint and typecheck, build the development and production images, and build and run the production code. I also have a podman-compose setup, triggerable via npm, that runs both the production service containers, so it's super easy to run things in dev and in prod. Something I haven't done yet, but will at some stage, is to run minikube in podman, which is a local kubernetes cluster, where you can test full blown production deployments all on your local machine.
  • Keep local LLMs in a separate project, but connect them all via a private dev network - Running ollama was one of the reasons I ended up down this path in the first place. Initially I had it in my head that it would be best to have ollama be part of each project, but that really increased the complexity of things. What was much, much easier was to have a completely separate ollama-local project that runs an ollama and an open-webui service container, using the standard images distributed by their respective projects, via podman-compose. That way you can really easily spin up a local AI/LLM cluster with the models and a web chat interface. The trick is to configure them to be on a private network that you create separately, on login to your OS. On a mac that's via launchd. It's just a bash script that checks if the network is there, and creates it if it isn't. Then you configure the devcontainers in your projects to be on that network, and when you start them up, if the local ollama cluster is up, your vscode extensions can connect, but if it isn't, it won't crash the container. Another thing worth doing is to store your models in a persistent location on the host, which you then bind mount into the ollama container. That way you don't have to re-download all the models each time you rebuild the container.
  • Backup and restore for your podman images - Rebuilding the podman vm should not be difficult. At first I was super worried about recreating the podman vm, because when you do that you lose all your images, and since some of these can be very large, it can take ages to re-download them. I created a bash script to save all images to a backup location, and another to restore them in one go.
  • Repeatable way to download and install assets into your containers - This was another useful script to write. It downloads an asset into a persistent location in your home directory, and copies it into your project's assets directory, which you add to gitignore. Have an npm script to initiate all project downloads. It means you can download everything you need while you have an internet connection, but you aren't blocked if you go offline. For example, I download the neovim tgz; later, during the development image build, I copy it into the image along with an install script, run the install, and then delete the leftover files.
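
The self-contained workspaces idea above can be sketched as a root package.json. This is an illustrative sketch, not my actual file; the workspace and script names are assumptions:

```json
{
  "name": "my-app",
  "private": true,
  "workspaces": [
    "packages/shared",
    "packages/backend",
    "packages/frontend"
  ],
  "scripts": {
    "typecheck:backend": "npm run typecheck -w packages/backend",
    "lint:frontend": "npm run lint -w packages/frontend",
    "build": "npm run build --workspaces"
  }
}
```

Because every workspace exposes the same script names (typecheck, lint, build, start), any of them can be driven from the root with -w, or all at once with --workspaces.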
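
The shared-workspace typescript problem above is usually solved with project references. A minimal sketch for a backend workspace, assuming a tsconfig.base.json at the root and a packages/shared workspace (paths are illustrative; tsconfig files accept comments):

```json
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "composite": true,
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist"
  },
  "include": ["src"],
  "references": [{ "path": "../shared" }]
}
```

The referenced shared workspace must also set "composite": true, and you build the whole reference graph with tsc -b rather than plain tsc; that is what finally lets the compiler "see" the shared workspace.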
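
The devcontainer mounts described above look roughly like this in devcontainer.json. Names, paths and the container user are assumptions for illustration:

```json
{
  "name": "my-app-dev",
  "build": { "dockerfile": "../containers/Containerfile.dev" },
  "mounts": [
    "source=my-app-node-modules,target=/workspaces/my-app/node_modules,type=volume",
    "source=${localEnv:HOME}/.ssh,target=/home/node/.ssh,type=bind,readonly",
    "source=${localEnv:HOME}/.gitconfig,target=/home/node/.gitconfig,type=bind,readonly"
  ],
  "postCreateCommand": "npm config set prefix \"$HOME/.npm-globals\" && npm install"
}
```

The named volume keeps node_modules inside the container filesystem (compiled for the container's platform), while the rest of the project stays bind mounted from the host.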
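
A rough sketch of the two-stage frontend build described above, using the unprivileged nginx image. Image tags and paths are assumptions; the real Containerfiles will differ:

```dockerfile
# Stage 1: build the frontend, with the shared workspace available
FROM docker.io/library/node:22-slim AS builder
WORKDIR /app
COPY package*.json ./
COPY packages/shared ./packages/shared
COPY packages/frontend ./packages/frontend
RUN npm ci && npm run build -w packages/frontend

# Stage 2: serve only the static dist folder, as a non root user
FROM docker.io/nginxinc/nginx-unprivileged:stable-alpine
COPY --from=builder /app/packages/frontend/dist /usr/share/nginx/html
# custom config proxies /api requests through to the backend service
COPY containers/nginx.conf /etc/nginx/conf.d/default.conf
```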
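
The separate ollama-local project described above boils down to a small compose file plus a pre-created private network. A hedged sketch, with illustrative service names, ports and paths:

```yaml
# podman-compose file for a standalone local LLM cluster
services:
  ollama:
    image: docker.io/ollama/ollama:latest
    volumes:
      - ${HOME}/ollama-models:/root/.ollama   # persist models across rebuilds
    networks: [dev-net]
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"
    networks: [dev-net]

networks:
  dev-net:
    external: true   # created at login by the launchd-triggered script
```

Marking the network as external is the key bit: the compose file joins it rather than owning it, so project devcontainers can join the same network independently.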
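
The pre-push commit linter described above is straightforward to sketch. This is a hypothetical, simplified checker for Conventional Commits titles, not my actual script (which also checks branch names):

```shell
#!/usr/bin/env bash
# Hypothetical Conventional Commits title checker (simplified sketch).
# Accepts titles like "feat(backend): add health endpoint" or "fix!: handle null".

check_commit_title() {
  echo "$1" | grep -Eq \
    '^(feat|fix|docs|refactor|perf|test|build|ci|chore|revert)(\([a-z0-9_-]+\))?!?: .+'
}

# Example pre-push usage (commit range is illustrative):
# git log --format=%s origin/main..HEAD | while read -r title; do
#   check_commit_title "$title" || { echo "bad commit title: $title"; exit 1; }
# done
```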
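
The image backup and restore scripts mentioned above amount to looping podman save and podman load over your images. A simplified sketch; BACKUP_DIR and the filename scheme are my assumptions here:

```shell
#!/usr/bin/env bash
# Hypothetical backup/restore helpers for podman images (simplified sketch).
BACKUP_DIR="${BACKUP_DIR:-$HOME/podman-image-backups}"

backup_images() {
  mkdir -p "$BACKUP_DIR"
  # list every tagged image, skipping dangling <none> entries
  podman images --format '{{.Repository}}:{{.Tag}}' |
    grep -v '<none>' |
    while read -r img; do
      # turn "docker.io/library/node:22-slim" into a safe filename
      file="$BACKUP_DIR/$(echo "$img" | tr '/:' '__').tar"
      podman save -o "$file" "$img"
    done
}

restore_images() {
  for file in "$BACKUP_DIR"/*.tar; do
    [ -e "$file" ] || continue
    podman load -i "$file"
  done
}
```

Run backup_images before recreating the podman vm, and restore_images afterwards, and you skip re-downloading gigabytes of layers.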
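
The repeatable asset download flow above can be sketched as a tiny cache-then-copy helper. Function and variable names are hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical helper: download an asset once into a persistent host-side
# cache, then copy it into the project's (gitignored) assets directory.

fetch_asset() {
  name="$1"
  url="$2"
  cache="${ASSET_CACHE:-$HOME/.asset-cache}"
  dest="${ASSET_DIR:-./assets}"
  mkdir -p "$cache" "$dest"
  if [ ! -f "$cache/$name" ]; then
    # only hits the network when the cached copy is missing
    curl -fsSL -o "$cache/$name" "$url"
  fi
  cp "$cache/$name" "$dest/$name"
}

# Example usage (URL is illustrative):
# fetch_asset nvim-linux-arm64.tar.gz https://example.com/nvim-linux-arm64.tar.gz
```

An npm script that calls this once per asset gives you the "download everything while online, never blocked offline" behaviour.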

So those are the main things about the new project structure. I'm pretty happy with it, as it feels very lean and efficient. It's only been a few days so far, but it looks good, and now that everything is working and there aren't a billion errors everywhere, containers for development definitely feel like the right approach to these modern web apps, especially when developing with AI/LLMs.

For enquiries about my consulting, development, training and writing services, as well as sponsorship opportunities, contact me directly via email. More details about me here.