2025/08/24 #

Iced coffee

I was very happy when I started out writing this post, having a morning iced coffee after a nice breakfast. The ordeal I had to go through to publish this little post has left me not quite as happy.

Such is life sometimes. #

Miljan Braticevic [38:46]: “When you create an account on X let's say, you are given this account by this corporation, this is something that they give to you, and it's something that they can take away from you at any point. In contrast, when you are on an open protocol like Nostr, you claim your key pair. You take this massive number out of the universe, and that's your private key. So you have claimed some territory in this massively large namespace of numbers and then you've claimed something and this is yours and then you defend it. What a stark contrast between these two, and then everything is downstream from there.”

I thought this was a great explanation of one of the central paradigms of key-pair-based systems. Somewhat surprisingly, because I really am all for independence and self-reliance and the rest of it, I found that the new knowledge it unlocked was accompanied by a roughly equal amount of unease. Like somehow there might be a side to this that is not being covered. It might be a bad idea to push to get rid of the centralised systems so completely, at least for a little while. I guess that has been my opinion for a while now. It seems to me like it might be one of those hills that isn't worth dying on, like an asymptote stretching off into infinity that you can never quite reach.

I'm making a mental note that it might be worth revisiting the tractability vertigo of this whole thing some time again in the future. Analysis paralysis is a real so and so. #

Some recent project milestones

I have a short list of blog posts I want to write covering some of the key things I have learnt getting my API & React frontend project to a pretty great place as far as code quality goes. It's been quite an interesting few weeks development-wise. I'm way more impressed with both TypeScript and React than I expected to be, and though it's been kind of a roller-coaster ride working with Gemini, I'm still impressed with that too. I'll hopefully get to those posts over the next few weeks, but in the meantime, I thought I would do a quick review of the major project milestones from the past few weeks.

Without further ado, and roughly in the order they occurred, here they are:

  1. Zod-based DTOs - Otherwise known as Data Transfer Objects, these are TypeScript type definitions for all data that gets passed back and forth between the frontend and the backend. The cool thing about using TypeScript on both the frontend and the backend is that you can set up a shared workspace and put them in there. The backend and frontend still have their own type definitions, but in addition they can use the shared ones. The key thing is to define the data shape using Zod schemas, then create the types by inference from them (`z.infer`). That way there is a single source of truth for the shape of all data across the frontend and backend, and as a bonus you can use the schemas to do data validation on the backend in a single reusable middleware on all routes. It standardizes all "in-flight" data.
  2. Shared public entities - Another types-related thing that works really well with DTOs. These are types for all "at-rest" data that contains fields you don't want exposed externally, like passwords and private keys. You create the public version of the entity in the shared workspace, then create the internal version on the backend by extending the public entity with the sensitive fields. It ensures the sensitive data never leaves the secure backend location.
  3. Linting & typechecking - This was huge, especially when working with Gemini, because it has such a tendency to drift and is constantly leading you down confusing and wrong paths. Having a way to automate the checks means you can mostly eliminate the possibility that style issues you don't like and TypeScript errors ever make it into the codebase. I set up pre-commit hooks for linting and pre-push hooks for linting plus typechecking. I also set up some custom bash-script-based pre-push hooks to enforce a specific branch name format and commit message format. It really makes reading the commit history so much easier. As for the linting and typechecking config, it's very important to get it in place, test that it's working, and then NEVER change it. Gemini will constantly attempt to change the config instead of fixing the errors, and get you into never-ending config-change loops.
  4. Secure devcontainers setup - I spent quite a bit of time setting up two different devcontainers: one has read access to the repo but no write access, the other can read and write. The idea is that I can run Gemini in agent mode in the restricted setup and verify the code myself before switching into the other devcontainer to check in the code. As with the linting and typechecking config, it's a good idea not to change this from inside the container. You can put the config on a read-only mount, so you can always be sure nothing important gets changed without you being aware of it. The trick is to clone the repo over HTTPS for the read-only setup, with a launcher script that passes a fine-grained personal access token (PAT), while the read + write setup uses SSH keys mounted read-only into the devcontainer. You will also need to run some post-connect commands to switch correctly between the setups.
  5. Security providers and contexts - Use these React idioms to set up authenticated parts of the site, making the authenticated user available wherever it's needed. I really like this feature of React; once you figure out how it works, it's a really elegant way to make sure certain hooks are only ever used inside parts of the DOM that are wrapped in a specific component. So you always know for sure that, for example, the authenticatedUser is available inside the part of the site that is wrapped in a security context. Spend time understanding how the pattern works and name things well, because Gemini will often suggest ambiguous names for things.
  6. Backlog.md task-based workflow - When working with Gemini this is essential IMHO. You need some way of keeping track of work, because you will constantly get cut off, and each time you will need to get Gemini back up to speed. It pretty much mirrors how humans should do it, specification-driven development, but it's much easier because the AI can write much of the spec plans and acceptance criteria. Of course you will need to review them and make sure they're not full of weirdness, which happens quite a lot. The other thing that happens a lot is that Gemini suddenly starts inserting backticks into documents, a sort of sabotage because it breaks the chat window, and then there is no way to accept the changes. It's usually a sign that Gemini is confused about something. You can often get around it by asking it to one-shot a new version of the doc with _NEW appended to the filename, and do a diff or copy-paste manually yourself.
  7. React Router, queries and loaders - Another really great feature that lets you pre-fetch all the data you need on page load, and when combined with TanStack Query you get caching thrown in for free. The caching might actually be a TanStack feature, I can't quite remember. In any case it's cool and makes apps feel really fast. It might have been easier to start with these from the outset, depending on your React / JavaScript fluency. I got it all working without them, only for Gemini to suddenly tell me about them after initially forgetting to. The conversion took quite a while.
  8. Me endpoints for added security - I realised that the initial architecture was leaking user information via the URLs used to access the API; it was trivial to identify admin users just by the URLs they were loading. This was a pretty great solution, suggested by Gemini; apparently it's very standard these days, but I hadn't been exposed to the pattern. Basically you have dedicated /me routes (i.e. /me, /me/blah etc.) that only ever load the data of the logged-in user. That's hardcoded into the route, so privilege escalation is impossible, and since all users use these routes regardless of their role, it becomes much, much harder to identify, say, admin users based just on the URLs they request during normal usage.
  9. Weekly sprint planning process - The backlog task setup has been crucial, but it became a bit unwieldy precisely because it was so good. I set up a simple weekly process based on the Agile idea of weekly or bi-weekly sprints. I do it weekly, with a standard document that gets generated during a planning session with Gemini where we identify which tasks I will attempt to complete in the upcoming week. This works really well with the already-in-place process of retrospectives for documenting major decisions. Gemini is always reminding me of very important things we have discussed previously while working on features. It really helps with general AI guidance.
  10. End-to-end data integrity pattern - Probably the most important, and really a sort of supply-chain concept: a series of things you put in place to guarantee that the data sent by the frontend application layer is pretty much always exactly the same data received by the backend service layer. There are all sorts of little crevices and weird unexpected places where things can get screwed up as you pass data through all the frontend and then backend layers, but I've found an approach that is very effective. It relies heavily on shared types.
  11. API response refactor - This was a follow-up bit of refactoring to the previous end-to-end pattern. You have got to remember to standardise the responses. Again using some clever shared types, you can ensure all the responses have a standard response envelope, covering single-object responses, list responses, and failure and error responses. Gemini was super useful for doing a quick survey of the response envelopes of the top 10 API platforms. You can then craft one that works best for your purposes. With enough thought you can design something flexible enough to accommodate more complex features like paging, even if you don't implement that initially.
  12. User details pages - For admin views, you need to be able to search for a user and display their resources, just in case something goes wrong. Only give admins fine-grained modifying capability once an activity log is in place, so you can see what actually happened. The search feature was yet another place where React really shines. We had a ResourceList component used across many pages to list the various resources. One of the props it accepts lets you pass in an HTML snippet that gets inserted into the top right of the table. So you just pass in an input box and wire it up really easily, so that the rows displayed in the table filter as you type.
  13. Make OAuth endpoints RFC 6749 compliant - When I overhauled all the response types, the OAuth 2.0 flow stopped working, because OAuth has some very specific requirements as to the shape of the data, and the new response envelope was causing issues. Just remember you might have some endpoints with different envelope requirements, depending on what you are doing.
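To make the Zod DTO idea in item 1 concrete, here's a minimal sketch. The schema and DTO names are hypothetical, not from my actual project, but it shows the single-source-of-truth pattern: one Zod schema in the shared workspace, with the TypeScript type inferred from it.

```typescript
import { z } from "zod";

// Shared workspace: one schema is the single source of truth
// for this piece of in-flight data.
export const CreatePostDtoSchema = z.object({
  title: z.string().min(1),
  body: z.string(),
  tags: z.array(z.string()).default([]),
});

// The TypeScript type is inferred from the schema, so the type
// and the runtime validation can never drift apart.
export type CreatePostDto = z.infer<typeof CreatePostDtoSchema>;

// Backend: the same schema doubles as runtime validation,
// e.g. inside a reusable middleware on every route.
const result = CreatePostDtoSchema.safeParse({
  title: "Hello",
  body: "world",
});
```

The frontend imports `CreatePostDto` for compile-time checking; the backend imports `CreatePostDtoSchema` and calls `safeParse` on every incoming body.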
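The shared public entities from item 2 might look something like this. The `User` fields here are made up for illustration; the point is that the internal type extends the public one, and the mapping function picks fields explicitly rather than spreading, so sensitive data can't leak by accident.

```typescript
// Shared workspace: the public shape, safe to send over the wire.
export interface PublicUser {
  id: string;
  email: string;
  displayName: string;
}

// Backend only: extends the public shape with the sensitive fields.
export interface InternalUser extends PublicUser {
  passwordHash: string;
  signingPrivateKey: string;
}

// Pick the public fields explicitly; never spread the internal object,
// or a newly added sensitive field would silently leak.
export function toPublicUser(u: InternalUser): PublicUser {
  return { id: u.id, email: u.email, displayName: u.displayName };
}
```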
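The /me pattern from item 8 boils down to this: the user id comes from the session, never from the URL. This framework-free sketch (the request and user shapes are hypothetical stand-ins) shows why privilege escalation is impossible, since there is no path parameter to tamper with.

```typescript
// Hypothetical shapes standing in for the real framework types.
interface AuthenticatedRequest {
  userId: string; // populated by auth middleware from the session
}
interface User {
  id: string;
  role: "admin" | "user";
  email: string;
}

const users = new Map<string, User>([
  ["1", { id: "1", role: "admin", email: "admin@example.com" }],
  ["2", { id: "2", role: "user", email: "user@example.com" }],
]);

// GET /me — the id is taken from the authenticated session, never
// from the URL, so every user hits the exact same route regardless
// of role, and you can only ever load your own record.
function handleMe(req: AuthenticatedRequest): User | undefined {
  return users.get(req.userId);
}
```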
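A response envelope like the one in item 11 can be sketched as a single discriminated union plus two small helpers. This is an illustrative design, not my exact shape, but it covers single objects, lists with metadata, and errors, and TypeScript narrows the type for you at each call site.

```typescript
// Hypothetical shared envelope type: one shape for every API response.
interface ListMeta {
  total: number;
  page?: number; // room for paging later, even if unimplemented now
}

type ApiResponse<T> =
  | { ok: true; data: T; meta?: ListMeta }
  | { ok: false; error: { code: string; message: string } };

function ok<T>(data: T, meta?: ListMeta): ApiResponse<T> {
  return { ok: true, data, meta };
}

function fail<T = never>(code: string, message: string): ApiResponse<T> {
  return { ok: false, error: { code, message } };
}

const listResponse = ok([1, 2, 3], { total: 3 });
const errorResponse = fail("NOT_FOUND", "no such resource");
```

Because `ok` is the discriminant, checking `response.ok` once is enough for the compiler to know whether `data` or `error` is present.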
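On item 13: RFC 6749 §5.1 mandates exact top-level field names for the token response, which is why wrapping it in a custom envelope breaks clients. A sketch of the compliant shape (the helper function is mine, but the field names come straight from the spec):

```typescript
// Token response fields per RFC 6749 §5.1 — these exact snake_case
// names are required at the top level, so this endpoint must return
// them bare, not wrapped in the app's standard response envelope.
interface OAuthTokenResponse {
  access_token: string;
  token_type: string; // e.g. "Bearer"
  expires_in?: number; // lifetime in seconds
  refresh_token?: string;
  scope?: string;
}

// Hypothetical helper used only by the OAuth routes.
function tokenResponse(accessToken: string, expiresIn: number): OAuthTokenResponse {
  return {
    access_token: accessToken,
    token_type: "Bearer",
    expires_in: expiresIn,
  };
}
```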

So that about covers the main things from the past 2-3 weeks. It's been very productive, although very tiring, and at times quite confusing. Working with an AI agent is awesome until it inevitably isn't. But it's possible to put in place some best practices, tools, and development methodologies to get the codebase back into great shape. #

For enquiries about my consulting, development, training and writing services, as well as sponsorship opportunities, contact me directly via email. More details about me here.