2025/06/16 #

I just created an OAuth 2.0 API server and another example server that consumes the API. I was able to get it all working in a couple of hours with the help of Claude and Gemini. It wasn't totally straightforward, but the tools definitely sped things up overall. #

Coding an OAuth 2.0 server with Claude and Gemini

This morning I have been working with both Claude and Gemini to create an example API server that implements OAuth 2.0. I've read a lot about OAuth over the years, used it quite often as a user authorising various webapps, but I had never coded an app that used it.

It's basically the typical way website integrations are delivered these days. Let's say you have a popular website and you want to give your users a way to access their data from other websites; that's when you would add OAuth to your API.

Once your API is available via OAuth, other websites have a way to send their users to you, so those users can authorise the other website to programmatically interact with your website on their behalf via the API. First, the developers of these other websites generate a client ID and client secret on your website, which they store in their app. They then have to implement a few things in their website to enable the OAuth authorisation flow, which is the part where the user gets sent to you to authorise and you then send them back. There's a rough sketch of that flow below.
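
Here's a rough, hypothetical sketch of what the integrating website implements, in Python: building the URL that sends the user off to authorise, then exchanging the returned code for an access token. The URLs, client credentials and scope are placeholders, not taken from my demo project.

```python
# Hypothetical sketch of the authorisation-code flow from the integrating
# website's point of view. All URLs and credentials below are placeholders.
import secrets
import urllib.parse

import requests

AUTH_URL = "https://provider.example.com/oauth/authorize"
TOKEN_URL = "https://provider.example.com/oauth/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://myapp.example.com/oauth/callback"


def build_authorisation_url() -> tuple[str, str]:
    """Step 1: send the user to the provider to grant access."""
    state = secrets.token_urlsafe(16)  # protects the callback against CSRF
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "read",
        "state": state,
    }
    return f"{AUTH_URL}?{urllib.parse.urlencode(params)}", state


def exchange_code_for_token(code: str) -> dict:
    """Step 2: the provider redirects back with ?code=...; swap it for a token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()  # typically access_token, token_type, expires_in, ...
```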

That's basically what happens when you connect Twitter or GitHub to a web app you are using. Typically you get taken to the website that needs your authorisation, i.e. Twitter or GitHub, where you are told what access you are granting, and after you authorise the app you can use the integration in the original web app.

I found it really useful to set an OAuth server up, because there are so many moving parts that it becomes difficult to reason about. I'm always getting OAuth confused with regular API access, like when you want to write a script that consumes data from a website you use. In that case you just need the API credentials, with no need to go through the whole OAuth dance, because as the script writer you are in effect the equivalent of the website developer. Your script doesn't have users that need to authorise access. OAuth is really for integrations that are done via a website. For integrations that are command line tools talking to an API, you typically just copy and paste credentials rather than authorise via a website; there's a small example of that style below. Based on how difficult this blog post has been to write, I clearly still find it difficult to disambiguate between these different types of API use.
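
For contrast, here's what that plain key-based access looks like in a personal script; the endpoint and header are made up for illustration, since there's no third-party user who needs to grant anything.

```python
# Plain API-key access: no authorisation flow, just paste your own credentials.
# The endpoint and key below are placeholders.
import requests

API_KEY = "paste-your-key-here"

resp = requests.get(
    "https://api.example.com/v1/my-data",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
print(resp.json())
```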

Having said that, and after further research, it turns out you can actually use OAuth in CLI apps too. There's a neat authorisation flow where your CLI app sends the user to a third-party website to authorise API access, and they then get redirected back to the CLI app via a local web server that the CLI app runs temporarily in the background; there's a sketch of that trick below. There are a few other ways to authenticate against APIs as well, and it does get a bit involved. The point is that when you are trying to move an API from a hobby project to something production ready, it's complicated and not the easiest code to write well.
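
Here's a rough sketch of that loopback trick in Python; the provider URL, client ID and port are made up for illustration, and a real tool would also verify state and handle errors.

```python
# Rough sketch of the loopback flow for a CLI tool: open the provider's
# authorise page in a browser, then catch the redirect on a temporary
# local web server. Endpoints and client details are placeholders.
import http.server
import urllib.parse
import webbrowser

REDIRECT_URI = "http://127.0.0.1:8400/callback"
AUTH_URL = (
    "https://provider.example.com/oauth/authorize"
    "?response_type=code&client_id=my-cli-client"
    "&redirect_uri=" + urllib.parse.quote(REDIRECT_URI, safe="")
)


class CallbackHandler(http.server.BaseHTTPRequestHandler):
    code = None

    def do_GET(self):
        # The provider redirects here with ?code=... once the user approves.
        query = urllib.parse.urlparse(self.path).query
        CallbackHandler.code = urllib.parse.parse_qs(query).get("code", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab and return to the terminal.")


webbrowser.open(AUTH_URL)
with http.server.HTTPServer(("127.0.0.1", 8400), CallbackHandler) as server:
    server.handle_request()  # block for the single redirect, then shut down

print("Authorisation code:", CallbackHandler.code)
# Next step: exchange CallbackHandler.code for an access token as usual.
```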

With all this in mind, I was quite impressed by how quickly I was able to get this little OAuth API demo project working with the help of the GPTs, but it wasn't without issues. I had to bail on Claude at one point and get Gemini to help me finish what we had done, because Claude kept running out of output size and being cut off, so I was only getting half-written scripts. And since it doesn't remember conversations, I ended up having to paste the half-written script back into the chat, and it would go off again, run out of output, and give me another half-written script with even more errors added. Gemini was able to take what Claude did, and wasn't running out of output.

However, it was doing all sorts of other strange things that you wouldn't expect from someone who knows how to code. It's like working with someone who is very clever but has very poor eyesight, because it's constantly not seeing, or ignoring, things that already exist and coding up its own version of them, so you end up with duplicate or nearly identical objects in your code, or you find yourself going down a totally bogus avenue and wasting a whole load of time. It's very annoying, but it could also leave you in a bad situation if you aren't paying attention.

It does make me wonder whether this will be a strange emergent behaviour, where developers with fewer resources build their app knowing that their users will paste the not-quite-completed results into another, bigger and better-funded GPT. In a strange way, since I'm having to finish off the work for the GPTs each time, I am in fact the bigger GPT, but I'm also using these GPTs that don't quite finish the job. There's some weird dilemma / dysfunctional human dynamic somewhere in all this, the big fish feeding off the small fish, yet never giving attribution, and always complaining, something like that, which I really would rather not have to think about right now. Feel free to link to any of my posts. I at least do link to the little fish, and a lot of the big ones, when I can.

IMO, you would have no chance whatsoever of getting a complex project working using GPTs if you weren't able to code. At least not with THESE GPTs.

Some of the chats from today's exploration: Claude try 1, Claude try 2, Claude try 3, Claude try 4, Claude try 5, Gemini fixes things... eventually

I guess in summary, be careful using these GPTs, don't get led down a ruinous path, and if you are trying to understand OAuth 2.0, try checking out these two repos. I can't guarantee that there aren't any bugs, but I did get both apps running and working together. You might want to read up a bit on the OAuth flow. I just did a quick Google search and found this article. There are probably better ones.

The final code:

I wonder if physicists that spend all their time studying the very small quantum world get on well with those that study the very very big, like solar systems and galaxies. You would think that those who study the very small things would somehow have internalised at an unconscious level that they are absolutely enormous, like giants that can get away with anything, and that those who study very large things are minuscule, insignificant and on some level not worthy. One would think that it would lead to a lot of tension, miscommunication and misunderstandings. I wonder if the universe has to somehow balance it all out. But of course at infinity things get very strange, and distance turns into time, and time into distance, and everything gets rather unbearable. On the other hand perhaps any publicity is good publicity? I'm not super bothered, just wish it was a little (or a lot) easier to pay the bloody bills. #

How much should you share online?

Following on from this morning's rapid API development project, I've spent most of this afternoon and this evening working on a much more ambitious minimal API app. I am really very impressed by the GPTs. API authentication and authorisation is a very complex topic, but Gemini knows a lot about it, and the chat interface is a great way to work on specifications. We were able to outline a very robust system that covers all the major ways you might present an API, using the latest technologies and security best practices.

It's the sort of work that would typically take you several days of reading odds and ends everywhere before eventually, after much graft, managing to put something together. With Gemini I was able to do it bit by bit, having side conversations on every topic that needed clarifying, right then and there, and Gemini would update the working specification document. The next thing will be to try and get it to build the app. You definitely need to know a lot for this to be useful. I'm not sure I would have made so much progress today had I not already spent years doing requirements gathering as a solutions architect, and built my own SaaS. I already have a lot of experience implementing these types of systems.

One of the things it brings up is the question of how much of this type of work you should share online.

I have already shared quite a bit in writing today's blog post, and over the years I have generally been quite generous with my contributions; it's something I have always been a proponent of.

But the amount of forward motion you could give to somebody using these tools is so much greater than before that you have to wonder at what point it might become self-defeating. With AI and the right way to prompt the GPTs, it's not an exaggeration to say that somebody with hardly any knowledge could theoretically accomplish, in just a few hours, what somebody else spent an entire lifetime learning. And even then the person who shared would probably still be called selfish by those who try to convince you that the things you do have no value.

Anyway, it's a strange new world. Hopefully we will find humane ways to make it worthwhile for people to share, because that's the only way we will continue to grow as a species. #

For enquiries about my consulting, development, training and writing services, as well as sponsorship opportunities, contact me directly via email. More details about me here.