I've been reading about running LLMs locally for the past few days. It's something I looked into briefly before, but it always seemed too complicated. I've noticed that since OpenAI released their open-source models, people on various podcasts have been talking about this more, so I've been checking out projects and whatnot. It's starting to seem more achievable, partly because the tech has progressed, but also because my understanding of the space has evolved.
It's tough not to get sucked into rabbit holes on some of this stuff. I'm trying to spend a bit of time reading about it in the mornings, but then I have to put my half-baked research aside and get on with my current project. Web development is strange in that you have to constantly take small bites at things, and eventually what wasn't possible becomes possible. You have to do both, and then find time to write about it too.