Why do the AI/LLM folks hate people that run Macs so much?
2025-10-14 23:57:51 +01:00 by Mark Smith
Since I now have GPU accelerated local LLMs for chat, I thought it might be an interesting thing to see if I could do the same for the other types of media. That's basically image and video generation, as well as audio and music generation. After several hours of web searching and asking Gemini, I'm starting to get the impression that, much like it was with the text chat LLMs, getting any of this working on a Mac is not at all obvious. I guess all the AI folks are either running on Linux or Windows. They certainly are not doing things day to day on Macs, at least not in containers, and as far as I'm concerned that's the only safe way to run AI/LLMs on your computer.
The two big platforms seem to be PyTorch and TensorFlow. Just narrowing it down to those two was quite a journey. There are tons of different software packages, but ultimately it always boils down to running the models, and as far as I can tell, that means PyTorch and TensorFlow. And that's important because these don't support Vulkan out of the box, so you have to compile from source and enable the Vulkan backend, which is not easy to do at all. The reason you need Vulkan support is so the models can access the GPU from inside the container.
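To see the problem concretely, here's a minimal sketch that probes which compute backends a given PyTorch build reports. The expectation (my assumption, based on the above): on the macOS host you'd see MPS listed, but inside a Linux container you'd get only CPU unless the build was compiled with the Vulkan backend enabled.

```python
# Sketch: probe which GPU backends this PyTorch build can actually use.
# The guarded import and getattr checks are just defensive, since older
# builds may lack some of these attributes.
try:
    import torch
except ImportError:
    torch = None

def available_backends():
    """Return the list of compute backends this PyTorch build reports."""
    if torch is None:
        return ["torch not installed"]
    backends = ["cpu"]  # CPU is always available
    if torch.cuda.is_available():
        backends.append("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        backends.append("mps")  # Apple Metal, host macOS only
    vulkan_check = getattr(torch, "is_vulkan_available", None)
    if vulkan_check is not None and vulkan_check():
        backends.append("vulkan")  # needs a from-source build with Vulkan enabled
    return backends

print(available_backends())
```

On a stock `pip install torch` inside a container, that last `vulkan` entry won't appear, which is exactly the uphill battle described above.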
It really feels like another massive uphill battle. Not being funny, but why do the AI/LLM folks hate people that run Macs so much? #