I wrote yesterday that I've got AI fatigue. That's still the case. I scrolled past most tech stories this morning as I perused the latest content online. It's all stories about another big corp adding GPT-style chatbots to all their old products. Why are people so loopy about this? Folks are doing Mexican waves for what amounts to Clippy 3.0 on an old version of Excel. Woopdidoo.
The text-based tech news is flooded with boring AI tedium, but I think there's actually still quite a lot of interesting AI-related content on podcasts. That material tends to be much more futurist and philosophical: people airing their AI-related worries as they discuss other, unrelated things, and how they see this entire path we're heading down ending. Now that really is profoundly interesting, even if it is more often than not dark and dystopian.
And perhaps that's why it's so interesting. Maybe humans are genetically programmed to worry about the end times, because one day maybe we really will all be enslaved by evil sentient paperclips. That would be kind of bad, wouldn't it?
For example, here's Gavin Wood on a recent episode of Raoul Pal's Journey Man podcast:
My feelings on AI are pretty pessimistic I’m afraid. I see blockchain as a potential mitigating factor that humanity might have against AI, but ultimately I think AI is an incredibly centralising force. Everything I understand about AI tells me it’s basically the anti-web3.
It’s totally based on a single economic actor having enough GPUs and enough data that they can out-think all of their strategic competitors. It’s fundamentally not something that is shareable, that is consensus-based.
And from a bit later in the discussion:
The trajectory they are on is for the AI to intermediate all human communication.
AI brings us into a new form of communication: totally intermediated by a machine in a non-linear way. This is extremely scary. We haven’t been on this path before.
It’s almost like we have a single centralised translator sitting in between all our inter-human conversations. And we don’t know what interests or biases they have. We don’t know how Microsoft have lobotomised their model or training data, or how they have re-trained it when it said the wrong thing.
[...]
We are just the medium, we are just the substrate on which the AI lives.
I'm totally worried about this sort of shit too!
It is in some ways completely insane to be projecting out to what amounts to infinity. It's hyper-speculative. It's like our actual real lives are about to be turned into a futuristic Black Mirror-themed sci-fi show on BBC2.
On the other hand, maybe it isn't so insane. Case in point: today's Politico Tech has a very interesting piece about Sam Altman's plan to ensure an egalitarian, utopian AI future: universal basic compute. The idea is that AI tech is going to be so important that it will eventually replace money as a currency. Interestingly, they are so serious about this stuff that they even mention it in their company incorporation documents: that they can't be sure what role money will play in a world where we have developed artificial general intelligence (AGI).
Bitcoiners everywhere are going to go into a sulk when they realise AI is about to eat them, while they are busy eating the old financial system. And so it goes.
I'm actually quite happy Altman is thinking this far ahead. The reality is that this bifurcation of society into haves and have-nots is a real possibility, and it could very well be irreversible. Maybe we just need more utopian visions to counter the dystopian ones?
Personally, I'm pretty into utopian visions too. If only we could get the utopians and dystopians to work together effectively, maybe we'd all be okay.