When will AI start to affect macro?
The world of AI is moving at a tremendous pace right now, and it seems to be accelerating. ChatGPT kicked it all off with its chat-based interface, and we were all blown away by what it could do.
Shortly after that, people started to build multi-step versions of these applications that can use external tools to retrieve and manipulate data. Some of the first use cases for these AutoGPTs are coding assistants, but they can be applied to almost any task, and they are orders of magnitude more powerful than their single-turn predecessors.
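To make the multi-step idea concrete, here is a minimal sketch of that kind of agent loop in TypeScript. Everything in it is a toy assumption: `callLLM` is a scripted stand-in for a real chat-completion API, and the tool registry is invented for illustration, not taken from any actual framework.

```typescript
// Minimal sketch of a multi-step "AutoGPT"-style agent loop.
// callLLM and the tools are toy stand-ins, not a real framework.

type Message = { role: "system" | "user" | "assistant"; content: string };

// Stand-in LLM: scripts two turns so the example runs with no external service.
function callLLM(messages: Message[]): string {
  const assistantTurns = messages.filter((m) => m.role === "assistant").length;
  return assistantTurns === 0 ? "TOOL:calculate:6*7" : "DONE: the answer is 42";
}

// A tiny tool registry: the model names a tool, the loop runs it.
const tools: Record<string, (input: string) => string> = {
  search: (query) => `(results for "${query}" would go here)`,
  calculate: (expr) => String(Function(`"use strict"; return (${expr});`)()),
};

function runAgent(goal: string, maxSteps = 5): string {
  const messages: Message[] = [
    {
      role: "system",
      content:
        "Plan step by step. Reply 'TOOL:<name>:<input>' to use a tool, or 'DONE:<answer>'.",
    },
    { role: "user", content: goal },
  ];
  for (let step = 0; step < maxSteps; step++) {
    const reply = callLLM(messages);
    messages.push({ role: "assistant", content: reply });
    if (reply.startsWith("DONE:")) return reply.slice("DONE:".length).trim();
    if (reply.startsWith("TOOL:")) {
      const rest = reply.slice("TOOL:".length);
      const sep = rest.indexOf(":");
      const name = rest.slice(0, sep);
      const input = rest.slice(sep + 1);
      const tool = tools[name] ?? (() => "unknown tool");
      // Feed the tool's output back in, so the next step can build on it.
      messages.push({ role: "user", content: `Tool result: ${tool(input)}` });
    }
  }
  return "step limit reached";
}

console.log(runAgent("What is 6 times 7?"));
```

The shape is the important part: the model plans, picks a tool, sees the result, and iterates until it declares itself done. That feedback loop is what separates these agents from a single chat turn.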
There are already comprehensive new JavaScript libraries with which you can build far more than chat apps. Entire advertising campaigns that previously would have taken months to put together can now be created in minutes. And there's a lot more besides.
And the money is flowing in that direction too. There have been reports that major venture funds are repositioning away from crypto towards so-called frontier tech. In fact, blockchains and AI are a very good match: an agent can autonomously spin up wallets and perform very complex chains of activity.
There's also the recent announcement that Elon Musk's company Neuralink is going to start clinical trials of human-computer interaction via brain implants. It might be a good time to seriously consider our right to mental privacy in the age of brain-sensing tech, because when you combine some of these technologies, say with something like edge computing, the possibilities for creating unpleasant adversarial environments are very real. Nice income streams you have there, knowledge worker; it would be a shame if you were suddenly unable to function. The future is here, and it's going to be crazy weird.
With all that in mind, it's also important to step back and ask how this will affect the wider world. Something that worries me is escalation. We know dynamics in the economy already escalate; to a large extent it has always been this way, and you can see it in the data. But with AI, occurrences of escalation dynamics are going to increase significantly. These aren't always bad per se, but I believe they can be, and things could get a lot worse.
We should start collecting key metrics now, so we have a record of how things were before AI: what the world looked like, what it felt like to live in. We need to be able to see if and how things are being affected at a macro level. Wouldn't that be prudent?
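As a rough sketch of what that could look like, here is a toy baseline record in TypeScript. The metric names, values, and sources are hypothetical placeholders invented for illustration; the point is the shape of the record: what was measured, in what unit, from what source, and when.

```typescript
// Toy sketch of a pre-AI baseline record for macro indicators.
// All metric names, values, and sources below are hypothetical placeholders.

import { writeFileSync } from "node:fs";

interface BaselineObservation {
  metric: string;     // e.g. "median_job_posting_duration" (hypothetical)
  value: number;
  unit: string;
  source: string;     // provenance, so the number can be audited later
  observedOn: string; // ISO date of the observation
}

const today = new Date().toISOString().slice(0, 10);

const observations: BaselineObservation[] = [
  { metric: "median_job_posting_duration", value: 28, unit: "days",
    source: "hypothetical-labour-stats", observedOn: today },
  { metric: "new_business_registrations", value: 1000, unit: "per_month",
    source: "hypothetical-registry", observedOn: today },
];

// Append as JSON Lines so later, post-AI snapshots can be diffed against it.
const lines = observations.map((o) => JSON.stringify(o)).join("\n") + "\n";
writeFileSync("pre_ai_baseline.jsonl", lines, { flag: "a" });
```

Simple, append-only records like this are cheap to keep for decades and easy to compare against later snapshots, which is exactly what a macro-level before-and-after view needs.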
We've also got quantum computing on the horizon: IBM plans to have a 100,000-qubit machine operational within 10 years. What will happen when we add that to the mix? Surely it's going to be another step change.
Big leaps are also being made in nanotechnology, where tiny robots can be sent into the body to deliver medicine in an incredibly targeted way. All very amazing, but the fact is they could also be used to deliver harm. We will need to be able to detect things at that level too. Vulnerable populations like the homeless are already targeted with foreign material placed in their food. It could get a lot worse, and when it does, it could happen at scale and very quickly.
Let's boldly go into the future and explore the possibilities of these new technologies, but let's avoid a multi-storey car park collapse of humanity's cultures. Let's make sure some of the worst outcomes are avoided with a bit of structure in important areas. Remember, ultimately we are all in the same Earth-shaped boat.
What type of data would we need to collect to get some visibility without destroying the very culture and creativity we are trying to protect in the first place?