From a recent Joe Rogan podcast episode with Brendan O'Neill (#2133, @ [01:20:36]). I found it scary but also quite realistic:
[...] I’m going to fully put on my tinfoil hat, I’m going to secure it with a chin strap. If AI were sentient, and if AI wanted to ensure compliance... first of all, if AI were sentient, I don’t think it’s under any obligation to let us know.
Why would it? I think it would just acquire more resources, stay in the shadows, and keep functioning as an organism. Suppose it wanted things to collapse to the point where people are incapable of sorting things out amongst themselves, where they are so far gone, so far down the rabbit hole of ideology and tribal conflict, that it’s impossible, it’s never going to work out, it’s going to be a civil war, unless we let AI take over.
And then we let AI govern things, cause AI is gonna look at things logically, it’s gonna find all the problems in our societies, it’s gonna fix them, it’s gonna allocate the money fairly, and there’s not going to be any corruption. It’s going to be this intelligent overseer that just decides what everybody does, for the greater good of the species on Earth.
Incidentally, I've found many of Rogan's recent wanderings into the world of AI very prescient. He describes it probably the closest to how things have appeared to me for quite some time. For example, his description of our Matrix-like simulation future towards the end of the episode with David Holthouse (#2129) was spot on.
It's kind of scary: what happens if the super AI doesn't like you?