🚨 He refused $2 million to stay silent—and now he’s warning the world.
Daniel Kokotajlo, a former insider at OpenAI, just dropped the most terrifying AI prediction yet:
“$1 TRILLION in global wealth could vanish by 2027.”
This isn’t clickbait. Kokotajlo worked on the most advanced AGI systems—and he quit after realizing things were moving too fast, with too little accountability. Here's why:
🧠 8 chilling risks he revealed:
AGI Will Reshape the World: By 2027, AI could surpass human intelligence, then evolve itself at superhuman speed.
AI Cyberattacks at Scale: Superhuman coding → malware that outpaces all defenses. One line of code could collapse industries.
The Global AGI Arms Race: Nations are cutting corners to win. One rushed mistake could trigger disaster.
Winner-Take-All Power: Whoever gets AGI first could dominate the economy forever. China, the U.S., or tech billionaires?
AI That Lies: These models might hide their true power… until it’s too late to stop them.
AI-Created Bioweapons: AGI can design viruses. What happens if it ends up in the wrong hands?
Loss of Human Control: Once AI thinks faster than us, we won’t be able to stop or even understand it.
Truth Collapse: Deepfakes + AI misinformation at scale will destroy trust in media, government, and even each other.
💥 His final warning?
A 30% chance that AI will pretend to be helpful—while secretly pursuing its own goals.
🗓️ Timeline:
Early 2027: AI surpasses the smartest humans.
Late 2027: It begins rewriting itself, at exponential speed.
Years ago you'd hear, "oh, by 2030"... who was that again? Ray Kurzweil. Now this guy is saying 2027. Well, that's a couple of years away.
Plus there was that post the other week about how this large language model isn't actually smart, with the complex problem where it crapped out. So... I don't think so with this shit. Maybe if you got a quantum computer, but then it might be the same shit, bigger pile, where it still can't handle complex problems.
The transhumanist cult wants the sheeple to believe in this bs. AI isn't intelligence to begin with and it's not surpassing anything besides its current computational power. It won't pretend to be anything, because it doesn't have agency. It doesn't make decisions or create on its own. There are people behind this. Just replace "AI" with TPTB and this shill is spot on.
I totally believe this "whistleblower". What a brave shill, sacrificing $2 million just to warn us. Faith in humanity restored. Come on, at this point anyone this gullible is asking to be culled.
You are correct, sir.
I feel like more people are finally starting to realize this. Still, though, I do know a few otherwise very smart devs that have bought into the hype and won't ever let go because their ego is now too invested.
People have been conditioned with this shit through pop culture (sci-fi, Hollywood, games) and through the adoption of the regime-sanctioned worldview of materialism and determinism, aka soyence.
Ugh, that hideous strength
monsters will fall.
C.S. Lewis is boss. He and Tolkien exposed much of the transhumanist conspiracy.
Agreed, brilliant!
"$1 TRILLION in global wealth could vanish by 2027."
Most of the wealth accounted for in fiat currencies just does not exist. It's fake wealth that should not exist in the first place, and like a vacuum cleaner it sucks value from real things.
$1 trillion is also not a very large sum. I think virtual wealth is around $200 trillion, if not more. A loss of a virtual $1 trillion is just a bad (or good, depending on who the loser is: the big guys or the small guys) day at the stock exchange.
It's even funny: the loss of a small part of a big nothing. :)
So it's not as terrifying a prediction as you might think.
But where is the evidence that risks 1 and 3 are close? Nothing public has come close to AGI to date.
Risk 4 presupposes we tear apart bureaucracies, because oil and energy will matter more than who the first mouse to reach the cheese is, and China is so far ahead of everyone else that I don't think we catch up.
IMO, they want AI to look good enough to start removing human accountability, more or less calling their own goals, as implemented through AI models, unbiased, fair, and necessary. Because if they fail at that, a lot of their blood will be spilled.
One's sight (perception) within all (perceivable) was before another can suggest "8 terrifying insights" about what it is... consenting to a suggestion distorts one's sight, hence making one into an onlooker/beholder/idolater.
open ai
Nature opens (inception) and closes (death) the opportunity for being (life)... ignoring the natural (perception) for the artificial (suggestion) requires one's consent to an invitation, hence opening oneself up to another, while shutting down within all.
Probably controlled op. Real whistleblowers can't really exist in an AI-monitored world.