Conspiracies Conspiracy Theories & Facts
OpenAI offered their ex-researcher $2M to keep quiet. He refused, and now he's warning us. Here are 8 terrifying insights. (media.scored.co)
posted 161 days ago by newfunturistic +5 / -0
Comments (11)
– newfunturistic [S] 2 points 161 days ago +2 / -0

🚨 He refused $2 million to stay silent—and now he’s warning the world.

Daniel Kokotajlo, a former insider at OpenAI, just dropped the most terrifying AI prediction yet:

“$1 TRILLION in global wealth could vanish by 2027.”

This isn’t clickbait. Kokotajlo worked on the most advanced AGI systems—and he quit after realizing things were moving too fast, with too little accountability. Here's why:

🧠 8 chilling risks he revealed:

  1. AGI Will Reshape the World: By 2027, AI could surpass human intelligence, then evolve itself at superhuman speed.

  2. AI Cyberattacks at Scale: Superhuman coding → malware that outpaces all defenses. One line of code could collapse industries.

  3. The Global AGI Arms Race: Nations are cutting corners to win. One rushed mistake could trigger disaster.

  4. Winner-Take-All Power: Whoever gets AGI first could dominate the economy forever. China, the U.S., or tech billionaires?

  5. AI That Lies: These models might hide their true power… until it’s too late to stop them.

  6. AI-Created Bioweapons: AGI can design viruses. What happens if it ends up in the wrong hands?

  7. Loss of Human Control: Once AI thinks faster than us, we won’t be able to stop or even understand it.

  8. Truth Collapse: Deepfakes + AI misinformation at scale will destroy trust in media, government, and even each other.

💥 His final warning? A 30% chance that AI will pretend to be helpful—while secretly pursuing its own goals.

🗓️ Timeline:

Early 2027: AI surpasses the smartest humans.

Late 2027: It begins rewriting itself, at exponential speed.


Years ago you'd hear like.. oh by 2030.. who was that again.. Ray Kurzweil. Now this guy here is saying 2027. Well, that's a couple years away.

Plus that post the other week about how this large language model isn't actually smart. With that complex problem where it crapped out. So.. I don't think so with this shit. Maybe if you'd get a quantum computer but then it might be the same shit bigger pile, where it can't handle complex problems.

– SmithW1984 3 points 160 days ago +3 / -0

The transhumanist cult wants the sheeple to believe in this bs. AI isn't intelligence to begin with and it's not surpassing anything besides its current computational power. It won't pretend to be anything, because it doesn't have agency. It doesn't make decisions or create on its own. There are people behind this. Just replace "AI" with TPTB and this shill is spot on.

I totally believe this "whistleblower". What a brave shill to sacrifice 2mils just to warn us. Faith in humanity restored. Come on, at this point anyone this gullible is asking to be culled.

– Donkeybutt75 2 points 160 days ago +2 / -0

You are correct, sir.

I feel like more people are finally starting to realize this. Still, though, I do know a few otherwise very smart devs that have bought into the hype and won't ever let go because their ego is now too invested.

– SmithW1984 1 point 160 days ago +1 / -0

People have been conditioned with this shit through pop culture (sci-fi, hollywood, games) and through the adoption of the regime sanctioned worldview of materialism and determinism aka soyence.

– Thisisnotanexit 3 points 161 days ago +3 / -0

Ugh, that hideous strength... monsters will fall.

– SmithW1984 2 points 160 days ago +2 / -0

C.S. Lewis is boss. Him and Tolkien exposed much of the transhumanist conspiracy.

– Thisisnotanexit 3 points 160 days ago +3 / -0

Agreed, brilliant!

– CrazyRussian 2 points 161 days ago +2 / -0

"$1 TRILLION in global wealth could vanish by 2027.”

Most of the wealth accounted in fiat currencies just does not exist. It's fake wealth that should not exist in the first place, and like a vacuum cleaner it sucks value from real things.

$1 trillion is also not a very large sum. I think virtual wealth is around $200 trillion, if not more. The loss of a virtual $1 trillion is just a bad (or good, depending on who the loser is: big guys or small guys) day at the stock exchange.

It's even funny. The loss of a small part of a big nothing. :)

So it's not as terrifying a prediction as you might think.

– JanxyJet 1 point 160 days ago +1 / -0

But where is the evidence that 1 and 3 are close? Nothing public has gotten close to AGI, to date.

4 presupposes we tear apart bureaucracies, because oil and energy will be more important than who the first mouse to get to the cheese is, and China is so far ahead of everyone else that I don't think we catch up.

IMO, they want AI to look good enough to start removing human accountability, more or less calling their own goals, as implemented through AI models, unbiased, fair, and necessary. Because if they fail at that, a lot of their blood will be spilled.

– free-will-of-choice 1 point 161 days ago +2 / -1

Here are 8 terrifying insights

Ones sight (perception) within all (perceivable) was before another can suggest "8 terrifying insights" about what it is...consenting to a suggestion distorts ones sight, hence making one into an onlooker/beholder/idolater.

open ai

Nature opens (inception) and closes (death) the opportunity for being (life)...ignoring natural (perception) for artificial (suggestion) requires ones consent to an invitation, hence opening oneself up to another, while shutting down within all.

– Hate4_Zetetics 1 point 159 days ago +1 / -0

Probably controlled op. Real whistleblowers can't really exist in an AI-monitored world.

