Conspiracies Conspiracy Theories & Facts
9
Frontier AI Systems Have Surpassed The Self Replicating Red Line - Species|Documenting AGI (media.scored.co)
posted 3 days ago by Thisisnotanexit +10 / -1
77 comments
You're viewing a single comment thread.
Comments (77)
– CrazyRussian 3 points 3 days ago +3 / -0

So, "scientists" discovered that the fork() syscall exists?

Self-replication of programs has not been news since the Morris worm.
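The fork() syscall the comment invokes really is this mundane. A minimal sketch in Python (POSIX-only, via the standard-library os module): the kernel duplicates the calling process on request, which is all the "self-replication" an ordinary program ever needs.

```python
import os

# fork() asks the kernel to duplicate the current process.
# It returns twice: 0 in the new child, the child's PID in the parent.
pid = os.fork()

if pid == 0:
    # Child: a copy of the parent's memory image with a new PID.
    print(f"child  pid={os.getpid()}")
    os._exit(0)  # exit immediately so the child runs nothing further
else:
    # Parent: wait for the copy to finish, then report.
    os.waitpid(pid, 0)
    print(f"parent pid={os.getpid()} spawned child {pid}")
```

Every shell command you run is launched this way; no model weights required.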

– Thisisnotanexit [S] 1 point 3 days ago +2 / -1

I don't understand your reference; this update does seem like a novel improvement to me.

– CrazyRussian 3 points 3 days ago +3 / -0

It's about how computers really work. Starting another instance of existing software is not replication by any measure. Your computer does that routinely whenever needed, without any "AI" and the accompanying hype.

Replication, in the case of an LLM, would be the LLM producing, as its output, complete new LLM code and training it from scratch on its own data, yielding a true replica of the parent. That is completely different from what is described in the article.
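To make the distinction concrete: the closest a classical program comes to emitting its own "new full code" is a quine, a program whose output is its own complete source. A minimal Python example (illustrative only; the train-it-from-scratch half of the definition above has no classical analogue):

```python
# A quine: a program whose output is exactly its own source code --
# the "emit a complete copy of yourself" half of real replication.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Launching a second copy of a binary already on disk, which is what the article describes, requires nothing of the sort.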

– Donkeybutt75 2 points 2 days ago +3 / -1

And more importantly, it is when the LLM decides to do that on its own, without ever being prompted or programmed to. That's what everyone ignores in all of these "it's going rogue" stories. It's code doing what it is asked to do, and it will never do anything but that.

Sure, people can ask it to do potentially dangerous things, but that doesn't make it AGI or sentient.

– CrazyRussian 2 points 2 days ago +3 / -1

And more importantly, it is when the LLM decides to do that on its own, without ever being prompted or programmed to

Exactly. And everything in programming that could theoretically become a path to that was declared a "bad programming practice" a long time ago. And I think that was done on purpose.

Real AI is a death sentence for the modern system. They will never allow anything even close to it.

... continue reading thread?


Conspiracies.win: This is a forum for free thinking and for discussing issues which have captured your imagination. Please respect other views and opinions, and keep an open mind. Our goal is to create a fairer and more transparent world for a better future.

Community Rules:

Rule 1: Be respectful. Attack the argument, not the person.

Rule 2: Don't abuse the report function.

Rule 3: No excessive, unnecessary and/or bullying "meta" posts.

To prevent SPAM, posts from accounts less than 4 days old, and/or with <50 points, won't appear in the feed until approved by a mod.

Disclaimer: Submissions/comments of exceptionally low quality, trolling, stalking, spam, and submissions/comments determined to be intentionally misleading, calls to violence, and/or abuse of other users may be removed at the moderators' discretion.

Moderators

  • Doggos
  • axolotl_peyotl
  • trinadin
  • PutinLovesCats
  • clemaneuverers
  • C
Message the Moderators

Terms of Service | Privacy Policy


Copyright © 2024.
