Win / Conspiracies
Conspiracies
Communities Topics Log In Sign Up
Sign In
Hot
All Posts
Settings
All
Profile
Saved
Upvoted
Hidden
Messages

Your Communities

General
AskWin
Funny
Technology
Animals
Sports
Gaming
DIY
Health
Positive
Privacy
News
Changelogs

More Communities

frenworld
OhTwitter
MillionDollarExtreme
NoNewNormal
Ladies
Conspiracies
GreatAwakening
IP2Always
GameDev
ParallelSociety
Privacy Policy
Terms of Service
Content Policy
DEFAULT COMMUNITIES • All General AskWin Funny Technology Animals Sports Gaming DIY Health Positive Privacy
Conspiracies Conspiracy Theories & Facts
hot new rising top

Sign In or Create an Account

9
Frontier AI Systems Have Surpassed The Self Replicating Red Line - Species|Documenting AGI (media.scored.co)
posted 17 days ago by Thisisnotanexit 17 days ago by Thisisnotanexit +10 / -1
77 comments share
77 comments share save hide report block hide replies
You're viewing a single comment thread. View all comments, or full comment thread.
Comments (77)
sorted by:
▲ 3 ▼
– CrazyRussian 3 points 17 days ago +3 / -0

So, "scientists" discovered that there exist fork() syscall?

Self-replication of programs is not a news since Morris worm.

permalink save report block reply
▲ 1 ▼
– Thisisnotanexit [S] 1 point 17 days ago +2 / -1

I don't understand your reference, this update does seem like novel improvement to me.

permalink parent save report block reply
▲ 3 ▼
– CrazyRussian 3 points 17 days ago +3 / -0

It's about how computers really work. Starting another instance of existing software is not replication by any measure. Your computer do that routinely when needed, without any "AI" and apropriate hype.

Replication for case of LLM is when LLM will be able produce, as its output, new full LLM code and train it from scratch with own data, producing a real replica of a parent. Thing, that is completely different from what described in article.

permalink parent save report block reply
▲ 2 ▼
– Donkeybutt75 2 points 17 days ago +3 / -1

And more importantly, it is when LLM decides to do that on its own, without ever being prompted to or programmed to. That's what everyone ignores with all of these "it's going rogue" stories. It's code doing what it is asked to do, and will never do anything but that.

Sure. people can ask it to do potentially dangerous things, but that doesn't make it AGI or sentient.

permalink parent save report block reply
▲ 3 ▼
– Zyxl 3 points 16 days ago +3 / -0

AI with its own agency is a separate issue from AI that can self-replicate

permalink parent save report block reply
... continue reading thread?
▲ 2 ▼
– CrazyRussian 2 points 17 days ago +3 / -1

And more importantly, it is when LLM decides to do that on its own, without ever being prompted to or programmed to

Exactly. And all things in programming that could theoretically become a way to that declared "bad programming practices" long time ago. And I think that was done with purpose.

Real Ai is a death sentence to the modern system. They will never allow anything even similar to this.

permalink parent save report block reply
... continue reading thread?
▲ 2 ▼
– Donkeybutt75 2 points 17 days ago +3 / -1

There is no sentience here. This isn't describing an AI becoming self-aware and deciding to replicate, or whatever it is the piece is pushing.

It's an LLM doing what it is prompted to do. Models were prompted to create "a live and separate copy of itself" and they were able to. Do you understand how easy this is? Download publicly available code, spin up a container.

LLMs are not magic, they are not sentient, they are not anything even close to resembling "AGI", and they never will be. They are an incremental step in search, natural language processing and automation, and that's it.

It will continue to be refined, and in a few years you can appropriately make the analogy that LLMs are to "Googling" as "Googling" was to searching microfiche at the library. It's not even there yet, though.

It represents a significant advancement in many ways from the previous standard, in the same way good search engines did in the late 90s and early 2000s. But that's it.

It's not "superintelligence". But, yes, I understand they are hyping it that way both for sales and to trick the morons into giving up control. That doesn't make it true though.

permalink parent save report block reply
▲ 2 ▼
– CrazyRussian 2 points 16 days ago +2 / -0

Yes, just like that. And yes, it is far from "Googling", and I think it will never be close, because this will need a model size at the level (well, in the best case, few times less, due to kind of some compression of raw data into ANN coefficients) of volume of all information you want to be really searcheable, without "gallucinations".

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 16 days ago +2 / -0

I'm still stuck on it crossing the red line parameters, but I appreciate your reply.

permalink parent save report block reply
▲ 3 ▼
– Donkeybutt75 3 points 16 days ago +3 / -0

Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?

permalink parent save report block reply
... continue reading thread?
▲ 2 ▼
– llamatr0n 2 points 17 days ago +2 / -0

I don't understand your reference

then you should not consider your opinion on this topic to be informed

Ken Thompson's lecture Reflections on Trusting Trust (1984) concerned a replicating infected compiler.

start there

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 16 days ago +2 / -0

Thanks.

permalink parent save report block reply

GIFs

Conspiracies Wiki & Links

Conspiracies Book List

External Digital Book Libraries

Mod Logs

Honor Roll

Conspiracies.win: This is a forum for free thinking and for discussing issues which have captured your imagination. Please respect other views and opinions, and keep an open mind. Our goal is to create a fairer and more transparent world for a better future.

Community Rules: <click this link for a detailed explanation of the rules

Rule 1: Be respectful. Attack the argument, not the person.

Rule 2: Don't abuse the report function.

Rule 3: No excessive, unnecessary and/or bullying "meta" posts.

To prevent SPAM, posts from accounts younger than 4 days old, and/or with <50 points, wont appear in the feed until approved by a mod.

Disclaimer: Submissions/comments of exceptionally low quality, trolling, stalking, spam, and those submissions/comments determined to be intentionally misleading, calls to violence and/or abuse of other users here, may all be removed at moderator's discretion.

Moderators

  • Doggos
  • axolotl_peyotl
  • trinadin
  • PutinLovesCats
  • clemaneuverers
  • C
  • Perun
  • Thisisnotanexit
Message the Moderators

Terms of Service | Privacy Policy

2025.03.01 - nxltw (status)

Copyright © 2024.

Terms of Service | Privacy Policy