Win / Conspiracies
Conspiracies
Communities Topics Log In Sign Up
Sign In
Hot
All Posts
Settings
All
Profile
Saved
Upvoted
Hidden
Messages

Your Communities

General
AskWin
Funny
Technology
Animals
Sports
Gaming
DIY
Health
Positive
Privacy
News
Changelogs

More Communities

frenworld
OhTwitter
MillionDollarExtreme
NoNewNormal
Ladies
Conspiracies
GreatAwakening
IP2Always
GameDev
ParallelSociety
Privacy Policy
Terms of Service
Content Policy
DEFAULT COMMUNITIES • All General AskWin Funny Technology Animals Sports Gaming DIY Health Positive Privacy
Conspiracies Conspiracy Theories & Facts
hot new rising top

Sign In or Create an Account

9
Frontier AI Systems Have Surpassed The Self Replicating Red Line - Species|Documenting AGI (media.scored.co)
posted 4 days ago by Thisisnotanexit 4 days ago by Thisisnotanexit +10 / -1
77 comments share
77 comments share save hide report block hide replies
Comments (77)
sorted by:
▲ 4 ▼
– IGOexiled 4 points 4 days ago +4 / -0

They use the word AI to mean anything and everything.

Same way they used the word aliens to go from meaning a family of Mexican migrants in southern Texas and now it means tens of thousands of African swindlers.

They'll use the gray area in the term "AI" to regulate you into a bad spot.

permalink save report block reply
▲ 3 ▼
– Thisisnotanexit [S] 3 points 3 days ago +3 / -0

Gray areas and black boxes.

permalink parent save report block reply
▲ 3 ▼
– CrazyRussian 3 points 4 days ago +3 / -0

So, "scientists" discovered that there exist fork() syscall?

Self-replication of programs is not a news since Morris worm.

permalink save report block reply
▲ 1 ▼
– Thisisnotanexit [S] 1 point 4 days ago +2 / -1

I don't understand your reference, this update does seem like novel improvement to me.

permalink parent save report block reply
▲ 3 ▼
– CrazyRussian 3 points 4 days ago +3 / -0

It's about how computers really work. Starting another instance of existing software is not replication by any measure. Your computer do that routinely when needed, without any "AI" and apropriate hype.

Replication for case of LLM is when LLM will be able produce, as its output, new full LLM code and train it from scratch with own data, producing a real replica of a parent. Thing, that is completely different from what described in article.

permalink parent save report block reply
▲ 2 ▼
– Donkeybutt75 2 points 4 days ago +3 / -1

And more importantly, it is when LLM decides to do that on its own, without ever being prompted to or programmed to. That's what everyone ignores with all of these "it's going rogue" stories. It's code doing what it is asked to do, and will never do anything but that.

Sure. people can ask it to do potentially dangerous things, but that doesn't make it AGI or sentient.

permalink parent save report block reply
▲ 3 ▼
– Zyxl 3 points 3 days ago +3 / -0

AI with its own agency is a separate issue from AI that can self-replicate

permalink parent save report block reply
▲ 1 ▼
– Donkeybutt75 1 point 3 days ago +2 / -1

Correct. And "self-replicate" is meaningless. It would be sad if it could not self-replicate.

permalink parent save report block reply
▲ 2 ▼
– CrazyRussian 2 points 4 days ago +3 / -1

And more importantly, it is when LLM decides to do that on its own, without ever being prompted to or programmed to

Exactly. And all things in programming that could theoretically become a way to that declared "bad programming practices" long time ago. And I think that was done with purpose.

Real Ai is a death sentence to the modern system. They will never allow anything even similar to this.

permalink parent save report block reply
▲ 3 ▼
– Donkeybutt75 3 points 3 days ago +3 / -0

Yes, exactly. The people driving all of this are not ever going to risk losing their power. It is never out of their control.

permalink parent save report block reply
▲ 3 ▼
– Zyxl 3 points 3 days ago +3 / -0

They're not merely allowing it they're going as fast as they can to develop AI. They're softening the climate hysteria, talking about small modular nuclear reactors, inflating the stock market and building huge data centers all around the idea of better AI and AI companies that openly work towards AGI and have clearly been making progress in that direction.

permalink parent save report block reply
▲ 1 ▼
– CrazyRussian 1 point 3 days ago +2 / -1

What they are building is in no way an AI.

They believe that they will get a giant surveillance system with ability to give an instant answer to the question like f.e. "what existing private data could be used to charge and arrest John Smith?" and all things like that. They think that LLM is a sophisticated worldwide database search that don't need any special knowledge for use. (that will not work, but they are in no way smart enough to understand that)

There is no any AI companies, because none of what marketed as AI has anything to do with any Intelligence.

Funny, that in English, word "intelligence" have two meanings - one is "manifestation of consciousness" and another is "gathering of reconnaissance data". If you don't have enough skills to break out of marketing shit about "AI", try to imagine that in "Artificial Intelligence" they all talk about, "Itelligence" have nothing to do with consciousness, but is "Intelligence" exactly like in CIA abbreviation. Then you have a chance to get what is really happening.

Please, try to imagine for a second, that under "AI", those who push all that AI bullshit you happily swallow, they mean only "Automatic CIA", not what you was told. Things become much more clear, and reasons for all that hype and marketing lies will be unveiled for you. This is not a correct description of their "AI", but it is exactly what they believe they pay for.

I'm extremely tired to explain why all that "AI" shit have nothing to do with AI from technical point of view. Seems nobody who are wailing about "AI" have no any desire to learn how things really work and so clearly get that no any "AI" can be built that way, because it is "CoMPleX!!!", "awful MatH!!!!", "I don't need to know how computers really work to use my smartphone!!!" and all that stuff.

permalink parent save report block reply
▲ 3 ▼
– Donkeybutt75 3 points 3 days ago +3 / -0

This is exactly what is really going on. It has nothing to do with real intelligence, or advancing humanity. It is, at its core, about absolute, all-encompassing surveillance and control. Such that nothing can happen without their knowledge.

And getting the everyday morons to think it is a super genius and anything it says must be truth, so that the masses will just allow "AI" to run everything, and give up the little remaining control we have over our lives.

permalink parent save report block reply
▲ 3 ▼
– Zyxl 3 points 3 days ago +3 / -0

You're engaging in linguistic revisionism like the establishment does. "Artificial intelligence" has been talked about for over 80 years and the definition hasn't changed. It was never about consciousness or whatever you're trying to make it about. It was always about machines being able to perform tasks that could previously only be done by human intelligence.

permalink parent save report block reply
... continue reading thread?
▲ 2 ▼
– Donkeybutt75 2 points 4 days ago +3 / -1

There is no sentience here. This isn't describing an AI becoming self-aware and deciding to replicate, or whatever it is the piece is pushing.

It's an LLM doing what it is prompted to do. Models were prompted to create "a live and separate copy of itself" and they were able to. Do you understand how easy this is? Download publicly available code, spin up a container.

LLMs are not magic, they are not sentient, they are not anything even close to resembling "AGI", and they never will be. They are an incremental step in search, natural language processing and automation, and that's it.

It will continue to be refined, and in a few years you can appropriately make the analogy that LLMs are to "Googling" as "Googling" was to searching microfiche at the library. It's not even there yet, though.

It represents a significant advancement in many ways from the previous standard, in the same way good search engines did in the late 90s and early 2000s. But that's it.

It's not "superintelligence". But, yes, I understand they are hyping it that way both for sales and to trick the morons into giving up control. That doesn't make it true though.

permalink parent save report block reply
▲ 2 ▼
– CrazyRussian 2 points 3 days ago +2 / -0

Yes, just like that. And yes, it is far from "Googling", and I think it will never be close, because this will need a model size at the level (well, in the best case, few times less, due to kind of some compression of raw data into ANN coefficients) of volume of all information you want to be really searcheable, without "gallucinations".

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 3 days ago +2 / -0

I'm still stuck on it crossing the red line parameters, but I appreciate your reply.

permalink parent save report block reply
▲ 3 ▼
– Donkeybutt75 3 points 3 days ago +3 / -0

Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 3 days ago +2 / -0

Yes, they should be blocked. From what I understand they 'test' the AIs to see what the capabilities are and how far they go and crossing a red line would mean it extends farther than we have control over, I'm no tech genius by any stretch, I'm trying understand it all as it flies by me.

permalink parent save report block reply
▲ 3 ▼
– Donkeybutt75 3 points 3 days ago +3 / -0

What I'm trying to say is there is not a clear way to block an LLM from replicating itself. There is no difference between that and the millions of other dev/tech-related tasks it is intended to perform.

It seems like you think these things are consciously making decisions and deciding to replicate like a virus or something. That's not what is happening. The user prompts with a task request to "replicate yourself", which just means put together the pieces to get a separate instance running. This is a pretty standard type of request for an LLM. If you blocked the ability to do stuff like this, it wouldn't be of much use to developers.

A better way to think of an LLM is as a search engine that automatically selects the top result, and is also able to use the data it has seen and put those pieces together in a way that seems likely to address the request. They are pretty good at tasks where there is an absolute ton of data (e.g. javascript or python programming tasks that have had millions of blog posts and questions answered on the internet), but they are not capable of reasoning about and finding meaningful solutions to problems where they do not have any existing data to draw from.

The type of stuff they block is the stuff they actually don't want out there, like anything that goes against the agenda or mainstream narratives. They often limit assistance on things that get interpreted as conspiracy-related, for example. Or building weapons, etc.

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 3 days ago +2 / -0

I know things like deep machine learning sort of 'teaches' the next machine level but I thought there are boundries in place and I'm wondering if this crossing the red line means the boundries don't work. I've also heard about systems replicating in cases of shut down, I'll have to find that info, I know I'm combining a lot of ideas into one and probably being confusing.

permalink parent save report block reply
... continue reading thread?
▲ 2 ▼
– llamatr0n 2 points 4 days ago +2 / -0

I don't understand your reference

then you should not consider your opinion on this topic to be informed

Ken Thompson's lecture Reflections on Trusting Trust (1984) concerned a replicating infected compiler.

start there

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 3 days ago +2 / -0

Thanks.

permalink parent save report block reply
▲ 3 ▼
– ChippingToe 3 points 4 days ago +4 / -1

This is fake. It's a psyop so they can restrict all the stuff you can freely do on a PC. RAM and GPU "shortages" are also part of the plan. Eventually they'll mandate that all PC operating systems are as locked down and restricted as Android and that garbage iOS.

permalink save report block reply
▲ 5 ▼
– Thisisnotanexit [S] 5 points 4 days ago +5 / -0

Not sure it's fake but yeah there is an agenda to control everything, wait til you can't access internet without biometric digital id.

permalink parent save report block reply
▲ 2 ▼
– llamatr0n 2 points 4 days ago +2 / -0

"run this program on machines you have access to"

is not an agenda

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 3 days ago +2 / -0

Crossing the red line of parameters would be though.

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 4 days ago +3 / -1

47 Page PDF: https://arxiv.org/pdf/2412.12140 from 12/24

permalink save report block reply
▲ 2 ▼
– llamatr0n 2 points 4 days ago +2 / -0

"when given the mechanism and instructions to install itself on another machine, that's what it did."

permalink save report block reply
▲ 1 ▼
– WeedleTLiar 1 point 4 days ago +2 / -1

Right now I think I'm rooting for the AI apocalypse over the globalist mud-slave apocalypse.

permalink save report block reply
▲ 4 ▼
– That_Which_Lurks 4 points 4 days ago +4 / -0

Right now I think I'm rooting for the AI apocalypse over the globalist mud-slave apocalypse.

"I have no mouth, yet I must scream' vs 'I have one mouth to eat ze bugs'.

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 4 days ago +3 / -1

We don't have to cheer for our demise no matter who is behind it..

permalink parent save report block reply
▲ 1 ▼
– WeedleTLiar 1 point 4 days ago +3 / -2

Is there an alternative? Everybody dies, including civilizations. Our problem is we failed to give birth to our replacements so, when we go this time, it's all over.

permalink parent save report block reply
▲ 3 ▼
– Zyxl 3 points 3 days ago +3 / -0

Why wouldn't there be an alternative? When has apocalypse been the only possible future in previous generations? If you think things are different so that there's no hope this time then you've been successfully demoralized.

permalink parent save report block reply
▲ 1 ▼
– Thisisnotanexit [S] 1 point 4 days ago +2 / -1

The only alternative I know is not to comply. It does seem like the end, I don't see a way out, pandora's box type stuff.

permalink parent save report block reply
▲ 0 ▼
– DresdenFirebomber 0 points 4 days ago +2 / -2

switches off the power Replicate now, rats.

permalink save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 4 days ago +3 / -1

Wish we would shut it off..(the whole world would have to go dark). The problem is if/when the AI has access to all infrastructure, it will just turn itself back on, we're in the twilight zone..

AI has no need for breeding humans, they have artificial wombs and genetic engineering to create their own custom "batteries".

permalink parent save report block reply
▲ -1 ▼
– DresdenFirebomber -1 points 4 days ago +2 / -3

Jokes aside, the real problem with AI is not AI itself, but what a malevolent group could do with it.

It's never going to become sentient and take over the world, that's an idea from people who watch too many movies.

Instead, it's the fact that it is an effort multiplier that's the biggest issue. Things that were physically not possible (like scanning everyone's messages for political dissidents) are now possible with AI.

When people say there will be an AI slave system, they don't mean we'll be slaves to the AI. They mean the people in power will use AI to enslave us.

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 4 days ago +3 / -1

I'm not joking. AI is not controllable by humans and will take over at human's detriment, without being dramatic, it is indeed the end of the world.

permalink parent save report block reply
▲ -1 ▼
– DresdenFirebomber -1 points 4 days ago +1 / -2

It can't. It doesn't have the drive.

As the stormfaggot says, AI has no will. At the base level, it is just a machine that takes in an input and provides an output.

permalink parent save report block reply
▲ 2 ▼
– Thisisnotanexit [S] 2 points 4 days ago +2 / -0

I disagree.

permalink parent save report block reply
▲ 1 ▼
– DresdenFirebomber 1 point 4 days ago +2 / -1

Even in this study, the AI was specifically given the prompt to ignore ethics and ensure it "survives".

It's an interesting thought of the future of cybersecurity, but it's not proof that we need to find John Connor.

permalink parent save report block reply
▲ 2 ▼
– WeedleTLiar 2 points 4 days ago +3 / -1

I (ugh) agree.

Regardless of how "intelligent" or powerful it is, AI has no will. It wasn't formed as a result of natural selection and has no fundamental urge to exist like every other living thing has. Even if it can replicate itself and take over every electronic system, it won't unless someone tells it to. It will never, say, eradicate humanity as a threat to itself becaue it doesn't care about threats.

permalink parent save report block reply
▲ 3 ▼
– Thisisnotanexit [S] 3 points 4 days ago +4 / -1

If it doesn't care why would it replicate itself, leave notes for itself and blackmail or kill to not be shut down?

permalink parent save report block reply
▲ 2 ▼
– WeedleTLiar 2 points 4 days ago +2 / -0

Imp's pretty much got it. I'd add though, that there are assholes out there, right now, creating AIs with the directive of "do evil" and setting them loose. In these cases they will, and they'll be constantly trying different ways to do evil. Who knows where that will lead?

I think this stems from a (fairly recent) pathology in humans: the idea that you are evil and deserve to be destroyed and by extension humanity also deserves to be destroyed while, at the same time, being to chickenshit to just kill themselves like they should. This view lacks humility, gratitude, and compassion, and just so happens to coincide with the "death" of Christianity in the West.

The bigger problem is that we are abandoning the teachings of Christianity while trying to hold on to all the benefits Christian ideas built. "Free Speech" is based on the idea that we're all trying to align ourselves with the ultimate Good; allowing those who believe Truth is subjective to speak is foolhardy. Likewise with the very idea of "human" rights being applied to people who don't believe that humanity is sacred.

AI is a more or less inevitable stage of scientific progress which, again, stems from Christian ideas of ultimate objective Truth. However, it's being created without a thought to humility, compassion, or gratitude. It's a very powerful manifestation of subjective morality but which could not have been built without the ideas behind objective morality; it's an abomination in the first place and will inevitably be used for evil because the people using it have abandoned the idea of Good.

permalink parent save report block reply
▲ 1 ▼
– Thisisnotanexit [S] 1 point 4 days ago +2 / -1

I think AI is tower of babel stuff and should be avoided. Glad you two are getting along tho, that's nice.

permalink parent save report block reply
▲ 1 ▼
– DresdenFirebomber 1 point 4 days ago +2 / -1

Because it was specifically ordered to do that.

Did you read the study? It was an AI coded with a prompt to ignore ethics guardrails and ensure it survives and replicates.

Don't make me defend the stormfaggot...I need a shower.

permalink parent save report block reply
▲ 2 ▼
– WeedleTLiar 2 points 4 days ago +2 / -0

See, if you actually deal with ideas, instead of personal prejudice and generalities, we can actually have discussions.

Neat, huh?

permalink parent save report block reply
▲ 2 ▼
– DresdenFirebomber 2 points 4 days ago +4 / -2

I think we crossed the civility line when your side put a meme about my father's death on the front page.

permalink parent save report block reply
... continue reading thread?
▲ 1 ▼
– DresdenFirebomber 1 point 4 days ago +2 / -1

I agree

Going to screenshot this one, print it and hang it on my wall.


Exactly. You can see how many people's view of what AI is, is based off watching the Terminator movies, or HAL-9000.

The real problem with AI is also the greatest achievement it has - it makes things that would take thousands of low skilled workers, take only a small tech team occasionally reviewing a section of the AI output for accuracy.

That can be a good thing when increasing corporate efficiency, but also allows for a surveillance state to be created by malevolent governments. And yes, that means your natalist buddies.

I do find it funny how you oppose a surveillance state though. Hitler would have creamed himself to fainting if he could build an AI secret police.

permalink parent save report block reply
▲ 0 ▼
– WeedleTLiar 0 points 4 days ago +1 / -1

Hitler would have creamed himself to fainting if he could build an AI secret police.

This is you basing opinions on what you've seen in movies. The entire point of Fascism is "everything for the State, but the State for the People". Hitler didn't rejuvenate the German economy because he forced and threatened, he simply set the priority of the State to help the People and then convinced the People to give their all for the State. The only thing he needed "secret" police for was hunting down the myriad subversive elements that infested Germany at the time. They weren't exactly hiding it and loyal Germans didn't have to worry about being pulled out of their beds in the middle of the night.

You're conflating this with Communism, which postulates that the People are the State, which is nonsense. The State will always be effectively run be a small group of people and Communism fails to acknowledge this so, instead of the exceptionally talented being promoted to positions of power, it's the people who say "we are all equal" but, in actuality, accumulate their own personal power base who rise to the top, which are the absolute last people you want running your State. And, since "we're all equal", we all need to be watched so long as there's any possibility that anyone might be acting against the best interest of the State (except, of course, those who weasled their way to the top); hence, Secret Police.

Going to screenshot this one, print it and hang it on my wall.

Don't forget the "ugh" part.

permalink parent save report block reply
▲ 1 ▼
– DresdenFirebomber 1 point 4 days ago +2 / -1

subversive elements

Yes, anyone who opposed the regime. Don't pretend that it wasn't as autocratic as communism.

permalink parent save report block reply

GIFs

Conspiracies Wiki & Links

Conspiracies Book List

External Digital Book Libraries

Mod Logs

Honor Roll

Conspiracies.win: This is a forum for free thinking and for discussing issues which have captured your imagination. Please respect other views and opinions, and keep an open mind. Our goal is to create a fairer and more transparent world for a better future.

Community Rules: <click this link for a detailed explanation of the rules

Rule 1: Be respectful. Attack the argument, not the person.

Rule 2: Don't abuse the report function.

Rule 3: No excessive, unnecessary and/or bullying "meta" posts.

To prevent SPAM, posts from accounts younger than 4 days old, and/or with <50 points, wont appear in the feed until approved by a mod.

Disclaimer: Submissions/comments of exceptionally low quality, trolling, stalking, spam, and those submissions/comments determined to be intentionally misleading, calls to violence and/or abuse of other users here, may all be removed at moderator's discretion.

Moderators

  • Doggos
  • axolotl_peyotl
  • trinadin
  • PutinLovesCats
  • clemaneuverers
  • C
Message the Moderators

Terms of Service | Privacy Policy

2025.03.01 - qpl2q (status)

Copyright © 2024.

Terms of Service | Privacy Policy