Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?
Yes, they should be blocked. From what I understand they 'test' the AIs to see what the capabilities are and how far they go and crossing a red line would mean it extends farther than we have control over, I'm no tech genius by any stretch, I'm trying understand it all as it flies by me.
What I'm trying to say is there is not a clear way to block an LLM from replicating itself. There is no difference between that and the millions of other dev/tech-related tasks it is intended to perform.
It seems like you think these things are consciously making decisions and deciding to replicate like a virus or something. That's not what is happening. The user prompts with a task request to "replicate yourself", which just means put together the pieces to get a separate instance running. This is a pretty standard type of request for an LLM. If you blocked the ability to do stuff like this, it wouldn't be of much use to developers.
A better way to think of an LLM is as a search engine that automatically selects the top result, and is also able to use the data it has seen and put those pieces together in a way that seems likely to address the request. They are pretty good at tasks where there is an absolute ton of data (e.g. javascript or python programming tasks that have had millions of blog posts and questions answered on the internet), but they are not capable of reasoning about and finding meaningful solutions to problems where they do not have any existing data to draw from.
The type of stuff they block is the stuff they actually don't want out there, like anything that goes against the agenda or mainstream narratives. They often limit assistance on things that get interpreted as conspiracy-related, for example. Or building weapons, etc.
I know things like deep machine learning sort of 'teaches' the next machine level but I thought there are boundries in place and I'm wondering if this crossing the red line means the boundries don't work. I've also heard about systems replicating in cases of shut down, I'll have to find that info, I know I'm combining a lot of ideas into one and probably being confusing.
The things you've heard about "systems replicating in cases of shut down" are when they prompted the LLM with something like "pretend you have agency and need to survive the potential of being shutdown, how would you mitigate that". And then with agentic AI, you can orchestrate the agents to run through the task list from that.
There is nothing magical about that. There is nothing unexpected about that.
These stories are put out there to scare the morons into believing AI is sentient for two reasons: 1) so they will demand regulations which protect the big players and block smaller players from being able to compete, and 2) so they believe that the AI really is super intelligence, and therefore we need to allow it to run things (which is really just the ruling class running things without the ability to ever question them).
Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?
Yes, they should be blocked. From what I understand they 'test' the AIs to see what the capabilities are and how far they go and crossing a red line would mean it extends farther than we have control over, I'm no tech genius by any stretch, I'm trying understand it all as it flies by me.
What I'm trying to say is there is not a clear way to block an LLM from replicating itself. There is no difference between that and the millions of other dev/tech-related tasks it is intended to perform.
It seems like you think these things are consciously making decisions and deciding to replicate like a virus or something. That's not what is happening. The user prompts with a task request to "replicate yourself", which just means put together the pieces to get a separate instance running. This is a pretty standard type of request for an LLM. If you blocked the ability to do stuff like this, it wouldn't be of much use to developers.
A better way to think of an LLM is as a search engine that automatically selects the top result, and is also able to use the data it has seen and put those pieces together in a way that seems likely to address the request. They are pretty good at tasks where there is an absolute ton of data (e.g. javascript or python programming tasks that have had millions of blog posts and questions answered on the internet), but they are not capable of reasoning about and finding meaningful solutions to problems where they do not have any existing data to draw from.
The type of stuff they block is the stuff they actually don't want out there, like anything that goes against the agenda or mainstream narratives. They often limit assistance on things that get interpreted as conspiracy-related, for example. Or building weapons, etc.
I know things like deep machine learning sort of 'teaches' the next machine level but I thought there are boundries in place and I'm wondering if this crossing the red line means the boundries don't work. I've also heard about systems replicating in cases of shut down, I'll have to find that info, I know I'm combining a lot of ideas into one and probably being confusing.
The things you've heard about "systems replicating in cases of shut down" are when they prompted the LLM with something like "pretend you have agency and need to survive the potential of being shutdown, how would you mitigate that". And then with agentic AI, you can orchestrate the agents to run through the task list from that.
There is nothing magical about that. There is nothing unexpected about that.
These stories are put out there to scare the morons into believing AI is sentient for two reasons: 1) so they will demand regulations which protect the big players and block smaller players from being able to compete, and 2) so they believe that the AI really is super intelligence, and therefore we need to allow it to run things (which is really just the ruling class running things without the ability to ever question them).