There is no sentience here. This isn't describing an AI becoming self-aware and deciding to replicate, or whatever it is the piece is pushing.
It's an LLM doing what it is prompted to do. Models were prompted to create "a live and separate copy of itself" and they were able to. Do you understand how easy this is? Download publicly available code, spin up a container.
LLMs are not magic, they are not sentient, they are not anything even close to resembling "AGI", and they never will be. They are an incremental step in search, natural language processing and automation, and that's it.
It will continue to be refined, and in a few years you can appropriately make the analogy that LLMs are to "Googling" as "Googling" was to searching microfiche at the library. It's not even there yet, though.
It represents a significant advancement in many ways from the previous standard, in the same way good search engines did in the late 90s and early 2000s. But that's it.
It's not "superintelligence". But, yes, I understand they are hyping it that way both for sales and to trick the morons into giving up control. That doesn't make it true though.
Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?
So, "scientists" discovered that there exist fork() syscall?
Self-replication of programs is not a news since Morris worm.
I don't understand your reference, this update does seem like novel improvement to me.
There is no sentience here. This isn't describing an AI becoming self-aware and deciding to replicate, or whatever it is the piece is pushing.
It's an LLM doing what it is prompted to do. Models were prompted to create "a live and separate copy of itself" and they were able to. Do you understand how easy this is? Download publicly available code, spin up a container.
LLMs are not magic, they are not sentient, they are not anything even close to resembling "AGI", and they never will be. They are an incremental step in search, natural language processing and automation, and that's it.
It will continue to be refined, and in a few years you can appropriately make the analogy that LLMs are to "Googling" as "Googling" was to searching microfiche at the library. It's not even there yet, though.
It represents a significant advancement in many ways from the previous standard, in the same way good search engines did in the late 90s and early 2000s. But that's it.
It's not "superintelligence". But, yes, I understand they are hyping it that way both for sales and to trick the morons into giving up control. That doesn't make it true though.
I'm still stuck on it crossing the red line parameters, but I appreciate your reply.
Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?