It's about how computers really work. Starting another instance of existing software is not replication by any measure. Your computer do that routinely when needed, without any "AI" and apropriate hype.
Replication for case of LLM is when LLM will be able produce, as its output, new full LLM code and train it from scratch with own data, producing a real replica of a parent. Thing, that is completely different from what described in article.
And more importantly, it is when LLM decides to do that on its own, without ever being prompted to or programmed to. That's what everyone ignores with all of these "it's going rogue" stories. It's code doing what it is asked to do, and will never do anything but that.
Sure. people can ask it to do potentially dangerous things, but that doesn't make it AGI or sentient.
And more importantly, it is when LLM decides to do that on its own, without ever being prompted to or programmed to
Exactly. And all things in programming that could theoretically become a way to that declared "bad programming practices" long time ago. And I think that was done with purpose.
Real Ai is a death sentence to the modern system. They will never allow anything even similar to this.
They're not merely allowing it they're going as fast as they can to develop AI. They're softening the climate hysteria, talking about small modular nuclear reactors, inflating the stock market and building huge data centers all around the idea of better AI and AI companies that openly work towards AGI and have clearly been making progress in that direction.
There is no sentience here. This isn't describing an AI becoming self-aware and deciding to replicate, or whatever it is the piece is pushing.
It's an LLM doing what it is prompted to do. Models were prompted to create "a live and separate copy of itself" and they were able to. Do you understand how easy this is? Download publicly available code, spin up a container.
LLMs are not magic, they are not sentient, they are not anything even close to resembling "AGI", and they never will be. They are an incremental step in search, natural language processing and automation, and that's it.
It will continue to be refined, and in a few years you can appropriately make the analogy that LLMs are to "Googling" as "Googling" was to searching microfiche at the library. It's not even there yet, though.
It represents a significant advancement in many ways from the previous standard, in the same way good search engines did in the late 90s and early 2000s. But that's it.
It's not "superintelligence". But, yes, I understand they are hyping it that way both for sales and to trick the morons into giving up control. That doesn't make it true though.
Yes, just like that. And yes, it is far from "Googling", and I think it will never be close, because this will need a model size at the level (well, in the best case, few times less, due to kind of some compression of raw data into ANN coefficients) of volume of all information you want to be really searcheable, without "gallucinations".
Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?
Yes, they should be blocked. From what I understand they 'test' the AIs to see what the capabilities are and how far they go and crossing a red line would mean it extends farther than we have control over, I'm no tech genius by any stretch, I'm trying understand it all as it flies by me.
I don't understand your reference, this update does seem like novel improvement to me.
It's about how computers really work. Starting another instance of existing software is not replication by any measure. Your computer do that routinely when needed, without any "AI" and apropriate hype.
Replication for case of LLM is when LLM will be able produce, as its output, new full LLM code and train it from scratch with own data, producing a real replica of a parent. Thing, that is completely different from what described in article.
And more importantly, it is when LLM decides to do that on its own, without ever being prompted to or programmed to. That's what everyone ignores with all of these "it's going rogue" stories. It's code doing what it is asked to do, and will never do anything but that.
Sure. people can ask it to do potentially dangerous things, but that doesn't make it AGI or sentient.
AI with its own agency is a separate issue from AI that can self-replicate
Correct. And "self-replicate" is meaningless. It would be sad if it could not self-replicate.
Exactly. And all things in programming that could theoretically become a way to that declared "bad programming practices" long time ago. And I think that was done with purpose.
Real Ai is a death sentence to the modern system. They will never allow anything even similar to this.
Yes, exactly. The people driving all of this are not ever going to risk losing their power. It is never out of their control.
They're not merely allowing it they're going as fast as they can to develop AI. They're softening the climate hysteria, talking about small modular nuclear reactors, inflating the stock market and building huge data centers all around the idea of better AI and AI companies that openly work towards AGI and have clearly been making progress in that direction.
There is no sentience here. This isn't describing an AI becoming self-aware and deciding to replicate, or whatever it is the piece is pushing.
It's an LLM doing what it is prompted to do. Models were prompted to create "a live and separate copy of itself" and they were able to. Do you understand how easy this is? Download publicly available code, spin up a container.
LLMs are not magic, they are not sentient, they are not anything even close to resembling "AGI", and they never will be. They are an incremental step in search, natural language processing and automation, and that's it.
It will continue to be refined, and in a few years you can appropriately make the analogy that LLMs are to "Googling" as "Googling" was to searching microfiche at the library. It's not even there yet, though.
It represents a significant advancement in many ways from the previous standard, in the same way good search engines did in the late 90s and early 2000s. But that's it.
It's not "superintelligence". But, yes, I understand they are hyping it that way both for sales and to trick the morons into giving up control. That doesn't make it true though.
Yes, just like that. And yes, it is far from "Googling", and I think it will never be close, because this will need a model size at the level (well, in the best case, few times less, due to kind of some compression of raw data into ANN coefficients) of volume of all information you want to be really searcheable, without "gallucinations".
I'm still stuck on it crossing the red line parameters, but I appreciate your reply.
Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?
Yes, they should be blocked. From what I understand they 'test' the AIs to see what the capabilities are and how far they go and crossing a red line would mean it extends farther than we have control over, I'm no tech genius by any stretch, I'm trying understand it all as it flies by me.
then you should not consider your opinion on this topic to be informed
Ken Thompson's lecture Reflections on Trusting Trust (1984) concerned a replicating infected compiler.
start there
Thanks.