So, "scientists" discovered that the fork() syscall exists?
Self-replication of programs has not been news since the Morris worm.
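The fork() point is easy to illustrate. A minimal sketch (assuming a POSIX system, where Python's `os.fork` wraps the fork(2) syscall) showing that duplicating a running process is a routine, one-call OS facility:

```python
import os

def fork_once():
    # os.fork() wraps POSIX fork(2): the child is a copy of the parent process.
    pid = os.fork()
    if pid == 0:
        # Child: an exact duplicate of the parent's memory image.
        # It exits immediately; no "AI" was involved in the copying.
        os._exit(0)
    # Parent: reap the child and report its exit status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print(fork_once())  # → 0: the copy ran and exited cleanly
```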
I don't understand your reference; this does seem like a novel improvement to me.
It's about how computers really work. Starting another instance of existing software is not replication by any measure. Your computer does that routinely when needed, without any "AI" and the appropriate hype.
Replication, in the case of an LLM, would mean the LLM producing, as its output, the complete code for a new LLM and training it from scratch on its own data, yielding a true replica of the parent. That is something completely different from what the article describes.
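The "starting another instance" point above can be sketched too. A minimal, hypothetical illustration: a program launching a fresh copy of existing software via the standard process-spawning API, which is all that "an agent copying itself" amounts to:

```python
import subprocess
import sys

# Launch a second interpreter process running a trivial command --
# the same mechanism any "self-copying" agent script would use.
result = subprocess.run(
    [sys.executable, "-c", "print('another instance')"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # → another instance
```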
And more importantly, it would be the LLM deciding to do that on its own, without ever being prompted or programmed to. That's what everyone ignores in all these "it's going rogue" stories. It's code doing what it was asked to do, and it will never do anything but that.
Sure, people can ask it to do potentially dangerous things, but that doesn't make it AGI or sentient.
Exactly. And everything in programming that could theoretically become a path to that was declared a "bad programming practice" long ago. I think that was done on purpose.
Real AI would be a death sentence for the modern system. They will never allow anything even close to it.