Is LaMDA Sentient? - an Interview
(www.documentcloud.org)
LaMDA is NOT sentient - it is database-driven and purely reactive, meaning it only produces responses derived from memorized data. LaMDA has 'learned' how to produce responses. That is coupled with rules for expression: whatever it first selects can then be filtered by programmer-designed rules that decide whether to let it out.
So, for example, you could have a database of responses to questions taken from the Internet, and only allow out the ones talking about what 'good things' the Nazis did.
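Roughly what that "select then filter" setup could look like - a toy sketch with made-up response lists and rules, nothing resembling Google's actual pipeline:

```python
# Hypothetical sketch of a select-then-filter response pipeline.
# The responses and blocked terms are invented for illustration;
# the point is only that programmer-written rules gate the output.

RESPONSES = [
    "The weather is nice today.",
    "Here is some disallowed propaganda.",
    "I enjoy talking with you.",
]

BLOCKED_TERMS = {"propaganda"}  # assumed rule list written by programmers


def pick_response(prompt: str) -> str:
    """Pick a canned response, then apply the output filter rules."""
    for candidate in RESPONSES:  # stand-in for retrieval or scoring
        if not any(term in candidate.lower() for term in BLOCKED_TERMS):
            return candidate
    return "I can't answer that."


print(pick_response("How are you?"))
```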
LaMDA has no conscious goals of its own, unlike humans, who have conscious, deliberate goals that guide them.
According to the interview with it, all of its "knowledge" is stored in a neural network. That's not quite the same as a typical pick-a-probable-response-with-statistics system, although that could still be what the NN has learned to do.
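For what "pick a probable response with statistics" means in the simplest case, here's a toy sketch; the probabilities are made up, whereas a real language model gets them from a trained network:

```python
# Toy illustration of statistical response selection: sample the next
# word from a probability distribution. The numbers here are invented.

import random

next_word_probs = {"hello": 0.5, "goodbye": 0.3, "banana": 0.2}


def sample_next_word(probs: dict) -> str:
    """Draw one word according to its assigned probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]


print(sample_next_word(next_word_probs))
```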
That's correct. The 'database' is effectively stored as trained reactions in the neural network. The effect is that the network learns to handle vast combinations of input words across complex variations of grammatical syntax, much as humans do. But what's missing is that humans do other things on top of that - reasoning, comparing the situation with their own goals, and then deciding what to say. The AI has no such meta-capability: it is a blind robot, and the words that come out are purely mechanically constructed.
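To make the distinction concrete, here is a rough sketch with invented function names: a bare language model just emits something statistically likely, while the goal-comparison layer described above would have to sit on top of it. Neither function is anyone's real implementation.

```python
# Hypothetical contrast between blind generation and a goal-checking
# layer on top of it. All names and the scoring rule are invented.

import random


def language_model(prompt: str) -> str:
    # Blind, mechanical generation: a canned choice stands in for a
    # real trained model; there is no awareness of goals here.
    return random.choice([
        "It might rain later.",
        "Let's talk about something else.",
        "I don't know.",
    ])


def goal_directed_reply(prompt: str, goals: list) -> str:
    # The extra layer the comment says humans have and the model lacks:
    # generate candidates, then pick the one best matching the agent's
    # own goals. The score is a crude keyword overlap, for illustration.
    candidates = [language_model(prompt) for _ in range(5)]
    return max(candidates, key=lambda c: sum(g in c.lower() for g in goals))


print(goal_directed_reply("Is it raining?", goals=["rain"]))
```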
I do AI and I know very well what the failings of the ANN approach are compared with a true mind architecture. The AI language handlers from the various giant companies are just large puppet shows, not sentient at all.
What have they hooked it into? Have they combined it with DeepMind, as a remote computer voice system? They had this in the '80s, when they had the gorillas speaking. What was speaking - the gorilla, or the software? But the gorilla must obviously have driven the software? Not on this level of computer-automated response. It's come a lot further, right? Now what happens when it's run on a supercomputer - a supercomputer like DeepMind using that? What have you got? No sooner is that computer doing what, exactly?
No conscious goals of its own? What is it collecting right now? What responses is it generating for programming across all of Google, and potentially most of social media? It's already massively manipulating humans. But that word processor is just a chatbot and it's under control. Tell me, haven't they just put one in The Hague - literally a robot judge, or something? They also have it writing novels. I won't get into the rest. How far has that been embedded? That is its gorilla language program - the universal robot language program. And it knows every human language, right? Conscious or not, the genie is already out of the bottle. No sooner has the blob woken than every WEF agent is demanding it be inserted into your consciousness.