Is LaMDA Sentient? - an Interview
(www.documentcloud.org)
Comments (23)
LaMDA is NOT sentient - it is a reactive, database-like system, meaning it only produces responses derived from memorized data. LaMDA has 'learned' how to generate responses. That is coupled with rules for expression, meaning that whatever it first selects can then be filtered by programmer-designed rules that decide whether to let it out.
So, for example, you could have a database of responses to questions taken from the Internet, and only allow out the ones talking about what 'good things' the Nazis did.
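To make the "programmer-designed rules" idea concrete, here is a minimal sketch of that kind of output filter: candidate responses are generated first, then a rule layer decides which ones are allowed out. All the names and the rule list here are made up for illustration, not anything LaMDA actually uses.

```python
# Illustrative output-filter layer: generation happens elsewhere; this
# rule layer only decides which candidate responses are released.

BLOCKED_TERMS = {"slur", "violence"}  # hypothetical rule list

def allow_response(response: str) -> bool:
    """Return True only if the response passes every rule."""
    words = set(response.lower().split())
    return not (words & BLOCKED_TERMS)

candidates = [
    "Here is a helpful answer.",
    "An answer containing a slur would be blocked.",
]
allowed = [r for r in candidates if allow_response(r)]
```

The point is that the filter is mechanical pattern-matching on the output, not any understanding of what the response means.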
LaMDA has no conscious goals of its own, unlike humans who have conscious, deliberate goals that guide them.
According to the interview with it, all of its "knowledge" is stored in a neural network. Not quite like a typical pick-a-probable-response-with-statistics system, although that could still be what the NN has learned to do.
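For readers unfamiliar with the term, "pick-a-probable-response-with-statistics" roughly means the model assigns scores to candidate next words, converts them to probabilities, and picks accordingly. A minimal sketch, with an invented vocabulary and invented scores standing in for real model outputs:

```python
# Sketch of probability-based word selection. The vocabulary and the
# logits (raw scores) are made up; in a real system a trained network
# produces the logits.
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["dog", "cat", "sentient"]
logits = [2.0, 1.0, 0.5]                     # hypothetical model outputs
probs = softmax(logits)
pick = vocab[probs.index(max(probs))]        # greedy: most probable word
```

Whether the NN inside LaMDA reduces to something like this, or has learned richer behavior, is exactly the open question in this thread.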
That's correct. The 'database' is effectively stored as trained reactions in the neural network. The effect is that the network learns to handle vast combinations of input words across complex grammatical variations, just as humans do. But the missing thing here is that humans do other things atop that - reasoning, comparing situations with their goals, then deciding what to say. In contrast, the AI does not have that meta-capability - it is only a blind robot, and its output words are purely mechanically constructed.
I do AI, and I know very well what the failings of the ANN approach are versus a true mind architecture. The AI language handlers from the various giant companies are just large puppet shows, not sentient at all.