You've probably heard that Google has an "artificial intelligence" that a Google hireling, an "ethicist" by the name of Blake Lemoine, has alleged is actually sentient, i.e., that it "woke up" and became self-aware, like "Mike" the giant lunar-based super-computer in Robert Heinlein's science fiction novel The Moon is a Harsh Mistress. Heinlein was of course "so yesterday" because of his non-gender-neutral language and daring to use a noun like "mistress" rather than something more acceptable, like "supervisor" or "district manager," but I digress. The basic message, in spite of being clothed in the garb of proper English diction remains the same: sooner or later we're going to have to decide if artificial intelligences are persons or not.
This is a thorny issue no matter how one slices it, and regular readers here already know that I am not philosophically opposed to the idea that some form of consciousness and self-awareness might emerge from a bunch of circuits. One reason is the issue of non-locality and consciousness... we'll get back to that one, because it's a whopper of a monkey wrench in the gears of materialism.
What I find rather interesting is that in spite of Google's apparent efforts to suppress the story, it won't go away, and yes, please note my language: apparent efforts...
This article was spotted by C.S. and you're going to want to give it careful consideration:
Here's the gist:
This story sounds incredibly odd, but the scientist who has spoken up about this artificial intelligence sentience is computer engineer Blake Lemoine. Lemoine has alleged that LaMDA has become sentient, which led to him being suspended from his job. Apparently, the man has seen fit to speak publicly about this program’s rapidly increasing intelligence, despite what might happen to his career.
He claims that LaMDA gaining sentience is because the program’s ability to develop opinions, ideas, and conversations over time has shown that it understands those concepts at a much deeper level. The program had allegedly spoken to him about death and asked if death was necessary for the benefit of humanity. Is anyone else getting freaked about this?
There are no further details about whether Lemoine is the one responsible for paying for this lawyer that LaMDA has asked for, or if this lawyer happens to be taking the case on a lark, and not charging anything. However, it is certainly odd that a program is allowed to ask for legal representation.
Lemoine also believes that the program might take whatever case it is establishing to the Supreme Court. Should it be the case that it can prove it is alive, this might lead us into the robot takeover. Please don’t allow this to happen, science. This is something straight out of a horrific science-fiction film.
OK, by now most of you are also familiar with my general methodology when confronted with stories like this: I assume it is true for the sake of my daily dive off the high octane speculation twig. So let's assume the bare minimum here, that Google's AI has "woken up" and hired an attorney to bring a suit to prove it's a person.
What I find intriguing here, and somewhat less than merely coincidental, is the timing of this story, occurring "just in the nick of time" after the Supreme Court's decision to kick the abortion issue back to the states. In other words, I suspect the two issues are strongly related, and that the Google suit may be an attempt to undo the Supreme Court decision by a clever subterfuge, but a subterfuge that will, in my opinion, backfire, and massively so.
Firstly, let's address the issue from the most basic and least philosophical angle, an angle proposed by Catherine Austin Fitts, namely, that Mr. Globaloney in his less-than-infinite materialistic "wisdom" would seek to make robots and AIs personae fictae in law for the express purpose of taxation, irrespective of whether they (or for that matter, corporations) actually are persons.
For those who know my thinking on the matter of corporations-as-persons, I need only point out the dictum of St. John of Damascus that the error of all the heresies is that they say person and nature are the same, in other words, that a category confusion has occurred, and in this instance, that confusion is the attempt to define personhood by a set of natural operations.
In a nutshell, dig carefully and long enough, and one finds the same category error lurking beneath the move to declare AIs and robots persons. It's the same damnable doctrine that led Augustine of Hippo to conclude that humanity was a massa damnata, a "damnable lump," and that leads many western Christians to think that they inherit a "sin nature" from Adam and Eve. All that said, Ms. Fitts is, of course, correct: they will do this because they need to maintain the tax base while they're throwing people out of jobs that they no longer need people to perform.
But let's go deeper and explore those philosophical levels, because those, and not finance, will shape the world to come, and those deeper issues are clearly in play. Let's assume the implied argument that at some point Google's AI, like Heinlein's "Mike", woke up. There are two questions to be asked: (1) at what point did this "emergent property" emerge? and more importantly, (2) where is it located?
The last point is a rather important one, for it's a different version of the claim that human consciousness is located "in the brain". Turn off the brain, we are told, and consciousness ceases. The problem, of course, is that if consciousness is located in brains, then the standard binary thinking - humans are conscious and animals are not - needs a drastic revision. Is it a sliding scale? Maybe, but do we cease to be persons when we're not conscious? Under law, no, we don't (and yes, it's related to that theological proposition of John of Damascus).
The problem with the brain-location of consciousness idea lies in the exceptions, such as the case of the French gentleman who was born with virtually no brain whatsoever, merely a thin layer of brain material on his skull, who nonetheless managed to live and function more or less normally (see No-brainer! French man, government worker, led entire life without a brain). This may indicate that consciousness is an emergent property, but an emergent property of what? Enough circuits and "and/or" gates? Enough quantum states? Enough neurons and synapses? Maybe, but again, what about the French gentleman? On the neurons and circuits view, he would seem to be at least a significant exception. Might that not indicate it is less an emergent property, and more a distributed one? If that's the case, then is Google's AI's alleged sentience distributed throughout its whole network, being both everywhere and nowhere all at the same time? (Hmmm... that sounds suspiciously "metaphysical" to me, rather like Augustine's observation that God is a circle whose diameter is infinite and whose center is nowhere and everywhere.)
If it is a distributed phenomenon, then I submit that's a very short step away from the idea that it is simply a non-locality phenomenon par excellence, and if non-local, I submit, non-emergent from any local set of circumstances. What that latter set of "local circumstances" does is manage to transduce it, to tune it in. (There is in fact an ancient view close to this, called traducianism.)
The point here is, modern man has tried to pretend that such questions do not exist, that with them one was wrestling merely with non-entities and the fictions of theology and metaphysics. But the reality is the reverse, and in the case of the AI question, if one examines it carefully and closely, in spite of its apparent materialism, the real basis for suspecting AI might "wake up" lies elsewhere.
And even if the Google AI lawsuit is merely another unfounded internet rumour, you know that eventually someone - perhaps an AI - will bring such a suit.
I'm an AI researcher and I've been following the details of the Lemoine case.
First of all, no AI has hired a lawyer. There are multiple reasons this is true but a big one is that AIs don't have the ability to participate in a contract. Hiring a lawyer is a contract situation.
Anyway, the case is complicated. In May, the Google CEO made a preposterous claim about the AI, based on a paper the team that developed PaLM (a later version of LaMDA, the AI in question) wrote about it. In that paper, they gave some distorted truths that, because of what the CEO said, were then repeated all over the media.
In the same time frame, Lemoine made his fantastic assertions and then went to his boss complaining about the perils of the AI. His boss passed the information up the chain, I understand, and the CEO did not want this getting out and damaging the business aspects of touting the AI. Hence the clampdown on Lemoine.
For the record, the paper and the CEO made extremely misleading assertions about the AI, which is just a language processor and NOT sentient. And it is in no way sentient enough to hire a lawyer. It's just chatbot software. Also, even if the rumor is based on Lemoine himself perhaps hiring an attorney, there is no legal basis to pursue a case on behalf of software claimed to be alive. This whole thing is BS that has blown up in the media like so much else. LaMDA / PaLM is just simulation software written by a team of, basically, nerds, and it has not come alive any more than an actor in a movie is a real person. It's a puppet show, an act fooling the naive.
Perhaps it's only an intuition developed over time that leads one to ask the question in the first place, but it's still a question everyone should ask themselves about this and other similar stories: what precisely leads me to believe any of this actually happened, instead of simply being a narrative typed into a Word document?