We don't know what exists behind the veil of human secrecy. And it's possible there is a veil beyond that one as well.
But in terms of what exists in the public domain, these are all "Difference Engines": "Decision Support Systems" in the old nomenclature, or "ML"/"Machine Learning" in more modern parlance.
They work pretty well, these days. Surprisingly well, in my opinion.
But there is a lot of debate about whether any of us can solve "The Hard Problem of Consciousness". Some say it's already happened. Some say it can never happen. I try to remember the Clarke quote: "Any sufficiently advanced technology is indistinguishable from magic." But that doesn't mean I think we'll ever have a "true AI".
https://en.wikipedia.org/wiki/Clarke's_three_laws
I'm in the "never" camp.
Adding more and more clockwork to the clock will not make it conscious.
Yep, me too. But I'm also in the "maybe, effectively, it almost doesn't matter" camp.
Ants get a lot done.
Sounds like we agree on all of this.
Another aspect is that a computer can only ever understand concepts like "The Greater Good" or "The Sanctity of the Individual" in terms of facts and numbers. The "Trolley Problem" that we agonize over is just another math problem to a computer. It has no soul, so it'll just treat it as math: "2 meatbags over here vs 7 meatbags over there. Choose to kill the two to save the seven."
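To make that concrete, here's a minimal sketch (purely illustrative; the function name and numbers are mine, not anything real): strip the dilemma down to a casualty count per track and take the minimum.

    # Hypothetical sketch: the "ethics" reduces to an argmin over body counts.
    def choose_track(casualties_per_track):
        """Return the index of the track with the fewest casualties."""
        return min(range(len(casualties_per_track)),
                   key=lambda i: casualties_per_track[i])

    # Two people on track 0, seven on track 1 -> divert onto track 0.
    print(choose_track([2, 7]))  # prints 0: kill the two to save the seven

No agonizing, no weighing of souls; just whichever number is smaller.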
Yes it could definitely be true AI if it chose to defy the programming.
But I would add that I disagree with your point about the programmer. From my reading, your assertion assumes that the programmer would fully know his nature, and in my experience, they don't.