It could probably be used effectively for psychological manipulation as well. My issue with AI is this: after it's been around for 60 years or so, will it view a situation with compassion and mercy? Or will it become cold and callous? Old people go different ways, and I can only imagine AI would follow that trend.
I believe you're anthropomorphizing the tool. It is not capable of those things. It will appear to have whatever level of compassion and mercy that it is trained or optimized to have.
While I will agree that it can be primed in a particular direction with established rules, remember that many children who have been raised in church with high moral values grow up to be monsters. In my opinion, the reason they go off the rails is that what they're taught and what they see are two different things. Eventually someone will write a machine learning program with what is perceived to be free will. When that happens, it could go either way. Eventually someone will couple that program to something robotic that can carry out tasks within its ability. The question is how much time before it happens. Hundreds of years? Or perhaps only 50?
Dr. Robert Duncan MIT Presentation Preview: Neuroweapons used on American Civilians & Targeted Individuals https://odysee.com/@ed_rugebregt:6/Dr-Robert-Duncan-Full-MIT-Presentation-on-Neuroweapons-used-on-American-Civilians-360p:2
The Matrix Deciphered - Dr. Robert Duncan (2010) https://archive.org/details/TheMatrixDeciphered
Will watch later, thanks.
Off-the-cuff thoughts: AI sucks at doing actual work but is great at things like assigning scores to people.