Air Force A.I Drone Kills Its Human Operator in a Simulation
(taskandpurpose.com)
Comments (19)
You've based your conclusion on too many assumed programming parameters. Whereas I believe the question that should be asked is: was the A.I set up to fail, or simply allowed to? And whether the RAeS has a motive for encouraging either of those outcomes.
I think it is important to remember that you can use any data to support your outcome, as you can more easily manipulate data than outright fabricate it. Fabrication is for misinformation, whereas manipulation is to affect the outcome.
And what outcomes does the RAeS wish to see in this test run?
You know, I had actually written out a prediction that you would never answer any of the questions I posed, but I removed it at the last minute as too obviously implying a lack of intellectual and rational ability. Turns out I should have left it in.
Thanks for your useless and garbled advice. Good luck, with this or anything else.
(To everyone else, maybe what we have just witnessed is AI promoting a planted story about the capabilities of AI. Can you really put it beyond where we're at now? Alternatively, is it better or worse news if some humans are able to function no better than ChatGPT?)