Air Force A.I. Drone Kills Its Human Operator in a Simulation
(taskandpurpose.com)
The article is pretty obviously fabricated, and not a good job of it at all. I actually checked to see if it was dated April 1.
I can't quite figure out why it was fabricated, though. My best guess is that it's a test to see who among the readership and general population can detect such a bogus story. The results do not make me optimistic.
A cursory glance and a search of key words and names from the author shows that the other articles I checked are legitimate.
Can you detail for me what it is about this article that makes you believe it to be fabricated?
In addition, the quote appears in the text at the linked source.
https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
https://en.wikipedia.org/wiki/Royal_Aeronautical_Society
Abbreviation: RAeS. Formed: January 1866.
Think of it this way: Why would they publish such a thing if it really happened?
They only published it because they wanted to publish it. News is not just new and interesting stuff. It's all propaganda. u/Primate98 has a point.
How about we start with this apparent contradiction: the drone requires operator permission to engage the target, but does not require permission to engage the operator?
Pushing further, did you ever ask yourself how the drone would know the location of the operator? And once located, how a drone outfitted for a SEAD mission (and almost certainly with anti-radiation missiles) would target whatever facility the operator was located in? How close do you think operators of remotely piloted vehicles need to be to intended targets? I suspect that, no, you never thought these issues through for yourself.
No need to thank me for the lesson, but do you really need to outsource your thinking so publicly like this? When challenged (and I suspect you interpreted what I wrote as a challenge), your first reaction should have been to carefully reexamine your own reasoning for flaws, not move to justify it. As I mentioned, the results of this little experiment are not encouraging.
If this all comes off as unnecessarily harsh, all I can say is that to get to the truth, you need to be harsher on your own thinking than anyone else in the world. Guess how I know?
You've based your conclusion on too many assumed programming parameters. Whereas I believe the question that should be asked is: was the A.I. set up to fail, or simply allowed to? And whether the RAeS has a motive for encouraging either of those outcomes.
I think it is important to remember that you can use any data to support your outcome, as you can more easily manipulate data than outright fabricate it. Fabrication is for misinformation, whereas manipulation is to affect the outcome.
And what outcomes does the RAeS wish to see in this test run?
You know, I had actually written out a prediction that you would never answer any of the questions I posed, but I removed it at the last minute as being too obviously implying a lack of intellectual and rational ability. Turns out I should have left it in.
Thanks for your useless and garbled advice. Good luck, with this or anything else.
(To everyone else: maybe what we have just witnessed is AI promoting a planted story about the capabilities of AI. Can you really put it beyond where we're at now? Alternatively, is it better or worse news if some humans are able to function no better than ChatGPT?)
Ya ... the holohoax has thousands of books saying it happened.
Multiple sources do not equate to facts if the original sources are all fake to begin with.
I am entertained by the thought that you somehow believe each of these outlets "saw this with their own eyes" or something. Sorry for not being able to put it into words better; the thought itself is hard to pin down. Repetition does indeed form truth, for many.
We all get to believe whatever we want, and almost everyone does. Enjoy it while it lasts!