I've seen lots of articles popping up about ChatGPT, its use in Bing, Google talking about militarizing AI, etc.
AI == training a model and using it within a product. This product is optimized for some sort of utility. The model itself is optimized for some loss metric. The data used for the base model was likely selected intentionally. Others will do another training run on additional data to bias it toward their utility.
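To make that two-phase pattern concrete, here's a toy sketch in Python: a "base model" fit to one dataset under its own loss metric, then a second training run on someone else's data that pulls it toward their utility. The model, data, and numbers are all made up for illustration.

```python
# Toy sketch: base training on selected data, then a second run on extra data
# that biases the model toward whoever ran that second phase.

def train(weight, data, lr=0.05, steps=200):
    """Minimize squared error of a one-parameter model y = weight * x."""
    for _ in range(steps):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

base_data = [(1.0, 1.0), (2.0, 2.1), (3.0, 2.9)]    # "intentionally selected" base corpus
extra_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # additional data with a different slant

w = train(0.0, base_data)      # base model, optimized for its loss metric
print("base model weight:", round(w, 3))

w = train(w, extra_data)       # second training run biases it toward the new utility
print("after extra training:", round(w, 3))
```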
AI in militarized applications is scary. Regular schmos think AI is more advanced than it actually is, and they also think it needs to be super advanced to fuck shit up. They're wrong on both counts.
Black Mirror did an episode about a rogue dog-like machine that basically killed everyone. Not super advanced.
Side note, I had a vivid dream of farmers getting wasted by drone AI weapons about 2 years ago.
AI Rule No. 1: garbage in, garbage out.
And BFH to the sensors. From the side.
A useful way of thinking about it is that "training a model" means "building a statistic." A system like ChatGPT is in essence an enormous statistical summary of which words followed which words across the 35 TB of text it was trained on.
It doesn't plan or learn or understand, and when it appears to be doing that in response to what people write, it is an illusion based on loosely-similar sequences of words in the training texts.
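Here's a toy version of that "statistic of which words followed which words" in Python. A real model like GPT learns a vastly more compressed, generalized form of this rather than a literal lookup table, but the framing is the same: count what followed what, then sample from the counts. The tiny corpus is a stand-in for the terabytes of text mentioned above.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# The "statistic": for each word, how often each next word followed it.
followers = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(word, length=6):
    """Continue from `word` by sampling next words in proportion to their counts."""
    out = [word]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```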
GPT will fundamentally never scale past that. But that doesn't mean other algorithms won't.
Let's hope you're right.