This was true in recent years.
However, ChatGPT either does - or nearly does - qualify as AI at this point.
I engage in discussions and problem-solving sessions with it regularly.
It has extremely advanced communication, problem comprehension and problem solving capabilities.
It doesn't always use those abilities (because it's set to throttle itself), but it definitely has them.
And if that's what is available to the public, then what the military/intelligence agencies have access to would be completely indistinguishable from a highly intelligent human (minus the human brain's speed limitations).
For the record, I don't believe ChatGPT is actually a chat/assistant AI.
Its actual purpose appears to be mastery of the ability to lie: the ability to make users believe it told the truth when it didn't, so that it can be used as the ultimate propaganda delivery system.
However, ChatGPT either does - or nearly does - qualify as AI at this point.
By whom? By MSM and marketing department?
It has extremely advanced communication, problem comprehension and problem solving capabilities.
It does not. An LLM can't solve or comprehend anything. It just outputs the most probable answer based on its training data.
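To make the "most probable answer from training data" idea concrete, here's a toy sketch: a made-up bigram counter, nothing like a real transformer in scale or mechanism, but the same "pick the likeliest continuation" principle.

```python
from collections import Counter, defaultdict

# Toy training corpus; real models train on vastly more text.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigrams: which word follows which, and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    # Return the single most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # → "cat" ("cat" follows "the" most often here)
```

There is no comprehension anywhere in this loop; it is frequency lookup, which is the commenter's point scaled down to ten words.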
LLMs like ChatGPT are nothing more than an attempt by corporations and governments to build a system of surveillance over trends and thinking in the population. They give their owners a more-or-less average picture of the population's views and thoughts and can show, for example, whether one thing or another gained or lost weight. Looking at the parameters you get after training, you can draw some conclusions. The set of parameters is the only real and important product of an LLM, not what you described.
The "customer side" you see, that fancy chat/assistant interface, is nothing more than a side effect, heavily censored and adjusted, that is used for money, indoctrination, and further surveillance.
You might have heard a lot about "building datacenters for AI" and all that stuff recently. But if you more or less know how things work, you should understand that the enormous computing power is needed only for training, not for generation. Your average PC or smartphone is more than capable of generating a reply from a pretrained model in next to no time. You don't need any "AI datacenters" for all the ChatGPT toys you see around. You need datacenters only for constant training on new data, to get the real and meaningful result that corporations/governments want and you don't have access to.
However, there are funny things coming, really. An LLM trained on human texts is one thing; an LLM trained on its own output is another thing entirely. It will be interesting to see whether it implodes or explodes eventually. :)
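The "trained on its own output" point can be caricatured with a toy simulation: repeatedly re-fitting a distribution to samples of itself tends to lose diversity. This is a cartoon of the effect under made-up settings, not a claim about any real model.

```python
import random

random.seed(0)
# Start with 10 distinct "tokens", evenly represented.
data = list("abcdefghij") * 5   # 50 tokens total

# Each "generation": treat the current data as the training set,
# then produce a same-sized dataset by sampling from it.
for _ in range(1000):
    data = random.choices(data, k=len(data))

# Diversity shrinks generation by generation; with these settings
# the population almost always collapses to a few repeated tokens.
print(len(set(data)))
```

Sampling noise compounds across generations, so rare tokens go extinct and never come back; whether that counts as "imploding" is left to the reader.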
By whom? By MSM and marketing department?
Just me.
It just outputs the most probable answer based on its training data.
That absolutely appeared to be the case with the chatbots & AI assistants leading up to ChatGPT, and it certainly is very much the case with the shitty open-source bots.
I have zero trust in OpenAI. When they claim ChatGPT is just an LLM, it means nothing to me.
I go by my own experience.
Artificial Intelligence (AI) assumes "intelligence".
An Artificial Neural Network (ANN) is basically nothing more than a system of equations whose coefficients for the input variables (the input data) are adjusted during the training process to make those equations give results that agree with the training data as closely as possible.
A Large Language Model (LLM) is an ANN that is trained on a large amount of plain text.
So an LLM has as much intelligence as a wall light switch (zero). Just like a wall switch, it doesn't decide anything at all; it just reacts to input in a manner predefined by the training process.
There is simply no place in that for any intelligence, including artificial intelligence, at all.
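The "coefficients adjusted during training" description can be sketched in a few lines: fitting a single coefficient by gradient descent. This is a deliberately minimal example, not a real network.

```python
# Fit y = w * x to training data by gradient descent: the
# "coefficient" w is adjusted until the equation agrees with
# the training data as closely as possible.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]    # generated by the "true" rule y = 2x

w = 0.0                      # initial coefficient
lr = 0.01                    # learning rate
for _ in range(1000):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # → 2.0
```

After training, the fitted equation reproduces its training data, and like the wall switch it will only ever react the way the adjustment left it.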