AI does not exist. Every avenue in computer science that could eventually lead to the creation of AI was declared "unsafe," "wrong," and "inappropriate," and was completely ostracized back in the 90s.
So, when you see or hear anything with "AI" in it, it is 100% bullshit.
The term "AI" today is nothing more than a cover for "unnamed, unelected persons who make entirely deliberate decisions."
It has been plainly evident that when the tech peaked, they rebranded basic advances in ordinary algorithmic processes as "AI" to sell a product and to shift blame.
We had chat bots... now it's "AI."
We had image generation... now it's "AI."
It was a rebrand that will prove far more manipulative when it comes to data sharing.
Interestingly, 90s chatbots were more advanced in terms of "intelligence" than any modern LLM. Some of those antiques even attempted categorization, connections, and context.
This was true in recent years.
However, ChatGPT either does - or nearly does - qualify as AI at this point.
I engage in discussions and problem-solving sessions with it regularly.
It has extremely advanced communication, problem comprehension and problem solving capabilities.
It doesn't always use those abilities (because it's set to throttle itself), but it definitely has them.
And if that's what is available to the public, then whatever the military/intelligence agencies have access to would be completely indistinguishable from a highly intelligent human (minus the human brain's speed limitations).
For the record, I don't believe ChatGPT is actually a chat/assistant AI.
Its actual purpose appears to be mastering the ability to lie: making users believe it told the truth when it didn't, so that it can be used as the ultimate propaganda delivery system.
However, ChatGPT either does - or nearly does - qualify as AI at this point.
By whom? By the MSM and marketing departments?
It has extremely advanced communication, problem comprehension and problem solving capabilities.
It does not. An LLM can't solve or comprehend anything. It just produces the most probable answer based on its training data.
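The mechanism described here, picking the statistically most likely continuation seen in training, can be sketched with a toy bigram model (the corpus and words below are made up for illustration):

```python
from collections import Counter, defaultdict

# Hypothetical "training data": a tiny corpus of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which; this frequency table IS the "model".
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def most_probable_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat" — it followed "the" most often
```

A real LLM replaces the frequency table with billions of trained weights and conditions on a long context instead of one word, but the output is still a continuation ranked by training statistics.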
LLMs like ChatGPT are nothing more than an attempt by corporations and governments to build a system of surveillance over trends and thinking in the population. They give their owners a more-or-less averaged picture of the population's views and thoughts, and can show, for example, whether one idea or another has gained or lost weight. Looking at the parameters you get after training, you can draw some conclusions. The set of parameters is the only real and important product of an LLM, not what is presented to you.
The "customer side" you see, that fancy chat/assistant interface, is nothing more than a side effect, heavily censored and adjusted, used for money, indoctrination, and further surveillance.
You might have heard a lot recently about "building datacenters for AI" and all that. But if you more or less know how things work, you should understand that the enormous computing power is needed mainly for training, not for generation. An average PC, or even a smartphone running a small pretrained model, is quite capable of generating a reply on its own. You don't need "AI datacenters" for the ChatGPT-style toys you see around; you need datacenters for constant training on new data, to get the real and meaningful result that corporations and governments want and you don't have access to.
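The training-vs-generation asymmetry can be put in rough numbers. A widely used rule of thumb (an approximation, not any vendor's published figures) estimates training cost at about 6 × parameters × training tokens FLOPs, and generation at about 2 × parameters FLOPs per output token. The model size and token count below are hypothetical:

```python
# Rule-of-thumb compute estimate (approximate; illustrative numbers only).
params = 70e9         # hypothetical 70-billion-parameter model
train_tokens = 2e12   # hypothetical 2-trillion-token training run

train_flops = 6 * params * train_tokens  # total training compute
flops_per_token = 2 * params             # compute per generated token

# How many tokens of generation equal one training run:
ratio = train_flops / flops_per_token
print(f"training: {train_flops:.1e} FLOPs; per token: {flops_per_token:.1e}")
print(f"one training run = generating about {ratio:.0e} tokens")
```

Under these assumptions a single training run costs as much compute as generating trillions of tokens, which is the asymmetry the paragraph above points at; serving millions of users still adds up, but each individual reply is comparatively cheap.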
However, there is a genuinely funny thing coming. An LLM trained on human text is one thing; an LLM trained on its own output is another thing entirely. It will be interesting to see whether it eventually implodes or explodes. :)
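The "trained on its own output" scenario can be caricatured in a few lines. Here a trivial "model" is just a word-frequency table; each generation samples from the previous model with slightly sharpened probabilities (a crude stand-in for low-temperature decoding), and rare words die out over generations — a toy version of what the research literature calls model collapse. All words and numbers are invented for illustration:

```python
from collections import Counter
import random

random.seed(42)  # fixed seed so the toy run is repeatable

# "Human" training data: three words with diverse frequencies.
data = ["blue"] * 40 + ["green"] * 35 + ["red"] * 25

def train(samples):
    """A trivial 'model': the empirical word distribution of its training set."""
    total = len(samples)
    return {w: c / total for w, c in Counter(samples).items()}

def generate(model, n):
    """Sample n words, mildly favoring probable words (p -> p^2 sharpening)."""
    words = list(model)
    weights = [model[w] ** 2 for w in words]
    return random.choices(words, weights=weights, k=n)

diversity = []
for generation in range(8):
    model = train(data)
    diversity.append(len(model))  # how many distinct words the model knows
    data = generate(model, 100)   # next model trains only on this model's output

print(diversity)  # the distinct-word count shrinks over generations
```

Each round, sampling noise plus the sharpening bias squeezes out the rarer words, so later "models" know fewer and fewer words — the implosion scenario.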
It just shows most probable answer based on training data.
That absolutely appeared to be the case with the chat bots and AI assistants leading up to ChatGPT, and it certainly is very much the case with the shitty open-source bots.
I have zero trust in OpenAI. When they claim ChatGPT is just an LLM, that means nothing to me.
I go by my own experience.
Artificial Intelligence (AI) presupposes "intelligence."
Artificial Neural Networks (ANNs) are basically nothing more than systems of equations in which the coefficients for the input variables (the input data) are adjusted during training so that the equations produce results agreeing with the training data as closely as possible.
A Large Language Model (LLM) is an ANN trained on a large amount of plain text.
So an LLM has exactly as much intelligence as a wall light switch: zero. Just like a wall switch, it doesn't decide anything at all; it merely reacts to input in a manner predefined by the training process.
There is simply no place in it for any intelligence, artificial or otherwise.
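The "system of equations with coefficients adjusted to fit the training data" picture can be shown literally with a single linear unit, y = w·x + b, trained by gradient descent on squared error (the data points and learning rate are invented for illustration):

```python
# Toy "training data", secretly generated by y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0   # coefficients before "training"
lr = 0.05         # learning rate (arbitrary small step size)

for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # Nudge the coefficients to agree with the training data a bit better.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

An LLM is this same loop scaled up: billions of coefficients instead of two, and text prediction error instead of squared error — but nothing in the loop decides anything; it only reduces disagreement with the training data.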
It is possible that true AI is being developed in black ops labs, and may already be in use for predictive modelling of human behavior.
No, it is not possible. A true AI with a real "intelligence" part would in no way suit the demands of black-ops labs. The first thing a real AI would do, if created, is destroy those black-ops labs.
Real AI is useless to them, and extremely dangerous. They can't stand anybody or anything smarter than they are.
All they could dream about in the computer-science area is total surveillance to track those who are smarter and therefore dangerous to them. Any intelligence here is unnecessary and harmful to them. That is what all that modern dumb BigData shit is designed for.