The AI industry has already proven itself to be full of smoke and mirrors. It is at Elon Musk levels of BS-ing, so I am putting it in the "just another 10 years bro" category.
So far the experiments where they used coding agents to try to build something serious have been predictably awful. They had agents try to autonomously build a web browser, and it just ripped off code from a known open source browser and made a mess of it. They regurgitate, they can follow old instructions (imperfectly), they can summarize consensus views, but they don't think.
The singularity could think and reason, if it were ever to exist, which is not guaranteed.
EDIT: And speaking of Elon Musk, he's at the forefront of pushing the autonomy hype. Do I even need to list his many lies? Why should I trust these charlatans when they say "we're so scared of the AI we've created!"?
Every year they claim to be freaked out over the latest tool. A couple of years ago it was "Devin, the autonomous software engineer". What happened there? Right, it was a bunch of hype that didn't work.
But every year I'm supposed to accept that "this time is different, we're telling the truth now."
Ignore whatever other people are saying and just think about it logically. I don't care what anyone else says did or didn't happen; I just observe that AI has been demonstrably getting better. Not long ago you couldn't even get a chatbot to write code in the language you asked for; now a chatbot will not only write code in the language you ask, it will write something genuinely useful and fix problems in existing code. Far from perfectly, but we're at the point where people who don't know any programming language can build useful applications with AI.
If an industry relies heavily on lies to get funding and stay relevant, that calls the entire narrative about it into question, especially in the near term. And it's more than just months: AI companies have been playing the "6-12 more months" game for 4 years now.
We are seeing improvements in tools that help programmers, and that's to be expected. Actually replacing programmers, let alone the even more enormous jump to autonomous robots (which could somehow replace migrants), is not in the cards short of some miracle.
They've cranked their AI as far as it can go, spending more on capex than any companies in history. We're at the point where some of them are talking about building their own nuclear reactors to cover the energy demand. This is well into diminishing returns: exponentially more money and energy for linear (or logarithmic) gains.
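To make the shape of that trade-off concrete, here's a toy sketch of a Chinchilla-style power law (the constants a and alpha below are made up for illustration, not fitted to any real model):

```python
# Toy illustration of "exponential in, linear out". Assume a
# Chinchilla-style power law, loss(C) = a * C**(-alpha). The
# constants are invented for illustration only.

def loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Hypothetical power-law loss as a function of training compute."""
    return a * compute ** -alpha

# Each row multiplies compute by 10x, yet the loss drops by roughly
# the same modest additive amount every time.
for exponent in range(20, 27):
    c = 10.0 ** exponent
    print(f"compute = 1e{exponent} FLOPs -> loss = {loss(c):.3f}")
```

Every extra 10x of compute buys roughly the same small drop in loss, which is exactly the pattern being described: exponential inputs for linear-looking gains.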
I would say everyone in business lies to get more funding. But AI would still get a lot of funding without the lies because of what it can already do, especially for businesses. It's already taking jobs in customer service, programming, writing and art. Amazon already uses autonomous robots (humanoid and non-humanoid) in its warehouses. At some point it might reach diminishing returns, but there's no law of the universe that prevents AI from getting smarter than humans, so it seems almost certain that's where it will end up.
Manufacturers have been using robots for many decades now. What's new?
As for the humanoid ones, what percentage of those robots are they? 0.01%, for beta testing perhaps?
"At some point it might reach diminishing returns"
That's where we're at. When you're pouring in exponentially more money and electricity for linear improvements in the product, the returns are diminishing.
And if you want to talk about actual dollar returns, there are no profits except for those selling the chips. But every so often they come out with another "just wait 6-12 months, we're replacing most of your jobs". Believing the boy who cried wolf is just silly. These are companies that still can't turn a profit, saying this to stay relevant.
"no law of the universe that prevents AI from getting smarter than humans"
LLMs are by their nature dependent on human knowledge. They are not thinking; they process words that humans wrote and build a statistical model from them to emulate that knowledge. An LLM is a statistical model predicting outcomes based on accumulated human data.
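For a minimal sketch of what "statistical model over human text" means, here's a bigram model. It's nothing like the scale of a real LLM, but the basic training signal is the same idea: next-token prediction over text humans wrote.

```python
# A bigram "language model": predict the next word purely from
# counts of what humans wrote. Real LLMs are enormous neural
# networks, but they too are trained on next-token prediction.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word in human text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often humans used it."""
    options = follows[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate text: locally fluent, but there is no "thinking" here,
# only sampling from statistics of the source text.
word = "the"
out = [word]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The output reads as plausible word-by-word, yet nothing in it involves reasoning; it's just sampling from counts of what humans already wrote, which is the point being made about LLMs (at vastly greater scale and sophistication).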
If it feeds off its own data, it will go off into fantasy land, because its own hallucinations corrupt its training data.
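Here's a toy version of that feedback loop, with a Gaussian standing in for the learned model. This is an illustrative sketch of the mechanism, not a simulation of any actual LLM:

```python
# Toy model collapse: each "generation" is trained only on samples
# from the previous generation, never on the original "human" data,
# so estimation errors compound from one generation to the next.
import random
import statistics

random.seed(0)
human_data = [random.gauss(0.0, 1.0) for _ in range(100)]  # real data

mu = statistics.fmean(human_data)
sigma = statistics.stdev(human_data)
for gen in range(1, 13):
    # Train generation `gen` only on the previous generation's output.
    synthetic = [random.gauss(mu, sigma) for _ in range(100)]
    mu = statistics.fmean(synthetic)
    sigma = statistics.stdev(synthetic)
    print(f"gen {gen:2d}: mean = {mu:+.3f}, spread = {sigma:.3f}")
```

Each generation estimates its parameters from the previous generation's samples, so the errors compound and the later "models" describe the original human data less and less well, drifting toward their own artifacts.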