Ignore whatever other people are saying and just think about it logically. I don't care what anyone else claims did or didn't happen; I just observe that AI has been demonstrably getting better. Before, you couldn't even get a chatbot to write code in the language you asked for. Now a chatbot will not only write code in the language you ask, it does something useful and can fix problems in existing code. Far from perfectly, but enough that people who don't know any programming language can build useful applications with AI.
If an industry relies heavily on lies to get funding and stay relevant, that calls the entire narrative about it into question, especially in the near term. And it's been more than just months: AI companies have been playing the "6-12 more months" game for four years now.
We are seeing improvements in tools that help programmers, and that's to be expected. Actually replacing them, let alone the even more enormous jump to autonomous robots (which could somehow replace migrant labor), is not in the cards short of some miracle.
They've cranked their AI as far as it can go, spending more on capex than any companies in history. We're at the point where some of them are talking about building their own nuclear reactors to meet the energy demand. This is well into diminishing returns: exponentially more money and energy for linear (or logarithmic) gains.
I would say everyone in business lies to get more funding. But AI would still get plenty of funding without the lies because of what it can already do, especially for businesses. It's already displacing jobs in customer service, programming, writing, and art. Amazon already uses autonomous robots (humanoid and non-humanoid) in its warehouses. At some point it might hit diminishing returns, but there's no law of the universe that prevents AI from getting smarter than humans, so it seems almost certain that's where it's going to end up.
Manufacturers have been using robots for many decades now. What's new?
As for the humanoid portion, what percentage of that are they? 0.01% for beta testing perhaps?
At some point it might reach diminishing returns
That's where we are at. When you're adding exponentially more money and electricity for linear improvements in the product, the returns are diminishing.
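To make that "exponential cost for linear gains" claim concrete: if capability followed a power law in compute (the exponent here is an illustrative assumption, not measured data), then each equal step of improvement would require multiplicatively more compute. A minimal sketch:

```python
# Toy illustration, not real data: assume model loss follows a power law
# in compute, loss = C**(-alpha), with alpha = 0.05 chosen arbitrarily
# for illustration.
def compute_needed(loss, alpha=0.05):
    # Invert loss = C**(-alpha)  ->  C = loss**(-1/alpha)
    return loss ** (-1 / alpha)

# Equal-sized improvements in loss, each costing far more compute than
# the one before it.
for loss in [0.5, 0.4, 0.3]:
    print(f"loss {loss}: compute ~ {compute_needed(loss):.2e}")
```

Under this (assumed) curve, each fixed reduction in loss costs roughly 100x the compute of the previous one, which is exactly the shape of "exponential input for linear output."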
And if you want to talk about actual dollar returns: there are no profits except for those selling chips. Yet every so often they come out with another "just wait 6-12 months, we're replacing most of your jobs." Believing the boy who cried wolf is just silly. These are companies that still can't turn a profit, saying this to stay relevant.
Manufacturers were using robots that only performed specific procedures in controlled environments. Now they're using AI-driven robots that make their own decisions, work alongside each other or humans, and perform more complex or delicate tasks.
I think the huge energy spending is mainly about serving billions of users; the technology itself doesn't need that many resources for a single user. And the resources needed per unit of intelligence or output will go down with more research. I also wouldn't call, say, 2015-2023 a period of linear improvement in AI capabilities relative to the effort put into AI development. Capabilities exploded without a huge rise in effort.
Believing the boy who cried wolf is just silly
I already said to stop listening to that boy. Why do you insist on going back to him then?
no law of the universe that prevents AI from getting smarter than humans
LLMs are by their nature dependent on human knowledge. They are not thinking; they process words that humans wrote and build a statistical model from them to emulate that knowledge. It's a statistical model predicting outcomes based on accumulated human data.
If it feeds off its own output, it will drift off into fantasy land as its own hallucinations corrupt its training data.
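The "training on its own output" failure mode can be sketched with a toy simulation (illustrative only, not a real training pipeline): estimate a word distribution, then repeatedly re-estimate it from finite samples drawn from the previous estimate. Rare words drop out of each sample and are lost for good, so the distribution narrows generation after generation.

```python
import random

random.seed(0)

vocab = list(range(100))              # 100 stand-in "words"
dist = {w: 1 / 100 for w in vocab}    # start with a uniform distribution

def sample_and_refit(dist, n=200):
    """Draw n samples from the current model, then refit on them.

    This mimics a model retrained on its own generated data: anything
    that doesn't appear in the finite sample vanishes permanently.
    """
    words, weights = zip(*dist.items())
    draws = random.choices(words, weights=weights, k=n)
    counts = {}
    for w in draws:
        counts[w] = counts.get(w, 0) + 1
    return {w: c / n for w, c in counts.items()}

for generation in range(20):
    dist = sample_and_refit(dist)

print(f"words surviving after 20 generations: {len(dist)} of {len(vocab)}")
```

The surviving vocabulary shrinks every generation and never recovers, which is the basic mechanism behind the "feeding on its own data" concern.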
AIs can outperform humans despite being created by humans. AIs already play chess and generate images better than any human can. It helps that in areas like these there are ways to measure success accurately, so AIs can experiment to improve their score rather than simply copying human data. The hard sciences usually have a metric of success as well, which means AIs will be able to get better than humans at science, and thus will end up making low-budget WMDs capable of driving humans extinct.
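The core of that argument is that a measurable objective lets a system improve by trial and error alone, with no human examples to copy. A minimal sketch (the objective function here is a made-up stand-in, not any real benchmark):

```python
import random

random.seed(1)

def score(x):
    # Stand-in for an objective metric (win rate, benchmark score...).
    # The optimum at x = 3.7 is arbitrary and known only to us, not to
    # the search loop below.
    return -(x - 3.7) ** 2

# Blind trial-and-error: propose a random tweak, keep it only if the
# measured score improves. No human data involved.
x = 0.0
best = score(x)
for _ in range(5000):
    candidate = x + random.gauss(0, 0.1)
    if score(candidate) > best:
        x, best = candidate, score(candidate)

print(f"found x = {x:.2f}")
```

The loop homes in on the optimum purely because the metric is measurable, which is the same structural reason self-play worked for chess: a reliable score replaces the need to imitate humans.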