It's still not capable of doing that, and likely won't be any time soon if at all.
Look at much of their science and technology: it's more promise than reality. The obsession with Einstein has them chasing particles that don't exist (Higgs, quarks, etc.). Fusion is always 10 years away. Their universe needs 70% dark matter and energy to keep that belief system going.
Yet we are to believe they have AGI around the corner that can take most jobs + power humanoid robots? Please.
Physics sure made accelerating progress before stagnating in the 20th century. Same thing with chemistry. And biology is still accelerating in my opinion. AI is still young and likely has a lot more acceleration left in it. Besides, it's already close to the singularity in that we now have AIs that can do general tasks with some level of competency. All that's left is to get the competency consistently above average human levels. I don't think that's far away at all. Even if it is, we still have to deal with it happening whenever it does.
The AI industry has already proven itself to be full of smoke and mirrors. It's at Elon Musk levels of BSing, so I'm putting it in the "just another 10 years, bro" category.
So far the experiments where they used coding agents to build something serious have been predictably awful. They had agents try to autonomously build a web browser, and the result just ripped off code from a known open-source browser and made a mess of it. They regurgitate, they can follow old instructions (imperfectly), they can summarize consensus views, but they don't think.
A singularity could think and reason, if it ever came to exist, which is not guaranteed.
EDIT: And speaking of Elon Musk, he's at the forefront of pushing the autonomy hype. Do I even need to list his many lies? Why should I trust these charlatans when they say "we're so scared of the AI we've created!"?
Every year they claim to be freaked out over the latest tool. A couple of years ago it was "Devin, the autonomous software engineer". What happened there? Right, it was a bunch of hype that didn't work.
But every year I must accept that "this time is different, we're telling the truth now".
Ignore whatever other people are saying and just think about it logically. I don't care what anyone else said did or didn't happen; I just observe that AI has been demonstrably getting better. Before, you couldn't even get a chatbot to write code in the language you asked for; now a chatbot will not only write code in the language you ask, it will write code that actually does something useful, and it can fix problems in existing code. Far from perfectly, but still to the point where people who don't know any programming language can build useful applications with AI.
If an industry relies heavily on lies to get funding and stay relevant, that calls the entire narrative about it into question, especially in the near term. And it's more than just months: AI companies have been playing the "6-12 more months" game for 4 years now.
We are seeing improvements in tools that help programmers and that's to be expected. Being able to actually replace them, let alone the even more enormous jump into autonomous robots (which could somehow replace migrants) is not even in the cards short of some miracle.
They've cranked their AI as far as it can go, spending more on capex than any companies in history. We're at the point where some of them are even talking about building their own nuclear reactors because of the energy demand. This is well into diminishing returns: exponentially more money and energy for linear (or logarithmic) gains.
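The "exponential inputs for logarithmic gains" claim can be made concrete with a toy power-law curve. This is a minimal sketch only: the exponent 0.05 is invented for illustration and not taken from any real model or paper.

```python
# A toy power-law scaling curve, loosely shaped like published LLM scaling
# laws; the exponent 0.05 is a made-up stand-in, not a measured value.
def toy_loss(compute: float, exponent: float = 0.05) -> float:
    """Loss falls as a small negative power of compute spent."""
    return compute ** -exponent

# Each line below represents 10x more compute than the one before it,
# yet the loss only improves by a shrinking absolute amount each time.
for c in [1e21, 1e22, 1e23, 1e24]:
    print(f"compute {c:.0e} -> loss {toy_loss(c):.4f}")
```

Under a curve like this, every constant-factor improvement in the output costs a tenfold increase in the input, which is the shape of the diminishing-returns argument above.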
I would say everyone in business lies to get more funding. But AI would still get a lot of funding without the lies because of what it can do, especially for businesses. It's already taking out jobs in customer service, programming, writing and art. Amazon already uses autonomous robots (humanoid and non-humanoid) in its warehouses. At some point it might reach diminishing returns, but there's no law of the universe that prevents AI from getting smarter than humans, so it seems almost certain that's where it's going to end up.
Manufacturers have been using robots for many decades now. What's new?
As for the humanoid portion, what percentage of that are they? 0.01% for beta testing perhaps?
At some point it might reach diminishing returns
That's where we're at. When you're pouring exponentially more money and electricity into linear improvements in the product, the returns are diminishing.
And if you want to talk about actual dollar returns: there are no profits except for those selling chips. But every so often they come out with another "just wait 6-12 months, we're replacing most of your jobs". Believing the boy who cried wolf is just silly. These are companies that still can't turn a profit, saying this to stay relevant.
Manufacturers were using robots that only did specific procedures in controlled environments. Now they're using robots with AI that make their own decisions and can work with each other or humans and perform more complex or delicate tasks.
I think the huge amounts of money for energy are mainly to service billions of users. The technology itself doesn't need that many resources for a single user, and the resources needed per unit of intelligence or output will go down with more research. And I wouldn't call, say, 2015-2023 a time of linear improvement in AI capabilities relative to the effort put into AI development. Capabilities exploded without a huge rise in effort.
Believing the boy who cried wolf is just silly
I already said to stop listening to that boy. Why do you insist on going back to him then?
no law of the universe that prevents AI from getting smarter than humans
LLMs are by their nature dependent on human knowledge. They are not thinking; they process words that humans wrote and build a statistical model from them to emulate that knowledge. It's a statistical model predicting outcomes from accumulated human data.
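To make the "statistical model over human text" idea concrete, here is a vastly simplified sketch: a bigram model that can only ever predict continuations it has already seen in its training data. (Real LLMs use neural networks rather than lookup tables, but the dependence on human-written data is the point being illustrated.)

```python
from collections import Counter, defaultdict

# Tiny "corpus" of human-written text; a real model trains on trillions
# of words, but the principle is the same: count what humans wrote.
corpus = "the cat sat on the mat the cat ate the rat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally which word follows which

def predict(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" - it follows "the" more often than "mat" or "rat"
```

The model never invents a continuation that isn't in its counts; it only reproduces the statistics of the text it was fed.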
If it feeds on its own data, it will drift off into fantasy land as its own hallucinations corrupt its training data.
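That failure mode (often called "model collapse" in the research literature) can be illustrated with a toy statistics experiment: repeatedly refit a distribution to a finite sample of its own output, and the estimate drifts instead of staying put. The numbers here are purely illustrative, not a claim about any specific model.

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the "real" human-data distribution

# Each generation is "trained" only on a finite sample of the previous
# generation's output, so sampling error compounds instead of averaging out.
for gen in range(1, 11):
    samples = [random.gauss(mu, sigma) for _ in range(30)]
    mu = statistics.mean(samples)     # refit on self-generated data
    sigma = statistics.stdev(samples)
    print(f"gen {gen:2d}: mean={mu:+.3f} spread={sigma:.3f}")
```

After a few generations the fitted mean and spread no longer match the original distribution; nothing anchors the model back to the ground truth once the human data is out of the loop.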
AIs are capable of outperforming humans despite being created by humans. They can already play chess and generate images better than any human can. It helps that in areas like these there are ways to accurately measure success, so AIs can experiment to improve their score instead of simply copying human data. In the hard sciences there's usually a metric of success as well, meaning AIs will be able to get better than humans at science, and will thus end up making low-budget WMDs capable of making humans extinct.
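The "measurable success metric" point can be sketched with a toy hill climber: given a score function, trial and error alone finds a good solution with no human example to copy. (This is an illustration of the general idea, not any specific training method.)

```python
import random

random.seed(1)

def score(x: float) -> float:
    """The measurable objective; its peak at x = 3.0 is unknown to the search."""
    return -(x - 3.0) ** 2

x = 0.0
best = score(x)
for _ in range(2000):
    candidate = x + random.uniform(-0.5, 0.5)  # propose a small random change
    if score(candidate) > best:                # keep only what the metric rewards
        x, best = candidate, score(candidate)

print(f"found x = {x:.2f}")  # ends up near the optimum of 3.0
```

This is the same reason self-play worked for chess engines: the game's win/loss signal is a score function, so the system can surpass the human data it started from.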
Note, it's an obsession with Bohr (and then Feynman) and it goes against everything Einstein stood for doggedly all his life. His last challenge, the EPR paradox, still confronts Bohr bots today. Did you see that Kenneth Branagh wanted to portray Bohr himself, the "Great Dane", in Oppenheimer? Everything is made of Bohr atoms? That's the cult level we're talking here. The movie even had to ridicule Heisenberg even though he was the real spiritist who got the model going for Bohr; typical abandonment by the movement of its early founders.
I have posted on this conspiracy previously. It was first funded by Ludwig Mond. Look into it.
They are using Einstein's theories in the calculations for their particle smashing experiments at CERN etc.
The atomic model itself is pretty good, it's all the new additions that are just pure fantasy with no yield in anything fundamental in physics. Just inventing new particles.
But the basics of quantum mechanics, such as particles showing wave behavior, and the electron, proton and neutron are well established.
Enacting solution (inception towards death) generates reacting problem (life)...few suggest problem > reaction > solution to a) put solution from beginning towards end, and b) to divide problem from reaction, hence tempting many into reacting with "not my problem".
AI taking
Only nature gives (inception) and takes (death) being (life)...few suggest artifice to tempt many to take, while ignoring given.
AI cannot take anything...taking AI tempts a being to give up oneself within nature.
promoted as the solution
An inversion of solution (inception towards death) forwarding (pro) particle (mote) being (life). Whatever few promote aka put forwards within motion...tempts many to follow along.
migration
Aka changing (inception towards death) place (life)...that's the status quo of being (life) moved from one state (inception) towards another (death).
The trick...nature migrates being from one another; while few are migrating many into one another.
"the jobs"
you mean the Jewish corporate daycare system to white ladies?
I mean all the jobs, like producing food, teaching children, building houses, cleaning streets.
That leads to something else. Problem: no jobs available. Reaction: depopulation. Solution: don't get rid of everybody; exterminate only the goyim.