I would say everyone in business lies to get more funding. But AI would still get a lot of funding without the lies because of what it can do, especially for businesses. It's already taking jobs in customer service, programming, writing, and art. Amazon already uses autonomous robots (humanoid and non-humanoid) in its warehouses. At some point it might reach diminishing returns, but there's no law of the universe that prevents AI from getting smarter than humans, so it seems almost certain that's where it's going to end up.
Manufacturers have been using robots for many decades now. What's new?
As for the humanoid portion, what percentage of that are they? 0.01% for beta testing perhaps?
At some point it might reach diminishing returns
That's where we are now. When you're pouring in exponentially more money and electricity for linear improvements in the product, the returns are diminishing.
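That "exponential inputs for linear outputs" point can be put in toy numbers. These figures are entirely invented for illustration, not real training budgets or benchmark scores:

```python
# Toy illustration of exponential inputs vs. linear outputs. The costs and
# scores below are made up for the example.
costs = [10 * 2**g for g in range(5)]    # compute spend doubles each model generation
scores = [50 + 5 * g for g in range(5)]  # benchmark score gains a flat 5 points

# Marginal cost paid for each extra point of score, generation over generation:
marginal = [
    (costs[i] - costs[i - 1]) / (scores[i] - scores[i - 1])
    for i in range(1, len(costs))
]
print(marginal)  # [2.0, 4.0, 8.0, 16.0] -- each point costs twice as much as the last
```

Even though the scores never stop rising, the price of each additional point doubles every generation, which is what diminishing returns looks like in practice.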
And if you want to talk about actual dollar returns, there are no profits except for those selling chips. But every so often they come out with another "just wait 6-12 months, we're replacing most of your jobs." Believing the boy who cried wolf is just silly. These are companies that still can't make a profit saying this to stay relevant.
Manufacturers were using robots that only did specific procedures in controlled environments. Now they're using robots with AI that make their own decisions and can work with each other or humans and perform more complex or delicate tasks.
I think the huge amounts of money for energy are mainly to service billions of users. The technology itself doesn't need that many resources for a single user, and the resources needed per unit of intelligence or output will go down with more research. And I wouldn't call, say, 2015-2023 a period of linear improvement in AI capabilities relative to the effort put into AI development: capabilities exploded without a comparable rise in effort.
Believing the boy who cried wolf is just silly
I already said to stop listening to that boy. Why do you insist on going back to him then?
The entire industry is the boy who cried wolf; Elon was just one famous example. We could look at Altman saying "we know how to create AGI," CEOs claiming "50% of our workforce will soon be AI" when no such thing has happened over the last two years, the various staged or highly misleading demos by major tech companies, and so on. Elon is merely emblematic of the industry.
Now they're using robots with AI that make their own decisions...
Where, and by whom? And working with humans? All I've seen are some laughable Optimus demos, a crappy PR stunt by Boston Dynamics with a humanoid robot moving parts at Hyundai (in a very controlled, low-stakes area), and Boston Dynamics robots doing backflips. I've yet to see robots making their own decisions and working with humans in manufacturing.
I think the huge amounts of money for energy are mainly to service billions of users.
If it's not worth the money, then don't service those users, simple as that. Meanwhile, OpenAI recently admitted it is implementing ads, something Sam Altman said just a year prior would mean they were getting desperate and reaching for a last resort.
no law of the universe that prevents AI from getting smarter than humans
LLMs by their nature are dependent on human knowledge. They are not thinking; they process words that humans wrote and build a statistical model from them to emulate that knowledge. It's a statistical model predicting outcomes based on accumulated human data.
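A minimal sketch of what "a statistical model over human text" means, scaled all the way down to a bigram counter. Real LLMs are vastly more sophisticated, but the training signal is the same kind of co-occurrence statistics; the tiny corpus here is made up:

```python
# A bigram "language model": count which word follows which in human-written
# text, then predict the most frequent continuation. It has no understanding;
# it only replays the statistics of its training data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    # Most common continuation seen in training; None for unseen words.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" -- it follows "the" more often than "mat" or "fish"
```

Ask it about a word it never saw and it has nothing to say, which is the dependence on human-produced data in miniature.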
If it feeds on its own output, it will go off into fantasy land as its own hallucinations corrupt its training data.
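That failure mode can be caricatured in a few lines: re-estimate a word distribution from its own finite "generated" corpus each round, and the rare words drop out for good. This is only a crude analogue of the model-collapse phenomenon, with invented words and probabilities:

```python
# Crude analogue of self-training collapse (made-up words and probabilities).
# Each round, the model "generates" a small corpus from its current
# distribution, then refits on that corpus. Any word whose expected count
# rounds down to zero vanishes, so the distribution's tail is lost forever.
probs = {"the": 0.55, "cat": 0.25, "mat": 0.12, "quark": 0.05, "zygote": 0.03}

corpus_size = 10
for _ in range(3):
    counts = {w: int(p * corpus_size) for w, p in probs.items()}
    counts = {w: c for w, c in counts.items() if c > 0}  # rare words disappear
    total = sum(counts.values())
    probs = {w: c / total for w, c in counts.items()}    # refit on own output

print(sorted(probs))  # ['cat', 'mat', 'the'] -- "quark" and "zygote" are gone
```

After a few generations the model can only ever emit its most common outputs; whatever diversity the original data had is unrecoverable.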
AIs are capable of outperforming humans despite being created by humans. AIs can play chess and generate images better than any human can. It helps that in areas like these there are ways to accurately measure success, so AIs can experiment to improve their score rather than simply copying human data. In the hard sciences there's usually a metric of success as well, meaning AIs will be able to get better than humans at science, and thus will end up making low-budget WMDs capable of making humans extinct.
Computers beat humans at chess a long time ago.
Science is not chess. Science involves making observations, then coming up with the chess board itself and testing whether that board approximates reality well enough to be useful. A computer can be designed to calculate moves on an 8x8 grid, a very controlled environment that requires no deeper understanding. Science requires actually understanding concepts, which AIs can't do.
LLMs in particular are consensus-driven machines, and consensus science is just the status quo.
At best an LLM will end up "coming up with" a new concept by ripping off some actual scientist in an obscure journal it scraped data from.
If you're talking about ML in general, sure, they could design systems based on existing molecular models to search for certain types of new chemicals to synthesize. OK, but big deal, IMHO. That's not going to cure cancer or make humans extinct. In fact, the funny thing about cancer is that (good) doctors have already figured out the cure; people just won't listen. It's not a magic drug, it's a holistic approach with fasting.