> no law of the universe that prevents AI from getting smarter than humans
LLMs are by their nature dependent on human knowledge. They are not thinking; they process words that humans wrote and build a statistical model from them to emulate that knowledge. It's a statistical model predicting outcomes based on accumulated human data.
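To make the "statistical model" point concrete, here's a toy sketch. A bigram counter is a drastic simplification of an LLM (real models are neural networks over tokens), but the mechanism is the same in spirit: tally patterns in human-written text, then predict the likeliest continuation. The corpus is made up for the example.

```python
# Minimal "statistical model of human text": count which word follows which,
# then predict the most likely next word. An LLM does something far more
# sophisticated, but it is still prediction from accumulated human data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()  # "human" text

# Tally word -> next-word frequencies from the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely continuation."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- it can only echo patterns present in its data
```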
If an LLM feeds on its own output, it will go off into fantasy land, its own hallucinations corrupting its training data.
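This failure mode is usually called "model collapse" in the literature, and a toy version is easy to watch: fit a distribution to data, sample from the fit, refit on those samples, and repeat. All numbers here are arbitrary; it's an illustration, not a simulation of any real training run.

```python
# Toy "feeding on its own output" demo: each generation is trained only on
# samples drawn from the previous generation's model. Estimation error
# compounds, so the fit drifts away from the original human data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=200)   # original "human" data

mu, sigma = data.mean(), data.std()
for generation in range(1, 11):
    synthetic = rng.normal(mu, sigma, size=200)    # the model's own output
    mu, sigma = synthetic.mean(), synthetic.std()  # retrain on it
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
# The parameters random-walk away from (0, 1); run long enough and sigma
# collapses toward zero -- the model forgets the tails of the real data.
```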
AIs are capable of outperforming humans despite being created by humans: computers surpassed the best human chess players decades ago, and AIs can generate images better than any human can. It helps that in areas like these success can be measured precisely, so an AI can experiment to improve its score instead of merely copying human data. The hard sciences usually have a success metric too, which means AIs will be able to get better than humans at science, and will thus end up making low-budget WMDs capable of driving humans extinct.
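The chess point in miniature: when there is an exact success metric, a program can improve by pure trial and error, with no human examples at all. The score function below is a made-up stand-in for "did I win?", so this is only a sketch of the idea.

```python
# Hill climbing against a measurable objective: try a variation, keep it if
# the score improves. No human data is copied at any point.
import random

def score(x):                 # stand-in metric, e.g. "probability of winning"
    return -(x - 3.7) ** 2    # secretly peaks at x = 3.7

best = 0.0
for _ in range(10_000):
    candidate = best + random.gauss(0, 0.1)  # propose a small variation
    if score(candidate) > score(best):       # keep it only if the metric improves
        best = candidate

print(best)  # lands near 3.7 without ever seeing a "human" answer
```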
Science is not chess. Science involves making observations, then coming up with the chess board itself, and testing whether that board approximates reality well enough to be useful. A computer can be designed to calculate moves on an 8x8 grid, a very controlled environment that requires no deeper understanding. Science requires actually understanding concepts, which AIs can't do.
LLMs in particular are consensus-driven machines, and consensus science is just the status quo.
At best, an LLM will end up "coming up" with a new concept by ripping off some actual scientist publishing in an obscure journal it scraped data from.
If you're talking about ML in general, people could design systems based on existing molecular models that look for certain types of new chemicals to synthesize. OK, but big deal, IMHO. That's not going to cure cancer or make humans extinct. In fact, the funny thing about cancer is that (good) doctors have already figured out the cure; people just won't listen. It's not a magic drug, it's a holistic approach with fasting.
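For what it's worth, the "search existing molecular models for chemicals to synthesize" idea is essentially virtual screening, and a toy version fits in a few lines. Everything below (the fragments, the weights, the scoring function) is invented for illustration; real pipelines use learned property models and cheminformatics toolkits, not a lookup table.

```python
# Toy virtual screening: enumerate candidate structures, rank them with a
# property model "built from existing molecular data", keep the top hits.
from itertools import product

fragments = ["OH", "NH2", "CH3", "COOH", "F"]   # hypothetical building blocks

def predicted_affinity(a, b, c):
    # Stand-in for a model trained on known chemistry.
    weights = {"OH": 1.2, "NH2": 2.1, "CH3": 0.4, "COOH": 1.8, "F": 0.9}
    return weights[a] + 1.5 * weights[b] + weights[c]

candidates = product(fragments, repeat=3)        # every 3-fragment combination
ranked = sorted(candidates, key=lambda m: predicted_affinity(*m), reverse=True)
print(ranked[:3])  # top-scoring candidates to hand to a chemist
```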
We already have AI that suggests new chemicals, as well as AlphaFold, which predicts protein structures better than humans can. And there are systems such as Kosmos that can simply be given a research question and will spend hours designing experiments, writing programs to perform them, and generating a paper presenting the method and results. The accuracy needs to improve, but this is how AI could replace human scientists. With a robot body, physical experiments can be performed too: "self-driving labs" are already using AI to design and perform experiments on physical materials.
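Roughly, the loop a "self-driving lab" runs looks like this: propose a condition, measure an outcome, and use the results so far to choose the next condition. The "experiment" below is a stand-in function with made-up numbers, not any particular lab's setup.

```python
# Sketch of a closed experimental loop: explore occasionally, otherwise
# refine around the best condition measured so far.
import random

def run_experiment(temperature):                   # hypothetical instrument readout
    response = -(temperature - 72.0) ** 2 / 100.0  # unknown "true" optimum at 72
    return response + random.gauss(0, 0.05)        # measurement noise

results = {}                                       # condition -> measured outcome
condition = random.uniform(20, 120)                # initial guess
for trial in range(50):
    results[condition] = run_experiment(condition)
    best = max(results, key=results.get)
    if random.random() < 0.2:
        condition = random.uniform(20, 120)        # explore somewhere new
    else:
        condition = best + random.gauss(0, 5.0)    # refine near the best so far

print(f"best condition found: {max(results, key=results.get):.1f}")  # near 72
```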
I'm sure Kosmos can do some great BS-ing that looks correct. It sounds like all it will be is a tool for getting some scaffolds going, which a real scientist can then fill in.
"Self-driving labs" are already using AI to design and perform experiments
It doesn't really seem to be designing the experiments. The entire system is designed by human scientists and engineers. The computer is processing data and then saying which (likely predefined) experiment it should "conduct next". And most of the innovation there seems to be the dynamic-flow approach speeding things up.
That's a useful pattern-recognition tool (ML), but it's not doing the same kind of work that went into designing the self-driving system itself.
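On that skeptical reading, the software is ranking a human-defined menu of experiments by expected payoff and picking the next one, which is essentially a multi-armed bandit. Here's a toy epsilon-greedy version; the experiment menu and success rates are invented for the example.

```python
# Choosing the next experiment from a predefined, human-designed menu:
# mostly pick the option with the best average outcome so far, but explore
# a random one 10% of the time.
import random

experiments = {                  # hidden "true" success rates (made up)
    "vary_temperature": 0.30,
    "vary_solvent":     0.55,
    "vary_catalyst":    0.45,
}
observed = {name: [] for name in experiments}

def choose_next():
    if random.random() < 0.1 or not any(observed.values()):
        return random.choice(list(experiments))    # explore
    return max(observed, key=lambda n: sum(observed[n]) / max(len(observed[n]), 1))

for _ in range(200):
    name = choose_next()
    outcome = 1 if random.random() < experiments[name] else 0  # simulated result
    observed[name].append(outcome)

print({n: len(runs) for n, runs in observed.items()})  # trials pile up on 'vary_solvent'
```

The picking rule is trivial; the hard part, as noted above, is the human-designed menu and apparatus it runs over.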