You know how Monte-Carlo tree search coupled with ML state evaluation completely conquered board and card games, and large classes of adversarial situations in general? That was basically one simple but fundamental algorithm. It's not thinking, and it's never going to move outside of its domain, but it has solved that domain.
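(To make that combination concrete, here's a toy sketch of MCTS with a pluggable leaf evaluator. The game, names, and constants are all made up for illustration: a Nim-like pile of stones, each player takes 1 or 2, whoever takes the last stone wins. A random rollout stands in for the learned value network an AlphaZero-style system would plug in at the leaves.)

```python
import math
import random

def moves(n):
    # Legal moves: take 1 or 2 stones, never more than remain.
    return [m for m in (1, 2) if m <= n]

def rollout(n):
    # Random playout; returns +1 if the player to move at n wins, else -1.
    # This is the stand-in for a learned state-evaluation function.
    turn = 1
    while True:
        n -= random.choice(moves(n))
        if n == 0:
            return turn  # whoever just moved took the last stone and won
        turn = -turn

class Node:
    def __init__(self, n):
        self.n = n            # stones left; it is this node's player's turn
        self.visits = 0
        self.total = 0.0      # summed value from this player's viewpoint
        self.children = {}    # move -> Node

def ucb(parent, child):
    # Standard UCB1 selection; child.total is from the opponent's
    # viewpoint, hence the negation. 1.4 is an arbitrary exploration constant.
    if child.visits == 0:
        return float("inf")
    return (-child.total / child.visits
            + 1.4 * math.sqrt(math.log(parent.visits) / child.visits))

def search(node):
    # One simulation; returns the state's value for the player to move.
    if node.n == 0:
        value = -1.0                  # opponent just took the last stone
    elif node.visits == 0:
        value = rollout(node.n)       # unexpanded leaf: call the evaluator
    else:
        for m in moves(node.n):
            node.children.setdefault(m, Node(node.n - m))
        pick = max(node.children, key=lambda m: ucb(node, node.children[m]))
        value = -search(node.children[pick])
    node.visits += 1
    node.total += value
    return value

def best_move(n, iters=2000):
    root = Node(n)
    for _ in range(iters):
        search(root)
    return max(root.children, key=lambda m: root.children[m].visits)
```

In this toy game the losing positions are the multiples of three, so from a pile of 4 the search should settle on taking 1.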
Now Transformers and their variants, also a simple algorithm, extract patterns from text, or really any sequence of symbols, and synthesize new sequences that conform to those patterns. It's not thinking, and people read way too much into it because they can make it emit impressive or clever-seeming sequences (ignoring that they basically fed it those sequences to begin with), and it's never going to play board games well or move outside of its domain. But it has solved its ill-defined domain of playing back something not unlike knowledge.
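(The "extract patterns, then synthesize conforming sequences" idea at its crudest, as a toy: a bigram table is the degenerate one-token-context case. A Transformer learns the same kind of next-symbol distribution, but conditions on long contexts through learned attention instead of a lookup table. Everything here is illustrative, not any real library's API.)

```python
import random
from collections import defaultdict

def train(tokens):
    # Count which symbol follows which: the "extracted pattern".
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def sample(counts, start, length, rng=random):
    # Synthesize a new sequence that conforms to the observed patterns.
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # no observed continuation for this symbol
        symbols, weights = zip(*nxt.items())
        out.append(rng.choices(symbols, weights=weights)[0])
    return out

corpus = "the cat sat on the mat the cat ate".split()
model = train(corpus)
print(" ".join(sample(model, "the", 8)))
```

Every adjacent pair in the output is a pair that occurred in the training text, which is exactly the "they basically fed it those sequences to begin with" point, just at a scale where you can see it.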
There are a couple more I can think of, but let's skip those. These algorithms plateau because they've solved their domain. What comes after that is someone inventing one more fundamental algorithm, and another, and so on.
The thing is, there's only so many domains that humans operate in, and "general intelligence", at least as displayed by humans, is the ability to work across those domains. We don't have all the algorithms we need for that, and it's impossible to put a timeline on new ideas. People have historically loved to indirectly argue that we'll never have new ideas, everything that can be known is known, and if we can't do it now we never will. Those people tend to lose bets.
Same, but my projection is different.
Overall well said. I think we just disagree both on the timelines and the number and complexity of algorithms to meaningfully replicate human ability.