They were right, and we're steadily approaching that corner. But yeah, the majority of the world is generally blind and won't recognize anything until after the fact.
idk, I've been in this industry for over 25 years, and I don't think the trajectory puts us anywhere near replacing engineers anytime in my lifetime.
When I was in college everything was "we're in an AI winter"...for 20 years. In the 70s everyone was sure we were almost there, then the active paths hit brick walls.
It feels very much like that now. Machine learning, deep learning, neural networks, etc. were all exciting, but they really seemed to plateau around 2020; then LLMs added a bit more life, but it's still the same pattern. LLMs, agents, et al. are at about 80% of what they will ever be, and squeezing out that last 20% is going to take an insane amount of compute. After that? Stagnation and derivation, and not much new, is my feeling, but we'll see.
It just feels like this story has played out time and again: with AI, with offshoring, and so on. Everything that was supposed to be the end of tech always led to exponential growth eventually.
Yes, for most people, the jobs that exist now will be automated away (though it will take 10-20 years), but probably 15-20% of the population will still be needed as research scientists, engineers, and enforcers. AI, and all automation, is great at repeating what has already been done to death, or at finding needles in haystacks, but it has never come anywhere near the innovative or the novel (or the artistically creative).
Either way, in my opinion, the best path is outside of that system. You don't want to be in it whether you have a job or not.
They've run a fruit fly's brain in a computer, which is the beginning of the end. This proves a brain can be run in a computer. The difference between a fruit fly's brain and a human's, as far as computer simulation is concerned, is scale.
Once a human's brain can be run in a computer, computers will by definition be at the human level of intelligence. From there, the only question is whether the computing power it takes to run a human brain can be made cheaper than hiring an actual human to use his brain.
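To put a rough number on that scale gap, here's a back-of-the-envelope calculation. The figures are my own approximations of commonly cited estimates, not from the comment above: roughly 1.4e5 neurons in the mapped fly connectome versus roughly 8.6e10 neurons in a human brain.

```python
# Rough scale gap between a fruit fly brain and a human brain.
# Both figures are approximate public estimates, used here only for
# order-of-magnitude arithmetic.
fly_neurons = 1.4e5      # ~140,000 neurons in the fly connectome
human_neurons = 8.6e10   # ~86 billion neurons in a human brain

ratio = human_neurons / fly_neurons
print(f"neuron scale factor: ~{ratio:,.0f}x")
```

So "just scale" here means something on the order of a 600,000-fold jump in neuron count alone, before accounting for synapse density.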
Same, but my projection is different.

You know how Monte Carlo tree search coupled with ML state evaluation completely conquered board and card games, and large classes of adversarial situations in general? That was basically one simple but fundamental algorithm. It's not thinking, and it's never going to move outside of its domain, but it has solved that domain.
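To show how simple that core loop really is, here's a minimal, illustrative MCTS sketch. The toy game, the names, and the parameters are all mine; a plain random rollout stands in for the learned state evaluator that systems like AlphaZero use.

```python
import math
import random

# Toy game: players alternately remove 1 or 2 stones; whoever takes
# the last stone wins. State = (stones_left, player_to_move).
def moves(state):
    n, _ = state
    return [m for m in (1, 2) if m <= n]

def play(state, m):
    n, p = state
    return (n - m, 1 - p)

def winner(state):
    n, p = state
    return 1 - p if n == 0 else None  # previous mover took the last stone

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # move -> Node
        self.visits, self.wins = 0, 0.0

def ucb(child, parent_visits, c=1.4):
    # Upper Confidence Bound: exploitation + exploration terms.
    if child.visits == 0:
        return float("inf")
    return child.wins / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def rollout(state):
    # Random playout standing in for the ML state evaluator.
    while winner(state) is None:
        state = play(state, random.choice(moves(state)))
    return winner(state)

def mcts(root_state, iters=2000):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCB while fully expanded.
        while node.children and len(node.children) == len(moves(node.state)):
            node = max(node.children.values(),
                       key=lambda ch: ucb(ch, node.visits))
        # 2. Expansion: add one untried child if the game isn't over.
        if winner(node.state) is None:
            m = random.choice(
                [m for m in moves(node.state) if m not in node.children])
            node.children[m] = Node(play(node.state, m), node)
            node = node.children[m]
        # 3. Simulation.
        w = rollout(node.state)
        # 4. Backpropagation: credit a win to each node whose parent's
        # mover (the player who chose this node) won the playout.
        while node is not None:
            node.visits += 1
            if node.parent is not None and w == node.parent.state[1]:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts((4, 0)))
```

That's the whole algorithm: select, expand, simulate, backpropagate. The "one simple but fundamental" observation holds up; the production versions mostly swap the random rollout for a trained value network.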
Now Transformers and their variants, also a simple algorithm, extract patterns from text, or really any sequence of symbols, and can synthesize new sequences that conform to those patterns. It's not thinking, and people read way too much into it because they can make it emit impressive or clever-seeming sequences (ignoring that they basically fed it those sequences to begin with), and it's never going to play board games well or move outside of its domain. But it has solved its ill-defined domain of playing back something not unlike knowledge.
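The mechanical core of that "simple algorithm" is causal self-attention. Here's an untrained, single-head sketch in numpy; the weights are random placeholders, since the pattern extraction the comment describes lives entirely in how those weights get trained on sequence data.

```python
import numpy as np

def causal_self_attention(x):
    """Single-head scaled dot-product self-attention with a causal mask.

    x: (seq_len, d) array of token embeddings. The projection weights
    here are random stand-ins; in a real transformer they are learned,
    which is where the pattern extraction actually happens.
    """
    seq_len, d = x.shape
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)                      # (seq_len, seq_len)
    mask = np.triu(np.ones((seq_len, seq_len), bool), k=1)
    scores[mask] = -np.inf                             # each position sees only the past
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v                                 # weighted mix of past values

out = causal_self_attention(np.random.default_rng(1).standard_normal((5, 8)))
print(out.shape)
```

Each output position is just a softmax-weighted average of earlier positions; stack a few dozen of these layers and train on enough symbol sequences, and you get the pattern playback described above.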
There are a couple more I can think of, but let's skip those. These algorithms plateau because they've solved their domain. What comes after that is someone inventing one more fundamental algorithm, and another, and so on.
The thing is, there's only so many domains that humans operate in, and "general intelligence", at least as displayed by humans, is the ability to work across those domains. We don't have all the algorithms we need for that, and it's impossible to put a timeline on new ideas. People have historically loved to indirectly argue that we'll never have new ideas, everything that can be known is known, and if we can't do it now we never will. Those people tend to lose bets.
Overall well said. I think we just disagree both on the timelines and the number and complexity of algorithms to meaningfully replicate human ability.