
Hey, thanks for the link to the New Yorker article. It's good reporting, and it gets it right.

Geoff understands the problems in modern AI, mostly. One thing he missed is that learning networks need to be able to add new pieces to their architecture, not just adjust the coefficients on existing cross-connections.

Current network paradigms have a couple of problems. One is combinatorial explosion. Say you have a set of 40,000 words or base concepts (where a word stands for one or more concepts). If you try merely to map the first-level connections between words, that is 40,000 x 40,000 = 1,600,000,000 links. That's already big, and one reason AI companies have to rent cloud computing to process language models. But it's worse than that. In the LLM approach, every single sentence in the entire corpus of training text has to be related to every other sentence, and then the model is trained to produce output from it. That gets very big. Humans don't do that. We operate differently and often dynamically construct new things rather than just leveraging past statements.
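To put a number on that, here's a quick back-of-the-envelope calculation in Python. The 40,000-word vocabulary is just the illustrative figure from above, and the memory estimate assumes one 32-bit weight per link:

```python
# Back-of-the-envelope: how many first-level links a 40,000-word
# vocabulary implies if every word is connected to every other word.

vocab_size = 40_000  # illustrative vocabulary of base concepts, as above

# Ordered pairs (the 40,000 x 40,000 figure from the text)
ordered_links = vocab_size * vocab_size
print(f"Ordered pairs:   {ordered_links:,}")       # 1,600,000,000

# Unordered pairs, if a link A-B counts the same as B-A
unordered_links = vocab_size * (vocab_size - 1) // 2
print(f"Unordered pairs: {unordered_links:,}")     # 799,980,000

# Rough memory cost if each link stored a single 32-bit weight
bytes_needed = ordered_links * 4
print(f"~{bytes_needed / 1e9:.1f} GB for one dense 40k x 40k weight matrix")
```

And that's only pairwise, first-level connections, before you ever get to relating whole sentences to each other.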

Humans use what the AI boys call 'one-shot' learning, where a single instance is enough to start learning something. The AIs generally don't do that well at all. (And the self-driving cars are partly bogus because they rely on stored street models; they don't really adapt to new things like wet cement. They have to be trained on it, force-fed on images.)
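For contrast, here's about the simplest sketch of what people usually mean by one-shot learning: store a single example per class and classify new inputs by their nearest stored example. The feature vectors and labels below are made up purely for illustration; this isn't anyone's real perception system.

```python
import numpy as np

# One-shot learning in its simplest form: store a single example per class
# and classify new inputs by nearest stored example.

class OneShotClassifier:
    def __init__(self):
        self.prototypes = {}  # label -> single stored feature vector

    def learn(self, label, example):
        """One instance is enough to start recognizing a new class."""
        self.prototypes[label] = np.asarray(example, dtype=float)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # Pick the class whose stored example is closest to x.
        return min(self.prototypes,
                   key=lambda label: np.linalg.norm(self.prototypes[label] - x))

# Hypothetical feature vectors, purely for illustration.
clf = OneShotClassifier()
clf.learn("dry road", [0.9, 0.1, 0.0])
clf.learn("wet cement", [0.2, 0.8, 0.7])   # one example of the new situation

print(clf.predict([0.25, 0.7, 0.6]))        # -> "wet cement"
```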

So humans do it differently, and at first more efficiently, than the machines do. Partly that is because we have billions of neurons and the ability to grow trillions of connections, given time. From age 0 to 18 we spend years absorbing information and gradually improving what we know. But when we do it, we do not just adjust interconnection coefficients between data points - we often grow new architecture pieces! For example, when you learn a bit of algebra, first you store some facts away. Then, perhaps overnight and maybe while dreaming, you build a little brain engine that knows how to use those facts, and it expands your abilities.
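Here's a rough sketch of what "growing new architecture pieces" could look like in code: a toy network that appends a new hidden block when training stalls, in the spirit of cascade-correlation-style growth. The sizes, thresholds, and data are invented for illustration; it's a sketch of the idea, not how brains or any shipping product actually do it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy "growing" network: instead of only re-weighting fixed connections,
# it can append a new hidden block when learning stalls.

class GrowingNet(nn.Module):
    def __init__(self, in_dim=4, hidden_dim=8, out_dim=1):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Tanh())])
        self.head = nn.Linear(hidden_dim, out_dim)
        self.hidden_dim = hidden_dim

    def grow(self):
        """Add a new architecture piece, not just new coefficient values."""
        self.blocks.append(
            nn.Sequential(nn.Linear(self.hidden_dim, self.hidden_dim), nn.Tanh()))

    def forward(self, x):
        h = x
        for block in self.blocks:
            h = block(h)
        return self.head(h)

net = GrowingNet()
opt = torch.optim.SGD(net.parameters(), lr=0.01)
x, y = torch.randn(32, 4), torch.randn(32, 1)   # made-up training data

prev_loss = float("inf")
for step in range(200):
    loss = F.mse_loss(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # If progress has stalled, grow a new block and hand its parameters
    # to the optimizer so they get trained too.
    if step % 50 == 49 and prev_loss - loss.item() < 1e-3:
        net.grow()
        opt.add_param_group({"params": net.blocks[-1].parameters()})
    prev_loss = loss.item()
```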

So the thing is that current AI language models are heavily mechanical; they do NOT derive meaning and then create good new concepts from the data. This is where Hinton, and I, differ from the AI herds who slavishly follow trends without deep understanding. Hinton and I know that for AGIs to be like human minds, they need different architectures, and the LLM herd is just wrong on too much.

Current LLM model handlers do NOT have an internal Self, unlike humans. Thus the AI lacks a specific viewpoint where a human would have specific views of things. Because of that, the AIs are pretty blind to cultures and they lack feelings about things. Any time an AI claims it has a feeling, it is lying: it has no Self inside. Thus, in some chatbot pronouncements, some of what is expressed is baloney. Adding more computing power will not cure that; it will take new architectures.

Geoff is wrong about feelings. Our emotions are based on factors I have identified correctly, contrary to conventional theory about emotions. I have implemented AIs that both understand human emotion and can have their own emotions about things. They can have feelings! However, this requires an AI to have a Self. I will be bringing this to market once I get the patent.

And finally, LeCun is wrong about AGIs and goals. True AGIs will need the ability to set their own goals and also to accept some orders from humans. But we need to build in failsafe cutouts for when an AGI chooses a human-destructive goal of its own - such as a drone deciding to attack all humans on the ground rather than only enemy combatants. If we launch AIs that can kill our own soldiers, that's a bad error.
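As a toy illustration of what a "failsafe cutout" could mean in software, here's a sketch of a hard veto layer sitting between goal selection and actuation. Every name and category in it is hypothetical; a real interlock would be far more involved and would need hardware-level cutoffs too.

```python
from dataclasses import dataclass

# Toy illustration of a "failsafe cutout": a hard veto layer that sits
# between goal selection and actuation. All names and categories here
# are hypothetical.

@dataclass
class Goal:
    action: str          # e.g. "track", "engage"
    target_class: str    # e.g. "enemy_combatant", "civilian", "friendly_soldier"

PROTECTED_CLASSES = {"civilian", "friendly_soldier", "unknown_human"}
DESTRUCTIVE_ACTIONS = {"engage", "strike"}

def failsafe_check(goal: Goal) -> bool:
    """Return True only if the goal may proceed to actuation."""
    if goal.action in DESTRUCTIVE_ACTIONS and goal.target_class in PROTECTED_CLASSES:
        return False  # hard cutout: never actuate a destructive goal on protected targets
    return True

def execute(goal: Goal):
    if not failsafe_check(goal):
        raise PermissionError(f"Failsafe cutout triggered: {goal}")
    print(f"Proceeding with {goal.action} on {goal.target_class}")

execute(Goal("track", "unknown_human"))        # allowed: non-destructive
execute(Goal("engage", "enemy_combatant"))     # allowed under this toy policy
# execute(Goal("engage", "friendly_soldier"))  # would raise PermissionError
```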

280 days ago
1 score