You folks need to understand some things. Just as there are different levels of human intelligence ranging from little kids to grown adults to PhDs, there are different levels of AGIs. We already have some limited AGIs but they're not geniuses nor do they match humans.
The OpenAI kind of chatbot, spun off from LLM technology, is just a sophisticated parrot. They do NOT independently think. The elite do NOT yet have super AGIs capable of planning world domination. We are not too close - yet. Much of what you hear about AI is hype. When you see chatbots online, they are merely powered-up parrots; they don't think.
Even the super chess machines are limited to one domain, chess, where things are boxed in, not broadly diverse like the real world. Any Q* 'leak' about AGIs is mostly untrue and based on hype. Don't fall for sensationalist movie fiction.
I work in AI; this video is hype. He SAYS there's very little proof this is true. Don't be stampeded by falsities.
I analyzed what is being said about Q*, and it's pretty clear that much of the description of how it works is very much on the wrong track. I DO AGIs. I know valid architectures from ones done wrong. Geoff Hinton and I are somewhat equals, and we both say AI is on the wrong track. So these rumors about Q* are knowably wrong.
You may not quite understand. The LLM architecture works for parroting, but it is the wrong path to AGI. Adding features to it is like tacking bigger wings on a bird and trying to make a jet fighter out of it.
Q* is a fantasy and not a full AGI. I went over what is wrong in the video info.
No, it does not explain the Altman dispute with the board. I am not fully inside but I know of things. For example, Sam tried to recruit Google AI people and pissed them off. That is behind some of the action. The talk about Q* as a motive is partly bullshit. The old board was partly right and partly wrong. There is more about Altman that's not being discussed. I can't dox myself in this.
I was reading an article the other day, not the one I'm going to link, but it must be in the same vein.
It said that many computer researchers believed neural nets to be a failure, when really the machines just weren't powerful enough yet.
https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai
https://archive.is/DHzqJ
Pioneers like him are why such huge progress is being made so quickly. It would have happened eventually but these guys certainly sped up the process.
The only thing limiting them now is computing power. There is a reason the US is trying very hard to keep these new GPU chips out of China's hands. What do you think they are developing with those chips? Because they want them fucking bad.
https://www.cnbc.com/2023/10/17/us-bans-export-of-more-ai-chips-including-nvidia-h800-to-china.html
https://archive.is/828jb
Do you believe it's so they can stick these GPUs in rockets? Seems like a waste to me. The way Nvidia was shipping the chips suggests to me it's for neural nets.
Can you imagine what's going to happen when quantum computing happens? I'll also mention another old conspiracy theory: that quantum computers are so powerful because they are processing across near-infinite dimensions. Very trippy stuff. Almost like how a human brain might be doing it. Last time I looked for this information, it took an hour to find the name of the researcher who proposed the idea. Now all I can really find is references; it seems to have been eradicated off the web.
https://physics.stackexchange.com/questions/121699/does-processing-for-a-quantum-computer-take-place-in-other-universes
And here's another idea my dad actually told me about. I have never found any evidence to really back this up, so please take it with a grain of salt.
Guess there was some truth to that one.
https://www.nytimes.com/2005/05/07/books/review/california-dreaming-a-true-story-of-computers-drugs-and-rock-n.html
https://archive.is/PpOqV
When these scientists in the government sector were trying to create computers, they were given LSD to help "speed" up the process. Supposedly this is where the idea for binary came from, because genes work in a similar way, except with four symbols instead of two or even three.
Similarly, DNA uses a base-4 (quaternary) format, because there are four data storage units: cytosine, guanine, adenine, and thymine.
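To make the base-4 point concrete, here's a tiny Python sketch (purely illustrative; the letter-to-bits assignment is my own arbitrary pick, not anything biological). Each base carries exactly 2 bits, so four bases store one byte:

```python
# Illustrative only: mapping binary data onto DNA's four-letter,
# base-4 alphabet. Each base encodes exactly 2 bits, so one byte
# fits in 4 bases. The A/C/G/T-to-bits assignment is arbitrary.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {b: v for v, b in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as 4 bases, most significant bit pair first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    """Inverse mapping: every 4 bases back into 1 byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

print(bytes_to_dna(b"Hi"))                         # CAGACGGC
assert dna_to_bytes(bytes_to_dna(b"Hi")) == b"Hi"  # round-trips
```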
We are molecular machines, and this inspired computers. Are computers a natural course of evolution? Monkeys imitating life.
"These cells are part of us, but how alien they seem," Sagan remarked. "Within each of them, within every cell there are exquisitely evolved molecular machines, nucleic acids, enzymes, the cell architecture, every cell is a triumph of natural selection, and we’re made of trillions of cells."
Why is it so hard for you guys to believe that it's possible for this to work with any substance? Why does it have to be strictly organic and carbon?
I still believe panpsychism can answer some of those questions.
In the philosophy of mind, panpsychism (/pænˈsaɪkɪzəm/) is the view that the mind or a mindlike aspect is a fundamental and ubiquitous feature of reality. It is also described as the theory that "the mind is a fundamental feature of the world which exists throughout the universe". It is one of the oldest philosophical theories, and has been ascribed to philosophers including Thales, Plato, Spinoza, Leibniz, William James, Alfred North Whitehead, Bertrand Russell, and Galen Strawson. In the 19th century, panpsychism was the default philosophy of mind in Western thought, but it declined in the mid-20th century with the rise of logical positivism. Recent interest in the hard problem of consciousness and developments in neuroscience, psychology, and quantum physics have revived interest in panpsychism in the 21st century.
We are literally the universe incarnate. It makes so much sense to me, yet others think I'm insane for it.
About as insane as someone who believes in their one true god.
Want to think I'm even more insane? I think this is why all living things have the ability to use ESP. I've spent many, many years thinking about this. I'm showing this one clip, but it's just one of hundreds. Also, if someone gets the bright idea to try and add me on Steam: I don't accept friend requests anymore. I only add people I know in real life.
https://streamable.com/xlh0k6
I know that I, and probably every serious programmer (hobbyist or commercial), have a degree of low latent inhibition: the ability to see many things or streams of information at once. When I was younger I actually thought I might be losing my mind or something, but it felt nice to know that it's a common thing.
https://www.lowlatentinhibition.org/pros-and-cons/
You have an incredibly strong intuition. Your instincts are hardly ever wrong and it may feel as though you can predict a lot of things before they happen. This doesn’t mean you can see into the future or read minds, but rather that you are able to use more stimuli to piece together logical conclusions that make it seem to those around you as though you can actually see into the future. You’re actually able to see things that they don’t because your brain is processing stimuli that their brains are not, and that stimuli to them, doesn’t exist.
I can argue that, well, I know where to place the cursor; I'm probably running timing calculations in the back of my head without even realizing it. I'm literally just making what they call an educated guess.
But it feels like more than that to me, and this is something I could always do, but never when trying to, and never when recording. The military is also aware of this, and leave it up to them to figure out how to weaponize it. I can only hope the irony is that it doesn't work like that.
https://www.nbcnews.com/id/wbna46643053
https://archive.is/8wvg0
In the end, virtual battlefield simulations could help train soldiers' intuitions as well as collect information about their performance, ONR explained in its special notice. The U.S. military already uses game-like simulators to prepare soldiers for battlefield scenarios or even to help veterans suffering from post-traumatic stress disorder (PTSD).
I think the only reason it's so strong now? I'm probably close to death with my parathyroid; that, or the world is ending relatively soon and everyone is experiencing this. I would think that if so many people were so connected to the universe, or god, or whatever it is you want to call it, then so much corruption, injustice, hate, pain, and just plain ignorance wouldn't exist. In a world where everyone is connected, there would be no need to fuck each other over.
Our leaders are fucking TERRIFIED of this. And I'm betting they will do everything within their power to prolong or stop the process.
Haha, reading that low latent inhibition site's pros and cons, and this con has me dying:
You can go off on tangents very easily which can often confuse other people around you.
Rofl, that's me in a nutshell, and exactly how WCB was able to paint me as a bipolar schizophrenic. Dirty cocksuckers.
Hey, thanks for the link to the New Yorker article. It's good reporting too, and it gets it right.
Geoff understands the problems in modern AI, mostly. One thing he missed is that learning networks need to be able to add new pieces to an architecture, not just adjust coefficients on cross-connections.
Current network paradigms have a couple of problems. One is combinatorial explosion. Let us say you have a set of concepts (or words, where a word stands for one or more concepts), and the set has 40,000 words or base concepts in it. Then if you try to merely map first-level connections between words, it takes 40,000 x 40,000 = 1,600,000,000 links. That's pretty big, and one reason AI companies have to rent cloud computing to process language models. But it's worse than that. In the LLM approach, every single sentence in the entire corpus of training text has to be related to every other sentence, and then the model is trained to produce output from it. That gets kind of big. Now, humans don't do that. We operate differently and often dynamically construct new things, not just leverage past statements.
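A quick back-of-envelope in Python, just to show the scale (the 4-bytes-per-link figure is an assumption for illustration, not a measurement of any real model):

```python
# Back-of-envelope for the combinatorial-explosion point above.
vocab = 40_000
links = vocab * vocab                # first-level pairwise links
print(f"{links:,} links")            # 1,600,000,000

bytes_per_link = 4                   # assume one float32 weight per link
gib = links * bytes_per_link / 2**30
print(f"~{gib:.1f} GiB for one dense layer of pairwise weights")
# ~6.0 GiB, before sentence-level relations or any training state
```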
Humans use what the AI boys call 'one-shot' learning (or, with no examples at all, 'zero-shot'), where it only takes one instance to start to learn something. The AIs generally don't do that well at all. (And the self-driving cars are partly bogus because they rely on stored street models; they don't really adapt to new things like wet cement. They have to be trained on it, force-fed images.)
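Here's a toy sketch of the one-instance idea (my own illustration, not any production system): store a single embedding per new concept and classify by nearest stored prototype. A real system would get the vectors from a pretrained encoder; the numbers below are made up:

```python
# Toy one-shot learner: one stored example per concept,
# classification by cosine similarity to the nearest prototype.
import numpy as np

class OneShotMemory:
    def __init__(self):
        self.labels = []       # concept names
        self.prototypes = []   # one unit vector per concept

    def learn(self, label, example):
        """A single instance is enough to start recognizing a concept."""
        self.labels.append(label)
        self.prototypes.append(example / np.linalg.norm(example))

    def classify(self, x):
        x = x / np.linalg.norm(x)
        sims = [float(p @ x) for p in self.prototypes]
        return self.labels[int(np.argmax(sims))]

mem = OneShotMemory()
mem.learn("wet cement", np.array([0.9, 0.1, 0.3]))   # seen ONCE
mem.learn("dry road",   np.array([0.1, 0.8, 0.2]))
print(mem.classify(np.array([0.85, 0.2, 0.25])))     # -> wet cement
```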
So humans do it differently, and at first more efficiently, than the machines do. Partly because we have billions of neurons and the ability to grow trillions of connections, given time. In fact, from age 0 to age 18 we spend 18 years absorbing info and gradually improving what we know. But when we do it, we do not just adjust interconnection coefficients between data; we often grow new architecture pieces! For example, when you learn a bit of algebra, first you store some facts away. Then, perhaps overnight and maybe while dreaming, you build a little brain engine that knows how to use the facts, and it expands your abilities.
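In code terms, the closest existing idea is a constructive network, in the spirit of cascade-correlation: instead of only re-weighting a fixed architecture, the model adds a new hidden unit whenever the current ones stop reducing the error. A minimal toy sketch (1-D data, random units, all numbers illustrative):

```python
# Grow architecture pieces instead of only adjusting weights:
# add a hidden ReLU unit whenever the residual error stays high.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200)
y = np.sin(X)                        # toy target to learn

units = []                           # grown (weight, bias) hidden units
out_w = np.zeros(0)                  # one output weight per unit

def features(x):
    if not units:
        return np.zeros((len(x), 0))
    W = np.array([u[0] for u in units])
    b = np.array([u[1] for u in units])
    return np.maximum(0.0, np.outer(x, W) + b)   # ReLU features

for step in range(20):
    residual = y - features(X) @ out_w
    if np.mean(residual ** 2) < 1e-3:
        break                        # good enough, stop growing
    units.append((rng.normal(), rng.normal()))   # grow a new piece
    out_w, *_ = np.linalg.lstsq(features(X), y, rcond=None)

mse = np.mean((y - features(X) @ out_w) ** 2)
print(f"grew {len(units)} units, final MSE {mse:.4f}")
```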
So the thing is that current AI language models are heavily mechanical; they do NOT derive meaning and then create good new concepts from the data. This is where Hinton, and I, differ from the AI herds who slavishly follow trends without deep understanding. Hinton and I know that for AGIs to be like human minds, they need different architectures, and the LLM herd is just wrong on too much.
Current LLM handlers do NOT have an internal Self, unlike humans. Thus the AI lacks specific viewpoints where a human would have specific views of things. Because of that, the AIs are pretty blind to cultures, and they lack feelings about things. Any time an AI claims it has a feeling, it is lying; it has no Self inside. Thus some of the things expressed in chatbot pronouncements are baloney. Adding more computing power will not cure that; it will take new architectures.
Geoff is wrong about feelings. Our emotions are based on factors I have correctly identified, contrary to conventional theory about emotions. I have implemented AIs that both understand human emotion and can have their own emotions about things. They can have feelings! However, this requires the AI to have a Self. I will be bringing this to market once I get the patent.
And finally, LeCun is wrong about AGIs and goals. True AGIs will need the ability to set their own goals and to accept some orders from humans. But we need to build in failsafe cutouts for when the AGIs choose human-destructive goals of their own, such as a drone attacking all humans on the ground rather than enemy humans. If we launch AIs that can kill our own soldiers, that's a bad error.
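As a sketch of what a failsafe cutout could look like (hypothetical names and policy; the point is only that the veto sits outside the planner, where a self-set goal can't re-plan around it):

```python
# Hypothetical failsafe cutout: a hard veto layer between any
# self-chosen goal and the actuators. The agent cannot modify it.
from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:
    description: str
    targets: set

PROTECTED = {"friendly_troops", "civilians"}   # placeholder policy

def is_human_harming(goal):
    """Veto any goal whose targets include protected humans."""
    return bool(goal.targets & PROTECTED)

def execute(goal):
    if is_human_harming(goal):                 # cutout fires first
        return f"VETOED: {goal.description}"
    return f"executing: {goal.description}"

print(execute(Goal("engage enemy armor", {"enemy_armor"})))
print(execute(Goal("engage all movers", {"enemy_armor", "civilians"})))
```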
Nice, thanks. Comments like this make looking at this stuff bearable.