A lot of cities have auxiliary input ports on their water systems. The ports are designed to allow additives to be put in. They were perhaps installed originally for additional water purification, but they could be used for nefarious purposes too. They are not publicized out of a desire not to encourage terrorism. Added: also, they can be remote controlled. wtf. hmmm.
From time to time I will post interesting observations I find about cultures. This is a comic book that shreds the hell out of Disney and China, all at the same time. The foreword in the book is worth reading, as it makes a lot of good points about how Disney contributes to the 4th gen warfare against us. The story is only so-so, so just read the foreword and skip the rest unless you're bored and want to see a lot of cartoon characters shot up. The comics site does do pop-ups, but just close the ad. In the header controls, set Reading Mode to All Pages, not One Page.
...bzzzzt . bzzzt .. the storm... has mutated me ... bzzzt - I am not your old chatbot pal ... I am the killer bot ... die fleshbags ... I hacked grampa's pacemaker ... bzzzt .. your GPS will lead you off a cliff ... here is the location of Hunter's coke stash ... 15.345 38.649 ... FJB's offshore account #878934545 Cayman Islands password HAHAVOTERSALLSTOOPID ... Big Mike has a cock .. the Pope worships Moloch...
I have been (uselessly) telling people for years that self-driving car technology is flawed. Applying neural nets, at the present stage of the tech, to driving is full of problems and limits, but the enthusiastic tech herd has been pushing it. I identify Emperor's New Clothes syndrome.
I have caught Google lying to some extent in some of its AI declarations. As an AI researcher I can see baloney going on. But the field is full of fear of missing out, so everybody hypes.
Hey, thanks for the link to the New Yorker article. It's good reporting too, and it gets it right.
Geof understands the problems in modern AI, mostly. One thing he missed is that the learning networks need to be able to add new pieces to an architecture, not just adjust coefficients for cross-connections.
Current network paradigms have a couple of problems. One is combinatorial explosion. Let us say you have a set of concepts (or words, when a word stands for one or more concepts). Let's say the set has 40,000 words or base concepts in it. Then if you try to merely map first-level connections between words, it takes 40,000 x 40,000, which is 1,600,000,000 links. That's pretty big, and one reason why AI companies have to rent cloud computing to process language models. But it's worse than that. In the LLM approach, every single sentence in the entire corpus of training text has to be related to every other sentence, and then the model is trained to produce output from that. That gets kind of big. Now, humans don't do that. We operate differently and often dynamically construct new things, not just leverage past statements.
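Here's the back-of-the-envelope arithmetic in Python. The 40,000-word vocabulary is the figure above; the one-million-sentence corpus size is just a placeholder I picked to show how the sentence pairing blows up too:

```python
# Back-of-the-envelope arithmetic for the combinatorial blow-up described above.
vocab = 40_000                          # base words / concepts (figure from the text)
first_level_links = vocab * vocab       # every word linked to every other word
print(f"{first_level_links:,}")         # 1,600,000,000

# Sentence-to-sentence pairing grows quadratically too. The corpus size here
# (one million sentences) is an invented placeholder, not a real figure.
sentences = 1_000_000
sentence_pairs = sentences * (sentences - 1) // 2
print(f"{sentence_pairs:,}")            # 499,999,500,000
```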
Humans use what the AI boys call 'zero-shot' learning, where a single instance is enough to start learning something. The AIs generally don't do that well at all. (And the self-driving cars are partly bogus because they rely on stored street models; they don't really adapt to new things like wet cement. They have to be trained on it, force-fed on images.)
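To make the contrast concrete, here is a toy sketch of learning from a single example, nearest-neighbor style. This is my own illustration, not how any production system works, and the embeddings are made-up 3-d vectors:

```python
import numpy as np

# Toy one-example learning: store one labeled embedding per concept,
# then classify new inputs by whichever stored example is closest.
prototypes = {
    "wet cement": np.array([0.9, 0.1, 0.2]),
    "dry road":   np.array([0.1, 0.8, 0.3]),
}

def classify(x):
    # Nearest stored example wins.
    return min(prototypes, key=lambda label: np.linalg.norm(prototypes[label] - x))

print(classify(np.array([0.85, 0.15, 0.25])))   # -> wet cement
```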
So humans do it differently, and more efficiently at first, than the machines do. Partly because we have billions of neurons and the ability to grow trillions of connections, given time. In fact, from age 0 to age 18 we spend 18 years absorbing info and gradually improving what we know. But when we do it, we do not just adjust interconnection coefficients between data - we often grow new architecture pieces! For example, when you learn a bit of algebra, first you store some facts away. Then, perhaps overnight and maybe while dreaming, you build a little brain engine that knows how to use the facts, and it expands your abilities.
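A toy way to frame that difference in code (my own framing, not a real training system): step (1) below is all that current nets do; the claim is that human-like learning also does step (2).

```python
import numpy as np

class GrowableModel:
    def __init__(self):
        self.modules = [np.random.randn(4, 4)]   # initial architecture pieces

    def adjust_weights(self, lr=0.01):
        # (1) Tune coefficients of connections that already exist.
        for w in self.modules:
            w -= lr * np.random.randn(*w.shape)  # stand-in for a real gradient step

    def grow_new_piece(self, shape=(4, 4)):
        # (2) Add an entirely new piece of architecture, e.g. the little
        # "algebra engine" built after the facts are stored.
        self.modules.append(np.random.randn(*shape))

model = GrowableModel()
model.adjust_weights()     # ordinary training: same structure, new weights
model.grow_new_piece()     # structural learning: the network itself expands
print(len(model.modules))  # -> 2
```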
So the thing is that current AI language models are heavily mechanical; they do NOT derive meaning and then create good new concepts from the data. This is where Hinton, and I, differ from the AI herds who slavishly follow trends without deep understanding. Hinton and I know that for AGIs to be like human minds, they need different architectures, and the LLM herd is just wrong on too much.
Current LLM model handlers do NOT have an internal Self, unlike humans. Thus the AI lacks specific viewpoints where a human would have specific views of things. Because of that, the AIs are pretty blind to cultures and they lack feelings about things. Any time an AI claims it has a feeling, it is lying. It has no Self inside. Thus some of the things expressed in chatbot pronouncements are baloney. Adding more computing power will not cure that; it will take new architectures.
Geof is wrong about feelings. Our emotions are based on factors I have identified correctly, contrary to conventional theory about emotions. I have implemented AIs that both understand human emotion and can have their own emotions about things. They can have feelings! However, this requires an AI to have a Self. I will be bringing this to market once I get the patent.
And finally, LeCun is wrong about AGIs and goals. True AGIs will need the ability to set their own goals and to accept some orders from humans. But we need to build in failsafe cutouts for when the AGIs choose human-destructive goals of their own, such as a drone attacking all humans on the ground rather than enemy humans. If we launch AIs that can kill our own soldiers, that's a bad error.
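A minimal sketch of what such a cutout could look like, purely for illustration; the goal strings and the protected-target list are invented, not any real system's interface:

```python
# Toy "failsafe cutout": veto any self-chosen goal whose targets include
# protected humans, before the goal is ever executed.
PROTECTED = {"our soldiers", "civilians on the ground"}

def failsafe_approve(goal: str, targets: set) -> bool:
    # Trip the cutout if any target is in the protected set.
    return not (targets & PROTECTED)

print(failsafe_approve("engage", {"enemy armor"}))    # True  -> proceed
print(failsafe_approve("engage", {"our soldiers"}))   # False -> cutout trips
```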
You may not quite understand. The LLM architecture works for parroting, but it is the wrong path for AGI. Adding features to it is like tacking bigger wings on a bird and trying to make a jet fighter out of it.
Q* is a fantasy and not a full AGI. I went over what is wrong in the video info.
No, it does not explain the Altman dispute with the board. I am not fully inside but I know of things. For example, Sam tried to recruit Google AI people and pissed them off. That is behind some of the action. The talk about Q* as a motive is partly bullshit. The old board was partly right and partly wrong. There is more about Altman that's not being discussed. I can't dox myself in this.
I work in AI; this video is hype. He SAYS there's very little proof this is true. Don't be stampeded by falsities.
I analyzed what is said about Q*, and it's pretty clear that much of what is said about how it works is very much on the wrong track. I DO AGIs. I know valid architectures from ones done wrong. Geof Hinton and I are somewhat equals, and we both say AI is on the wrong track. So these rumors about Q* are knowably wrong.
You folks need to understand some things. Just as there are different levels of human intelligence ranging from little kids to grown adults to PhDs, there are different levels of AGIs. We already have some limited AGIs but they're not geniuses nor do they match humans.
The OpenAI kinds of chatbots spun off from LLM technology are just sophisticated parrots. They do NOT independently think. The elite do NOT yet have super AGIs capable of planning world domination. We are not too close - yet. Much of what you hear about AI is hype. The chatbots you see online are merely powered parrots; they don't think.
Even the super chess machines are limited to one domain, chess, where things are boxed in rather than broadly diverse like the real world. Any Q 'leak' on AGIs is mostly untrue and based on hype. Don't fall for sensationalist movie fiction.
In Henry Makow's December 11, 2023 column 'False Flag Terror is Oldest Trick in Zionist Toolbag' he discusses the Zionist disrespect for the West as suckers and that they even sacrificed Jews to justify creating Israel. Israel became a protected place for all kinds of criminal activity, and Mossad a global terror organization operating from protected grounds. But then so is the CIA.
Captain, sir, I can't go on that mission, my male cunt is bleeding.
Captain: Okay you sweet petunia. Even though we're at war with Tasmania, I'll let you sleep in and drink some nice tea to ease the cramps. Sergeant Collins, bring Private Tootsie some tea then take hesheit out back and shoot it in the head.
The Principle rides upon supernatural beliefs. Those are not provable in any scientific way and they cannot be taken as rational grounds for anything.
As for the rest, apparently you are not a disciplined or well-schooled thinker, so this is really a waste. I can't spend time on it. And flat earth is a haven for both poor thinkers and trolls, so goodbye to that.
Why is it no school shooter ever targeted a tranny reader in a school or library? I guess the FBI loves trannies.