Obviously not. But if you crack RSA, you don't go bragging about it on social media. If people took it seriously, every intelligence agency and criminal organisation in the world would line up to kidnap him, so as to be the only ones able to exploit it for a while and to keep everyone else from doing the same. I assume this hasn't happened.
It would also be simple to demonstrate the ability without giving away any information about how it was done, which would instantly force the world to react. I assume he neglected to do this.
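For instance, publishing the factors of a well-known, still-unfactored RSA challenge modulus would prove the capability while revealing nothing about the method, since anyone can check the claim with one multiplication and two primality tests. A minimal sketch of such a check (the numbers below are a toy semiprime standing in for a real 2048-bit modulus, and any primality test would do in place of sympy's):

    from sympy import isprime

    def verify_factoring_claim(n: int, p: int, q: int) -> bool:
        """True iff p and q are nontrivial primes whose product is n.

        The check reveals nothing about HOW the factors were found.
        """
        return 1 < p < n and 1 < q < n and isprime(p) and isprime(q) and p * q == n

    # Toy illustration; a real claim would use a published challenge modulus.
    assert verify_factoring_claim(3233, 61, 53)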
There are other reasons than greed. If you want a cure for cancer, or practical fusion energy, or spreading life to other planets, or any other desirable goodies, you're going to need geniuses working on them. There's such a thing as the IQ bell curve, which means that geniuses are only a small fraction of the population. If you want the world to contain geniuses, you will need it to also contain an awful lot of "normal" people.
Yes, it's complete hyperbole, on multiple levels.
"AI" like what people are hyping now, isn't telling anyone to do anything, on account of it being merely a statistical summary of a large set of texts or images. It doesn't understand anything in any meaningful way. It doesn't think. It doesn't have opinions. It's not aware. And the systems we have now fundamentally do not scale to that.
What they can do is this: you give them some input, and then their internal mathematical model provides you with some output that statistically seems to relate to what you provided. Maybe the response is grammatically correct - the system doesn't know, and doesn't even have the capability to know. Maybe it's semantically valid - the system doesn't know, and never can. The system also can't know whether the response is true or false or gibberish - only that there's some statistical correlation to the words it was fed earlier.
They also don't learn or remember what you or they said. In order to appear to be engaging in a conversation, they are not given an input of just what you last wrote, but an input that also contains all the previous things you wrote in that "conversation" and everything the system replied back. Then it gives you something that statistically relates to all of that input.
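You can see this in how chat-style clients are typically written: every turn re-sends the whole transcript, and the model just continues it. A sketch of the pattern (model_complete is a stand-in for a real completion endpoint, and the role/content message format mirrors common chat APIs; details vary by vendor):

    # The model has no memory between calls; to simulate a "conversation",
    # the client re-sends the entire transcript on every turn.
    history = []

    def model_complete(messages):
        # Stand-in for a real completion endpoint. A real model would return
        # a continuation that statistically fits the whole transcript.
        return "(statistical continuation of %d messages)" % len(messages)

    def chat_turn(user_text):
        history.append({"role": "user", "content": user_text})
        reply = model_complete(history)  # sees everything, not just the last line
        history.append({"role": "assistant", "content": reply})
        return reply

    chat_turn("hello")
    chat_turn("what did I just say?")  # the "memory" is only the re-sent history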
If you initially give them a lot of text like "KILL YOURSELF MUH CLIMATE", they are likely to repeat some of that back. If you give them the Bible, they are likely to repeat some of that back. As mindlessly as your calculator computing 2+3 for you.
There are certainly entities out there that could be said to be satanic, and want you to off yourself, for the climate or for any excuse they can sell you. But those are not overhyped statistical text models called "AI".
A language model like Bing's is a statistical summary of a large set of texts mostly scraped from the internet. All it does is regurgitate variations of what people wrote. People have been shitposting for ages about what AIs might do, and this is it.
Now imagine what a vicious circle it will be when articles like this get scraped and put into the datasets later models will be built on, making those models more likely to repeat the "journalists'" worst fears back at them.
It looks like it's been on the market for a while.
https://www.youtube.com/watch?v=UWqMRJZEqRA&ab_channel=macabespeed
At some point it does turn into mere work, and I think that's the natural way of things. I'm running a couple of little software projects to pay the bills, and am working on a more ambitious project that isn't out yet, and to be honest it's only the latter one I'm still passionate about.
A useful way of thinking about it is that "training a model" means "building a statistic." A system like ChatGPT is in essence an enormous statistical dataset of which words followed which words in the 35 TB of text it is a summary of.
It doesn't plan or learn or understand, and when it appears to be doing that in response to what people write, it is an illusion based on loosely-similar sequences of words in the training texts.
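To make the "statistic" concrete, here is a toy bigram model: it records which word followed which word in some training text, then generates output by sampling from those counts. Real systems use vastly larger contexts and neural nets rather than a lookup table, but the output-is-statistically-related-to-the-input character is the same:

    import random
    from collections import defaultdict

    def train(text):
        # A table of which word followed which word in the training text.
        stats = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            stats[a].append(b)  # duplicates preserve the observed frequencies
        return stats

    def generate(stats, start, length=10):
        out = [start]
        for _ in range(length):
            followers = stats.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))  # sample by observed frequency
        return " ".join(out)

    model = train("the cat sat on the mat and the cat ran")
    print(generate(model, "the"))  # e.g. "the cat sat on the mat and the cat sat"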
GPT will fundamentally never scale past that. But that doesn't mean other algorithms won't.
Not serious, since an even remotely comprehensive rebuttal would be long enough to read as a leftist meme.
The regime would basically have to shut down the internet, and possibly electricity, to make it game over for crypto. And then the world would have larger problems.
Meanwhile their own fiat currency scam has been decoupled from gold, is being inflated out of any semblance of value as fast as they can get away with, and they routinely prohibit bank transfers from and to individuals as well as organizations and countries. And their current project is to shut down fiat cash. These problems can only be solved with an insane number of impalements.
| In exchange the Anunnaki seize the gold mined by humanity.
Advanced space-faring civilization. Craves gold. Never thinks to send robotic miners to the asteroid belt, which contains far more easily-accessible gold than we've managed to extract from the thin layer of the Earth's crust that we're able to mine.