Here is what GPT-J 6B knows about purple CIA files:
the CIA purple files are about aliens as well as government cover-up of UFO facts. But there are more files in that huge storehouse that I need to see and read about.
They all involve US government cover-up of UFO facts, and the secret of what is being covered up.
And that’s the key.
What kind of secret would the CIA have about an UFO?
And the same goes for the NSA, the FBI and the army.
And the FBI?
And the military?
What would any of them do to hide such a secret?
Looks like GPT-J 6B is more sensitive, worried and sincere. :)
Really, this has nothing in common with AI at all. To exaggerate a little, it is just a program that runs over a giant database and tries to find the most relevant continuation of the previous phrase.
As for the GPT-3 model, sooner or later it will inevitably leak out, so you will have a chance to play with it.
I thought you had replied in that Philosopher AI thread.
No. I replied only here.
What have you got on this Philosopher AI?
Philosopher AI is not a philosopher and not an AI. It is an interface to a simple neural network that is pre-trained to predict the next word in a sentence. It is just a text synthesizer. It is trained on many texts, and the result of that training is named GPT-3. OpenAI (not open and not AI) sells this closed pre-trained model for money, riding the modern hype around AI.
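The "predict the next word" idea can be shown with a toy sketch. This is pure Python over a tiny made-up corpus (my own invention, not anything GPT actually uses): count which word most often follows each word, then always emit the most frequent continuation. Real GPT models do this with a neural network over subword tokens and assign probabilities to every token in the vocabulary, but the basic job is the same.

```python
from collections import Counter, defaultdict

# Tiny hand-made corpus (hypothetical, purely for illustration).
corpus = (
    "the cia files are secret . "
    "the cia files are about ufos . "
    "the files are secret ."
).split()

# Bigram frequency table: how often each word follows each other word.
follower_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follower_counts[word][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("cia"))  # -> files
print(predict_next("are"))  # -> secret
```

Feed the prediction back in as the new context and you get a crude text synthesizer; the "giant database" in a real GPT is baked into billions of learned weights instead of an explicit count table.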
What's this GPT-J 6B?
There are other Generative Pre-trained Transformer (GPT) models, open and free, e.g. GPT-2 and GPT-J, that you can easily play with. GPT-J 6B is larger than GPT-2. GPT-J 6B is a GPT-J transformer model with 6 billion trainable parameters. https://huggingface.co/EleutherAI/gpt-j-6B
I'd like to use something where it wouldn't say "nonsense" all the time.
You have to spend some time to get something that is not "nonsense" out of any GPT. The examples you saw from Philosopher AI (GPT-3) are just a small part selected from a huge pile of nonsense answers. Marketing, you know...
It's too late, but cheers!
You are not limited in length here - https://bellard.org/textsynth/