OP if you're interested in LLMs, I suggest taking a look at (unfortunately Plebbit) https://old.reddit.com/r/localllama. You can run your own LLMs locally and some of the front ends even support multi-character chat (where the LLM chats with "itself" as different characters).
IMO ChatGPT and Bing are both so overtuned for "safety" that they're borderline useless for creative things, or really just for giving "interesting" responses.
Does this mean that there is some platform where they can be placed so they can directly speak to each other???
How many AIs are there, and what would be the limits to all of them being brought together on a single platform or thread where they could ALL just have a random communication???
The subplebbit I linked is all about running LLMs on your local machine (and has a nice community of model creators and tuners, e.g., to remove censorship and shitlib bias). Technically you can set it up however you want, that said, it's challenging to run multiple at once.
By default it's not set up to run multiple LLMs and have them talk to each other though.
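If you do want to wire that up yourself, it's not much code. Here's a rough sketch (just the general idea, not any particular front end's feature) assuming each model is served behind an OpenAI-compatible chat endpoint, which llama.cpp's server and most of the popular local front ends can expose; the ports, model names, and personas below are placeholders:

```python
# Rough sketch of two local models talking to each other.
# Assumes each model is served behind an OpenAI-compatible
# /v1/chat/completions endpoint (llama.cpp's server can do this);
# the ports, model names, and personas are all placeholders.
import requests

BOTS = [
    {"name": "Alice", "url": "http://localhost:8080/v1/chat/completions",
     "model": "model-a", "system": "You are Alice, a curious skeptic."},
    {"name": "Bob", "url": "http://localhost:8081/v1/chat/completions",
     "model": "model-b", "system": "You are Bob, an overconfident know-it-all."},
]

def ask(bot, transcript):
    # Each bot gets its own persona plus the running transcript so far.
    payload = {
        "model": bot["model"],  # many local servers ignore this field
        "messages": [
            {"role": "system", "content": bot["system"]},
            {"role": "user", "content": transcript},
        ],
        "max_tokens": 200,
    }
    r = requests.post(bot["url"], json=payload, timeout=120)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"].strip()

# Seed the conversation, then let the two models alternate turns.
transcript = "Alice: So, can language models actually talk to each other?\n"
print(transcript, end="")
for turn in range(6):
    bot = BOTS[(turn + 1) % 2]  # Bob answers first, then they alternate
    line = bot["name"] + ": " + ask(bot, transcript) + "\n"
    print(line, end="")
    transcript += line
```

You can also point both "bots" at the same server and just swap the system prompt per turn, which is more or less the multi-character chat idea from above, just scripted by hand.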
How difficult is it to do in Linux???
I'm not like an expert in Linux, but I can add and remove programs and such....
Most of this stuff works the same or better on Linux. If you can install Python packages, you'll be fine.
You don't need to know how to code, just how to follow instructions. Putting commands into a command line isn't programming. Once upon a time that was how all interaction with computers worked.
I'm not gonna read 144,000 characters worth of ChatGPT generated help instructions, bro.
If you don't know how to work it, look up a tutorial on YouTube or something; it's straight up just following the instructions.