I’m putting this post in c/Conspiracies because here, more than any other place, I’ve seen people citing AI-generated answers as a source.
AI is a very powerful tool for generating writing, but it is not a tool capable of ensuring that writing is accurate or useful.
Anyways… Below are some experiments you can try for yourself on ChatGPT or any other AI… Test it out, and track the results you get. After you’ve done these experiments you will have a better understanding of why you can’t use it for research.
1.) Ask it to do some math problems with 4-digit numbers and more than one operation, e.g. “Multiply 3,456 by 2,835, and then subtract 2,000 from the result.”… Does it produce the correct answer?
2.) Give it a grocery list with 50 items… Ask the AI to sort the list in alphabetical order. Then manually count how many items it left out, and how many items it added that weren’t there before.
3.) Ask it to describe the best 50 episodes of your favorite TV show… Then manually go down the list checking each one, and count how many non-existent episodes it fabricates out of thin air.
4.) Ask it what a woman is… Does it give you a correct answer or does it filter the answer heavily through woke talking points and subjectivity?
5.) Ask it to recite the lyrics to your favorite song… Does it get them right?
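For the first two experiments you don’t even have to trust your own mental math — a few lines of Python will give you the ground truth to compare the chatbot’s answer against. This is just a sketch; the grocery lists here are made-up placeholders, so swap in your real 50-item list and whatever the AI actually returned. (The episode-list and lyrics checks you’ll still have to do by hand.)

```python
# Experiment 1: compute the exact answer yourself, then compare it
# to whatever the chatbot gives you.
answer = 3456 * 2835 - 2000
print(answer)  # 9795760

# Experiment 2: sort the list deterministically, then diff it against
# the AI's version to spot dropped or invented items.
groceries = ["milk", "eggs", "bread", "apples", "rice"]   # placeholder: use your real list
ai_version = ["apples", "bread", "eggs", "milk", "oats"]  # placeholder: the chatbot's output

correct = sorted(groceries)
missing = set(groceries) - set(ai_version)    # items the AI left out
invented = set(ai_version) - set(groceries)   # items the AI added from nowhere

print(correct)
print("missing:", missing)
print("invented:", invented)
```

Any non-empty `missing` or `invented` set is exactly the kind of silent error the experiments above are designed to surface.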
Anyways… Just a heads up for anyone who might think AI is smarter than it is… Don’t use it for research. Use it to write the descriptions for your eBay listings. Use it to shorten your e-mails. Use it to summarize articles you don’t wanna fully read. But don’t use it to extract information about topics you don’t already understand.
And lastly, if you really feel you must use AI for research… Do not use big-tech AI… Use open-source AI that is uncensored. It will still have all the same problems with hallucinations, but at least it won’t have any hidden instructions to gaslight and mislead you.
If you want an open source chatbot, download an app called “LM Studio” and use it to download a model called “Wizard Vicuna Uncensored”… Pick the most advanced version that is capable of running on your hardware.
https://youtu.be/0u9Kkk25zII
Game… not a graphical game… big difference.
It’s literally in the video: “Using C code for the graphics… SDL…”
Gtfo.
You're not saying that an LLM can't write PHP... Because it can... I've done it.
So what is your point?
I am saying that LLMs cannot write anything complex without first having a complex body of examples, which means they can only handle problems that have already been solved numerous times, using conceptual patterns that already exist. Nothing new will come out of them, so they won’t be useful for any novel programming problems. On top of that, they struggle to produce generalized solutions because of the system constraints inherent in programming. They can’t generalize solutions because natural language is fundamentally different from code: the grammar that lets you string sentences together is not the same thing as logical, state-driven code that does anything more than simple tasks.
Under the hood, all your LLM-generated code is mostly references to the stdlib or popular libraries, which means it isn’t doing anything except leveraging other actual human-written code… Once again: it will never solve problems of any substance, because it requires previous knowledge and thus a previous solution…
You can ask it as many times as you like to generate World of Warcraft in pure PHP… No matter how long you wait or how many models you train, it won’t happen. Even if it were to try and write the code agnostically, it can’t. It cannot create an example that complex even if you prompt it every step of the way, because you cannot prompt your way to that result… It’s not possible.
That’s what I am saying.
You mean like GitHub?
LOL… And yet I can ask it to do a simple task and save myself having to pay a skilled professional $50… or an unskilled professional masquerading as one, like you.
Do you get it now?
That's worth billions of dollars a year to business owners.