I’m putting this post in c/Conspiracies because here, more than any other place, I’ve seen people citing AI-generated answers as a source.
AI is a very powerful tool for generating writing, but it is not a tool capable of ensuring that writing is accurate or useful.
Anyways… Below are some experiments you can try for yourself on ChatGPT or any other AI… Test it out, and track the results you get. After you’ve done these experiments you will have a better understanding of why you can’t use it for research.
1.) Ask it to do some math problems with 4 digits and more than 1 operation, e.g. “Multiply 3,456 by 2,835, and then subtract 2,000 from the result.”… Does it produce the correct answer?
2.) Give it a grocery list with 50 items… Ask the AI to sort the list in alphabetical order. Then manually count how many items it left out, and how many items it added that weren’t there before.
3.) Ask it to describe the best 50 episodes of your favorite TV show… Then manually go down the list checking each one, and count how many non-existent episodes it fabricates out of thin air.
4.) Ask it what a woman is… Does it give you a correct answer or does it filter the answer heavily through woke talking points and subjectivity?
5.) Ask it to recite the lyrics to your favorite song… Does it get them right?
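If you want a ground truth to compare against, a few lines of Python settle experiments 1 and 2 exactly (the episode lists and song lyrics you’ll have to check by hand). This is just a sketch… swap in your own 50-item list and whatever the chatbot actually gave you:

```python
# Experiment 1: exact integer arithmetic -- Python gets this right every time.
result = 3456 * 2835 - 2000
print(result)  # 9795760

# Experiment 2: sort the grocery list yourself, then diff it against
# the chatbot's version. These short lists are placeholders.
groceries = ["milk", "eggs", "bread", "apples", "coffee"]            # your real list here
ai_output = ["apples", "bread", "coffee", "eggs", "milk", "bananas"]  # paste the AI's list here

dropped = set(groceries) - set(ai_output)   # items the AI silently left out
invented = set(ai_output) - set(groceries)  # items the AI made up
print(sorted(groceries))                    # the correct alphabetical order
print("dropped:", dropped, "invented:", invented)
```

With the placeholder lists above, the diff catches the phantom “bananas” the AI slipped in… that’s the kind of error you’re counting.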
Anyways… Just a heads up for anyone who might think AI is smarter than it is… Don’t use it for research. Use it to write the description for your eBay listings. Use it to shorten your e-mails. Use it to summarize articles you don’t wanna fully read. But don’t use it to extract information on topics you don’t already know about.
And lastly, if you really feel you must use AI for research… Do not use big-tech AI… Use open source AI that is uncensored. It will still have all the same problems with hallucinations, but at least it won’t have any hidden instructions to gaslight and mislead you.
If you want an open source chatbot, download an app called “LM Studio” and use it to download a model called “Wizard Vicuna Uncensored”… Pick the most advanced version that is capable of running on your hardware.
You aren't using it correctly. Like any tool, it's about knowing how to properly use it.
Use it for summarizing things, writing code, getting feedback on your own writing, etc. Don't use it to write for you; it will sound obviously generated. You can also very successfully use it to write code that accesses data for you. For example, I once wanted to verify a guy's claim that you can predict earthquakes with planetary alignments. I had it write a script to get the planetary location data for me. I had something working in a couple of hours that normally would have taken much longer, just figuring out where to find the data, how to process it, etc.
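Once a script like that has fetched the planetary longitudes, the "alignment" part of the check is pure geometry, no ephemeris library needed. Here's a minimal sketch of that step (the function name and the data are my own illustration, not the actual script):

```python
def alignment_spread(longitudes_deg):
    """Smallest arc (in degrees) containing all the given ecliptic
    longitudes -- a crude measure of how 'aligned' the planets are.
    The smaller the spread, the tighter the alignment."""
    angles = sorted(a % 360 for a in longitudes_deg)
    # Gaps between consecutive planets around the circle...
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(360 - angles[-1] + angles[0])  # ...including the wrap-around gap
    # The arc holding every planet is the full circle minus the biggest empty gap.
    return 360 - max(gaps)

# Hypothetical longitudes for one date -- real values would come from ephemeris data.
print(alignment_spread([10, 20, 30]))  # 20  (tight cluster)
print(alignment_spread([0, 180]))      # 180 (opposite sides, no alignment)
```

You'd then compute this spread for each earthquake date in a catalog and compare against random dates… if the distributions look the same, the claim is bunk.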
having it write scripts is great advice.
and it's the only way an LLM can help you process data with any level of reliability.
I already have a massive and still growing collection of scripts it has written for me for doing various tasks. its Python skills have rendered pretty much every obscure freeware app for any kind of bulk file manipulation completely obsolete.
need to convert an mp3? need to change the file paths in a folder full of shortcuts? need to combine 50 JPEGs into a single PDF?
forget Google and go straight to ChatGPT and ask it to write Python for you.
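this is exactly the kind of throwaway script it spits out. a sketch of the "fix the paths in a folder of shortcuts" case using only the standard library... this version assumes text-based .url shortcuts (binary .lnk files need extra libraries), and the function name is mine:

```python
from pathlib import Path

def rewrite_paths(folder, old_prefix, new_prefix, pattern="*.url"):
    """Replace old_prefix with new_prefix inside every matching text
    file under folder. Returns the number of files changed."""
    changed = 0
    for f in Path(folder).rglob(pattern):
        text = f.read_text(encoding="utf-8", errors="ignore")
        if old_prefix in text:
            f.write_text(text.replace(old_prefix, new_prefix), encoding="utf-8")
            changed += 1
    return changed

# e.g. after moving a drive:
# rewrite_paths("C:/Users/me/Desktop", "file:///D:/old", "file:///E:/new")
```

point being: ten lines of stdlib Python replaces whatever sketchy freeware app you used to download for this.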
so yeah, that is one area where it's very useful... but the point still stands that it's not useful at all for doing research or asking questions you don't already know the answer to.
True, I just think you have to treat it as a tool which has certain valid use cases and others that aren't valid. Thinking of it as "intelligence" is an incorrect way of looking at it, born of false advertising.