I’m putting this post in c/Conspiracies because here, more than any other place, I’ve seen people citing AI-generated answers as a source.
AI is a very powerful tool for generating writing, but it is not a tool capable of ensuring that writing is accurate or useful.
Anyways… Below are some experiments you can try for yourself on ChatGPT or any other AI… Test it out, and track the results you get. After you’ve done these experiments you will have a better understanding of why you can’t use it for research.
1.) Ask it to do some math problems with 4 digits and more than 1 operation, e.g., “Multiply 3,456 by 2,835, and then subtract 2,000 from the result.” Does it produce the correct answer?
2.) Give it a grocery list with 50 items… Ask the AI to sort the list in alphabetical order. Then manually count how many items it left out, and how many items it added that weren’t there before.
3.) Ask it to describe the best 50 episodes of your favorite TV show… Then manually go down the list checking each one, and count how many non-existent episodes it fabricates out of thin air.
4.) Ask it what a woman is… Does it give you a correct answer or does it filter the answer heavily through woke talking points and subjectivity?
5.) Ask it to recite the lyrics to your favorite song… Does it get them right?
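If you want ground truth to check the AI's answers against for the first two experiments, a few lines of Python will do it. This is just a sketch; the grocery list and the "AI output" below are made-up examples, not real model output:

```python
# Ground truth for experiment 1: do the arithmetic deterministically.
result = 3456 * 2835 - 2000
print(result)  # 9795760

# Ground truth for experiment 2: sort a (sample) grocery list yourself,
# then diff it against whatever the AI returned.
groceries = ["milk", "eggs", "bread", "apples", "coffee"]
ai_output = ["apples", "bread", "coffee", "eggs", "oat milk"]  # hypothetical AI answer
expected = sorted(groceries)

dropped = set(expected) - set(ai_output)   # items the AI left out
invented = set(ai_output) - set(expected)  # items the AI added
print(sorted(dropped))   # ['milk']
print(sorted(invented))  # ['oat milk']
```

The point is that both tasks are trivially checkable by a machine, which is exactly why the AI's failure rate on them is easy to measure.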
Anyways… Just a heads up for anyone who might think AI is smarter than it is… Don’t use it for research. Use it to write the description for your ebay listings. Use it to shorten your e-mails. Use it to summarize articles you don’t wanna fully read. But don’t use it to extract information on topics you don’t already know.
And lastly, if you really feel you must use AI for research… Do not use big-tech AI… Use open source AI that is uncensored. It will still have all the same problems with hallucinations, but at least it won’t have any hidden instructions to gaslight and mislead you.
If you want an open source chatbot, download an app called “LM Studio” and use it to download a model called “Wizard Vicuna Uncensored”… Pick the most advanced version that is capable of running on your hardware.
No it's not... its code is laughable. It's not even great except for atomic operations.
It will literally hallucinate variables, make up functions with names slightly different from what they're supposed to be, and it's not even great at writing comments for code.
It struggles to optimize anything, and while it can spot syntax errors, it will typically fail to recognize scale problems or structural issues. It also loves adding things that do nothing, or just unnecessary stuff.
When asked for specifics it fails to deliver, and usually the only thing it can do is autocomplete after some repetition.
It helps speed up tedious tasks and boilerplate... but it fails to deliver logic. I have also not been able to use it to successfully optimize my code, and I'm sure it's not fully optimized.
It also fails to conceptually build anything larger than a chatbot. It cannot fathom all the interconnectedness... most programmers can't either. So the prompts are inadequate...
Also... code typically lives in a specific context, i.e. a domain. It tends to be information-dense and can take multiple states depending on the conditions of the situation, whereas LLMs output a closed solution. They will typically generalize and cannot provide the necessary insight that would be needed... and how would the programmer know, unless they also have the knowledge?
We are back to TDD now... we need to have all the tests for the AI's output to be checked against...
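That TDD point can be made concrete with a minimal sketch. Everything here is hypothetical: `slugify` and its spec are invented for illustration, and the function body stands in for whatever the AI produces. The idea is that the human writes the tests first, and the AI's code either passes them or gets sent back:

```python
import re

# Hypothetical spec given to the AI: slugify(text) should lowercase the
# text and collapse runs of non-alphanumerics into single hyphens.
def slugify(text):
    # Pretend this body came from the AI; the tests below are the contract.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Human-written tests the AI's output is checked against.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  TDD & AI  ") == "tdd-ai"
print("all tests pass")
```

The tests, not the AI's explanation of its own code, are what tell you whether the output is usable.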
See, I don't disagree with any of that...
But what I'm saying is that it's your job to fix that shit, or more specifically, to help it find and fix that shit itself, via highly specific and tailored instructions and multiple iterations.
Which does count as real work for the user, but still ultimately can be used to save time in a lot of applications. That gives it real value.
And another thing that's important to note is that an LLM is inherently designed for understanding and generating language, and since that perfectly describes what coding is, you can expect it to improve at that as time goes on.
It's not ready for big coding jobs with hundreds or even thousands of lines... But it will get there, and faster than you think, I suspect.
And lastly, I haven't tried them yet, but during a deep dive into AI YouTubers, someone was showing off LLMs you can run locally or on rented cloud space that scored way better than GPT-4 when it came to coding, with different models for specific programming languages.
And there was also some extension you could set up where it could read and write files in a local folder, which seems like a big game changer too.
But coding is not language.....lol.
It is written as instructions... but not the same way language is used. (Coding performs actions... the computer does stuff...)
Language like prose and poetry follow grammar that is used to convey ideas.....not perform actions.
You do not speak and have doors open without first developing an entire 'smart system', and even that alone can only 'open', 'close', 'swing', etc.
But those actions require a physical set of motors, actuators, controllers, a power source, a BIOS, and some kind of training... then, and only with the creative foresight of putting those things together, does the word do anything...
Just like software.... It's a massive system and not just some words....
You cannot convert any program to another language.....that's a myth.....
It's based on the physical computer and the underlying instruction set and attached paraphernalia before those instructions mean anything.......
Language is incidental... but it's not the primary element...
Language is for humans.....coding is not language....
Those are arbitrary distinctions you are drawing based on subjective criteria you are pulling out of your ass.
"Just like a book... It's a massive system and not just some words..."
See?
No.....you don't seem to understand the difference. Coding is system specific..... constraints based on physical systems. It does not exist alone.
At the same time, language is defined in different structures from code. You could technically define a code for people.....those are imperatives that tend to rely on societal norms and cultural common ground.
For example, "Sing the national anthem" is an instruction, but it requires a system of ideas, and code cannot create those systems... ideas are self-replicating, but code cannot make a chair...