The purpose of AI is just another layer of control. That's it.
This is true, but it's very likely that a high percentage of these discrepancies are due to the way LLMs work
I read some of the thought processes my LLM went through to give me answers on stuff and they're literally retarded.
To call it retarded is still giving it too much credit. It's a word predictor: it predicts the next word based on the words that came before it in your conversation and the words it was trained on. (Strictly speaking it's tokens, not exactly words; a token can be a whole word, a word fragment, or a common word combination, depending on how the tokenizer splits the text.) You can do some very impressive things with it if you use it correctly, but it will inevitably be wrong, and often, unless what you're asking about appears repeatedly in its training data without conflicting examples alongside it.

I work on this stuff, and it bothers me how often even people who deeply understand how it works still humanize the algorithm. So, apologies if this came off a bit lecture-like; I just see the mass use of humanizing language around LLMs as part of people giving up more and more control to them.
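To make the "word predictor" point concrete, here's a deliberately tiny sketch: a bigram counter that generates text by always picking the most frequent next token. Everything in it (the toy corpus, the generate helper) is made up for illustration. A real LLM predicts over subword tokens with a trained neural network, not raw counts, but the generation loop has the same shape: given the context so far, emit a likely next token.

```python
# Toy next-token predictor: count which token follows which in a tiny
# corpus, then generate by repeatedly picking the most likely successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# Count successor frequencies for each token (a bigram model).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start: str, length: int = 5) -> list[str]:
    """Greedily extend `start` with the most frequent next token."""
    out = [start]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break  # no training data for this context: the model is stuck
        out.append(options.most_common(1)[0][0])
    return out

print(generate("the"))  # ['the', 'cat', 'sat', 'on', 'the', 'cat']
```

Note that it happily produces fluent-looking output with zero understanding, stalls the moment it hits a context it never saw in training, and in this corpus "cat" is followed equally often by "sat" and "ran", so the tie-break is arbitrary: the "collisions in the training data" problem in miniature.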