The purpose of AI is just another layer of control. That's it.
This is true, but it's very likely that a high percentage of these discrepancies is due to the way LLMs work.
I read some of the thought processes my LLM went through to give me answers on stuff and they're literally retarded.
To call it retarded is still giving too much credit. It's a word predictor. It predicts words based on what words came before it in your conversation and what words it's been trained on (really, it's tokens rather than words exactly: whole words, word parts, or word combinations, depending on how the text gets split). You can do some very impressive things with it if you use it correctly, but it will inevitably be wrong frequently unless what you're asking is something it's been trained on repeatedly and that doesn't collide with other patterns in its training data. I work on this stuff, and it bothers me how much even people who know deeply how it works still humanize the algorithm, so apologies if this comes off a bit lecture-like. I just see this mass use of humanizing language around LLMs as part of people giving up more and more control to them.
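If you want to see the "token predictor" point concretely, here's a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 model purely as a stand-in (any causal LLM works the same way): it shows how the text gets split into tokens and the probability distribution the model assigns to the single next token.

```python
# Toy illustration of "it's a token predictor" (assumes transformers + torch installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The capital of France is"

# Tokens aren't exactly words: whole words, word pieces, punctuation.
print(tokenizer.tokenize(text))  # e.g. ['The', 'Ġcapital', 'Ġof', 'ĠFrance', 'Ġis']

# The model's only output is a probability distribution over the next token.
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, seq_len, vocab_size)
probs = torch.softmax(logits[0, -1], dim=-1)  # distribution for the *next* token

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx):>12}  p={p.item():.3f}")
```

Everything it ever "says" is built by repeating that one step: pick a next token, append it, predict again.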
If you asked for those "thought processes" after it had given you its conclusion, then they in fact have nothing to do with how the LLM arrived at that conclusion. LLMs have no way to look inside themselves and understand their own processes. They just make up words they predict you want to hear. So the explanation of its reasoning after the fact is simply lies: a post hoc rationalization of an irrational process.
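To make the post hoc point concrete: when you ask "how did you work that out?", the only thing the model conditions on is the visible transcript, not any record of the computation that produced the earlier answer. A rough sketch of what the request looks like from the model's side (the message format and the flattening into one string are simplifications of what real chat systems do):

```python
# What the model actually "sees" when asked to explain itself: just text.
# There is no pointer back into the forward pass that produced the answer.
transcript = [
    {"role": "user",      "content": "Is 1027 prime?"},
    {"role": "assistant", "content": "No, 1027 = 13 x 79."},  # the earlier answer
    {"role": "user",      "content": "Explain how you worked that out."},
]

# Chat models flatten this into one string and predict the next tokens after it.
prompt = "\n".join(f"{m['role']}: {m['content']}" for m in transcript) + "\nassistant:"
print(prompt)
# The "explanation" is whatever continuation looks most plausible given this text:
# a fresh prediction, not a readout of how the earlier answer was computed.
```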
If instead you asked the LLM to reason one step at a time and refrain from drawing any conclusions until the end, then it would be generating each next bit of text based on the earlier ones, and you would in fact get an idea of how it arrived at its conclusion. But you wouldn't be able to drill down into how it produced any of those individual steps; it's just based on a giant matrix of numbers. You could reset it and ask it to do the same thing again in more detail, but it would come up with different steps and often a different conclusion, and you wouldn't learn much about the first time you asked it.
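Here's a sketch of that "reset and ask again" experiment, again assuming the transformers library with GPT-2 purely as a stand-in (in practice you'd use a chat-tuned model): the identical step-by-step prompt, sampled twice, typically produces different "steps", which is why rerunning tells you little about the first run.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(0)  # makes this script repeatable; the two runs inside it still differ
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = ("Question: A train leaves at 3pm and arrives at 5:30pm. How long is the trip?\n"
          "Reason step by step, and only state the answer at the end.\nStep 1:")
inputs = tokenizer(prompt, return_tensors="pt")

# Two independent sampled continuations from the identical context.
for run in range(2):
    out = model.generate(
        **inputs,
        do_sample=True,       # sampling rather than greedy decoding, so runs differ
        temperature=0.8,
        max_new_tokens=80,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(f"--- run {run + 1} ---")
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```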
This is assuming we're talking about an LLM that generates text linearly and doesn't have an internal process of generating its answer first and then refining it one or more times before presenting it to you. Otherwise you wouldn't have a good way to make it show something akin to a reasoning process.
Correct. Read Revelation. AI = technoslavery's greatest tool. A keystone weapon.