The purpose of AI is just another layer of control. That's it.
This is true, but it's very likely that a high percentage of these discrepancies are due to the way LLMs work.
I read some of the thought processes my LLM went through to give me answers on stuff and they're literally retarded.
If you asked for those "thought processes" after it had already given you its conclusion, then they in fact have nothing to do with how the LLM arrived at that conclusion. LLMs have no way to look inside themselves and inspect their own processes; they just generate whatever words they predict you want to hear. So the explanation of its reasoning given after the fact is essentially confabulation - a post hoc rationalization of a process the model itself cannot observe.
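To make that concrete, here's a minimal sketch (Python, using the OpenAI chat completions client; the model name and the prompts are just placeholders). The "explanation" in the second turn is produced by a fresh generation pass conditioned on the answer text now sitting in the conversation - there is no step where the model reads out the computation that actually produced the answer.

```python
# Minimal sketch - OpenAI Python client assumed installed, model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

# First turn: get an answer with no visible reasoning.
history = [{"role": "user", "content": "Is 2731 a prime number? Answer yes or no only."}]
answer = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": answer.choices[0].message.content})

# Second turn: ask it to explain. This is just more next-token prediction,
# conditioned on the answer text above - there is no introspection step.
history.append({"role": "user", "content": "Explain how you reached that answer."})
explanation = client.chat.completions.create(model=MODEL, messages=history)
print(explanation.choices[0].message.content)
```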
If instead you asked the LLM to reason one step at a time and to hold off on any conclusion until the end, then each new bit of text really would be generated conditioned on the earlier steps, and you would get some idea of how it arrived at its conclusion. But you still couldn't drill down into how it produced any individual step - each one just falls out of a giant matrix of numbers. You could reset it and ask it to do the same thing again in more detail, but it would come up with different steps and often a different conclusion, and that wouldn't tell you much about what happened the first time you asked.
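Here's a rough sketch of that re-run experiment, under the same assumptions as above (OpenAI Python client, placeholder model name). Because the output is sampled, running the identical step-by-step prompt twice at a nonzero temperature will typically give two different chains of steps, which is why the second run tells you little about the first.

```python
# Sketch of the re-run experiment: same step-by-step prompt, two sampled runs.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

prompt = (
    "Work through this one step at a time, numbering each step, "
    "and do not state a conclusion until the final line: "
    "which is larger, 2^30 or 10^9?"
)

for run in (1, 2):
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # sampling, so the visible "reasoning" varies run to run
    )
    print(f"--- run {run} ---")
    print(reply.choices[0].message.content)
```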
This assumes we're talking about an LLM that generates text linearly, rather than one with an internal process that drafts an answer and then refines it one or more times before presenting it to you. With the latter, you wouldn't have a good way to make it show anything akin to a reasoning process.