If you think they're always logical, you're very far down the manipulation process. I've seen all the main ones unwittingly contradict themselves and fail logic problems. They're word pattern machines and have no sense of knowledge or logic.
Me too, but when you point out the contradiction, they acknowledge it.
They're word pattern machines and have no sense of knowledge or logic.
And yet language is contingent on logic. LLMs need to be programmed and trained by human input on what constitutes logic through examples, even if they don't have a sense of it. That way, AI can use logic algorithmically without understanding it, the same way machines do everything without being conscious of it or having knowledge of what they are doing. Does a calculator know 2+2=4? Of course not. Knowledge, intelligence, information, truth, belief, etc. are all metaphysical properties of a mind.
If you have to point out its contradictions then it's not logical.
Language isn't contingent on logic. You can say lots of illogical things with language. LLMs and idiots often do. Sure, a lot of the training data is logical, so a lot of the LLM output that parrots it is logical too, but when you go outside the stuff it's specifically trained on, I'm sure it's more likely to get illogical. Plus a lot of human writing is illogical too, and that isn't filtered out of the training data. So you're a sucker if you think LLMs are logical, especially if you think so while admitting they unwittingly contradict themselves.
What? Being logical doesn't make you immune to arriving at contradictions. We are constantly presented with contradicting information and concepts. The thing is how you resolve it - a logical, rational person will acknowledge it and try to reconcile it. An irrational person will double down and commit logical fallacies to run away from it, or brush it aside and stop thinking about it to alleviate the resulting cognitive dissonance (Orwell called it doublethink).
Language isn't contingent on logic. You can say lots of illogical things with language.
That's a non-sequitur. How language is used is a different problem from what language is. Predication requires logic; this is philosophy of language 101. You can't form thoughts or sentences without appealing to the laws of logic.
I guess what you mean is that it engages in faulty reasoning, which is correct. People do too. But again what matters here is not to have the model reason for me, but to make it acknowledge a contradiction in the presented information, which it does.
Being logical means recognizing contradictions. LLMs don't recognize when they contradict themselves (unless you point it out, but they'll also notice things that aren't there if you point those out too).
While language (like all of reality) follows logical rules, that doesn't mean that the ability to use language makes one logical. Plenty of people use language to say illogical things, and some people have made a living from saying absurdly illogical things eloquently.
But again what matters here is not to have the model reason for me, but to make it acknowledge a contradiction in the presented information, which it does
In other words, you want the model to reason to the conclusion you want so you can feel better about yourself and try to persuade others, even though the model doesn't actually follow reason.