If you have to point out its contradictions then it's not logical.
Language isn't contingent on logic. You can say lots of illogical things with language. LLMs and idiots often do. Sure, a lot of the training data is logical so a lot of the LLM output that parrots it is logical too, but when you go outside the stuff it's specifically trained on I'm sure it's more likely to get illogical. Plus a lot of human writing is illogical too, but that isn't filtered out of the training data. So you're a sucker if you think LLMs are logical, especially if you think so while admitting they unwittingly contradict themselves.
What? Being logical doesn't make you immune to arriving at contradictions. We are constantly presented with contradictory information and concepts. What matters is how you resolve them - a logical, rational person will acknowledge the contradiction and try to reconcile it. An irrational person will double down and commit logical fallacies to run away from it, or brush it aside and avoid thinking about it to alleviate the resulting cognitive dissonance (Orwell called it doublethink).
Language isn't contingent on logic. You can say lots of illogical things with language.
That's a non sequitur. How language is used is a different problem from what language is. Predication requires logic - this is philosophy of language 101. You can't form thoughts or sentences without appealing to the laws of logic.
I guess what you mean is that it engages in faulty reasoning, which is correct. People do too. But again, what matters here is not having the model reason for me, but getting it to acknowledge a contradiction in the presented information, which it does.
Being logical means recognizing contradictions. LLMs don't recognize when they contradict themselves unless you point it out - and if you point out "contradictions" that aren't there, they'll agree those exist too.
While language (like all of reality) follows logical rules, that doesn't mean the ability to use language makes one logical. Plenty of people use language to say illogical things, and some have made a living from saying absurdly illogical things eloquently.
But again what matters here is not to have the model reason for me, but to make it acknowledge a contradiction in the presented information, which it does
In other words, you want the model to reason to the conclusion you want so you can feel better about yourself and try to persuade others, even though the model doesn't actually follow reason.
Did it occur to you that the whole point of using ChatGPT for this exchange was to showcase the contradiction in the official figures itself? The chat with the model provides a suitable format for that. ChatGPT did exactly what it needed to - it pushed back reasonably and acknowledged the contradiction in the end. If you think it's just programmed to be agreeable, then come up with other objections to what I wrote and try refuting it. Logic is objective, and if you think ChatGPT did a bad job pushing back and conceded where it didn't need to, then you should point out its mistakes and do better.