Being logical means recognizing contradictions. LLMs don't recognize when they contradict themselves (unless you point it out, but they will just as readily "notice" contradictions that aren't there if you point those out).
While language (like all of reality) follows logical rules, that doesn't mean that the ability to use language makes one logical. Plenty of people use language to say illogical things, and some people have made a living from saying absurdly illogical things eloquently.
But again, what matters here is not to have the model reason for me, but to make it acknowledge a contradiction in the presented information, which it does.
In other words, you want the model to reason to the conclusion you want so you can feel better about yourself and try to persuade others, even though the model doesn't actually follow reason.
Did it occur to you that the whole point of using ChatGPT for this exchange was to showcase the contradiction in the official figures itself? The chat with the model provides a suitable format for this. ChatGPT did exactly what it needed to - it pushed back reasonably and acknowledged the contradiction in the end. If you think it's just programmed to be agreeable, then come up with other objections to what I wrote and try refuting it. Logic is objective, and if you think ChatGPT did a bad job pushing back and conceded where it didn't need to, then you should point out its mistakes and do better.