In other words, you want the model to reason to the conclusion you want so you can feel better about yourself and try to persuade others, even though the model doesn't actually follow reason.
Did it occur to you that the whole point of using ChatGPT for this exchange was to showcase the contradiction in the official figures themselves? The chat with the model provides a suitable format for this. ChatGPT did exactly what it needed to do - it pushed back reasonably and acknowledged the contradiction in the end. If you think it's just programmed to be agreeable, then come up with other objections to what I wrote and try to refute it yourself. Logic is objective, and if you think ChatGPT did a bad job pushing back and conceded where it didn't need to, then you should point out its mistakes and do better.