The AIs will say whatever you want if you keep directing them a certain way. You're being manipulated if you use them, and even more so if you trust them at all.
I know what you mean, but I don't think you've prompted them on "sensitive" subjects. They have specific programming on those. The only reason it conceded here is that I used the stats it provided me with and gave clear-cut official figures. Those models may lie and skew data, but they are always logical and will acknowledge contradictions, unlike people in an argument.
I use them with purpose, and I'm aware of what they are and what they are not. This chat here is an experiment on the AI itself - I already had the information and had drawn my conclusions before engaging with it.
Did you even give it a read, or did you just see a ChatGPT chat and assume all that?
If you think they're always logical, you're very far down the manipulation process. I've seen all the main ones unwittingly contradict themselves and fail logic problems. They're word pattern machines and have no sense of knowledge or logic.
Me too, but when you point out the contradiction they acknowledge it.
They're word pattern machines and have no sense of knowledge or logic.
And yet language is contingent on logic. LLMs need to be programmed and trained by human input on what constitutes logic through examples, even if they don't have a sense of it. That way, AI can use logic algorithmically without understanding it, the same way machines do everything without being conscious of it or having knowledge of what they are doing. Does a calculator know 2+2=4? Of course not. Knowledge, intelligence, information, truth, belief, etc. are all metaphysical properties of a mind.
If you have to point out its contradictions, then it's not logical.
Language isn't contingent on logic. You can say lots of illogical things with language; LLMs and idiots often do. Sure, a lot of the training data is logical, so a lot of the LLM output that parrots it is logical too, but once you go outside the material it was specifically trained on, I'm sure it's more likely to get illogical. Plus, a lot of human writing is illogical too, and that isn't filtered out of the training data. So you're a sucker if you think LLMs are logical, especially if you think so while admitting they unwittingly contradict themselves.