I generally agree with what you're getting at, but you can't take a chat with a language model as proof or even evidence of something like this. The fact that so many people do is concerning... AI governance will be celebrated
The chat only proves definitively that AI has a bias and that it lacks the ability to discuss abstract subjects that involve logic, since it was trained on data from users who are unable to reach such levels of cognition.
> you can't take a chat with a language model as proof or even evidence
Taking and giving statements in any court of law is likewise based on an artificial language model as proof or evidence... not on nature.
> AI governance
AI can only feed back what those using it have freely given, hence granting it artificial authority is based on shirking one's response-ability (free will of choice) within nature.
It's just trained as a word predictor; there is no logic happening.
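For what it's worth, here is a minimal sketch of what "trained as a word predictor" means, using a made-up toy bigram model instead of a transformer; the corpus and counts are purely illustrative, but the objective is the same idea: predict the next token given the context.

```python
import random
from collections import Counter, defaultdict

# Toy "training": count which word follows which (a bigram table).
# A real LLM uses a transformer over huge corpora, but the training
# objective is the same: predict the next token from the context.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # word never appeared mid-corpus; nothing to predict
        return None
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# "Generation" is just repeated next-word prediction.
word, out = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))  # e.g. "the dog sat on the mat and the cat"
```

Nothing in the loop reasons about the sentence; it only samples whatever statistically tends to come next.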
If logic implies circular thinking, then predicting the course of a circle implies waiting for what comes around again, and again, and again...
What if few tempt many into logic to establish a predicament (predictable mind)?
https://www.cloudflare.com/learning/ai/what-is-large-language-model/
I'm not sure if that's supposed to be a refutation, but LLMs do not use logic.