Win / Conspiracies

AIs are trained on modern styles - their data is scraped from today's Internet and chat. Today's speech and writing differ a lot from the writing of two centuries ago, so an AI style analyzer will flag old writing as anomalous. One AI designer claimed the training set included the Constitution; I doubt that is true.

I work in AI, and the article describes things that show the designers are using a flawed approach. For example, it mentions that "the system uses properties like 'perplexity' and 'burstiness'." These measure how closely the analyzed material matches what the system was trained on, but that is a weak and misleading way to analyze content. The right approach is to treat the material as input to a situational model, then analyze its meaning and style of expression in that context. Since the LLM's training data contains NOTHING about the culture of revolutionary times, it will inevitably get statements about that era wrong. LLMs look at surface features, not deep meaning - that is why they currently hallucinate: they are not anchored to reality. Chatbots also have no ability to understand a philosophy, so a bot analyzing the Constitution does NOT know why it was written and cannot interpret a particular statement properly in context.
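To make concrete what those two features actually measure, here is a toy sketch. The unigram model with add-one smoothing and the variance-to-mean burstiness measure below are my own simplified stand-ins for illustration - the real detector's implementation is not public, and these are assumptions, not its actual method:

```python
import math
from collections import Counter

def unigram_perplexity(text, reference):
    """Perplexity of `text` under a unigram model fit to `reference`,
    with add-one (Laplace) smoothing. Lower means the text looks more
    like the reference corpus; detectors flag text whose perplexity
    falls outside the range they expect."""
    ref_tokens = reference.lower().split()
    counts = Counter(ref_tokens)
    vocab = len(counts) + 1          # +1 slot for unseen tokens
    total = len(ref_tokens)
    tokens = text.lower().split()
    log_prob = sum(math.log((counts[t] + 1) / (total + vocab)) for t in tokens)
    return math.exp(-log_prob / len(tokens))

def burstiness(text):
    """Variance-to-mean ratio of sentence lengths (in words).
    Human prose tends to vary sentence length more than LLM output,
    so low burstiness is treated as a machine-generation signal."""
    sentences = [s.split() for s in
                 text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    lengths = [len(s) for s in sentences]
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return var / mean if mean else 0.0
```

The point of the sketch is the weakness I described: both numbers are computed relative to the reference corpus. Text from 1787 scored against a model fit on modern web text will look "anomalous" no matter who wrote it.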

Another thing: the explanations given by the bot's creators contain some lies. I won't go into detail, but I think they are intended to mislead competitors.

Sometimes I get so pissed about the bogosity in the AI field right now - lots of ambitious people take shortcuts and then present horseshit as gold.

1 year ago
2 score