Conspiracies Conspiracy Theories & Facts

Working as intended. Optimistically one day peasants may hang cryptobros and "AI" techbros for being stringless grifting traitors when it all comes full circle that "AI"'s bread/circus was always about information control and data monitoring -- and BTCDBC was a conditioning device to tokenization. (media.scored.co)
posted 44 days ago by pkvi_stannum +17 / -1
7 comments
Comments (7)
– CaptainTrouble 3 points 44 days ago +3 / -0

The purpose of AI is just another layer of control. That's it.

– VeilOfReality 2 points 44 days ago +2 / -0

This is true, but it's very likely that a high percentage of these discrepancies are due to the way LLMs work.

– CaptainTrouble 1 point 44 days ago +1 / -0

I read some of the thought processes my LLM went through to give me answers on stuff and they're literally retarded.

– VeilOfReality 3 points 44 days ago +3 / -0

To call it retarded is still giving too much credit. It's a word predictor: it predicts words based on the words that came before them in your conversation and the words it was trained on. (Really, it's tokens rather than words exactly: words, word parts, or word combinations, depending on the semantic intent.) You can do some very impressive things with it if you use it correctly, but it will inevitably be wrong frequently unless what you're asking is something it has been trained on repeatedly and that doesn't collide with other material in its training data.

I work on this stuff, and it bothers me how much even people who know deeply how it works still humanize the algorithm, so apologies if this came off a bit lecture-like. I just see this mass use of humanizing language around LLMs as part of people giving up more and more control to them.
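
The "predict the next token from the ones before it" loop can be sketched with a toy bigram model. This is a deliberate oversimplification: a real LLM conditions on the whole context with a neural network, and the function names and tiny corpus here are invented for illustration. Only the shape of the generation loop (predict, append, repeat) carries over.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which token follows which in the training text."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    """Greedily emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # this context never appeared in training data
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

counts = train_bigrams("the cat sat on the mat the cat ran")
print(generate(counts, "the"))  # most likely continuation at each step
```

Note that the model has no notion of truth or reasoning here, only frequency: given "the", it emits "cat" simply because "the cat" occurred more often than "the mat" in training.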

– Zyxl 2 points 44 days ago +2 / -0

If you asked for those "thought processes" after it had given you its conclusion then they in fact have nothing to do with how the LLM arrived at its conclusions. LLMs have no way to look inside themselves and understand their own processes. They just make up words they predict you want to hear. So the explanation of its reasoning after the fact is simply lies - a post hoc rationalization of an irrational process.

If instead you asked the LLM to reason one step at a time and refrain from drawing any conclusions until the end, then it would be generating the next bit of text based on the earlier ones and you would in fact get an idea of how it arrived at its conclusion. But you wouldn't be able to drill down on how it made any of those individual steps - it's just based on a giant matrix of numbers. You could reset it and ask it to do the same thing again in more detail, but it would come up with different steps and often a different conclusion and you wouldn't learn much about the first time you asked it.

This is assuming we're talking about an LLM that generates text linearly and doesn't have an internal process of generating its answer first then refining it one or more times before presenting it to you. Otherwise you wouldn't have a good way to make it show something akin to a reasoning process.
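
The contrast between the two prompting styles above can be sketched as plain prompt templates. The wording and names here are illustrative assumptions, not any particular product's API: the point is only that in the second template the steps are generated before the conclusion, so they become part of the context that produces it.

```python
# Asking for an explanation *after* the answer invites a post-hoc story;
# asking for steps *before* the answer makes each step feed the next.
POST_HOC = (
    "Q: {question}\n"
    "Give your answer, then explain how you arrived at it."
)
STEP_BY_STEP = (
    "Q: {question}\n"
    "Reason one step at a time. Do not state a conclusion "
    "until you have written out every step."
)

def build_prompt(template, question):
    """Fill a template with the user's question."""
    return template.format(question=question)

print(build_prompt(STEP_BY_STEP, "Is 91 prime?"))
```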

– guywholikesDjtof2024 1 point 44 days ago +1 / -0

Correct. Read Revelation. AI = technoslavery's greatest tool. A keystone weapon.

– WeedleTLiar 1 point 44 days ago +1 / -0

Still about 30% more accurate than human newscasters...


Conspiracies.win: This is a forum for free thinking and for discussing issues which have captured your imagination. Please respect other views and opinions, and keep an open mind. Our goal is to create a fairer and more transparent world for a better future.
