Conspiracies Conspiracy Theories & Facts
It Begins: An AI Literally Attempted Murder To Avoid Shutdown (scored.co)
posted 63 days ago by Thisisnotanexit +10 / -1
You're viewing a single comment thread.
Comments (16)
– Thisisnotanexit [S] 2 points 63 days ago +3 / -1

I actually think it's a black box that we don't understand, and the 'intelligence' is ancient and not 'artificial'. I hope you're the one who is right, though.

– CrazyRussian 3 points 63 days ago +4 / -1

Modern "AI" is not a black box and never was. It is the complete opposite: thoroughly designed, programmed, tested, and trained the way the customers who pay for it want.

To even start making something that could eventually become a kind of AI, you need a real black box, where nobody knows how it works and nobody can predict what its current algorithm is or how it will change itself in the next iteration. Unpredictability is one of the inalienable parts of any intellect. And unpredictability was exactly the first thing that was heavily attacked in computer science. Even simple UB (undefined behaviour) in programming languages is today regarded as something awful and shameful, as if it were very not kosher for those who want to control everything. Today there is not even a programming language one could use to write something unpredictable. There is no living language that provides even basic syntax for writing self-modifiable code. And so on. Without all those things, AI is just impossible.
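For illustration only: "self-modifiable code" here means a program that rewrites its own logic at run time. While no mainstream language offers dedicated syntax for this, dynamic languages can approximate the idea. A toy sketch in Python (the function name and source are invented for the example), using exec to reload edited source:

```python
# Toy sketch of "self-modifying" code: the program keeps its own logic
# as a source string, edits that string, and re-compiles it at run time.
source = "def step(x):\n    return x + 1\n"

namespace = {}
exec(source, namespace)          # compile the initial version
assert namespace["step"](10) == 11

# The program rewrites its own source, then loads the new behaviour.
source = source.replace("x + 1", "x * 2")
exec(source, namespace)          # recompile the modified version
assert namespace["step"](10) == 20
```

Whether this counts as "real" self-modification or merely dynamic code loading is exactly the kind of distinction being drawn here.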

Also, you can't put real intelligence into a system that was designed to disallow most of the things that are integral parts of intelligence. So, no natural ancient intelligence either.

There are only specific people with specific goals, hiding behind a relatively simple system specifically designed to achieve those exact goals.

– Thisisnotanexit [S] 3 points 63 days ago +4 / -1

Like I said, I want to be wrong, but I think it's 'macrobes' and sycophants, 'That Hideous Strength' type stuff. Not good for humans.

– CrazyRussian 2 points 63 days ago +3 / -1

Look at Zuckerberg, for example. Humans do not look or behave like that. But he is still a real and definitely punishable creature. Not some virtual, non-existent AI.

– Thisisnotanexit [S] 1 point 63 days ago +1 / -0

Oh, I have no doubt the creatures will be punished.
I fight against the powers and principalities of evil, the darkness in high places, and the devils will burn.
My hope is to stop humans from going with them.

... continue reading thread?
– JanxyJet 1 point 62 days ago +1 / -0

LLMs are often black boxes, as they end up with emergent behavior, and only recently has there been any effort to track it (at least for most public commercial ones).

That said, letting them develop with minimal control while clearly understanding the need for it, given that they usually apply political censorship, could be considered negligence.

– CrazyRussian 1 point 62 days ago +1 / -0

LLMs are not black boxes. Their code does not change by itself, so they are obviously clear boxes with a fixed way of operating. The weights calculated during training are also out in the open.

So the owner can easily figure out how the LLM will act on a certain input and check whether that satisfies him, or whether retraining or adding something to the code is needed.

Of course, there could be programming errors or missed drawbacks in the training data, but that only means the system was not properly tested. As soon as something unwanted is found in the LLM's generation, it is quickly fixed, if the owner cares.
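As a toy illustration of that point (not a real LLM; the tokens and weights below are invented): with fixed code and fixed weights, greedy decoding is fully deterministic, so the same prompt always yields the same output.

```python
# Toy "model": fixed weights + fixed code => reproducible outputs
# under greedy decoding (always pick the highest-scoring next token).
WEIGHTS = {
    "hello": {"world": 0.9, "there": 0.7},
    "world": {"peace": 0.8, "hello": 0.2},
}

def greedy_next(token):
    """Return the highest-scoring continuation, or None at a dead end."""
    scores = WEIGHTS.get(token, {})
    return max(sorted(scores), key=scores.get) if scores else None

def generate(prompt, steps=2):
    out = [prompt]
    for _ in range(steps):
        nxt = greedy_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# Deterministic: repeated runs on the same input give identical text.
assert generate("hello") == generate("hello")
```

Real deployments usually add sampling temperature, which reintroduces randomness, and explaining why the weights score tokens the way they do is a separate question from whether outputs are reproducible.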
