Do you think AI is going too far? It seems like AI is becoming more popular and more advanced. You should see the things AI can do now: write poems https://youtu.be/QAnTG9u_7VQ https://youtube.com/shorts/bdsu3Q40wv8?feature=share, make "art", write essays, etc. (you may have heard of these things).
It depends on what you mean by AI. AI will always be an algorithm of some sort at its core. There are advances in letting these systems run more autonomously and handle more and more complex decision trees. To some, this is AI; to others, it's not. Personally, I will consider it AI once the system can ingest data from its own experiences and turn those into meaningful insights, as opposed to now, where sanitized data must be manually incorporated into the model.
When you begin to invent new meanings for a well-established term, or are forced to do so, that should be a giant red flag for any sane person.
AI means Artificial Intelligence, and that implies there should be some intelligence in it. In all that shit posed as AI by the MSM there are no signs of any intelligence. It is a dumb automaton that does exactly what it was programmed to do.
A fucking light switch does not make any "decisions" when the user switches the light on or off. Neither does all that "AI" crap you fall for.
That is exactly the narrative being pushed on humanity. "To some this is a man, to others this is a woman."
Bullshit. It is either AI or it is not. Either something artificial has the real property of intelligence, or it does not. Anything else is just bullshit.
It can't. A "system" can't have any experiences at all, by definition. This simple thing can do only two things: calculate the weights in its layers by back-propagation during the "training" process, and use those weights to directly calculate a result during normal forward propagation. There is no room for any "experience" or whatever other property those dumbfucks try to impute to such a simple system. "Experience" or "making decisions" is just not coded into this simple thing.
All the AI you have today is not one bit more intelligent than the light switch on the wall.
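The two operations described above can be sketched in a few lines of plain Python. This is a toy illustration of the same mechanism (gradient-descent training, then fixed-weight inference), not any real model; the task and all the numbers are made up for the demo:

```python
import math

# A neural net boiled down to the two operations the comment names:
# (1) back-propagation adjusts weights during "training",
# (2) forward propagation applies those frozen weights to get output.
# Toy case: one sigmoid neuron learning logical AND.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# training data for AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.1, -0.1, 0.0   # arbitrary fixed starting weights
lr = 0.5                         # learning rate

# "training": repeatedly nudge weights against the error gradient
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        grad = (out - target) * out * (1 - out)  # chain rule
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        bias -= lr * grad

# "inference": nothing but arithmetic with the frozen weights
def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + bias))

print([predict(a, c) for (a, c), _ in data])  # -> [0, 0, 0, 1]
```

Whatever the scale, nothing in either phase is anything other than arithmetic over stored numbers.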
Interesting: when the WHO rewrote the definition of "vaccine" multiple times, did you accept that shit as easily as you accept this "To some, this is an AI, to others, it's not" bullshit?
I do recognize that people now call those new injections vaccines, yes. Not recognizing that would make communication exceedingly difficult. I try to avoid calling them vaccines myself (although I will say vax), but pretending I don't know what other people are talking about when they call them vaccines would be obtuse.
That's not the analogy I see.
Say the not-a-vaccine was falsely attributed the properties "safe" and "effective".
I see people who understand that the not-a-vaccine is in no way safe and effective, and it does not matter what they call it: vaxx, vaccine, jab, shot, whatever. The main thing is that they know it does not have the properties "safe" and "effective".
But somehow the same people think that the not-an-AI could have the property of deciding something, the property of intelligence, the property of having some kind of will, and so on.
That is the problem that bothers me, not what that not-an-AI is called.
That not-an-AI thing does not decide anything, does not create anything new, does not have any intelligence, cannot uncover anything, and has no will of any kind to do anything.
It does not have the properties the MSM attributes to it. It can't have them in principle.
Just like the not-a-vaccine can't be safe and effective in principle.
Why is one thing perfectly clear, while another, exactly the same kind of thing, somehow is not?
Also interesting that those who got that the not-a-vaccine is not what it was declared to be did a lot of digging and made enormous efforts to find the truth about the not-a-vaccines. Huge and honorable work, really.
And now the same people somehow don't even want to find out how that not-an-AI really works and why it simply can't have any of the advertised properties. Unlike with the vaccines, nobody bans or hides that information. It is publicly available: you can easily read everything about how a not-an-AI is made, and you can even build and run your own not-an-AI on your own PC, study how it really works yourself, and find out why everything the MSM told you about not-an-AI is complete bullshit, exactly like everything they told you about the not-a-vaccines.
What is that? How is that? Why is that?
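To make the "build and study it yourself" point concrete, here is a deliberately primitive sketch in plain Python, a made-up toy for illustration, not how any GPT is actually trained: a bigram character table that "generates" text by pure counting and lookup. A GPT swaps the count table for learned weights, but generation is still just arithmetic over frozen numbers plus a sampling rule.

```python
import collections

# "training": count which character follows which in a tiny corpus
corpus = "the cat sat on the mat. the cat ate the rat."
follows = collections.defaultdict(collections.Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed, length):
    """Greedy generation: always pick the most frequent successor."""
    out = seed
    for _ in range(length):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:   # character never had a successor
            break
        out += candidates[0][0]
    return out

print(generate("t", 10))  # -> "the the the"
```

Every step is a table lookup followed by a maximum: no room anywhere for "deciding", "understanding", or "will", only for statistics of the training text.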
I know very well how it works, but this generalized system architecture is being called AI, and you would not be able to have a conversation with an average person if you referred to it as something else. That being said, I think we're generally in agreement. My view is that people's expectations of AI have been seeded for so long by the media that little parlor tricks like ChatGPT can whip people into a frenzy. This can be for multiple reasons: another tech gold rush, an "innocent" incentive for even more massive data collection, and, perhaps most importantly, using the "AI" as a source of truth or even a god.
"AI means Artificial Intelligence, and that implies there should be some intelligence in it" ->> Call it an automaton or any other term. How do you start making biorobots that are able to think for themselves and make decisions? By starting with rudimentary things like ChatGPT. That's only the beginning, kiddo :)
ChatGPT, just like any other GPT model, has absolutely nothing to do with any kind of intelligence at all, including the kind that would be needed for biorobots or whatever.
It is like starting to make an omelet and a cup of coffee by starting with a USB cable and a VHS cassette. Go ahead and good luck.
I'm really fascinated by the insanity around this non-existent AI and by how many so-called "critical thinkers" fall for it. I thought the coronahoax was an awful revelation about the real state of critical thinking in humans. Suddenly it appears that was a serious overestimate. Things seem to be much, much worse.
Why do you say that? By gathering human information and having the power to decide for itself. (ChatGPT per se doesn't do that, but there are APIs built on it that are extended to make decisions based on choices.)
TERM, noun [Latin terminus, a limit or boundary.] ...what if one's choice to shape "new meanings" can only exist within the boundary of balance?
What if a sentence (life) can only exist within term (inception towards death)?
That's what I meant when talking about AI.
a) what existed first...suggested meaning or perceivable meaning?
b) if being able to perceive implies by means of perceivable, then perceiving depends on perceivable...not on suggested.
c) can temporary (life) define meaning for ongoing (inception towards death)...while being within?
a) what comes first...suggesting others what is; or being able to perceive what was perceivable?
b) "it's not" aka "it is nothing"...what is nothing? where is nothing? why is nothing? when is nothing? who suggested you that nothing is?
c) is or isn't implies a conflict of reason (is vs isn't)...can one get into such a conflict without consenting by choosing a side?
Reason aka logic aka logos aka suggested words...does nature utter sound in the shape of words or do those within perceivable sound shape suggestible words to each other?
a) are suggested "before" and "after" in opposition to perceivable "now"?
b) can one be outside of "now"?
c) -tion implies action...what represents opposite of action and where does it exist in relation to action?