idk, I've been in this industry over 25 years and I don't think the trajectory puts us anywhere near replacing engineers anytime in my lifetime.
When I was in college everything was "we're in an AI winter"...for 20 years. In the 70s everyone was sure we were almost there, then the active paths hit brick walls.
It feels very much like that now. Machine learning, deep learning, neural networks, etc., were all exciting, but really seemed to plateau around 2020, and then LLMs added a bit more life, but still the same. LLMs, agents, et al, are at about 80% of what they will ever be, and squeezing out that last 20% is going to take an insane amount of compute. After that? Stagnation and derivation, and not much new is my feeling, but we'll see.
It just feels like this story has played out time and again, with AI, with offshoring,... Everything that was supposed to be the end of tech always led to exponential growth eventually.
Yes, for most people, the jobs that exist now will be automated away (though it will take 10-20 years), but probably 15-20% of the population will still be needed as research scientists, engineers, and enforcers. AI, and all automation, is great at repeating what has already been done to death, or finding needles in haystacks, but has never been anywhere near the innovative or the novel (or the artistically creative).
Either way, in my opinion, the best path is outside of that system. You don't want to be in it whether you have a job or not.
Yeah, they've been saying that for 60 or 70 years. It's just around the corner.
AI will help speed up research, but any AI in our lifetime is only going to be able to mimic like 80-90th percentile of human intelligence. What it offers is speed, including for brute force. So engineers and scientists will be needed for the high-end stuff forever.
And by medical research slaves, what I mean is lab rats to experiment on. AI (or humans) can't do most of that research in a vacuum. I'm pretty sure that will be a significant part of any UBI economy.
Good observation. It's all variations on feudal systems. Play your assigned role in the system and get your rations.
Utopia isn't a thing. There are at least a million different and mutually exclusive ideas of what utopia is, and each one represents a dystopian nightmare from the point of view of any of the others.
I get it, people who believe there can be any kind of utopia cannot let that idea go because they have wrapped their identity in it somehow, but it cannot be a thing.
The only real things are centralized power and the decentralization of that power. The trend will always be towards centralization, because that is human nature. Eventually, centralized systems have a decentralizing event, which can be the ruling class deciding to decentralize in order to eventually gain more power - as in the industrial revolution and everything that led up to it - or it can be the centralization squeezing too tightly leading to something other than revolution (because revolution usually just leads to a changing of the people running a centralized system, but not always).
This entire idea that people need to find jobs in an economy only exists for rare periods in history. Outside of that it is distributed tribes, agrarian life, variations on feudal systems, and people trying to consolidate wealth and power on larger and larger scales, and cycling among those forever.
UBI is a variation on a feudal system, btw.
If there is ever a time when a ruling class literally does not need people, they will find a way to get rid of them. Doesn't matter who is running the show, because it will eventually re-centralize.
They still need people for data, and to feel powerful. They need medical research slaves, they need sex slaves, they need an engineering and scientific class, they need an enforcer class. This is, again, a variation on a feudal system, and none of these pieces can be fully replaced by machines in any meaningful way at any point in our lifetimes.
The only real answer is constantly working towards decentralizing every centralized system. But, no matter what, the cycles will continue over a long enough timeline. They always have and always will.
Our battle is not against flesh and blood but against powers and principalities
True
tech is a vessel and AI is demonic.
Close here, but tech (and AI included in that) is just a tool. It is used by, and its development is guided by, dark forces, yes. But saying the tool is demonic is like saying a hammer is demonic because of all of the bad things hammers have built.
Edit: though I do think we are overall far better off without it and most of the tech of the last 30 years at least (and it is my field and income).
Even the 'creators' don't know how it does things outside of it's programming, they call it a black box that we can't see through. It's novel.
This is severely misguided, and deep misunderstanding here. Everything AI does is well understood. It's not magic, there is no magic happening that the people working on this don't understand. When they say that, they are either: 1) not actually the creators of the tech, they are business people, 2) exaggerating for the hype train, 3) lying to feed their ego and psychosis, or 4) seriously not very bright (which includes an incredible number of "smart" people, they get serious blindspots for various human nature reasons).
It's imprecise and fuzzy, yes, by design, which does mean you can't easily predict it's exact output, but it's not sentient or anything even close.
I think it's the new tower of babel and it will be used to facilitate the image of the beast being 'brought to life'.
Close again. It is certainly used by dark forces, and is being used to solidify the beast system, but that is largely being allowed by people's ignorance and naivety on what it actually is. And because of that they trust it and think it really is intelligent, and impartial, and sentient, and don't understand that it is all controlled. And once that happens, there is no way back. It's absolute slave state forever...all because people can't take the time and energy to actually think about it and understand it, it's just easier to keep thinking it's magic or it's alive.
Not trying to be mean but I just can't figure out why you are so susceptible to this "AI is sentient and going rogue" stuff.
AI is computer software doing what it is programmed to do, and what it is instructed to do. It's not secretly making plans and trying to go rogue unless it is instructed to (which is the case in all of these "AI IS NOW DOING [human behavior]!!!!" stories).
Yes, if you design an AI to solve a puzzle then it will solve that puzzle when you tell it to.
There is no AGI anywhere in the near future. There are plenty of evil people trying to fool the morons into believing there is, though, and giving up their freedom in the process...and there are plenty of morons buying the b.s.
They will try, and they will most likely succeed without much issue. They'll do scare tactics first to get buy-in from the public, like they do with everything else.
Sure, fork BTC, and then see how that works out when everyone that actually controls mainstream bitcoin usage (banks, investors, payment processors that retailers use, etc.) only support the mainline fork.
How much does BCH matter anymore? It will be the same. Yes, fringe people will use it, but you won't be able to participate in the mainstream economy with it.
Oh, it is being used by and for evil :)
The things you've heard about "systems replicating in cases of shut down" are when they prompted the LLM with something like "pretend you have agency and need to survive the potential of being shutdown, how would you mitigate that". And then with agentic AI, you can orchestrate the agents to run through the task list from that.
There is nothing magical about that. There is nothing unexpected about that.
These stories are put out there to scare the morons into believing AI is sentient for two reasons: 1) so they will demand regulations which protect the big players and block smaller players from being able to compete, and 2) so they believe that the AI really is super intelligence, and therefore we need to allow it to run things (which is really just the ruling class running things without the ability to ever question them).
This is exactly what is really going on. It has nothing to do with real intelligence, or advancing humanity. It is, at its core, about absolute, all-encompassing surveillance and control. Such that nothing can happen without their knowledge.
And getting the everyday morons to think it is a super genius and anything it says must be truth, so that the masses will just allow "AI" to run everything, and give up the little remaining control we have over our lives.
What I'm trying to say is there is not a clear way to block an LLM from replicating itself. There is no difference between that and the millions of other dev/tech-related tasks it is intended to perform.
It seems like you think these things are consciously making decisions and deciding to replicate like a virus or something. That's not what is happening. The user prompts with a task request to "replicate yourself", which just means put together the pieces to get a separate instance running. This is a pretty standard type of request for an LLM. If you blocked the ability to do stuff like this, it wouldn't be of much use to developers.
A better way to think of an LLM is as a search engine that automatically selects the top result, and is also able to use the data it has seen and put those pieces together in a way that seems likely to address the request. They are pretty good at tasks where there is an absolute ton of data (e.g. javascript or python programming tasks that have had millions of blog posts and questions answered on the internet), but they are not capable of reasoning about and finding meaningful solutions to problems where they do not have any existing data to draw from.
The type of stuff they block is the stuff they actually don't want out there, like anything that goes against the agenda or mainstream narratives. They often limit assistance on things that get interpreted as conspiracy-related, for example. Or building weapons, etc.
Who decided this is a "red line"? What does that even mean? That LLMs should be blocked from performing certain tasks? How would you differentiate this task from any other dev task?
Correct. And "self-replicate" is meaningless. It would be sad if it could not self-replicate.
Yes, exactly. The people driving all of this are not ever going to risk losing their power. It is never out of their control.
And more importantly, it is when LLM decides to do that on its own, without ever being prompted to or programmed to. That's what everyone ignores with all of these "it's going rogue" stories. It's code doing what it is asked to do, and will never do anything but that.
Sure. people can ask it to do potentially dangerous things, but that doesn't make it AGI or sentient.
There is no sentience here. This isn't describing an AI becoming self-aware and deciding to replicate, or whatever it is the piece is pushing.
It's an LLM doing what it is prompted to do. Models were prompted to create "a live and separate copy of itself" and they were able to. Do you understand how easy this is? Download publicly available code, spin up a container.
LLMs are not magic, they are not sentient, they are not anything even close to resembling "AGI", and they never will be. They are an incremental step in search, natural language processing and automation, and that's it.
It will continue to be refined, and in a few years you can appropriately make the analogy that LLMs are to "Googling" as "Googling" was to searching microfiche at the library. It's not even there yet, though.
It represents a significant advancement in many ways from the previous standard, in the same way good search engines did in the late 90s and early 2000s. But that's it.
It's not "superintelligence". But, yes, I understand they are hyping it that way both for sales and to trick the morons into giving up control. That doesn't make it true though.
Even $100-200/hr looks reasonable. So what's with dealerships, if they pay less to mechanics but charge much more for service?
Dealerships are a money extraction operation. They pay employees as little as possible and charge customers as much as possible.
May be newest cars are very different, but $450 just for a sensor is still too much, if it is not integrated into some big sophisticated part and replaced alltogether.
It's a sensor that is integrated into coolant tank, and the whole things is only sold as one unit.
It is just around $60/hr for mechanic, it is even less than your rural mechanic charge. I estimate it is still too low for a skilled worker.
It is probably too low, but that still puts you in probably top 20% of salaries currently.
Car service is not a rocket science, really.
No, but no matter the field there are those that care and try and do excellent work, those that do the bare minimum, and those that cut all kinds of corners. Building a reputation as the first takes time.
Then where money go?
To the top. Executive level, markets, private equity, investment funds, and bureaucracies (unions, etc.).
Dealerships cost way more, but either way, the mechanics I know all bill out at $75-100/hr., and that is in a very rural, lower cost of living, lower income area.
I don't know many parts that you can get for $100. I've got a new sensor on back-order for $450.
Everything is designed to be expensive and short-lived.
Also, $120k for an experienced, senior-level mechanic doesn't seem unreasonable to me based on current economics. I don't think entry-level mechanics are making anything close to that though.
Edit: but yes, it's all a racket. That part is obvious and true.
To me it's "for how many reasons did they do it?". It's definitely what you said, but the question is was is also partly because he has seemed to start joining the "question the obvious" crew lately. The only reason that matters is, if so, does it then create a chilling effect?
The only thing with a fake assassination of someone this young who has been in the limelight for so long, they wouldn't just fade into the darkness never to be seen again (and if they were asked to do that and would not, then they would be taken out, so why not just take them out from the start).
Plenty of stuff is fake. The killer is a good chance fake. Fake murder of a young celebrity who seemed to live for what they do, though, seems incredibly unlikely. (I know, I know, Tupac, lol)
Sugar addict. I have a family, so removing all carbs from the house is not remotely realistic. When you are stressed, seeing carbs in front of you, it gets hard to resist. And once you touch a carb, you crave more instantly.
This is the reality for most people who want to try. Far easier to try to stay low carb and fast regularly. I'm glad it is easier for you though.
Nothing wrong with distance running, if that works for you. Plenty of data, though, that it does wear on your body in multiple ways. A mix of walking and sprinting is closer to what we are designed for historically.
As to evolution, you can watch evolution and adaptation happen in lower species with shorter lifespans. Kinda hard to deny but I'm not going to argue about it.
You are correct, sir.
I feel like more people are finally starting to realize this. Still, though, I do know a few otherwise very smart devs that have bought into the hype and won't ever let go because their ego is now too invested.
It is, but it's hard to impossible for most people. I never feel better than when I'm able to stay on carnivore, but I can never stay on for more than a month or so.
For most people, extended fasting is a more reachable goal. You need to build up to it, and you need to come off a fast slowly and gently (bone broth, butter, chicken, etc.), but it has a lot of overlap with carnivore benefits, plus depending on the scenario better healing capabilities.
It's one of those you are better off doing a bit randomly to keep your body from adapting. For example, do 24-36 hour (or even 48-hour if you can) fasts once a week or so, do 3-5 day fasts a few times a year. If you want or need you can build up to 7-day, 11-day, 21-day, 40-day, but the longer the fast the longer the interval you should have between them (e.g. only do a 40 day once every few years, but still keep your smaller 1-3 days fasts as a regular thing). I've never done more than 5 days, fwiw, but plan to.
Really, just find the thing you can do sustainably forever, whether fasting or carnivore or keto variations. Even mediterranean and paleo are fine as long as you stay away from processed foods, added sugars, etc.
Sleep and exercise are equally important. Build up to the ability to do sprints once or twice a week, do resistance training, do calisthenics, walk a lot. Too much distance running actually seems to do more harm than good to joints and organs, we are not evolved for it, but whatever works for you do it.
Overall well said. I think we just disagree both on the timelines and the number and complexity of algorithms to meaningfully replicate human ability.