This is great info. I don't get how people won't consider, even for two seconds, the processing requirements behind all this. They are literally programming us by having us program the machine.
A couple of years ago, phones started shipping with NPUs, and they became able to detect subject types when taking pictures: recognize that the subject is a person, so the shot is probably a portrait and skin tones need priority, and so on.
The software running on that NPU was trained on huge datasets with colossal computers; now it runs on $200 phones. That phone can't learn new things by itself. It can be fed new software made by one of the huge machines so it can detect more things, but that's it.
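A toy sketch of that split, in Python. Nothing here is any real phone's pipeline; the class names and weight values are made up. The point is structural: the device only runs a forward pass over weights it was shipped with, and there is no code path for updating them.

```python
# Toy sketch: a phone ships with FROZEN weights produced elsewhere.
# It can only run them forward; it has no way to train.
# All class names and numbers below are invented for illustration.
import math

CLASSES = ["person", "landscape", "food"]

# "Trained" weights, baked into the camera app by the vendor's big cluster.
# Detecting a NEW subject type would require shipping a new table like this.
WEIGHTS = [
    [0.9, 0.1, -0.3],   # scores for "person"
    [-0.4, 0.8, 0.2],   # scores for "landscape"
    [0.1, -0.2, 0.7],   # scores for "food"
]

def detect_subject(features):
    """Forward pass only: multiply features by frozen weights, softmax,
    pick the best class. There is no backward pass, so WEIGHTS never change."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in WEIGHTS]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    return CLASSES[best], probs[best]

label, prob = detect_subject([1.0, 0.2, 0.1])
print(label)  # these frozen weights favour "person" for this input
```

The expensive part (finding good values for `WEIGHTS`) happened once, on big hardware; the cheap part (the multiply-and-pick loop) is all the device ever does.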
The same is true for ChatGPT and every other bot like it. The "learning" (training) phase takes place on an expensive supercomputer. The user-facing end can run on far more anemic machines, and those machines can be copied and placed in data centers all over the world so the service can cope with 100 million concurrent users.
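The serving side can be sketched the same way. This is a toy stand-in, not any real system: the "model" is just a lookup table, but it shows why scaling out is copying, not retraining. Every worker gets an identical snapshot of the frozen weights and answers independently.

```python
# Toy sketch: training happened once, elsewhere; serving is just copying
# the same frozen weights onto many cheap workers. A dict stands in for
# the trained model; everything here is illustrative.
from concurrent.futures import ThreadPoolExecutor

FROZEN_WEIGHTS = {"hello": "hi there", "bye": "goodbye"}

def make_worker(weights):
    # Each "data center machine" gets an identical copy of the weights.
    snapshot = dict(weights)
    def answer(prompt):
        # Inference only: look up, never learn. Unknown prompts stay
        # unknown until a NEW snapshot arrives from the training cluster.
        return snapshot.get(prompt, "model doesn't know this yet")
    return answer

# Scaling to more users = stamping out more copies of the same snapshot.
workers = [make_worker(FROZEN_WEIGHTS) for _ in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    replies = list(pool.map(lambda w: w("hello"), workers))

print(replies)  # every copy gives the same answer
```

That's why the user-facing machines can be anemic: they never pay the training cost, only the lookup/forward-pass cost.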
If you want, you can run an image generator such as Stable Diffusion (a model in the same vein as DALL·E) on your home computer, provided you have a decent graphics card with plenty of memory.
My son told me that this has the capability of taking out a whole level of programming jobs: what used to take a group of programmers weeks is now done in a flash. Is this a non-issue?
How about the bandwidth? How do you move enough data for that kind of computing power?