A couple of years ago phones started shipping with NPUs, and they became able to detect subject types when taking pictures: recognize that something is a person, so the shot is probably a portrait and skin tones need to be prioritized, etc.
The model running on that NPU was trained on huge datasets with colossal computers. Now it runs on $200 phones.
That phone can't learn new things by itself; it can be fed a new model produced by one of those huge machines so it can detect more things, but that's it.
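To make that concrete, here's a minimal sketch of what on-device inference looks like, assuming a TensorFlow Lite model (the file name and the classifier itself are made up for illustration). The phone only loads frozen weights and runs them; "learning something new" just means shipping it a new file.

```python
# On-device inference sketch (the .tflite file is a hypothetical example).
# The phone never trains anything; it only runs a frozen model
# that was trained elsewhere on big hardware.
import numpy as np
import tflite_runtime.interpreter as tflite  # lightweight runtime for phones/edge devices

# Load the pretrained, frozen model shipped with the camera app.
interpreter = tflite.Interpreter(model_path="scene_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(frame: np.ndarray) -> int:
    """Run one camera frame through the model and return the best class index."""
    interpreter.set_tensor(input_details[0]["index"], frame.astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])
    return int(np.argmax(scores))

# "Teaching" the phone to detect more things is just swapping the file:
# interpreter = tflite.Interpreter(model_path="scene_classifier_v2.tflite")
```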
The same is true for ChatGPT and every other bot like it.
The "learning" (training) phase takes place on an expensive supercomputer.
The user-facing end can run on far more anemic machines, copies of which can be placed in data centers all over the world so the service can cope with 100 million concurrent users.
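The split looks the same for a language model. A rough sketch, assuming the Hugging Face transformers library and GPT-2 as a small stand-in (ChatGPT's actual weights aren't public): serving is just loading frozen weights and generating text, and you scale by running identical copies.

```python
# Inference-only sketch, using GPT-2 as a small stand-in for a chat model
# (assumption: transformers is installed; ChatGPT itself isn't downloadable).
from transformers import pipeline

# Downloads the frozen, pretrained weights once; no training happens here.
generator = pipeline("text-generation", model="gpt2")

print(generator("The training phase happens on a supercomputer, but",
                max_new_tokens=40)[0]["generated_text"])

# Serving millions of users = starting this same process on thousands of
# machines behind a load balancer; the weights never change in the process.
```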
If you want, you can run something like DALL-E on your home computer, provided you have a decent graphics card with plenty of memory.
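For example, here's a rough sketch of doing that with Stable Diffusion via the diffusers library, as the home-runnable stand-in for DALL-E (DALL-E itself isn't downloadable), assuming an NVIDIA card with enough VRAM:

```python
# Local image generation sketch (assumption: Stable Diffusion through the
# diffusers library, standing in for DALL-E, on a consumer NVIDIA GPU).
import torch
from diffusers import StableDiffusionPipeline

# Download the pretrained weights (a few GB) and move them onto the GPU.
# float16 halves the memory footprint so it fits on a consumer card.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Inference only: the expensive training already happened elsewhere.
image = pipe("a portrait photo of an astronaut on a beach").images[0]
image.save("astronaut.png")
```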