We are in the phase of exponential fidelity growth. Yes, it still won't be as good, or as flawed and realistic, as real videography of real people. But considering that most people already want to beauty-filter their social media shots, and that all magazine and online shots are edited (and that's what people actually prefer), we are fast approaching the point where AI-generated content is preferable for most viewers.
After that it's just a matter of economics, assuming training compute costs come down fast enough (no guarantees there, with the whole world economy teetering on the edge of a cliff).
When Pornhub's operators can make more money with algorithms, they'll stop paying real-life content producers. Everybody else, including OnlyFans, will eventually follow.
You are confusing consumer image tools (DALL-E, Midjourney, free LLM-based generators) with what the best research algorithms can do. They are not the same.
The results are improving as fast as the models can be trained.
The following sample is roughly a year behind the state of the art:
https://www.youtube.com/watch?v=ecHioH8fawE
It already passes muster with most ordinary people. My parents don't even recognize that it's computer generated.
The following is just a recent paper; the amount of training behind it was minimal (and it aims to mimic movement alone):
https://www.youtube.com/watch?v=AnCsmHrMPy0