AI’s takeover of all content markets is inevitable and already underway. At Harem Token, formerly TNDNE (These Nudes Do Not Exist), we are first movers in content automation who have successfully created photorealistic AI-generated nude photos and videos. Here you can see a picture of Anna, the girl from our logo.
Anna is not a real person; she does not exist. She was created by our GAN (AI) out of thin air. The owner of Anna’s NFT can insert her into videos using Harem’s technology. The owner will also be able to stake her NFT, allowing others to use her in videos while the owner receives a share of the video creation charges. Or Anna’s owner can sell her to another user if they like.
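To illustrate the "out of thin air" part: a GAN generator maps a random latent vector to an image, so every fresh sample is a novel face that was never photographed. The sketch below is a toy stand-in, not our production model; the generator here is an untrained random projection, and the sizes (`LATENT_DIM`, `IMG_SIZE`) are hypothetical, chosen only to show the sampling flow.

```python
import numpy as np

LATENT_DIM = 512        # latent size typical of modern GANs (assumption)
IMG_SIZE = 64           # toy output resolution for illustration

rng = np.random.default_rng(0)

# Stand-in "generator": in a real system this is a deep trained network;
# here a fixed random projection illustrates the mapping z -> image.
W = rng.standard_normal((LATENT_DIM, IMG_SIZE * IMG_SIZE * 3)) * 0.01

def generate(z: np.ndarray) -> np.ndarray:
    """Map a latent vector to an RGB image with pixels in [0, 1]."""
    flat = 1.0 / (1.0 + np.exp(-(z @ W)))   # sigmoid squashes to pixel range
    return flat.reshape(IMG_SIZE, IMG_SIZE, 3)

# Each new latent sample yields a different image.
z = rng.standard_normal(LATENT_DIM)
img = generate(z)
print(img.shape)
```

Resampling `z` gives a different output every time, which is why a single trained generator can produce an unlimited supply of unique models.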
Let’s take a look at where we started and the technical milestones we have achieved so far.
October – December 2019
Above you can see our first generation results. Fine facial and anatomical features were either blurry or inaccurately rendered. However, the results were “human-ish” enough to serve as a positive initial result.
January – April 2020
After training with a larger dataset and labeling our data more thoroughly, finer features became more recognizable. Our full body photos had a consistent “double shoulder” problem, which we interpreted as resulting from inconsistently proportioned data inputs.
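Artifacts like the "double shoulder" can appear when training images have mismatched proportions, since the network then averages over bodies framed at different scales. A common remedy (sketched below under our own assumptions; the 2:1 portrait ratio and function name are hypothetical, not our actual pipeline) is to center-crop every input to one fixed aspect ratio before training:

```python
import numpy as np

def center_crop_to_aspect(img: np.ndarray, target_ratio: float = 2.0) -> np.ndarray:
    """Crop an H x W x C image to a fixed height/width ratio so every
    training sample frames the body with consistent proportions."""
    h, w = img.shape[:2]
    if h / w > target_ratio:              # too tall: trim rows evenly
        new_h = int(w * target_ratio)
        top = (h - new_h) // 2
        return img[top:top + new_h]
    new_w = int(h / target_ratio)         # too wide: trim columns evenly
    left = (w - new_w) // 2
    return img[:, left:left + new_w]

sample = np.zeros((900, 300, 3))          # a 3:1 image, too tall for 2:1
cropped = center_crop_to_aspect(sample)
print(cropped.shape)                      # -> (600, 300, 3)
```

With every input normalized this way, the generator no longer has to reconcile conflicting framings of the same anatomy.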
We continued with a goal of hitting full photorealism in the next batch.
May – July 2020
After training with a larger dataset, we reached full photorealism with both torso and full body pictures. At this point a new problem emerged. While our models were now indistinguishable from the real thing, they were uniformly white. Our next step was to build on the foundation we had laid and extend our photorealistic results to torso shots of non-white girls.
A suitable level of photorealism in our non-white torso shots was achieved.
The short time this took, and the relatively small amount of data it required compared to our first successful photorealistic renderings, proved the concept that the “foundation” we had built with our white torsos and full bodies was accelerating the AI’s training on similar subject matter.
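This acceleration is the standard transfer-learning effect: once a network has learned general features, adapting it to nearby subject matter means updating only a small fraction of its parameters, which needs far less data. A minimal sketch, assuming a toy frozen feature extractor and a trainable head (all sizes and names hypothetical, not our actual architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Foundation" weights learned on the earlier dataset: frozen, never updated.
W_frozen = rng.standard_normal((512, 256)) * 0.05

def features(x: np.ndarray) -> np.ndarray:
    return np.maximum(0.0, x @ W_frozen)        # fixed ReLU features

# Adapting to the new subject matter trains only a small head on top.
W_head = np.zeros((256, 1))
x = rng.standard_normal((32, 512))              # small new dataset
y = (x[:, 0] > 0).astype(float).reshape(-1, 1)  # toy target

for _ in range(200):                            # brief fine-tuning loop
    f = features(x)
    grad = f.T @ (f @ W_head - y) / len(x)      # mean-squared-error gradient
    W_head -= 0.01 * grad

trainable = W_head.size
total = W_frozen.size + W_head.size
print(f"training {trainable}/{total} parameters")
```

Only a few hundred of more than a hundred thousand parameters move during fine-tuning, which is why a modest new dataset was enough once the foundation existed.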
Our next goal was a basic transfer of our still photo images to video content.
September – October 2020
The above video marked our first successful transfer from a still image to video content.
The quality of the resulting video was very encouraging. When shown isolated images of our models, users who did not know they were GAN-generated consistently reported thinking they were real, meaning we had successfully traversed the “Uncanny Valley”.
As you can see above, we were also able to replicate this effect with drawn characters. At this time our video capabilities were limited: models had to face the camera directly, move slowly, and keep their head from turning too far in any direction. The next goal was to replicate this with a less “simple” video.