Harem Token Deepfake Beta Update

Hey all,

With our Deepfake closed beta around the corner I wanted to take a moment to drop some samples for you guys, as well as some updates. This article’s goal is to accomplish four things:

  1. Output Demos: Demonstrate what the output of the deepfake API is going to look like for beta testers, including the kind of progress that can be expected as the AI improves over time.

  2. Video Selection: Explain which videos will and won't work for the beta so testers can pick good source material.

  3. Typeform Fixes: Address the common errors people made on the typeform so they can be corrected before launch.

  4. Schedule Update: Share a small update to the beta timeline.

Output Demos:

One of the great things about our system is that it gets better with time. To illustrate that, let me share some samples showing how our model's output evolved as we trained it on our town vampiress Amber Sparkx's face.

Before Model Training:

The photos below show the very first test we ran using Amber's face, done as a control before we trained our AI specifically on her. As you can see, it comes out okay but wonky. The shots where she is far back look fine quality-wise, but her face isn't completely recognizable. The close-up shots look like what you see when you go home with a girl from the bar after four Long Island iced teas. Not acceptable.

Fine from far back, no noticeable errors. But she's not really recognizable; note specifically that her piercings (nose and lip rings) aren't showing up.
Same as above, maybe slightly better but still not ideal.
You shouldn’t have driven home tbh. Lucky you didn’t get pulled over.

Halfway Through Model Training:

This is a test we ran on the same video at half our target training iterations. A noticeable improvement: the face is much easier to recognize, the piercings in particular are showing through very well, and the close-up shots have lost that blurry quality that ruined them before. We considered this a 'pass' on this video, in that if we received it as a result during the beta we'd be happy with it, so we moved on to a slightly more difficult video for the third test.

Features starting to get more recognizable. Notice the presence of the nose ring and lip ring.
Same as above.
Much better on the close-up face as well. We considered this a 'pass' even halfway through training, and moved on to testing a new video once the model finished training to our preset iteration goal.

Completed Model Training:

For the final test after the iteration count was reached, we picked a different video with more extreme angles and some examples of partial face occlusion. Our intention for V1 is to have a system that works for POV videos with no occlusion, so this was more to test the limitations of our system than anything else.

Here’s what the girl looks like in the original source video.
Much higher quality, way easier to recognize as Amber, and her piercings are showing as well.
This even worked when the face was partially occluded or at a non-straightforward angle, both of which were pleasant surprises, as we didn't think they would work for the beta.
However, at extreme side angles it will still often revert to the original face. Handling this is on our to-do list, but given that the V1 goal was a 5% or less (non-occlusion-based) error rate for POV videos, we consider this a great success.
Additionally, with other occlusion instances like blowjobs, there's the issue of the occluding object 'morphing' into the face, as you can see here. This also happens with long hair, though not regularly.
Another ‘morph blowjob’ instance. Who knows, maybe it’s someone’s fetish?
However, overall it was a positive result. We would consider this video a very strong pass.

Johnny Cash:

But all these videos are tests where we are putting Amber's face over the faces of other young women. What happens if we try to deepfake her onto an older male? Different skull shape, wrinkles, skin sag, etc. are all issues which can negatively affect the transfer. So, as any of you in our Telegram know, we made a video using Amber's face over Johnny Cash as he sang "Man in Black". Unfortunately it was copyright-struck off YouTube, so you'll have to join our TG to see it. The result: it still looks pretty good, and the piercings show through somewhat regularly, but it's not quite up to snuff compared to the previous video. Amber is still recognizable, but less so, and there's an issue with eye tracking. Ultimately this is not what our AI was meant for, so no big surprise there; honestly, we were surprised it worked at all.

Not bad!
Eye tracking error with the right eye (stage left eye).
Bottom teeth are a little blurry, however that may be an issue with the original video being low resolution.

AI Generated Avatars:

So the AI is passing with flying colors when we use a real person's staked face, but what about with one of our AI generated avatars? Using the AI generated avatars is a significantly more difficult technical challenge, so the REAL test is how the AI performs with those. These videos are much more finicky: they're fine if the conditions are just right, but the question is how they react to non-straightforward angles and occlusion. We knew the videos wouldn't pass without training, so we skipped straight to the end and will catalogue results by video.

Completed Model Training:

This is an example of how well the AI does with straightforward angles and no occlusion. Facial features are looking good, there are no obvious issues with merging, and it's recognizable as Anna. Teeth are still a problem, but a minor one; shit, if you're from the UK this is probably a huge upgrade in dental quality.
The AI does okay with extreme side angles, but not great; it's a bit too blurry.
No surprise here, this is about as good of an angle (facing straight at camera with virtually no occlusion) as you can get. Model works perfectly on this.

Johnny Cash:

The facial expressions are matching correctly, and it is recognizably Anna, but the blend isn't as clean as it was with Amber. On the plus side, the eye tracking issues did not persist in Anna's video.
On the closer-up images, Anna's face was blurry, an error we didn't encounter with Amber. Still not terrible; this would probably be a low pass for our V1 goals.


There's no doubt that the staked faces of performers are going to be much higher quality than our AI generated girls during V1. But we have seen consistent quality improvements on the AI generated girls as well by training for more iterations and improving their facesets. By Q2 I think we have a good shot at reaching parity with IRL performers. Either way, both IRL performers and AI generated avatars should work with good quality for properly selected videos.
Given that the stuff looks goddamn top notch for IRL faces, we are probably going to focus our marketing push around the 'stake your face' program. So as we transition from beta to open use, expect us to push pretty hard on onboarding performers. To that end, the 'stake your face' form now has a referral section. If you get someone to stake their face, they can write down your addy, and you'll get 10% of Harem's cut of the profits from the use of their face forever. We will also be launching several other programs to incentivize people to stake their faces, including purchasing faces for auctions and releasing a 'bounty' list of all your favorite celebs. Stay tuned.

What Videos Will and Won’t Work for Beta:

With all that said, the most important thing is knowing which videos will and won't work for the beta. Videos of staked IRL faces will have a lot more flexibility and generally higher quality than videos of AI generated avatars. Regardless, make sure the video you select answers 'yes' to all of the following:

  • Is the person in the video facing more or less straight at the camera? (POV or doggystyle-facing-camera videos will work best)
  • Is the ‘mask area’ of the person’s face un-occluded (no foreign objects, hair, or other body parts between the person’s face and the camera)?
The ‘mask area’ of a person’s face.
  • Does the video star a person of the same approximate age/sex as the model you want to deepfake in?

How to Do the Typeform:

Okay, there were two common errors on the typeform I want to take some time to address, so people have time to fix them before launch:

  • When we ask "what's your addy," what we mean is "what is your MetaMask or Trust Wallet address," not "what is your email address." CLICK HERE FOR EXPLAINER

If you made one of these mistakes, go back and re-submit the typeform.

Schedule Update:

We will be taking one extra week to tie everything up and give people some time to re-submit the typeform. Apologies for this delay, but ultimately it will allow us to deliver a higher quality beta and we need the time. Thanks for staying patient!