With our Deepfake closed beta around the corner, I wanted to take a moment to drop some samples for you guys, as well as some updates. This article aims to accomplish four things:
- Output demos: show what the output of the deepfake API is going to look like for beta testers, including the kind of progress that can be expected as the AI improves over time.
- Video selection: give beta testers an idea of what kinds of videos will and won't work well with the API, and what the experience is going to be like, so they can use their free minutes to make the coolest content possible.
- Typeform instructions: provide some color and instruction on the typeform sign-up, since nearly half of submissions had at least one error that made them unusable.
- Schedule update: provide a schedule update for the beta test (which yes, unfortunately, includes a delay on launch).
One of the great things about our system is that it gets better with time. To illustrate that, let me provide some samples that show how our model looked as we trained it on our town vampiress Amber Sparkx’s face.
Before Model Training:
The photos below show the very first test we did using Amber's face, which we ran as a control before we trained our AI specifically on her. As you can see, it comes out okay but wonky. The shots where she is far back look fine quality-wise, but her face isn't completely recognizable. The close-up shots look like what you see when you go home with a girl from the bar after four Long Island iced teas. Not acceptable.
Halfway Through Model Training:
This is a test we ran on the same video at half of our target training iterations. A noticeable improvement: the face is much easier to recognize (the piercings in particular are showing through very well), and the close-up shots have lost the blurry quality that ruined them before. We considered this a 'pass' on this video, meaning that if we received it as a result during the beta we'd be happy with it, so we moved on to a slightly more difficult video for the third test.
Completed Model Training:
For the final test after the iteration count was reached, we picked a different video with more extreme angles and some examples of partial face occlusion. Our intention for V1 is to have a system that works for POV videos with no occlusion, so this was more to test the limitations of our system than anything else.
But all these videos are tests where we are putting Amber's face over the faces of other young women. What happens if we try to deepfake her onto an older male? Different skull shape, wrinkles, skin sag, etc. are all issues that can negatively affect the transfer. So, as any of you in our Telegram know, we made a video using Amber's face over Johnny Cash as he sang "The Man in Black". Unfortunately it was copyright-struck off YouTube, so you'll have to join our TG to see it. The result still looks pretty good, and the piercings show through fairly regularly, but it's not quite up to snuff compared to the previous video. Amber is still recognizable, just less so, and there's an issue with eye tracking. Ultimately this is not what our AI was meant for, so no big surprise there; honestly, we were surprised it worked at all.
So the AI is passing with flying colors when we use a real person's staked face, but what about one of our AI-generated avatars? Using AI-generated avatars is a significantly more difficult technical challenge, so the REAL test is how the AI performs there. These videos are much more finicky: they're fine if the conditions are just right, but the question is how they react to non-straightforward angles and occlusion. We knew the videos wouldn't pass without training, so we skipped straight to the end and will catalogue results by video.
Completed Model Training:
There's no doubt that the staked faces of performers are going to be much higher quality than our AI-generated girls during V1. But we have seen consistent quality improvements on AI-generated girls as well by training for more iterations and improving their facesets. By Q2 I think we have a good shot at reaching parity with IRL performers. Either way, both IRL performers and AI-generated avatars should produce good quality on properly selected videos.
Given that the output looks goddamn top notch for IRL faces, we are probably going to focus our marketing push on the 'stake your face' program. So expect, as we transition from beta to open use, that we push pretty hard on onboarding performers. To that end, the 'stake your face' form now has a referral section. If you get someone to stake their face, they can write down your addy, and you'll get 10% of Harem's cut of the profits from the use of their face, forever. We will also be launching several other programs to incentivize people to stake their face, including purchasing faces for auctions and releasing a 'bounty' list of all your favorite celebs. Stay tuned.
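To be clear on the referral math: the 10% applies to Harem's cut, not to the gross earnings from the face. A toy calculation (every number below is made up for illustration; Harem's actual cut isn't stated in this post):

```python
# Illustrative referral math (all figures are hypothetical examples):
# the referrer earns 10% of Harem's cut, not 10% of gross revenue.

gross_revenue = 1000.0   # example gross earned from uses of the staked face
harem_cut_rate = 0.30    # hypothetical platform cut; actual rate not stated
referral_rate = 0.10     # 10% of Harem's cut, per the announcement

harem_cut = gross_revenue * harem_cut_rate
referral_payout = harem_cut * referral_rate

print(round(harem_cut, 2))        # 300.0
print(round(referral_payout, 2))  # 30.0
```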
What Videos Will and Won’t Work for Beta:
With all that said, the most important thing is knowing which videos will and won't work for beta. Videos of staked IRL faces will have a lot more flexibility and generally come out higher quality than videos of AI-generated avatars. Either way, make sure the video you select answers yes to all of the following:
- Is the person in the video facing more or less straight at the camera? (POV or doggystyle-facing-camera videos will work best)
- Is the ‘mask area’ of the person’s face un-occluded (no foreign objects, hair, or other body parts between the person’s face and the camera)?
- Does the video star a person of the same approximate age/sex as the model you want to deepfake in?
- Is the video 360x360 resolution or higher?
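If you want to pre-screen clips before spending beta minutes, the checklist above can be sketched as a small script. Everything here is illustrative, not part of the API: the metadata dict and helper name are made up, and you'd fill in the fields however you inspect your videos (e.g. from ffprobe output).

```python
# Rough pre-flight check for a candidate clip, mirroring the beta checklist.
# The clip metadata dict is hypothetical -- populate it yourself from
# whatever tool you use to inspect videos.

MIN_RESOLUTION = (360, 360)  # minimum width x height from the checklist

def passes_beta_checklist(clip: dict) -> bool:
    """Return True only if the clip answers 'yes' to every checklist question."""
    facing_camera = clip.get("facing_camera", False)      # POV / facing-camera shots
    face_unoccluded = clip.get("face_unoccluded", False)  # nothing between face and camera
    age_sex_match = clip.get("age_sex_match", False)      # performer roughly matches the model
    width, height = clip.get("resolution", (0, 0))
    big_enough = width >= MIN_RESOLUTION[0] and height >= MIN_RESOLUTION[1]
    return facing_camera and face_unoccluded and age_sex_match and big_enough

good = {"facing_camera": True, "face_unoccluded": True,
        "age_sex_match": True, "resolution": (1280, 720)}
bad = {"facing_camera": True, "face_unoccluded": False,
       "age_sex_match": True, "resolution": (1280, 720)}

print(passes_beta_checklist(good))  # True
print(passes_beta_checklist(bad))   # False
```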
How to Do the Typeform:
Okay, there were two common errors on the typeform that I want to take some time to address, so people have time to fix them before launch:
- When we ask “what’s your addy” what we mean is “what is your metamask or trustwallet address” not what is your email address. CLICK HERE FOR EXPLAINER
- We know you don't want to send a test TX to the address listed because of gas. Go grab some Rinkeby faucet ETH instead; it's free. We just need a transaction sourced from your addy, because otherwise we'd have no way of knowing whether you actually control it.
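The idea behind the test TX is simple ownership proof: anyone can paste someone else's address into a form, but only the keyholder can originate a transaction from it. A minimal sketch of that check (pure Python over transaction records you've already fetched, e.g. via a block explorer API; the function name and the dict shape are illustrative, though `from`/`to` follow the usual Ethereum JSON-RPC field names):

```python
# Hypothetical sketch of the ownership check: scan already-fetched
# transactions (fetching is out of scope here) for one whose 'from'
# field matches the submitted address.

def controls_address(submitted_addy: str, recent_txs: list[dict]) -> bool:
    """True if any recent transaction originated from the submitted address."""
    addy = submitted_addy.lower()  # addresses are case-insensitive hex
    return any(tx.get("from", "").lower() == addy for tx in recent_txs)

txs = [
    {"from": "0xAbC123...", "to": "0xBetaIntake...", "value": "0"},
]

print(controls_address("0xabc123...", txs))  # True
print(controls_address("0xdef456...", txs))  # False
```

Note the lowercase comparison: the same address can appear checksummed or all-lowercase depending on the wallet, so a naive string match would reject valid submissions.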
If you made one of these mistakes, go back and re-submit the typeform.
We will be taking one extra week to tie everything up and give people some time to re-submit the typeform. Apologies for this delay, but ultimately it will allow us to deliver a higher quality beta and we need the time. Thanks for staying patient!