New Research Shows Deepfakes Are Becoming Easier To Make

To make a compelling deepfake (an AI-generated fake video or audio clip), you generally need a model trained on a lot of reference material. As a rule, the larger your dataset of photos, audio, or video, the more eerily accurate the result will be. Now, however, researchers at Samsung's AI Center have devised a way to train a model to animate a face from an extremely limited dataset: just a single photo. And the results are surprisingly good.

The researchers achieve this effect by training their algorithm on "landmark" facial features drawn from a public archive of 7,000 images of celebrities gathered from YouTube.

From there, the model can map those features onto a photo to bring it to life. The team showed that it even works on the Mona Lisa and other single-photo still portraits.
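The landmark-mapping idea above can be sketched in miniature: given landmark points detected on a source portrait and corresponding landmarks from a driving frame, you can estimate a transform that moves the face to match. This is a hypothetical, heavily simplified NumPy illustration (the Samsung system uses a learned neural generator, not a simple affine fit, and the coordinate arrays here are made-up example data):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of facial landmark coordinates.
    Returns a (2, 3) matrix A such that dst ~= [src | 1] @ A.T.
    """
    n = src.shape[0]
    # Homogeneous coordinates: append a column of ones.
    src_h = np.hstack([src, np.ones((n, 1))])
    # Solve src_h @ X = dst in the least-squares sense; X has shape (3, 2).
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return X.T

def apply_affine(A, pts):
    """Apply a (2, 3) affine matrix to (N, 2) points."""
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ A.T

# Toy "landmarks": four points on a still portrait...
source_landmarks = np.array([[30.0, 40.0], [70.0, 40.0],
                             [50.0, 60.0], [50.0, 80.0]])
# ...and the same points shifted and scaled, as if from a driving video frame.
driving_landmarks = source_landmarks * 1.1 + np.array([5.0, -3.0])

A = fit_affine(source_landmarks, driving_landmarks)
moved = apply_affine(A, source_landmarks)
print(np.allclose(moved, driving_landmarks, atol=1e-6))  # prints True: the fit is exact here
```

In the real system, a neural network fills in photorealistic texture around the moved landmarks; the affine fit above only captures the "pose the face" step of the pipeline.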

Like with most deepfakes, it’s pretty easy to see the hem at this stage. Most of the faces are encircled by visual crop. Though, fixing this aspect is potentially easier compared to the feat of correctly faking the Mona Lisa to make it look like she’s a breathing human.

Flaws aside, fake video and audio are getting more realistic. Anyone who wants proof can check out this astonishing AI-generated recreation of Joe Rogan's voice. As researchers keep coming up with low-lift methods for creating high-quality fakes, there is concern that they will be used against people in the form of propaganda, or to impersonate people in situations they would object to, such as pornographic videos, which is how the technique was originally used. The potential political danger of deepfakes is real, but the worry is, for now, overblown.
