How to identify a deepfake


The past year has seen significant improvements in artificial intelligence-generated images and videos of real people, but a few key details can help viewers discern whether what they’re seeing is fake.

Some deepfakes serve innocuous purposes, such as de-aging actors, adding narration after a person’s death, or even resurrecting actors to reprise old roles. Others are put to more nefarious uses, such as scamming elderly citizens, spreading misleading political images, and simulating politicians making false statements about current events.


“Up until the last year or so, the technology has made it very obvious to spot a deepfake,” Paul Bleakley, assistant professor of criminal justice at the University of New Haven, told the Washington Examiner. “For the general public, there are ways to spot it, but it’s growing increasingly difficult.”

Many clues for whether a video is a deepfake are in contextual details, Bleakley said. There are some simple things that an observer could watch for in deepfakes:

- Too much or too little eye movement
- Unnatural facial expressions
- A lack of emotion or expressiveness
- Awkward body postures
- Teeth or hair that appear artificial
- Inconsistencies in movement or audio

Still images may have similar inconsistencies. Several image generators struggle with depicting hands or other body parts.


Deepfake audio is perhaps the hardest to identify. A well-trained AI program can take only a few seconds of a person’s voice and use it to make them say just about anything. Still, there are small indicators that can be detected, according to Vijay Balasubramaniyan, the CEO of the AI voice authentication company Pindrop, such as the way lip movements or the tongue shape a word. Those sounds can be produced only by a human mouth and are unique to each person.

Some members of Congress are attempting to craft legislation to make AI-generated content more identifiable. One proposal would require AI-generated content to carry bits of code identifying it as such, a practice known as “watermarking.”
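To illustrate the idea, some AI tools already embed provenance metadata in the files they produce, such as C2PA “Content Credentials” manifests or the IPTC digital-source-type label for algorithmically generated media. The sketch below is a deliberately crude first check, not a standard detection method: the marker strings and the byte-scanning approach are illustrative assumptions, and real verification would parse and cryptographically validate the metadata rather than grep for it.

```python
# Illustrative sketch only: scan a media file's raw bytes for strings
# associated with AI-provenance metadata. Real tools parse and verify
# the embedded manifests; this merely flags their likely presence.

# Assumed marker strings (not an official list):
# - "c2pa" appears in C2PA/Content Credentials manifests
# - "trainedalgorithmicmedia" is the IPTC digital source type
#   used to label AI-generated imagery
KNOWN_MARKERS = [b"c2pa", b"trainedalgorithmicmedia"]


def find_provenance_markers(data: bytes) -> list[str]:
    """Return the known AI-provenance markers found in raw file bytes."""
    lowered = data.lower()
    return [m.decode() for m in KNOWN_MARKERS if m in lowered]
```

Absence of a marker proves nothing (metadata is easily stripped), but its presence is a quick hint that a file declares itself AI-generated.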
