Deepfake AI Changes What People Say On Video With Transcript Editing
Harin - Jun 13, 2019
Researchers have developed an algorithm that simplifies the process of creating a deepfake to a terrifying degree, letting a video's subject "say" whatever edits are made to the clip's transcript.
Creating a convincing deepfake now requires nothing more than the ability to type. Recent advances in AI have made it easier to produce audio or video clips in which a person appears to say or do things they never actually said or did.
Researchers have now developed an algorithm that simplifies the deepfake creation process to a frightening degree. It makes the subject of a video speak whatever edits are made to the clip's transcript. Even its creators are worried about the consequences if bad actors get their hands on this technology.
The researchers, from Princeton University, the Max Planck Institute for Informatics, Stanford University, and Adobe, describe how their new algorithm works in a paper published on the website of Stanford researcher Ohad Fried.
First, the AI analyzes a source video of a real person speaking. It doesn't just look at the words; it also identifies the phonemes, the distinct units of sound, and how the person's face looks as each one is uttered.
English has roughly 44 phonemes. According to the researchers, the AI needs only about 40 minutes of footage to collect all the pieces required to make a person appear to say anything.
All a person then has to do is edit the video's transcript. The AI generates a deepfake that matches the rewritten text, stitching together the necessary mouth movements and sounds.
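The pipeline described above can be sketched in toy form: break the source speech into phonemes, store a reusable snippet for each one, then reassemble snippets to match an edited transcript. This is a loose illustration only, not the researchers' actual system; the tiny phoneme dictionary and the string "snippets" standing in for real audio/video data are invented for the example.

```python
# Toy sketch of transcript-driven video editing. The researchers' real system
# works on actual audio/video; here each phoneme is just a labeled string.

# A tiny, made-up word-to-phoneme dictionary (real systems use full lexicons
# such as the CMU Pronouncing Dictionary).
PHONEME_DICT = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def transcribe_to_phonemes(transcript):
    """Flatten a transcript into the phoneme sequence it is spoken as."""
    phonemes = []
    for word in transcript.lower().split():
        phonemes.extend(PHONEME_DICT[word])
    return phonemes

def build_snippet_library(source_phonemes):
    """Pretend each phoneme seen in the source video yields a reusable
    mouth-movement/audio snippet (here, just a labeled placeholder)."""
    return {p: f"<snippet:{p}>" for p in source_phonemes}

def render_edited_video(edited_transcript, library):
    """Stitch stored snippets together to 'speak' the edited transcript."""
    return [library[p] for p in transcribe_to_phonemes(edited_transcript)]

# Analyze the source footage once, then reuse its snippets for any edit
# whose phonemes were all observed in the source.
library = build_snippet_library(transcribe_to_phonemes("hello world"))
print(render_edited_video("world hello", library))
```

The key property the example captures is the one the article describes: once enough source footage covers the phoneme inventory, any rewritten transcript that draws on those phonemes can be "spoken" without the subject ever saying it.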
Judging from the demonstration video, the new algorithm appears best suited for minor changes. Even so, the researchers are concerned that some people might put it to far more destructive uses.