How AI-generated videos could be the next big thing in fake news



How dangerous AI videos could spread false news

New concerns are emerging about how artificial intelligence videos could spread false news, and even prompt a war.

Forget fake news for a moment.

What could replace it? Artificial intelligence is now able to generate convincing video of a celebrity or public figure. The best-known illicit use is the so-called deepfake, which typically splices a celebrity into an adult movie. A programmer gathers existing video and audio of a well-known figure, then the AI takes over and creates a new version.

However, fake videos showing President Trump speaking at an event, a world leader declaring war, or a politician making false claims may be on the near horizon.


In one recent example, Alec Baldwin does a Trump impersonation on Saturday Night Live; a new version, built with the help of machine learning, then shows the real President Trump delivering the same quips. It is not very convincing, but you can see how the technique could evolve.

Last summer, a team of researchers at the University of Washington showed how AI could produce a life-like digital avatar of President Obama. They used 14 hours of footage to generate new video, adjusting his speech patterns to match new audio.

“It is difficult to assess the national security risk or potential disruption presented by the threat of AI-built fake videos,” says Michael Fauscette, chief research officer at G2 Crowd, a business software company. According to Fauscette, fake videos will most likely be used first for coercion, public embarrassment, and manipulating the voting public.


Andrew Keen, entrepreneur and author of “How to Fix the Future,” says one of the scariest things about AI-generated videos is that we won’t be able to tell the difference. They will look and sound authentic. In the Obama example, the average person would never know it was fake. (At least with many fake news stories, it is easier to sense when sources and facts seem invented.)

Fake videos are also more difficult to monitor, says Darren Campo, an adjunct professor at the NYU Stern School of Business. “We are already at a point where the content of certain videos or streams is controlled entirely by programmers with political agendas,” he says. It will become increasingly difficult even for countermeasure AI routines to spot fake videos.

One shift, says Keen, is that major publishers and social media networks such as Facebook and Twitter will be held responsible for verifying that a video is not fake. Facebook in particular, with its vast financial resources, could deploy an army of AI specialists running algorithms to check a video’s authenticity. For example, when President Trump is shown in a video announcing a state of emergency, AI routines could check the footage against live sources, other instances of the same video on the Internet, and whether it appears on official White House sites.


Keen says there are many thorny legal issues ahead, since aggrieved politicians and celebrities will be hiring video forensics experts to find out who created and hosted AI-generated videos.

Fortunately, the experts agree that even if fake videos become more persuasive, their influence will be limited.

“Big news organizations know that editorial integrity and trust are key to preserving their business in the long term,” says Campo. “Even if news organizations fail, we have no evidence that widespread reporting of false news would incite, say, a nuclear war. Nuclear powers with state-controlled news, such as China and North Korea, have not yet started global disasters. That is proof that it takes more than a news story to trigger a nuclear deployment.”
