This Animated Mona Lisa Was Created by AI, and It's Terrifying

A new form of artificial intelligence can generate a "living portrait" from just one image.
(Egor Zakharov)

The enigmatic, painted smile of the "Mona Lisa" is known around the world, but that famous face recently displayed a startling new range of expressions, courtesy of artificial intelligence (AI).

In a video shared to YouTube on May 21, three video clips show disturbing examples of the Mona Lisa as she moves her lips and turns her head. The clips were created by a convolutional neural network, a type of AI that processes information much as a human brain does, to analyze and process images.

Researchers trained the algorithm to understand facial features' general shapes and how they behave relative to each other, and then to apply that information to still images. The result was a realistic video sequence of new facial expressions, generated from a single frame.

For the Mona Lisa videos, the AI "learned" facial motion from datasets of three human subjects, producing three very different animations. While each of the three clips was still recognizable as the Mona Lisa, variations in the training models' appearance and behavior lent distinct "personalities" to the "living portraits," Egor Zakharov, an engineer with the Skolkovo Institute of Science and Technology and the Samsung AI Center (both located in Moscow), explained in the video.


Zakharov and his colleagues also animated pictures of 20th-century cultural icons such as Albert Einstein, Marilyn Monroe and Salvador Dalí. The researchers described their findings, which have not been peer-reviewed, in a study published online May 20 in the preprint journal arXiv.

Producing original videos like these, known as deepfakes, isn't easy. Human heads are geometrically complex and highly dynamic; 3D models of heads have "tens of millions of parameters," the study authors wrote.

What's more, the human visual system is very good at identifying "even minor mistakes" in 3D-modeled human heads, according to the study. Seeing something that looks almost human, but not quite, triggers a feeling of profound unease known as the uncanny valley effect.

Prior work had demonstrated that AI can produce convincing deepfakes, but it required multiple angles of the desired subject. For the new study, the engineers fed the AI a very large dataset of reference videos showing human faces in action. The scientists identified facial landmarks that would apply to any face, to teach the neural network how faces behave in general.
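The data-preparation step described above can be pictured in a few lines of Python. This is a toy illustration, not the authors' code: the landmark detector is a stub, and every name here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
N_LANDMARKS = 68  # a common facial-landmark convention

def detect_landmarks(frame):
    """Stub for a real landmark detector (which would be a pretrained model):
    returns 68 (x, y) points. Here we fake it with noise around a template."""
    template = np.linspace(0.0, 1.0, N_LANDMARKS * 2).reshape(N_LANDMARKS, 2)
    return template + 0.01 * rng.standard_normal((N_LANDMARKS, 2))

def make_training_pairs(video_frames):
    """Each reference video becomes a list of (landmarks, frame) pairs:
    the landmarks encode *how* the face is posed, the frame shows *what*
    it looks like in that pose."""
    return [(detect_landmarks(f), f) for f in video_frames]

video = [rng.standard_normal((32, 32)) for _ in range(4)]  # 4 toy "frames"
pairs = make_training_pairs(video)
print(len(pairs), pairs[0][0].shape)  # 4 (68, 2)
```

Trained on many such pairs across many people, a network can learn how landmark motion relates to facial appearance in general, independent of any one identity.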

They then trained the AI to use the reference expressions to map the motions of a source face onto a target's features. This allowed the AI to create a deepfake even when it had only a single picture to work with, the researchers reported.
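The overall pipeline, as described, splits into an identity step and a generation step. A minimal sketch, with simple numpy stand-ins where the real system uses deep convolutional networks (all function and variable names here are hypothetical):

```python
import numpy as np

N_LANDMARKS = 68   # facial-landmark count
EMBED_DIM = 16     # size of the toy identity vector

rng = np.random.default_rng(0)
W = rng.standard_normal((EMBED_DIM, N_LANDMARKS * 2))

def embed_identity(source_landmarks):
    """Stand-in for the 'embedder': condenses a source face into a
    fixed-length identity vector."""
    return W @ source_landmarks.ravel()

def generate_frame(identity_vec, driver_landmarks):
    """Stand-in for the 'generator': combines the subject's identity with
    the driver's pose to produce a new output frame (here, just a vector)."""
    return np.concatenate([identity_vec, driver_landmarks.ravel()])

# One still image of the subject (e.g., the Mona Lisa), as 68 (x, y) landmarks
source = rng.standard_normal((N_LANDMARKS, 2))
identity = embed_identity(source)

# Landmarks from each frame of a driving video of a different person
driving_video = [rng.standard_normal((N_LANDMARKS, 2)) for _ in range(3)]

# Animate: one output frame per driving frame, all sharing the same identity
animation = [generate_frame(identity, pose) for pose in driving_video]
print(len(animation), animation[0].shape)
```

The key point the sketch captures is that the identity is computed once from a single source image, while motion comes entirely from the driving video.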

And more source images delivered an even more detailed result in the final animation. Deepfakes made from 32 images, rather than just one, achieved "perfect realism" in a user study, the scientists wrote.
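One intuition for why more source frames help: with several frames of the same person, the per-frame identity estimates can be averaged, smoothing out pose- and lighting-specific noise. A hypothetical numpy sketch (the embedder here is an arbitrary deterministic map, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
EMBED_DIM = 16

def embed(frame):
    # Hypothetical per-frame embedder: any fixed deterministic map works
    # for this illustration.
    return np.tanh(frame[:EMBED_DIM])

def identity_from_frames(frames):
    # With K frames of the same person, estimate identity as the mean
    # of the per-frame embeddings.
    return np.mean([embed(f) for f in frames], axis=0)

one_frame = [rng.standard_normal(64)]
k_frames = [rng.standard_normal(64) for _ in range(32)]

e1 = identity_from_frames(one_frame)   # estimate from a single frame
e32 = identity_from_frames(k_frames)   # averaged over 32 frames
print(e1.shape, e32.shape)             # both (16,)
```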


Originally published on Live Science.
