AI face study reveals shocking new tipping point for humans

Computers have become very, very good at generating photorealistic images of human faces.

What could go wrong?

A study published last week in the academic journal Proceedings of the National Academy of Sciences confirms just how convincing the “faces” produced by artificial intelligence can be.

In this study, more than 300 research participants were asked to determine whether a provided image was a photo of a real person or a fake generated by an AI. The participants were right less than half the time, which is worse than flipping a coin.

The results of this study reveal a tipping point for humans that should be shocking to anyone who thinks they are savvy enough to spot a deepfake when it is placed next to the genuine article.

While the researchers say this engineering feat “should be considered a success for the fields of computer graphics and vision,” they “also encourage those developing these technologies to consider whether the associated risks outweigh their benefits,” citing dangers ranging from disinformation campaigns to the non-consensual creation of synthetic porn.

“[W]e discourage the development of technology simply because it is possible,” they argue.

[Image: the most (top and upper middle) and least (bottom and lower middle) accurately classified real (R) and synthetic (S) faces.]

Neural networks are getting insanely good

The researchers behind this study started with 400 synthetic faces generated by an open-source AI program made by tech giant NVIDIA. The program is a generative adversarial network, meaning it uses a pair of neural networks to create the images.

The “generator” starts by creating a completely random image. The “discriminator”, trained on a large set of real photos, gives feedback on how realistic that image looks. As the two neural networks go back and forth, the generator improves with each round, until the discriminator can no longer tell the real images from the fake ones.
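To make that back-and-forth concrete, here is a minimal toy sketch of a generative adversarial network in PyTorch. It learns to imitate simple two-dimensional points rather than face images, and none of the names or settings come from NVIDIA's actual model; it only illustrates the generator-versus-discriminator training loop described above.

```python
# Toy GAN sketch (illustrative only, not NVIDIA's face model): a generator
# learns to mimic samples from a simple "real" distribution while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for "a large set of real photos": points from a fixed Gaussian.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(2000):
    # 1) Train the discriminator: real samples should score 1, fakes 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: its fakes should fool the discriminator (score 1).
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# When training succeeds, discriminator(fake) hovers near 0.5: the
# discriminator can no longer tell real samples from generated ones.
```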

Turns out humans aren’t any better.

Three experiments show surprising results

For this study, psychologists built a sample of 400 synthetic images created by NVIDIA's AI, balanced by gender, age, and race. It included 200 men and 200 women, with 100 faces in each of four racial categories: Black, White, East Asian, and South Asian. For each of these synthetic faces, the researchers chose a demographically similar image from the discriminator's training data.
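For illustration only, the sketch below shows what such a demographic matching step could look like in code, assuming each face carries simple gender and race labels. The field names and randomly generated records are hypothetical, not the study's actual data or procedure.

```python
# Hypothetical sketch of the matching step: pair each synthetic face with an
# unused real face carrying the same demographic labels. The records here are
# generated at random purely for illustration.
import random

GENDERS = ["man", "woman"]
RACES = ["Black", "White", "East Asian", "South Asian"]

def fake_record(prefix, i):
    return {"id": f"{prefix}{i}",
            "gender": random.choice(GENDERS),
            "race": random.choice(RACES)}

synthetic = [fake_record("s", i) for i in range(400)]
real_pool = [fake_record("r", i) for i in range(10_000)]

pairs, used = [], set()
for face in synthetic:
    # First unused real photo with the same gender and race labels.
    match = next((r for r in real_pool
                  if r["id"] not in used
                  and r["gender"] == face["gender"]
                  and r["race"] == face["race"]), None)
    if match is not None:
        used.add(match["id"])
        pairs.append((face["id"], match["id"]))

print(f"matched {len(pairs)} of {len(synthetic)} synthetic faces")
```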

In the first experiment, over 300 participants each looked at a sample of 128 faces and said whether they thought each one was real or fake. They were correct only 48.2% of the time.

Participants did not have the same difficulty with all the faces they looked at. They did worse at analyzing white faces, likely because the AI training data included far more photos of white people. More data means better renderings.

In the second experiment, a new batch of humans received a little help. Before judging the images, these participants were given a short tutorial with hints on how to spot a computer-generated face. Then they started looking at pictures. After each, they learned if they guessed right or wrong.

Participants in this experiment did slightly better, with an average score of 59.0%. Interestingly, all of the improvement seemed to come from the tutorial rather than from the trial-by-trial feedback: participants actually did slightly worse during the second half of the experiment than during the first half.

In the final experiment, participants were asked to rate the trustworthiness of each of the 128 faces on a scale of one to seven. In a stunning result, the synthetic faces were rated, on average, 7.7% more trustworthy than the real ones.

Taken together, these results lead to the startling conclusion that AI-generated faces are “indistinguishable” from real faces and are judged even “more trustworthy” than them, the researchers say.

The implications could be huge

These results point to a future that holds the potential for strange situations involving recognition and memory, and for a complete flyover of the Uncanny Valley.

They mean that “[a]nyone can create synthetic content without expert knowledge of Photoshop or CGI,” says Sophie Nightingale, a psychologist at Lancaster University and co-author of the study.

The researchers list a number of harmful ways people might use these “deep fakes” that are virtually indistinguishable from real images. The technology, which works the same way for video and audio, could enable extremely compelling disinformation campaigns. Take the current situation in Ukraine, for example. Imagine how quickly a video showing Vladimir Putin — or, for that matter, Joe Biden — declaring war on a longtime adversary would circulate on social platforms. It can be very difficult to convince people that what they have seen with their own eyes is not real.

Another major concern is synthetic pornography that shows a person performing intimate acts that they never actually performed.

The technology also has big implications for real photos.

“Perhaps most pernicious is the implication that, in a digital world in which any image or video can be faked, the authenticity of any embarrassing or intrusive recording can be called into question,” the researchers say.

Summary of the study:
Artificial intelligence (AI)–synthesized text, audio, images, and video are being weaponized for non-consensual intimate imagery, financial fraud, and disinformation campaigns. Our assessment of the photorealism of AI-synthesized faces indicates that the synthesis engines have crossed the uncanny valley and are able to create faces that are indistinguishable from – and more trustworthy than – real faces.
