Human faces created by artificial neural network algorithms have moved beyond mere realism: not only are they indistinguishable from real faces, they are now judged more trustworthy than the real thing.
Researchers from Lancaster University in the UK and the University of California, Berkeley (UC Berkeley) in the US reported in the Proceedings of the National Academy of Sciences (PNAS) that, in experiments pitting faces synthesized with the latest software against real faces, people could not tell the two apart. The computer-generated faces were so plausible that even participants who had been taught how to spot them were fooled.
The AI algorithm the research team used was Nvidia’s ‘StyleGAN2’, a generative adversarial network (GAN). A GAN is a machine learning setup in which two neural networks compete: a generator that mimics real data and a discriminator that tries to tell real from fake, a contest that drives the generator to produce fakes nearly identical to the real thing. Since its introduction in 2014, the technique has spread rapidly and become the signature technology behind the deepfakes now prevalent in pornography and fake news.
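As a rough illustration (this is a toy sketch of the standard GAN objective, not StyleGAN2 itself, and the discriminator scores below are made up), the competition can be expressed as two opposing losses: the discriminator is rewarded for scoring real images high and fakes low, while the generator, in the common non-saturating form, is rewarded for fakes the discriminator scores high.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator wants d_real -> 1 and d_fake -> 0:
    # maximize log(D(x)) + log(1 - D(G(z))), i.e. minimize the negative.
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    # The generator wants its fakes to be scored as real (non-saturating form):
    # minimize -log(D(G(z)))
    return -np.mean(np.log(d_fake))

# Hypothetical discriminator outputs (probability that an input is real):
d_real = np.array([0.9, 0.8])       # real images, correctly scored high
d_fake_bad = np.array([0.1, 0.2])   # fakes the discriminator catches
d_fake_good = np.array([0.8, 0.9])  # fakes that fool the discriminator

# As fakes improve, the generator's loss falls and the discriminator's rises.
print(generator_loss(d_fake_bad) > generator_loss(d_fake_good))            # True
print(discriminator_loss(d_real, d_fake_bad) <
      discriminator_loss(d_real, d_fake_good))                             # True
```

Training alternates gradient steps on these two losses; at equilibrium, the discriminator can do no better than guessing, which is precisely the situation the study's human participants found themselves in.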
The researchers ran several experiments with the software. They first assembled an experimental data set of 400 synthetic faces and 400 real faces; each set contained 100 white, 100 Black, 100 East Asian, and 100 South Asian faces.
The researchers then showed 315 participants, recruited online, 128 photos drawn from the data set, one at a time, and asked whether each was real or fake. The participants’ accuracy was 48.2%, slightly below the 50% expected from a coin toss.
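To see why 48.2% amounts to guessing, a back-of-the-envelope check helps (an illustrative calculation of my own, assuming a single observer judging 128 photos at chance, using a normal approximation to the binomial): the observed rate sits well inside ordinary coin-flip variation.

```python
import math

p_hat, p0, n = 0.482, 0.5, 128       # observed rate, chance rate, photos judged
se = math.sqrt(p0 * (1 - p0) / n)    # standard error of a chance guesser
z = (p_hat - p0) / se                # how many standard errors from chance
print(round(z, 2))                   # -0.41: nowhere near significance
```

A z-score of about -0.41 is far from the conventional 1.96 threshold, which is consistent with the study's conclusion that untrained viewers performed at chance.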
In a second experiment, a different group of 219 participants was trained and given practice at identifying computer-synthesized faces before performing the same task as the first group. This group’s accuracy was somewhat higher, at 59%. However, the researchers found that additional practice did not raise accuracy any further.
When the results were broken down by race, white faces proved the hardest to classify as real or fake. This may be because the software was trained on more white faces, the researchers speculated.
People infer personality traits, such as trustworthiness, from a glance at a face lasting only a fraction of a second. The researchers wondered whether synthetic faces trigger the same trustworthiness judgments; if they do not, perceived trustworthiness could serve as a criterion for telling real faces from fake ones. They therefore ran a third experiment to test whether the trustworthiness a face evokes can be used to spot fakes.
The researchers presented another group of 223 participants with the same set of faces and asked them to rate trustworthiness on a scale of 1 (very untrustworthy) to 7 (very trustworthy). The results were surprising: participants rated the fake faces about 8% more trustworthy than the real ones, with real faces averaging 4.48 and synthetic faces 4.82.
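The “about 8%” figure follows directly from the two reported means; a quick check:

```python
# Reported mean ratings on the 1-7 trustworthiness scale:
real_mean, fake_mean = 4.48, 4.82
pct_gain = (fake_mean - real_mean) / real_mean * 100
print(round(pct_gain, 1))  # 7.6, i.e. roughly the "8% higher" reported
```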
The four faces participants rated least trustworthy were all real, while the faces rated most trustworthy were all fake.
Why? Could it be that the synthetic faces smiled more? No: smiling expressions were actually less common among synthetic faces (58.8%) than real ones (65.5%), so facial expression alone cannot explain the effect.
“People are more likely to trust a typical face, possibly because a synthetic face is closer to the face of an ‘average’ person,” said Sophie Nightingale of Lancaster University, who led the study. Incidentally, female faces received higher trustworthiness scores than male faces.
The researchers wrote that people’s evaluations of AI-synthesized faces show that synthesis engines have passed through the ‘uncanny valley’ and can now create faces more plausible than real ones. The uncanny valley refers to the unease people feel toward a robot that resembles a human but falls subtly short. The reactions observed in this experiment suggest that current deepfake technology has moved well beyond that stage.
The inability to distinguish between real and synthetic images can have serious consequences.
“Such synthetic faces are highly effective for nefarious purposes such as pornography and fraud,” said Nightingale.
To reduce the risk, she suggested that developers watermark their synthesized photos, and emphasized that “if you don’t do that, things will get worse.”
As fake images and videos spread out of control, the authenticity of all records may be questioned.