Tuesday, November 05, 2024 at 07:00

Is it possible to detect AI-generated images of people?

A study carried out by the Ciberimaginario research group has analysed whether the contextual information accompanying an image determines its credibility. The results indicate that, at present, false content cannot be detected by viewing the image alone.

Writing / Irene Vega

In a world where artificial intelligence (AI) is everywhere, there are tools capable of generating credible content that can make it difficult for users to distinguish between what is real and what is AI-generated. This is especially relevant in the case of tools that generate realistic images of people, which has given rise to the concept of the deepfake. Against this background, the Ciberimaginario research group has carried out a study to determine whether the contextual information accompanying an image influences its credibility. “Media literacy has become essential, since it teaches users to distinguish what is real from what is false and alerts them to the possibility that they are facing a deepfake. Since deepfakes are currently undetectable, the distortion of reality in the information sphere becomes a significant problem,” says Alberto Sanchez-Acedo, researcher at Ciberimaginario and co-author of the study, published in the journal Communication & Society.

The results show that the participants, in this case young students from the Community of Madrid, prioritised the image over the other contextual elements when trying to recognise a deepfake. “This reinforces the idea that we are constantly exposed to visual information, with the image as the main protagonist,” adds the researcher.

Real pictures versus fake images

The study's method consisted of dividing the participants equally into two groups: a control group and an experimental group. In both groups, participants were placed in a virtual environment where they viewed recreations of newspaper front pages, each including an image and its corresponding contextual information. The control group viewed real images accompanied by real contextual information, while the experimental group viewed AI-generated images accompanied by real contextual information.
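For readers who want a concrete picture of this kind of design, the sketch below shows one way a balanced split into control and experimental conditions might be implemented. It is an illustrative assumption, not the researchers' actual procedure, and the participant identifiers are hypothetical.

```python
# Illustrative sketch of a balanced random split into a control group
# (real images + real context) and an experimental group (AI images +
# real context). This is an assumption for illustration, not the
# study's actual assignment procedure.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs
random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(participants)

half = len(participants) // 2
control_group = participants[:half]        # see real images, real context
experimental_group = participants[half:]   # see AI images, real context

print("Control:", control_group)
print("Experimental:", experimental_group)
```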

“Next, we carried out a quantitative, descriptive analysis of frequencies and percentages for each of the variables studied, including level of education and method of accessing the information. In addition, we carried out a statistical percentage analysis to measure the importance of the information source, the headline and the image in the recognition process,” explains Alberto Sanchez-Acedo.
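A descriptive frequency-and-percentage analysis of this kind can be sketched in a few lines of Python. The data and variable names below are hypothetical, chosen only to illustrate the technique; they are not taken from the study.

```python
# Minimal sketch of a descriptive frequency/percentage analysis over
# hypothetical survey responses; variable names are illustrative,
# not taken from the study.
import pandas as pd

# Hypothetical responses: education level of each participant and which
# element they prioritised when judging whether a front page was real.
data = pd.DataFrame({
    "education_level": ["secondary", "undergraduate", "undergraduate",
                        "postgraduate", "secondary", "undergraduate"],
    "prioritised_element": ["image", "image", "headline",
                            "image", "source", "image"],
})

# Absolute frequencies and percentages for each variable.
for column in data.columns:
    freq = data[column].value_counts()
    pct = data[column].value_counts(normalize=True) * 100
    summary = pd.DataFrame({"frequency": freq, "percentage": pct.round(1)})
    print(f"\n{column}:\n{summary}")
```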

After analysing the results, the authors of the study emphasise that, at present, it is impossible to detect a deepfake simply by viewing the image. “We encourage other researchers to replicate this study with larger samples and different population groups in order to compare the patterns and trends resulting from this research,” concludes the researcher.

Alberto Sanchez-Acedo, Alejandro Carbonell-Alcocer, Manuel Gertrudix and Jose Luis Rubio-Tamayo, all professors at the URJC and members of the Ciberimaginario research group, took part in this work. In addition, the study offers a series of recommendations for preventing deepfakes in the virtual era, which can be consulted at this link.