Singapore – A recent study from biometric verification firm iProov revealed that the majority of individuals fail to distinguish deepfakes, including AI-generated images and videos that convincingly replicate real people. This points to a concerning gap: in everyday settings, where people are less alert to manipulation, vulnerability to deepfakes is likely even higher.
Findings showed that only 0.1% of respondents correctly identified all of the deepfake and real stimuli (images and videos) presented, even though participants were primed to look for deepfakes. The study surveyed 2,000 UK and US consumers, exposing them to a mix of real and deepfake content.
The research also found that 30% of individuals aged 55-64 and 39% of those 65 and older had never heard of deepfakes, underscoring a considerable lack of familiarity, and with it an increased risk of deepfake manipulation, among older age groups.
Deepfake videos also proved more challenging to identify than deepfake images, with participants 36% less likely to spot them correctly. This vulnerability poses a major risk for fraud, especially in video calls and identity verification processes.
While concern about deepfakes is rising, many remain unaware of the technology. In fact, around 22% of consumers had never even heard of deepfakes before the study.
Furthermore, more than 60% of respondents were confident in their ability to detect deepfakes regardless of their actual success rate, with overconfidence most pronounced among young adults (18-34). This false sense of security makes them all the more susceptible to deception.
Interestingly, social media platforms are viewed as breeding grounds for deepfakes, with Meta (49%) and TikTok (47%) cited as the places where deepfakes are most commonly encountered online.
This, in turn, has eroded trust in online information and media: 49% say they trust social media less after learning about deepfakes, yet just one in five would report a suspected deepfake to the platform.
Approximately 74% of respondents also expressed concern about the societal impact of deepfakes, particularly the rise of “fake news” and misinformation (68%). This fear is most prevalent among older generations, with 82% of those aged 55+ worried about the spread of misleading information.
Meanwhile, the study also pointed to a need for better awareness and easier reporting channels: only 29% of people take action upon encountering a suspected deepfake, 48% are unsure how to report one, and a quarter are not concerned when they spot one.
With most consumers failing to verify information online, their susceptibility to deepfakes grows. Despite the rising tide of misinformation, only one in four people seek alternative sources when they suspect a deepfake, and just 11% critically examine the source and context, leaving the majority highly vulnerable to deception.
Speaking about the report, Andrew Bud, founder and CEO at iProov, said, “Just 0.1% of people could accurately identify the deepfakes, underlining how vulnerable both organisations and consumers are to the threat of identity fraud in the age of deepfakes. And even when people do suspect a deepfake, our research tells us that the vast majority of people take no action at all.”
“Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting our personal information and financial security at risk. It’s down to technology companies to protect their customers by implementing robust security measures. Using facial biometrics with liveness provides a trustworthy authentication factor and prioritises both security and individual control, ensuring that organisations and users can keep pace and remain protected from these evolving threats,” he further explained.
Professor Edgar Whitley, a digital identity expert at the London School of Economics and Political Science, also said, “Security experts have been warning of the threats posed by deepfakes for individuals and organisations alike for some time. This study shows that organisations can no longer rely on human judgement to spot deepfakes and must look to alternative means of authenticating the users of their systems and services.”