Advanced Television

Ofcom advises how to identify deepfakes

July 11, 2025

A new Ofcom discussion paper explores how different tools and techniques could be used to identify deepfakes.

“Deepfakes are AI-generated videos, images and audio content that are deliberately created to look real. They pose a significant threat to online safety, and we have seen them being used for financial scams, to depict people in non-consensual sexual imagery and to spread disinformation about politicians,” explained the media regulator.

In July 2024, Ofcom published its first Deepfake Defences paper, and this latest follow-up dives deeper into the merits of four ‘attribution measures’: watermarking, provenance metadata, AI labels and context annotations. These four measures are designed to provide information about how AI-generated content has been created and, in some cases, can indicate whether the content is accurate or misleading.
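To make the idea of provenance metadata concrete, the sketch below attaches a simple JSON provenance record to a PNG file using Python's Pillow library. The "ai_provenance" key and the record's fields are illustrative assumptions, not part of any standard; real provenance schemes such as C2PA use cryptographically signed manifests rather than plain text chunks.

import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative provenance record; field names are assumptions, not a standard.
record = {
    "generator": "example-image-model",   # hypothetical tool name
    "created": "2025-07-11T00:00:00Z",
    "ai_generated": True,
}

# Store the record in a PNG text chunk alongside the image data.
info = PngInfo()
info.add_text("ai_provenance", json.dumps(record))
Image.new("RGB", (64, 64)).save("labelled.png", pnginfo=info)

# A platform could read the record back before deciding how to label the post.
with Image.open("labelled.png") as reloaded:
    print(json.loads(reloaded.text["ai_provenance"]))

Because nothing in this sketch is signed, the record disappears if the image is simply re-saved without the text chunk, a fragility the paper's findings below return to.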

This comes as Ofcom research reveals that 85 per cent of adults support online platforms attaching AI labels to content, although only one in three (34 per cent) have ever seen one.

Strengths and weaknesses of attribution measures

Drawing on its new user research, interviews with experts, a literature review, and three technical evaluations of open-source watermarking tools, Ofcom’s latest discussion paper assesses the merits and limitations of these measures for identifying deepfakes.

The analysis reveals eight key takeaways which should guide industry, government and researchers:

  1. Evidence shows that attribution measures, when deployed with care and proper testing, can help users engage with content more critically.
  2. Users should not be left to identify deepfakes on their own, and platforms should avoid placing the full burden on individuals to detect misleading content.
  3. Striking the right balance between simplicity and detail is crucial when communicating information about AI to users.
  4. Attribution measures need to accommodate content that is neither wholly real nor entirely synthetic, communicating how AI has been used to create content and not just whether it has been used.
  5. Attribution measures can be susceptible to removal and manipulation. Ofcom’s technical tests show that watermarks can often be stripped from content by basic edits (see the sketch after this list).
  6. Greater standardisation across individual attribution measures could boost the efficacy and take-up of these measures.
  7. The pace of change means it would be unwise to make sweeping claims about attribution measures.
  8. Attribution measures should be used in combination with other interventions, such as AI classifiers and reporting mechanisms, to tackle the greatest range of deepfakes.
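As a concrete illustration of the fragility flagged in point 5, the sketch below implements a deliberately naive least-significant-bit watermark and shows that a single basic edit, here a resize, pushes recovery towards chance level. This is a toy scheme written for illustration only; it does not reproduce Ofcom's technical tests or the open-source tools it evaluated.

from PIL import Image
import numpy as np

def embed_bits(img: Image.Image, bits: np.ndarray) -> Image.Image:
    """Hide a bit pattern in the least significant bit of each pixel."""
    pixels = np.asarray(img.convert("L"), dtype=np.uint8)
    flat = pixels.flatten()                      # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return Image.fromarray(flat.reshape(pixels.shape))

def extract_bits(img: Image.Image, n: int) -> np.ndarray:
    """Read back the first n least significant bits."""
    return np.asarray(img.convert("L"), dtype=np.uint8).flatten()[:n] & 1

rng = np.random.default_rng(0)
watermark = rng.integers(0, 2, 256, dtype=np.uint8)
marked = embed_bits(Image.new("L", (64, 64), color=128), watermark)

# The watermark survives an unedited round trip...
assert np.array_equal(extract_bits(marked, 256), watermark)

# ...but a basic edit (shrink, then enlarge) scrambles the hidden bits.
edited = marked.resize((32, 32)).resize((64, 64))
survival = np.mean(extract_bits(edited, 256) == watermark)
print(f"bits recovered after resize: {survival:.0%}")  # typically near chance

Robust watermarking schemes try to survive such edits by spreading the signal across the image rather than individual pixel bits, but as the paper notes, resilience to determined removal remains an open problem.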

Categories: Articles, Content
