Ofcom investigates X over Grok sexualised imagery
January 12, 2026
By Colin Mann
Ofcom, the UK’s independent online safety watchdog, has opened a formal investigation into X under the Online Safety Act, to determine whether the platform has complied with its duties to protect people in the UK from content that is illegal in the UK.
Ofcom notes that there have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material.
Ofcom urgently made contact with X on Monday January 5th and set a firm deadline of Friday January 9th for the company to explain what steps it had taken to comply with its duties to protect its users in the UK.
The company responded by the deadline, and Ofcom then carried out an expedited assessment of the available evidence.
Ofcom has decided to open a formal investigation to establish whether X has failed to comply with its legal obligations under the Online Safety Act – in particular, to:
• assess the risk of people in the UK seeing content that is illegal in the UK, and to carry out an updated risk assessment before making any significant changes to its service;
• take appropriate steps to prevent people in the UK from seeing ‘priority’ illegal content – including non-consensual intimate images and CSAM;
• take down illegal content swiftly once aware of it;
• have regard to protecting users from a breach of privacy laws;
• assess the risk its service poses to UK children, and to carry out an updated risk assessment before making any significant changes to its service; and
• use highly effective age assurance to protect UK children from seeing pornography.
The legal responsibility is on platforms to decide whether content breaks UK laws, and they can use Ofcom’s Illegal Content Judgements Guidance when making these decisions. Ofcom is not a censor – it does not tell platforms which specific posts or accounts to take down.
Ofcom says its job is to judge whether sites and apps have taken appropriate steps to protect people in the UK from content that is illegal in the UK, and to protect UK children from other content that is harmful to them, such as pornography.
The Online Safety Act sets out the process Ofcom must follow when investigating a company and deciding whether it has failed to comply with its legal obligations.
Ofcom’s first step is to gather and analyse evidence to determine whether a breach has occurred. If, based on that evidence, it considers that a compliance failure has taken place, it will issue a provisional decision to the company, which will then have an opportunity to respond to Ofcom’s findings in full, as required by the Act, before Ofcom makes its final decision.
If the investigation finds that a company has broken the law, Ofcom can require it to take specific steps to come into compliance or to remedy the harm caused by the breach. It can also impose fines of up to £18 million (€20.7m) or 10 per cent of qualifying worldwide revenue, whichever is greater.
In the most serious cases of ongoing non-compliance, Ofcom can apply to a court for ‘business disruption measures’. Under these, a court could impose an order, on an interim or full basis, requiring payment providers or advertisers to withdraw their services from a platform, or requiring internet service providers to block access to a site in the UK. The court may only impose such orders where appropriate and proportionate to prevent significant harm to individuals in the UK.
In any industry, companies that want to provide a service to people in the UK must comply with UK laws. The UK’s Online Safety Act is concerned with protecting people in the UK. It does not require platforms to restrict what people in other countries can see.
There are ways platforms can protect people in the UK while still allowing their users elsewhere in the world to see that content.
An Ofcom spokesperson said: “Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning. Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children.
“We’ll progress this investigation as a matter of the highest priority, while ensuring we follow due process. As the UK’s independent online safety enforcement agency, it’s important we make sure our investigations are legally robust and fairly decided.”
Ofcom will provide an update on this investigation as soon as possible.
