UK Sets 48-Hour Deadline for Removal of Abusive Images
Britain moves to fine tech platforms up to 10 per cent of global revenue over failure to remove nonconsensual intimate images within 48 hours.
The UK government has unveiled stricter online safety measures requiring major technology platforms to remove nonconsensual intimate images within two days of being reported.
The move is part of a broader effort to combat digital abuse and strengthen protections for women and girls, as lawmakers expand enforcement powers under existing online safety legislation.
UK Introduces 48-Hour Legal Takedown Duty
Reuters reports that the UK will revise legislation, currently moving through parliament, to establish a compulsory 48-hour takedown requirement for major platforms hosting illegal intimate content.
Companies that fail to comply could face penalties of up to 10% of their qualifying worldwide revenue, a regulatory standard used by Ofcom. In extreme cases, services could be blocked in Britain.
While sharing nonconsensual intimate images is already illegal, victims have reported difficulties securing permanent removal. Under the new framework, individuals will only need to report material once. Platforms must then remove identical content across services and prevent reuploads.
Ofcom is also fast-tracking a decision on whether to mandate hash-matching tools to detect and block such content at source, with a ruling expected in May and implementation possible this summer.
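The report-once workflow described above rests on content fingerprinting: once a victim reports an image, its fingerprint joins a blocklist that upload pipelines check before accepting new files. As a rough illustration only (production systems such as PhotoDNA or Meta's PDQ use perceptual hashes that survive re-encoding and cropping, and the class and method names below are hypothetical), the basic blocklist mechanism can be sketched with exact SHA-256 matching:

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the content fingerprint."""
    return hashlib.sha256(data).hexdigest()


class UploadFilter:
    """Sketch of a hash-matching filter: blocks uploads whose
    fingerprint matches previously reported content."""

    def __init__(self) -> None:
        self.blocklist: set[str] = set()

    def report(self, data: bytes) -> None:
        # A single report adds the fingerprint to the blocklist;
        # no further reports are needed for identical copies.
        self.blocklist.add(fingerprint(data))

    def allow_upload(self, data: bytes) -> bool:
        # Re-uploads of identical content are rejected at source.
        return fingerprint(data) not in self.blocklist
```

Exact hashing, as shown here, is defeated by trivially altering a file; that limitation is why perceptual hashing, which matches visually similar images rather than identical bytes, is the technology Ofcom is weighing.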
Prime Minister Keir Starmer described the online environment as a central battleground in addressing violence against women and girls.
Broader Online Safety Strategy Expands to AI Platforms
According to the Financial Times, the proposed reforms do more than set faster takedown deadlines. The changes, made through updates to the Crime and Policing Bill, would make sharing nonconsensual intimate images a priority offence under the UK’s Online Safety Act. This means it would be treated as seriously as crimes like child exploitation or terrorism-related content.
The government has also warned the makers of major AI chatbots, including Google's Gemini, OpenAI's ChatGPT, and xAI's Grok, that their platforms must follow the same rules. The warning comes after concerns about AI-generated images of minors and signals an effort to bring generative AI services under full regulatory control.
Ministers are simultaneously consulting on whether to restrict social media access for under-16s, echoing recent policy action in Australia and similar discussions across several European nations.
Why This Matters
The policy marks a move from reactive moderation to strict compliance deadlines with heavy financial penalties. By linking fines to global revenue instead of just UK earnings, regulators gain more control over multinational platforms. The rule to stop reuploads also tackles a common problem: harmful content often comes back after being removed.
Bringing AI chatbots into scope reflects growing concern over synthetic image generation tools that can create explicit content without consent. Together, the measures indicate a broader regulatory pivot toward platform accountability in the digital abuse landscape.
Enforcement Outlook
The UK’s proposed 48-hour rule strengthens enforcement under the Online Safety Act while expanding oversight to emerging AI services. With substantial financial penalties and potential service restrictions at stake, technology companies operating in Britain face clearer compliance obligations.
As Ofcom prepares further decisions on detection technology, the coming months will determine how aggressively the new framework is implemented.
Source: Tech Firms Will Have To Take Down Abusive Images in 48 Hours