LONDON (Web Desk) -
Elon Musk-owned social media platform X has agreed to new measures aimed at improving the handling of illegal hate speech and terrorist-related content in the United Kingdom, following sustained pressure from British regulators. The agreement comes after months of scrutiny from Ofcom, the UK’s communications regulator, over how effectively the platform moderates harmful material.
Under the updated commitments, X will be required to speed up its review process for potentially illegal content. On average, the platform is expected to assess suspected hate speech and terrorism-linked posts within 24 hours. In addition, at least 85% of flagged content must be reviewed within 48 hours, according to Ofcom’s announcement. The regulator said these steps are intended to ensure quicker action against content that violates UK laws.
X, which has repeatedly stated that it enforces strict policies against extremist organisations and abusive material, did not immediately issue a public response to the latest agreement. However, the platform has previously defended its moderation systems, arguing that it actively removes content linked to terrorism and hate-based activity.
As part of the new arrangement, the company has also agreed to restrict access in the UK to accounts associated with groups that are officially banned under British terrorism legislation. In addition, X will be required to provide quarterly transparency reports to Ofcom over the next year, detailing its enforcement performance and moderation outcomes.
Regulators have also encouraged the platform to collaborate with external experts to improve its content reporting and review systems. This follows concerns raised by civil society organisations, which have argued that harmful content is not always acted upon quickly or consistently.
Ofcom officials have stressed the importance of stronger enforcement, particularly in light of recent hate-related incidents in the UK. The regulator highlighted that terrorist content and illegal hate speech continue to appear across major social media platforms, raising public safety concerns. Officials pointed to a series of recent crimes targeting the Jewish community, which have intensified calls for tighter online controls.
Campaign groups have welcomed the move but say more action is needed. Organisations such as the Center for Countering Digital Hate said the commitments were the result of long-term pressure following previous violent incidents. Meanwhile, advocacy groups like the Antisemitism Policy Trust described the changes as a positive step, while also warning that significant gaps in enforcement still remain.
The developments in the UK come as X faces growing international scrutiny. Authorities in the European Union, Australia, and Singapore have also raised concerns about illegal content circulating on the platform. The European Commission, in particular, has launched formal investigations into whether X is failing to adequately curb hate speech.
Separately, questions have also been raised about X’s artificial intelligence tools, including the Grok chatbot, which has previously been reported to generate inappropriate content in certain cases. Ofcom has confirmed that its broader investigation into X’s moderation systems and AI-related issues is still ongoing.