Telegram has announced that it will now provide user IP addresses and phone numbers to authorities when presented with search warrants or other valid legal requests.

CEO Pavel Durov stated that this adjustment to the platform’s terms of service and privacy policy aims to “discourage criminals.”
Durov noted that while 99.999% of Telegram users are not involved in criminal activity, the small minority that is engages in illicit behavior that tarnishes the platform’s reputation and endangers the interests of its nearly one billion users.
This marks a notable change for Durov, the platform’s co-founder, who was detained by French authorities last month at an airport near Paris.
He faces charges of facilitating criminal activity on Telegram, including allegations related to child abuse images and drug trafficking, as well as failing to comply with law enforcement requests.
Durov has denied the allegations and criticized authorities for holding him accountable for crimes committed by users on the platform.
Critics have argued that Telegram has become a haven for misinformation, child pornography, and extremist content, partly due to its capacity for large group sizes—up to 200,000 members—compared to WhatsApp’s 1,000-member limit.
The platform faced scrutiny last month for hosting far-right channels that incited violence in cities in England, and earlier this week, Ukraine prohibited the app on state-issued devices to mitigate threats from Russia.
Durov’s arrest has ignited discussions about the future of free speech protections online. Concerns have arisen regarding whether Telegram remains a safe space for political dissidents, particularly in repressive regimes.
Cybersecurity experts have pointed out that while Telegram has removed some problematic groups, it has weaker moderation practices compared to other social media platforms.
Before this policy change, Telegram only provided information on individuals linked to terrorism.
Durov announced that the platform is now using a dedicated team of moderators and artificial intelligence to hide harmful content from search results.
However, experts warn that this may not meet the legal obligations under French or European law, which require proactive measures against illegal content.
Daphne Keller from Stanford University’s Center for Internet and Society expressed doubts about whether the new policy would adequately satisfy authorities looking for information about investigation targets, including their communications and the content of those messages.