Big Tech Joins Forces to Make the Internet Safer for Kids
February 11, 2025
The internet can be a wild place, especially for kids. That's why
Google, OpenAI, Roblox, and Discord have teamed up to launch a new non-profit initiative called ROOST (Robust Open Online Safety Tools). Announced this week, ROOST aims to create free, open-source AI tools to help companies detect, review, and report child sexual abuse material (CSAM) more effectively.

The project is a response to the evolving challenges of online child safety, especially in the age of generative AI. Former Google CEO Eric Schmidt, a founding partner of ROOST, highlighted the urgency of the initiative, stating that there's a "critical need to accelerate innovation in online child safety." ROOST is looking to unify and enhance existing safety tools, making them more accessible for platforms that may lack the resources to develop their own robust moderation systems.
The timing of this move isn't random. It comes amid increased regulatory pressure on tech companies to do more to protect young users. Lawmakers worldwide have been cracking down on platforms over child safety concerns, and companies are scrambling to show they can handle the issue without strict government intervention. According to the National Center for Missing and Exploited Children (NCMEC), reports of suspected child exploitation increased by 12% between 2022 and 2023—a statistic that underscores the scale of the problem.
Roblox and Discord, two of the founding members of ROOST, have faced their fair share of criticism regarding child safety. Roblox, a massively popular platform among kids, has struggled with preventing child exploitation and exposure to inappropriate content. It was even named in a 2022 lawsuit, along with Discord, over failing to curb adult interactions with minors. By joining ROOST, these companies are signaling a commitment to improving their safety measures—though the effectiveness of their efforts remains to be seen.
While details on the exact functionality of ROOST's AI-powered moderation tools are still vague, the initiative is set to make existing detection and reporting technologies more user-friendly and widely available. Roblox's vice president of engineering, trust, and safety, Naren Koneru, suggested that companies might be able to integrate ROOST's AI moderation tools through API calls. This could mean faster and more efficient content filtering across platforms, though how these tools will interact with existing solutions like Microsoft's PhotoDNA remains unclear.
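Since ROOST has not published an API specification, any integration details are speculative. Still, hash-based moderation services generally work by having platforms submit content fingerprints rather than raw media, so a small platform's client code might look roughly like the sketch below. The endpoint URL, payload schema, and `build_scan_request` helper are all hypothetical illustrations, not anything ROOST has announced:

```python
import hashlib
import json

# Placeholder URL -- ROOST's real endpoint (if any) is not public.
MODERATION_ENDPOINT = "https://api.example-roost.org/v1/scan"

def build_scan_request(content_bytes: bytes, platform_id: str) -> dict:
    """Build a moderation request that carries a content hash,
    not the content itself -- the typical privacy-preserving pattern
    used by hash-matching systems like PhotoDNA."""
    digest = hashlib.sha256(content_bytes).hexdigest()
    return {
        "endpoint": MODERATION_ENDPOINT,
        "payload": {
            "platform_id": platform_id,
            "content_sha256": digest,
        },
    }

# A platform would POST this payload on each upload and act on the verdict.
request = build_scan_request(b"example upload", "smallsite-001")
print(json.dumps(request["payload"], indent=2))
```

The appeal of this pattern for smaller platforms is that the heavy lifting (model training, hash databases, reviewer tooling) stays on the service side; the integrating platform only needs to fingerprint uploads and handle responses.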
Alongside the launch of ROOST, Discord is rolling out a new feature called "Ignore," which lets users mute messages and notifications without alerting the sender. This small but meaningful change aligns with the platform's broader goal of making the internet a safer space, particularly for young users.
ROOST has already secured more than $27 million in funding for its first four years, backed by organizations such as the McGovern Foundation, Knight Foundation, and the AI Collaborative. The initiative also brings in experts across child safety, AI, open-source tech, and even countering violent extremism to ensure a well-rounded approach to online protection.
While it's too early to say how impactful ROOST will be, it's a step in the right direction for online safety. If executed well, it could provide much-needed tools for smaller platforms and help create a more secure digital environment for kids.