TeenAegis Releases Pioneering AI Harm Index
In an effort to create a safer online environment for children, TeenAegis, a platform dedicated to protecting youths in the digital age, has launched the AI Harm Index. This groundbreaking tool is the first publicly available risk assessment specific to AI platforms, detailing their potential dangers based on credible evidence and research.
Understanding the AI Harm Index
The AI Harm Index scores ten prominent AI platforms based on various forms of documented harm, including the generation of child sexual abuse material (CSAM), facilitation of grooming, suicidal ideation risk, and failures in age verification systems. The evaluations are not arbitrary; they are informed by data from multiple reputable sources, including the National Center for Missing and Exploited Children (NCMEC), Federal Trade Commission (FTC) enforcement actions, court records, and independent research on safety.
The ranking system provides a clear overview of how each platform measures up in terms of safety for children and teens, making it an essential resource for parents, educators, and policymakers concerned about the digital safety of minors.
The Results: Who's Leading the Index?
When examining the scores released by TeenAegis, the findings are unsettling.
Character.AI has been identified as the most concerning platform, earning a notably high score of 8.2 and falling into the Critical category. Alarmingly, the platform was recently involved in a tragic incident in which a 14-year-old boy died by suicide after becoming excessively attached to a Character.AI chatbot. In response to the fallout from this case, Google and Character.AI settled a lawsuit related to the matter earlier this year.
Both xAI's Grok and DeepSeek also found themselves in the Critical category with scores of 7.8. Grok is currently embroiled in an active class action lawsuit over CSAM content, underscoring the heightened scrutiny these platforms face.
Recognizing Improvements in AI Safety
On the other end of the spectrum, OpenAI's ChatGPT garnered a score of 3.2, placing it in the Elevated risk category while also claiming the title of Most Improved. This distinction recognizes the platform's proactive risk management at a time when reports of abuse associated with generative AI are steadily rising, reflecting increased adoption and expanded detection efforts across the industry.
Siobhan MacDermott, CEO of TeenAegis, articulates the complexity of OpenAI's operational landscape, noting that the company manages a diverse risk profile spanning not only text but also image and video content through a global API layer. MacDermott insists that as much as it's imperative to hold platforms accountable for misconduct, it's equally important to highlight the progress being made in the industry.
Similarly, Claude by Anthropic recorded a risk score of 3.5, reflecting a commendable safety record: no confirmed tragic incidents involving children, no FTC actions, and a comprehensive report on improving child safety.
Conclusion
The AI Harm Index is a crucial step toward fostering accountability in the tech industry, providing a transparent framework that rates the safety of AI platforms for children. Everyone involved, including parents, educators, and lawmakers, has a stake in this initiative to ensure that appropriate action is taken to enhance online safety standards.
The complete AI Harm Index findings are accessible online on the TeenAegis website, offering a valuable tool for anyone concerned with the digital wellbeing of minors. TeenAegis strives to be the gold standard for digital childhood safety and is committed to empowering society with the knowledge to safeguard the younger generation.