Resolver Launches Innovative Service to Combat Child Sexual Abuse Material Online
In a significant move towards enhancing online safety, Resolver, a Kroll business recognized globally for its expertise in trust and safety intelligence, has unveiled its new Unknown CSAM Detection Service. The service uses the Roke Vigil AI CAID Classifier to identify and classify previously unseen and AI-generated child sexual abuse material (CSAM). The launch responds to rising concern about children’s safety online and the growing need for proactive detection measures.
According to the Internet Watch Foundation’s most recent Annual Data Insights Report, the volume of CSAM reports has reached alarming levels, and the emergence of AI-generated content poses an unprecedented challenge for technology platforms already struggling with existing CSAM. In light of these findings, the UK’s online safety regulator, Ofcom, has recommended implementing proactive technology, emphasizing that platforms can no longer rely solely on matching against previously identified material.
Resolver’s solution stands out because it is built on a machine learning model rigorously trained on the UK Government’s Child Abuse Image Database (CAID). The Roke Vigil AI CAID Classifier was developed under stringent Home Office oversight and can distinguish between known, modified, and entirely new CSAM images. What makes the classifier particularly groundbreaking is its ability to categorize detected material by severity, allowing incoming reports to be prioritized by risk.
Until now, the classifier was available only to law enforcement agencies, primarily for use in criminal investigations. Resolver is making the technology available to a wider audience, enabling a shift towards more proactive protection in the digital realm. By automating the detection of unknown CSAM, the service allows high-risk content to be routed to the appropriate teams for rapid assessment.
These specialized teams, equipped with appropriate training and emotional support, can promptly take safeguarding action, better protecting users while reducing the exposure of operational personnel to traumatic content. Resolver’s approach marks a departure from traditional hash-matching methods, which can only identify previously encountered CSAM. Combining detection with severity categorization allows platforms to respond faster and manage harmful content on their services more accurately.
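To illustrate the distinction, consider the following minimal sketch (Python; every name here is hypothetical and not Resolver’s or Roke’s API). Hash matching can only flag content whose digest already appears in a database of verified material, whereas a trained classifier can score any image, including material never seen before:

```python
import hashlib
from typing import Callable

# Hypothetical database of digests of previously verified material.
# Production systems typically use perceptual hashes rather than
# cryptographic ones, but the limitation is the same: only content
# already in the database can be matched.
KNOWN_HASHES: set[str] = set()

def detect_by_hash(image_bytes: bytes) -> bool:
    """Hash matching: flags an image only if its digest was seen before.
    Novel or AI-generated material always produces an unknown digest."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def detect_by_classifier(image_bytes: bytes,
                         score_fn: Callable[[bytes], float],
                         threshold: float = 0.9) -> bool:
    """Classifier-based detection: a trained model assigns a score to
    any image, so previously unseen content can still be flagged."""
    return score_fn(image_bytes) >= threshold
```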
The classifier’s precision is partly attributed to the quality of its training data, the CAID dataset, which is managed under strict controls given the sensitivity of the imagery. Every image is verified by three experienced officers to ensure it meets evidential standards. This thorough approach enables the classifier to detect CSAM effectively while also indicating the severity of abuse, categorized as class A, B, C, or indicative CSAM.
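A hypothetical sketch of how severity-aware output might be used to triage incoming reports; the Severity, Detection, and triage names below are illustrative assumptions, not the classifier’s actual interface:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    """UK severity categories, A being the most severe."""
    A = "Category A"
    B = "Category B"
    C = "Category C"
    INDICATIVE = "Indicative"

@dataclass
class Detection:
    item_id: str
    score: float        # model confidence that the item is CSAM
    severity: Severity  # predicted severity category

# Lower number = higher priority in the review queue.
PRIORITY = {Severity.A: 0, Severity.B: 1, Severity.C: 2, Severity.INDICATIVE: 3}

def triage(detections: list[Detection]) -> list[Detection]:
    """Order detections so the most severe, highest-confidence items
    reach the safeguarding team first."""
    return sorted(detections, key=lambda d: (PRIORITY[d.severity], -d.score))

queue = triage([Detection("img-1", 0.97, Severity.C),
                Detection("img-2", 0.88, Severity.A)])
# img-2 (Category A) is reviewed first despite its lower score.
```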
As the landscape of child safety continues to evolve with emerging threats, adopting such advanced technological measures is imperative. George Vlasto, the Head of the Trust and Safety Division at Resolver, describes the Roke Vigil AI CAID Classifier as a transformative tool for scalable, automated CSAM detection. The service launches as a comprehensive cloud-based solution, providing platforms worldwide with cloud hosting, integration flexibility, and robust operational assistance.
Resolver’s president, Kam Rawal, noted that the launch reflects a steadfast commitment to child safety and to supporting Trust and Safety teams. The company aims to set a new industry standard in CSAM detection and contribute to a safer online environment for children globally. As we strive towards a more secure digital space, such innovations are essential to mitigating risk and safeguarding vulnerable users on online platforms.