WDTA Establishes New Global Safety Standards for AI Agent Testing
WDTA's New Role in AI Agent Safety Testing
On July 11, 2025, the World Digital Technology Academy (WDTA) made waves at the United Nations Office at Geneva by unveiling its latest standards for the runtime security testing of single AI agents. The announcement came during a session titled the "Global Consultation on the Social Aspects of Digital Technologies and AI," co-organized by the United Nations Research Institute for Social Development (UNRISD).
The standards were unveiled in the presence of notable figures from the tech and policy sectors, including Peter Major, vice-chair of the UN Commission on Science and Technology for Development and honorary chairman of WDTA. Major emphasized the importance of pairing effective data governance with ethical considerations to promote sustainable global development. He argued that as digital technologies evolve, urgent measures are needed to establish robust legal frameworks and collaborative approaches, ensuring that technological advances remain equitable and beneficial to society.
WDTA's Executive Chairman, Yale Li, highlighted the rapid rise of AI agents in 2025 across multiple sectors, including content generation, knowledge acquisition, and workflow automation. He noted, however, that this expanding deployment has raised security concerns, making stronger safety measures necessary. The newly introduced standards aim to provide a robust framework that acts as a "safety belt" for the fast-growing AI agent ecosystem.
The initiative is part of WDTA's comprehensive AI STR (Safety, Trust, Responsibility) certification suite, which has previously included standards for generative AI application security and large language model security testing. The organization seeks to proactively address risks presented by AI in vital domains such as autonomous vehicles, healthcare, manufacturing, and finance.
"The Kolingridge dilemma illustrates how governance becomes more challenging as new technologies are integrated into society," Li remarked. He insisted on the importance of establishing clear testing and certification protocols ahead of these thresholds, ensuring ethics and accountability at every stage of AI development and application.
A globally diverse task force, with members from Asia, Europe, and North America, helped shape the AI STR standards. The standards emphasize risk assessment and also cover end-to-end lifecycle management, from data governance to model implementation and automated testing tools.
Pilot certifications under the new standards are currently under way in the financial services and healthcare sectors, with plans to expand implementation to the Asia-Pacific region in the near future. WDTA sees this as an essential step toward advancing secure and ethical AI technologies on a global scale, in line with the UN's Global Digital Compact initiative.
As AI permeates more areas of life and enterprise, the establishment of such pioneering safety standards is crucial. They aim to strike the delicate balance between innovation and safety, ensuring that AI deployments remain consistent with ethical and societal responsibilities.
WDTA's proactive approach marks a significant step forward in managing the rapid proliferation of AI technologies. As organizations worldwide adopt these security testing standards, AI systems can be better fortified against potential risks, paving the way for a safer, more responsible technological environment. This landmark initiative by WDTA is not just about compliance; it is about shaping the future of AI for the benefit of society as a whole.