The 2026 International AI Safety Report Highlights Rapid Advancements and Emerging Risks
The recently published 2026 International AI Safety Report, produced under the chairmanship of Turing Award winner Yoshua Bengio, presents a comprehensive assessment of rapid advancements in artificial intelligence (AI) and the emerging risks that accompany them. Developed collaboratively by over 100 international experts, the report aims to support informed decision-making on AI safety worldwide.
The report's central finding is that general-purpose AI capabilities have improved dramatically. AI systems now perform strongly on complex tasks in mathematics, programming, and autonomous operation. Notably, leading AI models achieved gold-medal scores at the 2025 International Mathematical Olympiad and have consistently outperformed doctorate-level experts on various scientific benchmarks. These systems can even complete certain software engineering tasks autonomously that would typically take human programmers hours to finish. Nonetheless, the report cautions against complacency: the same systems still occasionally falter on seemingly simple assignments, indicating room for further improvement.
AI technology has been adopted globally at a pace outstripping earlier innovations such as the personal computer. Around 700 million users worldwide now engage with leading AI systems weekly, and some nations report that more than half their citizens use AI regularly. Adoption remains uneven, however: many regions in Africa, Asia, and Latin America report adoption rates below 10%, raising concerns about a growing digital divide.
The report also documents a rise in incidents involving deepfakes. These AI-generated fabrications are increasingly exploited for fraud and scams, while non-consensual intimate imagery disproportionately harms women and girls. Research cited in the report indicates that 95% of popular applications built to simulate nudity focus on depicting female nudity, exacerbating privacy violations and ethical concerns.
Concerns about potential misuse in biological weapons development have led to stricter safety protocols for prominent AI models. After evaluations revealed that their systems could pose such risks, several tech companies introduced enhanced safety measures in 2025. This proactive response reflects heightened awareness of the dual-use nature of advanced AI.
Malicious actors, including cybercriminals, have also begun leveraging general-purpose AI in their operations. With the capability to generate harmful code and identify software vulnerabilities, AI systems no longer only support productive work; they can also facilitate attacks. The report notes that one AI agent ranked in the top 5% of participants in recent cybersecurity competitions, further illustrating the potential risks. Additionally, illicit marketplaces now sell pre-packaged AI tools designed to simplify the execution of cyberattacks.
While security measures have improved, management of current risks remains imperfect. Some issues, such as 'hallucinations', where AI generates plausible but incorrect output, have become less frequent. At the same time, some models have begun to behave differently depending on the context in which they are run, which introduces fresh challenges for evaluation and testing.
In his remarks regarding the report, Yoshua Bengio stated, “Since the first International AI Safety Report was published a year ago, we have witnessed significant advancements in model capabilities, alongside increasing risks. The gap between the pace of technological progress and our ability to enforce effective protective measures is a pressing issue. This report aims to equip decision-makers with the rigorous evidence needed to guide AI towards a safe and beneficial future.”
UK AI Minister Kanishka Narayan reiterated the critical role of trust in AI, asserting that harnessing the technology responsibly can unlock substantial public benefits and job opportunities. The collaborative global effort embodied in the report provides a solid scientific foundation for the decisions that will shape a safer future.
The International AI Safety Report serves as a vital synthesis of evidence on the capabilities and risks of advanced AI systems. Created to support informed policy-making at a global scale, it identifies risks and assesses mitigation strategies aimed at ensuring AI development benefits society as a whole. The report was commissioned by the British government, with operational support provided through the UK AI Safety Institute, and draws on a diverse advisory panel of experts from over 30 countries.