Second Key Update of the International AI Safety Report Released
On November 26, 2025, the second key update of the International AI Safety Report was made public, offering timely insights into risk management and technical mitigation for general-use AI. Led by Turing Award-winning computer scientist Yoshua Bengio, the report consolidates contributions from over 100 experts across the globe and is backed by more than 30 countries and international organizations, including the EU, OECD, and the United Nations.
Recognizing how rapidly AI is evolving, the report's authors concluded that annual updates were insufficient and opted for more frequent, focused updates on critical developments. This update follows the first key update, released on October 15, 2025. These updates give policymakers a current synthesis of the literature, supporting the crafting of evidence-based policies.
The second key update outlines various technical methods aimed at enhancing the reliability of AI systems while preventing potential abuses. Key highlights include:
- Progress in Resilience Training: Advances have been made in developing models resilient to malicious attacks, but significant gaps remain. Although AI models and systems are becoming more resistant, sophisticated attackers can still breach protections in roughly 50% of cases within just 10 attempts, and can compromise models by poisoning their training data with as few as 250 malicious documents.
- Reduction of Open Software Gaps: Open-weight models now lag less than a year behind industry leaders, democratizing access but complicating efforts to avert malfunctions and misuse.
- Increased Industry Commitment: Industry commitments to safety have risen, with the number of AI companies adopting risk management frameworks more than doubling in 2025. The practical effectiveness of these frameworks, however, remains undetermined.
Commenting on these findings, Yoshua Bengio, who serves as a professor at the University of Montreal and as scientific director at LawZero, explained, “As we continue tracking updates on AI capabilities and risks, it is crucial to provide clear pathways for appropriate and effective risk management and technical mitigations. This key update offers an overview of the advancements made in these realms, along with the gaps and opportunities that remain. Our aim is to consistently provide policymakers worldwide with updates on the evolving landscape of AI in anticipation of the second international AI safety report, scheduled for release in early 2026, prior to the AI Impact Summit in India.”
About the International AI Safety Report
The International AI Safety Report serves as a synthesis of data regarding the capabilities and risks associated with advanced AI systems. It is designed to support informed global policymaking by providing facts to decision-makers. Drafted by a diverse group of over 100 independent experts, the report is accompanied by an advisory group made up of representatives from over 30 nations and international organizations, including the EU, OECD, and UN. While acknowledging the immense potential advantages of AI, the report emphasizes identifying risks and assessing mitigation strategies to ensure that AI is developed and utilized safely for the benefit of all. The report was commissioned by the UK government, with the secretariat provided by the British Institute for AI Security.