NAMI's Initiative for Safe AI in Mental Health
The National Alliance on Mental Illness (NAMI), the largest grassroots nonprofit for mental health advocacy in the United States, is embarking on a novel initiative aimed at establishing benchmarks for the use of artificial intelligence (AI) in mental health support and information. Collaborating with Dr. John Torous from the Division of Digital Psychiatry at Beth Israel Deaconess Medical Center—affiliated with Harvard Medical School—NAMI is investigating how AI tools respond when individuals seek mental health assistance.
This initiative is rooted in the pressing need for reliable, trustworthy information in a rapidly evolving technological landscape. NAMI's CEO, Daniel H. Gillison Jr., emphasizes the double-edged nature of AI: while it offers unprecedented opportunities for help, it also poses risks without appropriate safeguards. "People deserve clear, trustworthy information, and they deserve to know when a tool may not be safe," Gillison asserts, highlighting the organization's commitment to consumer protection in mental health innovation.
Recent polling data underscores this necessity. According to a NAMI/Ipsos survey conducted in November 2025, 12% of adults said they are likely to use AI chatbots for mental health treatment or therapy within the next six months, and 1% said they already do. This growing interest in AI as a resource reflects a crucial demand for clear, reliable guidance, especially in the sensitive arena of mental health.
In response to these concerns, the first stages of NAMI's project will address three primary areas critical to the safe use of AI tools in mental health:
1. Safety and Crisis Response: Investigating whether AI systems can identify distress in users and provide appropriate, safe next steps.
2. Information Accuracy and Quality: Assessing the factual accuracy of responses generated by AI tools, ensuring they align with established evidence and avoid promoting harmful or misleading claims.
3. Cultural Relevance and Human Support: Evaluating how AI tools communicate and respect diverse cultural backgrounds and personal identities, ensuring the use of supportive and human-centered language.
Dr. Torous notes the irreplaceable role of professional mental health care, yet acknowledges the increasing use of AI for initial inquiries and support. "Some tools can aid users when applied responsibly, but others may inadvertently cause harm," he cautions. The goal of this initiative is to craft an independent, evidence-based resource that clarifies the capabilities and limitations of AI tools in mental health contexts.
NAMI's approach is collaborative, integrating voices from peers, family members, clinicians, and researchers so that the resulting guidance reflects practical needs and preferences. As it gathers insights, NAMI aims to create a framework that is not only protective but also empowering for users. For more information about NAMI's AI initiative, visit nami.org/AI.
About NAMI
NAMI stands as the foremost grassroots organization in the U.S. dedicated to improving the lives of those affected by mental illness. Through education, advocacy, and public awareness efforts, NAMI aims to provide comprehensive support for individuals and families grappling with mental health conditions.
About Beth Israel Deaconess Medical Center
BIDMC is a distinguished academic medical facility that couples exceptional patient care with high-caliber education and research. As a teaching affiliate of Harvard Medical School, it consistently ranks highly in terms of research funding and quality of healthcare services.
This groundbreaking initiative led by NAMI not only seeks to enhance safety in AI applications within mental health but also aims to empower users with the necessary knowledge to navigate these technological resources responsibly.