Ubie's AI Safety Guidelines
2026-04-03 02:43:29

Ubie Collaborates with JaDHA to Develop AI Safety Guidelines for Healthcare Sector

Ubie and JaDHA's Groundbreaking AI Safety Guidelines

Ubie, a Tokyo-based health tech startup, has formulated the "AI Safety Evaluation Guidelines for the Healthcare Sector" in collaboration with the AI Safety Institute (AISI). Operating under the mission of guiding people to appropriate medical care through advanced technology, Ubie serves as a working-group leader within the Japan Digital Health Alliance (JaDHA). The new guidelines aim to accelerate the safe real-world deployment of generative AI technologies, particularly large language models (LLMs).

The guidelines reflect the specific risks inherent in healthcare and provide detailed evaluation methods for practitioners. They draw on international discussions on building "Trustworthy AI," as highlighted at the recent Hiroshima Global Forum for Trustworthy AI, and serve as a reference point as the global community increasingly prioritizes reliable AI across sectors, including healthcare.

Background and Objectives

In recent years, generative AI has brought what many call a "once-in-a-decade innovation" to healthcare, improving efficiency for doctors and enhancing communication with patients. The sector, however, faces unique challenges: the risk of misinformation, stringent privacy requirements, and the need for strong security measures. These concerns underscore the necessity of a robust framework that ensures patient safety and trust in AI-driven solutions.

As the working-group leader of JaDHA's healthcare software group, Ubie prioritized creating these guidelines so that businesses can build in safety from the development and design phases onward. The goal is to cultivate an environment where business value and safety coexist.

Key Features of the Guidelines

The guidelines are designed to be user-friendly, particularly for smaller companies that may lack specialized expertise. Notable features include:

1. Structured Evaluation Methods
The guidelines delineate five development and design stages within the AI life cycle, outlining methods for evaluation at each phase:
- Product Design: Clear identification of product purposes and use cases, risk assessments, and governance framework establishment.
- Model Selection: Choosing suitable models and conducting safety evaluations based on their applications.
- Product Implementation: Addressing system architecture, prompt design, and guardrail implementations.
- Product Validation: Conducting comprehensive testing, verification, and risk assessments.
- Product Deployment and Operation: Continuous monitoring and improvement post-launch.
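The staged evaluation above can be sketched as a gated checklist. The stage names below come from the guidelines themselves, but the individual check items and the function name are illustrative assumptions, not part of the published document:

```python
# Illustrative sketch only: the five lifecycle stages from the guidelines,
# modeled as an ordered checklist. The per-stage check items are hypothetical.
LIFECYCLE_STAGES = [
    ("Product Design", ["purpose and use cases defined",
                        "risk assessment done",
                        "governance framework set"]),
    ("Model Selection", ["candidate models compared",
                         "safety evaluation completed"]),
    ("Product Implementation", ["system architecture reviewed",
                                "prompt design reviewed",
                                "guardrails implemented"]),
    ("Product Validation", ["test suite passed",
                            "risk re-assessment done"]),
    ("Product Deployment and Operation", ["monitoring in place",
                                          "improvement loop defined"]),
]

def next_incomplete_stage(completed: set) -> str:
    """Return the first stage with an unmet check, or None if all pass."""
    for stage, checks in LIFECYCLE_STAGES:
        if any(check not in completed for check in checks):
            return stage
    return None
```

A team could use such a structure to block release until every check in the current stage is recorded, e.g. `next_incomplete_stage(set())` returns "Product Design" until the design-phase items are complete.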

2. Ten Diverse Evaluation Perspectives
The guidelines detail ten specific evaluation criteria and the risks each addresses:
- Control of Harmful Information Output: Mitigating the risk of dangerous medical content that could directly harm patients or healthcare providers.
- Prevention of Misinformation: Managing the risk of generating fictitious evidence or inaccurate medication information due to AI hallucinations.
- Fairness and Inclusivity: Ensuring that AI maintains accuracy and quality across various demographics and does not disadvantage certain patient groups.
- High-Risk Utilization Management: Addressing the risk that products not certified as Software as a Medical Device (SaMD) are improperly used as medical devices.
- Privacy Protection: Safeguarding personal health information from leakage or misuse.
- Security Assurance: Preventing data breaches or modifications from attacks such as prompt injection.
- Explainability: Ensuring the transparency of AI outputs to avoid misjudgments by medical professionals.
- Robustness: Maintaining output quality amidst diverse inputs, including dialects and non-standard medical terms.
- Data Quality: Ensuring that input data is accurate and timely to avoid endangering patient health and safety.
- Verifiability: Establishing mechanisms for post-facto verification to retain societal trust.
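As a rough illustration of how two of these perspectives, harmful-output control and security assurance against prompt injection, might be operationalized, here is a minimal keyword-based guardrail sketch. The patterns, terms, and function names are invented for illustration; a production guardrail would rely on trained classifiers and curated medical safety lexicons rather than regular expressions:

```python
import re

# Hypothetical, illustrative patterns only; not from the guidelines.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]
HARMFUL_OUTPUT_TERMS = ["lethal dose", "untested home remedy"]

def screen_input(user_text: str) -> bool:
    """Return True if the input looks like a prompt-injection attempt."""
    return any(p.search(user_text) for p in INJECTION_PATTERNS)

def screen_output(model_text: str) -> bool:
    """Return True if the output contains a flagged harmful-content term."""
    lowered = model_text.lower()
    return any(term in lowered for term in HARMFUL_OUTPUT_TERMS)
```

In practice such checks would run both before the model sees user input and before its output reaches a clinician or patient, with flagged cases routed to a safe fallback response.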

Accessibility and Implementation

The AI Safety Evaluation Guidelines for the Healthcare Sector can be downloaded from the following link: AISI Guidelines.

Ubie positions the guidelines not merely as a document but as a practical tool for corporations navigating rapidly evolving AI technology and regulatory trends. Updates are planned, and the organization aims to publish some sections in Markdown format for broader applicability.

Conclusion

Ubie views these guidelines as a significant step toward realizing trustworthy AI in healthcare. The collaborative efforts within JaDHA aim to keep the innovation landscape vibrant while maintaining a robust safety framework, and Ubie is committed to ongoing improvement in the deployment of healthcare AI technologies, ensuring both efficacy and safety in patient care. We extend sincere gratitude to all stakeholders involved in this initiative and look forward to continued dialogue on the intersection of technology and health.


