As we approach the year 2025, the discussion around responsible artificial intelligence (AI) in the healthcare industry is gaining momentum. Brian Anderson, CEO of the Coalition for Healthcare AI (CHAI), believes that there is a growing consensus between industry and government on the definition of responsible AI, even in these polarized times.
Anderson emphasized the importance of policymakers and regulatory officials understanding the frameworks being developed in the private sector regarding responsible AI in healthcare. He believes regulatory policy should be built on top of these frameworks to ensure the safe and ethical deployment of healthcare AI.
One area of congruence between the private sector and regulatory agencies is the development of AI model cards, also known as "AI nutrition labels." These labels provide a digestible form of communication that identifies key aspects of AI model development for users. CHAI recently released an open-source version of its draft AI model card, aligning with guidelines from the Office of the National Coordinator for Health Information Technology (ONC).
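A model card is, at its core, a structured summary of how a model was built, evaluated, and intended to be used. As a rough illustration only (the field names and values below are hypothetical examples, not CHAI's or ONC's actual schema), such a card might be represented as:

```python
# Illustrative sketch of an AI "nutrition label" / model card.
# All field names and values are hypothetical, not CHAI's or ONC's schema.
model_card = {
    "model_name": "sepsis-risk-predictor",            # hypothetical model
    "developer": "Example Health AI, Inc.",           # who built it
    "intended_use": "Early warning of sepsis risk in adult inpatients",
    "out_of_scope_uses": ["pediatric patients", "outpatient settings"],
    "training_data_summary": "De-identified EHR records, 2015-2022",
    "performance": {"auroc": 0.87, "sensitivity": 0.81},  # illustrative numbers
    "known_limitations": ["not validated on rural populations"],
    "last_updated": "2025-01-15",  # cards are revised as the model changes
}

def summarize(card: dict) -> str:
    """Render the key facts a clinician might scan first."""
    return (f"{card['model_name']}: {card['intended_use']} "
            f"(AUROC {card['performance']['auroc']}, "
            f"updated {card['last_updated']})")

print(summarize(model_card))
```

The point of the "nutrition label" framing is exactly this kind of scannability: a user should be able to see intended use, evidence of performance, and known limitations at a glance, without reading a full technical report.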
Anderson highlighted the alignment between the private-sector innovation community and the public-sector regulatory community in developing AI requirements for medical devices. The U.S. Food and Drug Administration (FDA) included an example of a voluntary AI model card in its draft total product life cycle recommendations for AI-enabled devices, demonstrating the agency's commitment to framing trust around the use of AI in healthcare.
Anderson emphasized the need for private- and public-sector stakeholders to inform one another and work together on healthcare AI regulation. He believes the incoming administration and leaders in Congress are interested in forming public-private partnerships with organizations like CHAI.
CHAI's model card is intended to be a living document that will be updated as new capabilities emerge, particularly in the generative AI space. Anderson stressed the importance of flexibility in evaluating emerging capabilities and the need to refresh AI model cards regularly, at least annually.
As providers begin to use AI-enabled clinical decision-support tools, they will need to navigate the complexity of imperfect transparency in AI systems. Anderson acknowledged that there may be challenges in disclosing certain information on model cards due to intellectual property concerns within the vendor community.
In conclusion, the healthcare industry is moving towards a shared understanding of responsible AI, with collaboration between industry and government shaping the future of healthcare AI regulation. With a commitment to transparency, flexibility, and collaboration, the industry is poised to harness the potential of AI while ensuring ethical and safe use in healthcare settings.

Balancing the Protection of Vendor IP with Providing Essential Information for Healthcare AI Decision-Making
In the world of healthcare AI, striking a balance between protecting the intellectual property (IP) of vendors and ensuring that healthcare professionals have the necessary information to make informed decisions is crucial. According to Anderson, this balance is essential for guiding doctors in deciding whether or not to utilize AI models with their patients.
AI can profoundly affect patient outcomes, and understanding the causal relationship between a model's output and those outcomes is central to judging whether a treatment or intervention will succeed. Anderson emphasized the importance of giving doctors the information they need to understand how an AI model may affect the patient in front of them.
While HTI-1’s 31 categorical areas provide a solid foundation for evaluating electronic health records and other certified health IT, Anderson acknowledged that they may not be sufficient for the diverse use cases of AI, especially in the direct-to-consumer space. He highlighted the need for more comprehensive evaluation frameworks, particularly as new use cases emerge, such as generative AI.
Looking ahead, the evaluation of healthcare AI models is expected to become even more complex over the next two to five years. This raises questions about how we define “human flourishing” in the context of AI-driven healthcare interventions. Anderson suggested that developing trust frameworks for health AI agents will require input from ethicists, philosophers, sociologists, and spiritual leaders to guide technologists and AI experts in creating robust evaluation criteria.
As the field of AI in healthcare evolves, Anderson emphasized the importance of involving a diverse group of stakeholders in the evaluation process. This includes community members and experts from various disciplines who can provide valuable insights into aligning AI models with our values and building trust with these technologies.
In the coming year, CHAI plans to lead efforts to bring together a wide range of stakeholders to develop a framework for evaluating healthcare AI models. Anderson acknowledged the challenge of creating a rubric for evaluating AI models aligned with our values but expressed confidence in the collaborative approach to address this issue.
Overall, the future of healthcare AI evaluation will require a multidisciplinary approach, drawing on a diverse group of experts to ensure that AI models are ethically sound and aligned with patient values. By fostering collaboration and inclusivity, the healthcare industry can navigate the complexities of AI evaluation and pave the way for a more trustworthy, AI-driven future in healthcare.