Artificial intelligence (AI) has become increasingly prevalent in the medical field, offering the potential to reduce risks and prioritize care for high-risk patients. However, researchers from the MIT Department of Electrical Engineering and Computer Science (EECS), Equality AI, and Boston University are advocating for more oversight of AI by regulatory bodies. This call for regulation comes after the U.S. Office for Civil Rights (OCR) of the Department of Health and Human Services (HHS) implemented a new rule under the Affordable Care Act (ACA) to prevent discrimination in “patient care decision support tools.”
The final rule, published by the OCR in May, prohibits discrimination based on race, color, national origin, age, disability, or sex in both AI and non-automated tools used in medicine. Developed in response to President Joe Biden’s Executive Order on AI, the rule aims to promote health equity by focusing on preventing discrimination. Senior author Marzyeh Ghassemi, an associate professor of EECS, believes that the rule is a crucial step forward and should drive equity-driven improvements to existing clinical decision-support tools.
While the FDA has approved nearly 1,000 AI-enabled devices, the clinical risk scores produced by decision-support tools largely fall outside its oversight. To address this gap, the Jameel Clinic at MIT will host a regulatory conference in March 2025, with the goal of establishing standards for transparency and non-discrimination in both AI and non-AI decision-support tools used in healthcare.
Although non-AI decision-support tools are less complex than AI algorithms, they play an equally significant role in clinical decision-making and should be held to the same standards. Maia Hightower, CEO of Equality AI, emphasizes that regulating clinical risk scores is essential to ensuring transparency and preventing discrimination, but she cautions that doing so may prove difficult given potential deregulation efforts under the incoming administration.
The integration of AI into healthcare holds real promise for improving patient care and outcomes, but realizing that promise requires clear regulation governing the ethical use of both AI and non-AI decision-support tools. By enforcing transparency and non-discrimination, regulatory bodies can help healthcare providers prioritize care for high-risk patients while reducing risks in clinical settings.