Predictive artificial intelligence tools have become increasingly common in hospitals, with a recent study published in Health Affairs finding that 65% of U.S. hospitals use them. These tools serve purposes ranging from identifying high-risk patients to assisting with appointment scheduling. However, the study also found that less than half of hospitals evaluate these AI models for bias, posing potential risks to patient care.
The study, which analyzed survey responses from over 2,400 hospitals, highlighted a concerning trend. Only 61% of hospitals tested their predictive models for accuracy using their own data, and just 44% locally evaluated the models for bias. This lack of evaluation could lead to the replication of racial, ethnic, or gender biases within the AI models, exacerbating existing health disparities.
Interestingly, hospitals with high operating margins, those that developed their own models, and facilities that were part of health systems were more likely to conduct local accuracy and bias evaluations. This suggests a “growing digital divide” between high-resource and low-resource hospitals, one that could threaten patient safety, according to study author Paige Nong.
While AI technology holds great promise for the healthcare sector, concerns around accuracy and bias remain paramount. A model that performs well on one patient population may perform poorly on another, underscoring the importance of testing models on providers’ own data. Continuous monitoring of AI products post-implementation is also crucial, as changes in the care environment, such as shifts in patient populations or clinical workflows, can degrade a model’s performance over time.
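To make the idea of local evaluation concrete, the sketch below shows one common way a hospital might check a risk model’s discrimination overall and within demographic subgroups on its own data. This is a minimal illustration, not the study’s methodology: the function name, column names, and synthetic cohort are all hypothetical, and a real evaluation would use the hospital’s historical records and the model’s actual risk scores.

```python
# Hypothetical sketch of a local accuracy-and-bias check: compare a
# model's AUC overall versus within demographic subgroups on local data.
# All names and data here are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def evaluate_locally(df: pd.DataFrame, score_col: str, label_col: str,
                     group_col: str) -> pd.DataFrame:
    """Compute overall and per-subgroup AUC for a model's risk scores.

    df        : local patient cohort with model scores and observed outcomes
    score_col : column holding the model's predicted risk (0 to 1)
    label_col : column holding the observed binary outcome (0/1)
    group_col : demographic column to stratify by (e.g., race or sex)
    """
    rows = [{"group": "overall",
             "n": len(df),
             "auc": roc_auc_score(df[label_col], df[score_col])}]
    for group, sub in df.groupby(group_col):
        # A subgroup with only one outcome class cannot yield an AUC.
        if sub[label_col].nunique() < 2:
            continue
        rows.append({"group": group,
                     "n": len(sub),
                     "auc": roc_auc_score(sub[label_col], sub[score_col])})
    return pd.DataFrame(rows)

# Synthetic stand-in data purely for demonstration; random scores will
# hover near AUC 0.5. A real check would use actual patient records.
rng = np.random.default_rng(0)
n = 2000
cohort = pd.DataFrame({
    "risk_score": rng.uniform(0, 1, n),
    "outcome": rng.integers(0, 2, n),
    "race": rng.choice(["A", "B", "C"], n),
})
print(evaluate_locally(cohort, "risk_score", "outcome", "race"))
```

A subgroup gap in AUC of this kind is only one of several possible bias checks; calibration within subgroups and error-rate comparisons are common complements, and repeating the check periodically doubles as the post-implementation monitoring the study authors recommend.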
Access to financial resources appears to play a significant role in hospitals’ ability to conduct local evaluations of their AI tools. Critical access hospitals, rural hospitals, and facilities serving areas with high levels of social disadvantage were less likely to use predictive models at all. Furthermore, hospitals with the technical expertise to develop their own models were more inclined to test the products with their own data.
It is essential for hospitals to prioritize evaluating their AI models for bias and accuracy to ensure high-quality patient care. By conducting local evaluations and monitoring the performance of these tools over time, hospitals can mitigate potential risks and disparities in healthcare. As the sector continues to embrace AI, ensuring the ethical and unbiased use of these tools must be a top priority for all healthcare providers.