The use of AI-assisted predictive models in U.S. hospitals is on the rise, with around two-thirds of hospitals incorporating these tools into their operations. However, a recent study conducted by the University of Minnesota School of Public Health revealed that only 44% of hospitals are evaluating these models for bias, raising concerns about equity in patient care.
The study, published in Health Affairs, analyzed data from 2,425 hospitals across the country and found disparities in AI adoption. Hospitals with greater financial resources and technical expertise were more likely to develop and evaluate their AI tools compared to under-resourced facilities.
The primary uses of AI tools in hospitals include predicting inpatient health trajectories, identifying high-risk outpatients, and streamlining scheduling. However, the study emphasized the importance of ensuring that AI tools are tailored to the specific needs of patient populations, especially in hospitals with limited resources.
Assistant Professor Paige Nong from the UMN School of Public Health highlighted the need for hospitals to be critical consumers of AI tools and to ensure they are not perpetuating bias. She suggested using the information provided in predictive model labels, as described by the Assistant Secretary for Technology Policy, to help organizations make informed decisions about the tools they adopt.
Nong also stressed the importance of conducting local bias evaluations and examining the predictors driving the output of AI tools. By identifying and avoiding biased predictors, hospitals can ensure fair and ethical decision-making in patient care.
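To make the idea of a local bias evaluation concrete, the sketch below compares a predictive model's false negative rate across patient subgroups using a hospital's own records. The field names, groups, and toy data are illustrative assumptions, not drawn from the study; a real evaluation would use the hospital's actual outcomes data and clinically meaningful subgroups.

```python
# A minimal sketch of a local bias evaluation: disaggregating a predictive
# model's error rate by patient subgroup. All fields and values here are
# hypothetical, for illustration only.

def false_negative_rate(records):
    """Share of truly high-risk patients the model failed to flag."""
    positives = [r for r in records if r["actual_high_risk"]]
    if not positives:
        return 0.0
    missed = [r for r in positives if not r["predicted_high_risk"]]
    return len(missed) / len(positives)

def evaluate_by_group(records, group_key):
    """Compute the false negative rate separately for each subgroup."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    return {g: false_negative_rate(rs) for g, rs in groups.items()}

# Toy patient records (hypothetical).
records = [
    {"group": "A", "actual_high_risk": True,  "predicted_high_risk": True},
    {"group": "A", "actual_high_risk": True,  "predicted_high_risk": True},
    {"group": "B", "actual_high_risk": True,  "predicted_high_risk": False},
    {"group": "B", "actual_high_risk": True,  "predicted_high_risk": True},
]

rates = evaluate_by_group(records, "group")
print(rates)  # a large gap between subgroups warrants closer review
```

If the gap between subgroup error rates is large, the next step Nong describes would be examining which predictors drive the model's output for the disadvantaged group.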
Looking ahead, Nong expressed optimism about bridging the digital divide between well-funded hospitals and under-resourced ones in terms of AI adoption and evaluation capacity. Collaborations and partnerships, such as the Health AI Partnership, can help provide valuable support and insights to under-resourced care delivery organizations.
In conclusion, the study underscores the need for hospitals to prioritize the evaluation of AI tools for bias and equity in patient care. By taking proactive steps to address these issues, healthcare professionals can ensure that AI technology is used responsibly and ethically to improve patient outcomes.