As hype grows around artificial intelligence in healthcare, a seemingly paradoxical question is setting off faint warning bells in the minds of skeptics and proponents alike: Could AI tools meant to reduce clinician burden actually increase it?
It’s too soon to say, given the technology’s nascency. But the answer, which has big implications for the future adoption of AI in the industry, depends in part on how intensely clinicians have to fact-check models, along with financial incentives in healthcare’s predominantly fee-for-service payment system, according to experts.
All told, AI products could deliver weaker time savings than developers promise, and any time saved could quickly be filled with additional patient visits as hospitals and medical groups hustle to increase earnings.
“That’s absolutely a worry,” said Graham Walker, the co-director for advanced development at Kaiser Permanente’s medical group, during a panel at the HIMSS healthcare conference in Las Vegas. “The easiest way to get more revenue out of your healthcare system is by telling the doctors and [nurse practitioners] and pharmacists to go faster and see maybe just one more patient.”
‘The tools aren’t perfect’
The goal of many AI tools in healthcare is to reduce administrative friction by helping with rote tasks like documenting patient visits, looking up medical information or filing paperwork. Such tasks contribute to clinician burnout, which is a major problem in the industry and can lead physicians to exit the field.
About one-third of doctors responding to an American Medical Association survey in 2023 said they were interested in or planning to leave their jobs in the next few years. Many cited work overload as the reason: According to some research, healthcare workers spend the majority of their week working on administrative tasks.
Enter AI. Algorithms have been in use by healthcare companies for decades, but excitement around the tech has reached a fever pitch in the past few years with advancements like generative AI, which can create original text, and AI agents, which can perform tasks without human oversight.
A number of companies are folding the technology into sleekly packaged products, touting time-saving metrics that can seem almost too good to be true to clinicians sick of after-hours ‘pajama time’ spent on documentation.
Abridge, a startup that uses ambient listening and generative AI to automate clinical notetaking, and Nabla, another ambient AI assistant, both claim to save providers about two hours each day.
Software giant Oracle has woven AI into its health records platform for providers, including a clinical AI assistant. Physicians using that tool see a 30% decrease in documentation time, according to the company.
Meanwhile, Microsoft says its AI documentation product saves doctors five minutes on average per patient visit.
“The capabilities that AI scribes bring to the practice of care are remarkable,” said Rohit Chandra, the chief digital officer at the Cleveland Clinic, during a panel. (The academic medical system is currently rolling out AI documentation software from Ambience across its provider network.)
Documentation products tackle an acute pain point for doctors, while being relatively simple to use, easy to implement and safe, because they’re supervised by clinicians, Chandra said.
However, AI performance metrics such as time saved should be taken with a grain of salt, according to experts.
Clinicians still need to review the AI’s output to catch mistakes. That process, known as keeping a ‘human in the loop’, is key to ensuring accuracy and building trust in AI, experts say.
Letting AI tools operate unsupervised could result in medical records containing made-up symptoms, or missing information that could be key to a patient’s health.
“That’s where I think we can get into trouble. The technology is going to get to the point where, at least in my opinion, I don’t think we’re going to need to worry as much. But as we think about early adoption, we do have to be cognizant that it is going to make mistakes,” Brenton Hill, head of operations at health AI standards group the Coalition of Health AI, said during a panel.
But deputizing clinicians to police what AI generates can cut significantly into time savings or, in some cases, erase them altogether.
“The tools aren’t perfect,” said Deborah Edberg, a family medicine physician with CVS-owned primary care chain Oak Street Health, during a panel. “We use AI to do our documentation and I do spend quite a bit of time going back and editing … It can be a bit of a burden to make sure that what is recorded is accurate.”
To date, there have been no extensive, independent reviews of AI scribes in healthcare. But one recent study published in JAMA Network Open found that Microsoft’s ambient scribe received mixed feedback from users, with a recurring theme: AI-generated notes required substantial editing and proofreading, which sometimes offset the time saved.
Experts are also concerned that any time savings from AI tools may never actually reach clinicians, given the fee-for-service payment structure that dominates the U.S. healthcare system. Fee-for-service rewards providers based on the volume of visits, tests and procedures they perform, pushing clinicians to see more patients to maximize revenue.
While AI tools like Microsoft’s ambient assistant can free up time for doctors, that time risks being filled with additional appointments rather than giving physicians much-needed flexibility or relief from their workload. Vendors are aware of those financial incentives: marketing materials tout benefits such as freeing up additional appointment slots per provider.
Still, as demand for AI tools grows, experts stress that health systems should weigh the technology’s impact on physicians beyond time saved. Factors like reducing mental load, simplifying tasks and improving job satisfaction are crucial for physician well-being and for the overall quality of care.
Anecdotal feedback from clinicians using the tools has been positive, with many saying they could never go back to traditional documentation methods. Companies like Epic have seen adoption of AI tools climb sharply, with physicians reporting improved efficiency and satisfaction in their work.
Overall, AI tools promise real gains in time savings and efficiency. But healthcare organizations will need to balance the pull of increased revenue against the well-being of clinicians and their patients to harness the technology’s full potential.