Artificial intelligence has the potential to transform how life sciences companies engage with patients, physicians, and the broader healthcare ecosystem. But while the promise is real, so are the pitfalls, particularly in a highly regulated industry where compliance, transparency, and trust are not optional.
Many AI initiatives in pharma fail to deliver value – or even to move past the concept review stage – not because of technical limitations, but because the product strategy overlooks the unique operational, scientific, and regulatory realities of the industry.
Design Purpose‑Built AI with Proven Regulatory Guardrails
Plugging in a large language model-powered AI tool like GPT-4 or Gemini to level up a brand’s user experience might work in consumer tech, but it’s a nonstarter in pharma. Life sciences companies are required to submit promotional materials through a rigorous Medical-Legal-Regulatory (MLR) review process before exposing them to healthcare providers, patients, or consumers. And that prior review paradigm is fundamentally incompatible with the on-the-fly content generation of prominent general-purpose AI offerings.
Arpa Garay, former chief marketing officer at Merck & chief commercial officer at Moderna
“General‑purpose AI can be very useful to speed up and automate internal processes and tasks, but in customer-facing applications for pharma, it can’t be treated like plug‑and‑play from other industries,” says Arpa Garay, former chief marketing officer at Merck and chief commercial officer at Moderna. “When you’re communicating about treatment regimens, a hallucinated phrase isn’t a minor glitch, it’s a compliance crisis waiting to happen. Unless the model is purpose‑built to deliver pre‑approved content, full audit trails, and guardrails that our medical, legal, and regulatory teams trust, it simply doesn’t belong in the hands of customers and patients.”
Many AI developers focus on carefully curating the inputs to their models – training data and tuning, for general-purpose and smaller specialty models alike – but for life sciences, the input side is only half of the puzzle. Curated inputs alone cannot guarantee compliant outputs. The most successful pharma AI products operate within a closed-loop system that surfaces only language that is both responsive to the question and pre-approved. For AI to succeed in life sciences, compliance must be built into the architecture, not treated as an afterthought.
When companies attempt to repurpose general-purpose models to answer HCP questions or improve patient engagement, they risk hallucinations, off-label language, and noncompliant phrasing that can destroy trust and invite regulatory and legal liability. It takes only one such misstep by a brand’s shiny new AI capability to jeopardize a launch and make the wider organization even more skeptical of future AI initiatives. In contrast, organizations that have succeeded in deploying AI rely on trusted, vetted closed-loop systems that present only MLR-approved language, keeping the brand on message and in full compliance at all times.
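In practice, a closed-loop system of this kind can be as simple as a retrieval layer over a library of MLR-approved copy: the model (or a simpler router) selects which approved response to surface but never generates free text, and a safe fallback covers unmatched questions. Below is a minimal illustrative sketch of that pattern; the identifiers, keywords, and copy are hypothetical, not any company’s actual system.

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedResponse:
    """One unit of MLR-approved copy, with its review ID kept for the audit trail."""
    mlr_id: str          # hypothetical identifier assigned during MLR review
    keywords: frozenset  # topics this copy was approved to address
    text: str            # the approved language, always surfaced verbatim

# Hypothetical pre-approved content library (stand-in for a real MLR repository).
APPROVED_LIBRARY = [
    ApprovedResponse("MLR-0142", frozenset({"dosing", "dose", "schedule"}),
                     "Please refer to the full Prescribing Information for dosing guidance."),
    ApprovedResponse("MLR-0177", frozenset({"copay", "cost", "assistance", "savings"}),
                     "Eligible patients may apply to the savings program via the enrollment form."),
]

FALLBACK = ("I'm not able to answer that here. Please contact Medical Information "
            "or consult the full Prescribing Information.")

def answer(question: str) -> tuple[str, str | None]:
    """Select the best-matching approved response; never generate free text.

    Returns (response_text, mlr_id); mlr_id is None when the safe fallback fires.
    """
    tokens = set(re.findall(r"[a-z]+", question.lower()))
    best, best_overlap = None, 0
    for item in APPROVED_LIBRARY:
        overlap = len(tokens & item.keywords)
        if overlap > best_overlap:
            best, best_overlap = item, overlap
    if best is None:
        return FALLBACK, None       # closed loop: no match means fallback, not improvisation
    return best.text, best.mlr_id   # ID travels with the answer for auditability

print(answer("What is the dosing schedule?"))  # -> MLR-0142 copy
print(answer("Can this be used off-label?"))   # -> safe fallback
```

In a production system the keyword match would likely give way to embedding retrieval or an LLM acting as a router, but the invariant stays the same: what reaches the user is always verbatim, pre-approved copy with its MLR identifier attached for the audit trail.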
Balance Compliance with Clinician and Patient‑Centric Usability
Many organizations succeed in building AI tools that satisfy every compliance requirement, yet still miss the mark in practice. Clearing the most rigorous compliance review, backed by the most careful and expert AI standards committee, does not guarantee that a physician will actually use the app during a consultation or that a patient will finish an enrollment flow.
Jennifer Oleksiw, chief customer officer at Eli Lilly, frames the challenge: “Today, Lilly is part of a movement to transform healthcare, inspired by consumers seeking greater control over their health. They expect more than medicine; they want information, services, and partnership. Committed to health above all, we focus on holistic approaches and leverage AI to personalize and enhance customer experiences throughout their health journeys. However, using AI comes with challenges that must be addressed to unlock the full potential of digital health. It’s essential to ensure responsible AI use and compliant ways to capture data so we can reach the right people with the right content at the right time.”
Jennifer Oleksiw, chief customer officer at Eli Lilly
Oleksiw’s point is a reminder that usefulness must carry as much weight as compliance. Consider a patient‑onboarding chatbot built to streamline access to financial assistance: its copy was fully approved, yet dense language and an awkward interface drove most users to abandon the process. In contrast, the teams that are successfully launching and deriving value from AI pair regulatory diligence with best-in-class UX, iterating on plain‑language copy, navigation cues, and visual design alongside patients and clinicians. When usability receives the same disciplined attention as compliance, AI moves beyond “approved” to genuinely improving decisions and outcomes.
Drive Enterprise‑Wide Alignment from Day One
Even the most elegantly engineered AI platform will sputter without enterprise‑wide alignment. As Diogo Rau, chief information & digital officer at Eli Lilly, puts it:
Diogo Rau, chief information and digital officer at Eli Lilly
“Some of the biggest problems AI can solve are in life sciences. But I have strong conviction it’s not just about the number of GPUs you have. You need scientists with intuition, machine‑learning experts with fresh ideas, labs to test the ideas, manufacturing experts to know what you can actually make at scale, and so on. We can’t have just one team go all in on AI; it needs to be the entire company. The big problems won’t be solved by one model coming up with a molecule generated in a vacuum.”
Rau’s warning explains why AI initiatives in pharma rarely collapse on technical grounds alone. More often they falter because brand, medical, legal, IT, and commercial operations pull in different directions. Without early, sustained buy‑in from every stakeholder, promising pilots die in committee. The companies that manage to scale AI treat it as an enterprise capability from day one—bringing every function responsible for approval, deployment, and measurement into the room—so breakthroughs in the lab translate into impact in the market.
Launch AI Initiatives That Deliver Measurable Outcomes
The most common strategic misstep? Launching AI because it’s “hot” rather than because it’s solving a clear business or clinical problem.
Dalya Gayed, MD, VP & US marketing lead, Reblozyl at Bristol Myers Squibb
“AI isn’t the goal – impact is,” says Dalya Gayed, MD, VP & US marketing lead for Reblozyl at Bristol Myers Squibb. “In life sciences, we must use AI not because it’s new, but because it gets us to better outcomes faster, smarter, and with fewer resources. Innovation is no longer optional, it’s how we stay relevant and deliver real value.”
Successful pharma AI launches begin with a well-defined goal, such as reducing time-to-diagnosis, boosting adherence, increasing HCP engagement, or accelerating clinical trial enrollment. The AI is then selected and implemented to support that outcome, not the other way around.
Overall, AI is poised to reshape the life sciences industry, but only for companies that take a thoughtful, context-aware approach. In regulated environments, trust and usability matter just as much as technical capability. The companies that lead this transformation will be those that align innovation with compliance, strategy with execution, and technology with human behavior.