The pharmaceutical industry recently received some much-awaited insights into how the FDA plans to regulate artificial intelligence (AI) in drug development. The draft guidance outlines a seven-step framework to assess the risk of AI models in drug and biological product development. While it does not cover AI models in the drug discovery process, it does offer a first glimpse into the FDA’s priorities as AI technology becomes more integrated into research and development.
FDA Commissioner Dr. Robert Califf stated that the FDA is committed to supporting innovative approaches for medical product development through a risk-based framework that promotes innovation while ensuring regulatory standards are met. AI has the potential to transform clinical research and accelerate medical product development to enhance patient care.
For AI companies partnering with pharmaceutical firms on clinical trials and data management, the guidance serves as a starting point, but many say more detailed direction is still needed.
The draft guidance emphasizes a risk-based approach, requiring a clear explanation of the relative risk associated with AI models used in drug development. Issues such as potential biases in datasets that could affect the reliability of results are highlighted. Steps in the framework focus on defining the AI model and its data, as well as assessing its risk.
While the pharmaceutical industry has been eager for clarity from the FDA on evaluating drugs developed with AI tools, some feel the guidance falls short of the sweeping changes they anticipated. Some AI companies also see its scope as limited, arguing that it does not adequately capture the complexity of AI in drug development.
With the draft guidance released, the FDA is seeking feedback from industry stakeholders during a public comment period to align the guidance with companies’ experiences. As AI becomes more integrated into the drug development process, both pharmaceutical and AI companies are expected to play a significant role in shaping the FDA’s views on the technology.
Opinions differ on how flexible the assessment framework is and how well it recognizes the nuances of AI and large language models. Some believe the FDA's initial approach is too rigid and leans too heavily on traditional statistical methods, while others view the guidance as brief and high-level.
As the industry awaits further guidance, the draft offers a glimpse of the regulatory framework to come. While AI adoption in pharma has grown, many models are still relatively new, and the FDA has refrained from setting specific criteria for each use case. The evolving landscape of AI regulation in drug development will likely continue to be shaped by stakeholder feedback and ongoing advances in the technology.

In the meantime, the guidance has left some companies uncertain. Rather than laying out clear rules, the FDA advises companies to discuss their plans with the agency and to be prepared for stringent criteria if an AI model's risks are deemed high, leaving many wondering about the future of their AI projects.
“They’ve stuck to their corner,” said Sasu, a spokesperson for a tech company. “They say not everything is encompassed in this guidance. So because of that, there’s a lot that falls into the gray area.”
This approach creates ambiguity: without clear guidelines, companies are unsure how to proceed with their AI projects or what criteria they must meet to ensure compliance.

To address that uncertainty, companies are advised to engage early and openly with regulators and to seek clarification on any unclear areas. Ultimately, open communication and collaboration between companies and the FDA will shape how AI projects can move forward with confidence under the emerging framework.