Should you trust AI with your clinical trials? Five questions to ask first

Why AI, why now?

Clinical trials are under more pressure than ever. AI offers the promise of faster decisions, smarter trial designs, and more precise patient targeting. It can sift through millions of data points in seconds, spot patterns humans might miss, and help sponsors make confident calls earlier in the process1,2. But with that promise comes a critical question: how much trust can we place in this intelligence?

Sophie De-Oliveira, senior product analyst at Fortrea, explores where AI is already proving its worth in clinical trials, how the relationship between AI and human experts is taking shape, and how to build new workflows that catalyze this partnership.

Where AI outperforms humans

AI excels in high-volume, pattern-recognition-driven, or computationally complex tasks that would overwhelm human capacity. But before trusting these capabilities, sponsors need to ask critical questions about transparency and validation.

Pattern recognition and prediction

AI algorithms can analyze massive datasets to detect subtle correlations that are nearly invisible to humans. Machine learning models have demonstrated the ability to predict sepsis up to six hours earlier than traditional methods using electronic health record data – a capability that could revolutionize safety monitoring in trials.1

The trust question: Can you verify the data sources, understand the model architecture, and identify the known limitations? Has the tool been tested in trial settings similar to yours? Without proper validation against defined test sets, even impressive-sounding capabilities remain untrustworthy.
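
To make that validation step concrete, here is a minimal sketch of what scoring a candidate model against a held-out test set might look like. It assumes a hypothetical, already-trained risk classifier and a test file drawn from a population representative of your own trial; the file names, outcome column and decision threshold are placeholders, not any real Fortrea tool.

```python
# Minimal validation sketch (illustrative only): score a pre-trained risk model
# on a held-out test set that mirrors the intended trial population.
# "risk_model.joblib" and "held_out_test.csv" are hypothetical artifacts.
import joblib
import pandas as pd
from sklearn.metrics import roc_auc_score, recall_score, precision_score

model = joblib.load("risk_model.joblib")          # previously trained classifier
test = pd.read_csv("held_out_test.csv")           # records never seen in training

X_test = test.drop(columns=["sepsis_within_6h"])  # EHR-derived features
y_test = test["sepsis_within_6h"]                 # ground-truth labels

probs = model.predict_proba(X_test)[:, 1]         # predicted risk scores
preds = (probs >= 0.5).astype(int)                # fixed decision threshold

print("AUROC:      ", round(roc_auc_score(y_test, probs), 3))
print("Sensitivity:", round(recall_score(y_test, preds), 3))
print("Precision:  ", round(precision_score(y_test, preds), 3))
```

The point is less the specific metrics than the discipline: the test set is defined in advance, reflects the setting you actually intend to deploy in, and is never touched during model development.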

Data matching and cohort identification

Natural language processing can rapidly screen thousands of patient records against trial criteria. Some companies have developed AI tools that read structured and free-text patient data to pre-screen candidates more efficiently than manual review, potentially solving one of clinical research's biggest bottlenecks.2
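
For a rough sense of what that pre-screening can look like, the sketch below flags records whose free-text notes mention every inclusion term and no exclusion term. Real systems use far more sophisticated NLP; the criteria, record identifiers and example notes here are invented purely for illustration.

```python
# Toy pre-screening sketch (illustrative only): flag patient records whose
# free-text notes match all inclusion terms and no exclusion terms.
# The criteria and example notes are hypothetical.
import re

INCLUSION_TERMS = [r"type 2 diabetes", r"\bhba1c\b"]
EXCLUSION_TERMS = [r"\bpregnan", r"\bdialysis\b"]

def pre_screen(note: str) -> bool:
    """Return True if the note matches every inclusion and no exclusion term."""
    text = note.lower()
    meets_inclusion = all(re.search(p, text) for p in INCLUSION_TERMS)
    hits_exclusion = any(re.search(p, text) for p in EXCLUSION_TERMS)
    return meets_inclusion and not hits_exclusion

records = {
    "patient_001": "Type 2 diabetes, HbA1c 8.2%, no renal impairment.",
    "patient_002": "Type 2 diabetes, HbA1c 9.1%, currently on dialysis.",
}

candidates = [pid for pid, note in records.items() if pre_screen(note)]
print(candidates)  # ['patient_001'] -- still needs human confirmation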

The trust question: Does the system integrate with your existing workflows, and do you understand how the training data was sourced and cleaned? AI systems can inherit bias from their creators and datasets – if training data lacked diversity, your patient identification may systematically exclude certain populations.

Automation of repetitive workflows

Robotic process automation can handle routine tasks like medical coding and regulatory documentation at scale with speed and consistency, freeing your team to focus on strategic decisions that truly require human expertise.
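
A deliberately simple illustration of that idea: the sketch below auto-codes verbatim adverse event terms against a tiny, made-up synonym dictionary and queues anything it cannot match for a human coder instead of guessing. A production system would use a full coding dictionary and far richer matching; every term and name here is hypothetical.

```python
# Toy auto-coding sketch (illustrative only): map verbatim adverse event terms
# to preferred terms via a tiny made-up dictionary; unmatched terms are routed
# to a human coder rather than guessed.
VERBATIM_TO_PREFERRED = {
    "headache": "Headache",
    "head ache": "Headache",
    "nausea": "Nausea",
    "feeling sick": "Nausea",
}

def auto_code(verbatim_terms):
    coded, review_queue = {}, []
    for term in verbatim_terms:
        preferred = VERBATIM_TO_PREFERRED.get(term.strip().lower())
        if preferred:
            coded[term] = preferred
        else:
            review_queue.append(term)  # escalate to a human, don't guess
    return coded, review_queue

coded, review_queue = auto_code(["Headache", "feeling sick", "tingling in hands"])
print(coded)         # {'Headache': 'Headache', 'feeling sick': 'Nausea'}
print(review_queue)  # ['tingling in hands'] -- goes to a human coder
```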

The trust question: Who's accountable when AI makes an error in a regulatory submission? Clear ownership and accountability structures are essential before trusting AI with mission-critical processes.

What sponsors need to consider before trusting AI

Beyond understanding where AI works well, sponsors must evaluate five critical trust factors:

  1. Validation: Has the tool been tested in similar trial settings? Defining appropriate test sets and validation frameworks is crucial. A model that works for cardiovascular trials may fail catastrophically in ophthalmology without proper domain-specific testing.
  2. Transparency: Is the AI explainable and auditable? You need visibility into data sources, model architecture, objective functions, and known limitations to satisfy regulatory and audit demands. If a vendor can't explain how their system reaches decisions, you can't trust those decisions.
  3. Integration: Can it work seamlessly with your existing systems and workflows? Poor integration creates new risks and inefficiencies that can undermine any AI benefits.
  4. Bias: How might the AI reflect its creators' assumptions or data limitations? Probe vendor bias-mitigation processes and request fairness audit reports (see the sketch after this list). Historical examples such as a large tech firm's biased hiring algorithm and a prominent US university's findings on facial recognition bias show how AI systems can perpetuate and amplify existing inequities3,4.
  5. Ownership and accountability: Who's accountable when AI makes a critical call? Define clear ownership and accountability structures – who signs off, who monitors outputs, who owns the maintenance and improvement of AI systems, who responds to errors. Governance frameworks like cross-functional oversight committees become crucial.
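
One simple way to start probing the bias question above is to compare how an AI screening step selects patients across demographic groups. The sketch below computes per-group selection rates and a crude disparity ratio on invented screening output; the groups, numbers and the 0.8 "four-fifths" heuristic are assumptions, and a real fairness audit would go considerably further.

```python
# Minimal bias-probe sketch (illustrative only): compare selection rates of an
# AI pre-screening step across demographic groups. The data and the 0.8
# "four-fifths" heuristic are hypothetical.
from collections import defaultdict

screened = [  # (demographic group, flagged as eligible?)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])          # group -> [selected, total]
for group, selected in screened:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: sel / total for g, (sel, total) in counts.items()}
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}

disparity = min(rates.values()) / max(rates.values())
print(f"Disparity ratio: {disparity:.2f}")    # 0.33, well below the 0.8 heuristic
```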

Why human–AI partnership builds sustainable trust

The future isn't about replacing humans with machines – it's about amplifying human capabilities with intelligent tools.

The most effective approach recognizes that AI enables scalability while humans provide safety, guardrails, and context. Where AI can analyze exponentially more data than any human team, humans ensure those insights remain clinically relevant, ethical, and compliant with regulatory standards. While AI is fast, it's not always right – even high-performing models make confident mistakes, which is why human oversight serves as the critical fail-safe preserving trial integrity and patient safety5. This partnership works because humans guide learning and adaptation, shaping the data environments and feedback loops that AI models depend on to evolve in clinically meaningful directions6. Trust requires accountability, and confidence in AI systems grows strongest when humans remain actively involved, ready to explain and correct machine-driven outputs when necessary.
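
One common pattern for keeping that human fail-safe in place is confidence-based routing: the system acts automatically only on outputs it scores as highly confident and hands everything else to a reviewer. A minimal sketch, assuming a hypothetical threshold and example records:

```python
# Minimal human-in-the-loop sketch (illustrative only): auto-accept only
# high-confidence model outputs and route the rest for human review.
# The 0.95 threshold and example records are hypothetical.
CONFIDENCE_THRESHOLD = 0.95

predictions = [
    {"record": "site_12/pt_044", "label": "eligible",   "confidence": 0.99},
    {"record": "site_07/pt_183", "label": "ineligible", "confidence": 0.62},
]

auto_accepted, needs_review = [], []
for p in predictions:
    target = auto_accepted if p["confidence"] >= CONFIDENCE_THRESHOLD else needs_review
    target.append(p)

print(len(auto_accepted), "auto-accepted;", len(needs_review), "routed to a human reviewer")
```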

The verdict: earned trust through rigorous evaluation

So, can we trust AI to run our clinical trials? The answer isn't a simple yes or no – it's "it depends on how rigorously you evaluate trustworthiness."

The regulatory landscape is evolving rapidly. The FDA's recent guidance emphasizes the need for algorithm transparency, performance validation, and ongoing monitoring of adaptive models7. AI outputs that inform regulatory decisions must remain interpretable and auditable – reinforcing why human oversight isn't just beneficial, it's required8.

As AI capabilities expand, success won't be measured by how much we can automate, but by how effectively we can combine machine intelligence with human wisdom. The organizations that master this balance will deliver trials that are not just faster and more efficient, but more patient-centric and scientifically robust.

The AI revolution in clinical research is here. The question isn't whether to embrace it, but how to do so thoughtfully, strategically, and with the trust of your patients and investigators intact.

Ready to explore how AI can transform your clinical trials while keeping human expertise at the center? Fortrea's innovation team can help you navigate the complexity and unlock the potential. Contact us

Listen to our neuroscience team exploring the opportunities for AI in neuroscience clinical trials with Joanie Brown and Ixico: Cortex Conversations: Insights in Neuroscience

References

  1. Henry, K. E., Adams, R., Parent, C., Soleimani, H., Sridharan, A., Johnson, L., ... & Saria, S. (2022). Factors driving provider adoption of the TREWS machine learning-based early warning system and its effects on sepsis treatment timing. Nature Medicine, 28(10), 2084–2094.
  2. Straus, M. (2022, May 12). Patient recruitment goes high-tech. Applied Clinical Trials. https://www.appliedclinicaltrialsonline.com/view/patient-recruitment-goes-high-tech
  3. Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
  4. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15. https://proceedings.mlr.press/v81/buolamwini18a.html
  5. Amodei, D., et al. (2016). Concrete Problems in AI Safety. https://arxiv.org/abs/1606.06565
  6. Holzinger, A. (2016). Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Informatics, 3(2), 119–131.
  7. FDA. (2019). Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper. https://www.fda.gov/media/122535/download
  8. Samek, W., et al. (2017). Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. https://arxiv.org/abs/1708.08296