Despite huge and varied advances in medical research, the traditional randomised clinical trial remains the widely accepted and effective method of establishing the efficacy and safety of new medicines and treatments. A randomised clinical trial must demonstrate not only the general efficacy of a new treatment or medication, but often also whether it will benefit a specific segment of a patient population. The narrower that segment, the more difficult it can be to recruit enough patients into a trial.
The integration of AI in clinical trials offers potentially unprecedented advantages, from accelerating drug discovery to enhancing patient recruitment, especially for trials of highly specialised treatments. AI models could analyse data from a variety of sources, including electronic patient records, to improve and calibrate patient selection and optimise the effectiveness of trials. Large language models, of the kind many people are now accustomed to using, could be deployed to structure data from clinical staff and streamline trial preparation. However, as with any new technology, it is vital that when AI models are deployed, the legal and ethical issues are considered at an early stage and monitored throughout the trial design, recruitment, management and analysis processes.
Using patient data safely
Ensuring the security and privacy of sensitive health information is paramount. Deploying AI in clinical trials would have to be coupled with robust encryption, strict access controls, and comprehensive anonymisation techniques. Crucial to any clinical trial using AI is a framework that acknowledges the potential of the AI application whilst balancing the protection of personal data and upholding trust in the process.
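To illustrate one of the techniques mentioned above, the sketch below shows pseudonymisation of a direct patient identifier using a keyed hash (HMAC-SHA256) before a record enters an analysis dataset. This is a minimal illustration, not a complete anonymisation scheme: the field names and the key-handling are assumptions for the example, and in practice the secret key would live in a key-management system held separately from the research data.

```python
import hmac
import hashlib

# Hypothetical key for illustration only; in a real deployment this would
# be stored in a secure key-management system, never alongside the data.
SECRET_KEY = b"replace-with-securely-stored-key"

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token.

    Using HMAC with a secret key (rather than a plain hash) means the
    mapping cannot be recomputed by anyone who does not hold the key.
    """
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record structure: the identifier is replaced, while the
# clinical fields needed for the trial analysis are retained.
record = {"patient_id": "NHS-123-456-7890", "age_band": "60-69",
          "outcome": "responder"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
```

Because the token is stable, the same patient can still be linked across visits within the trial dataset, which plain redaction would prevent.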
Equally, it is important to keep in mind that AI models are only as good as the data that underpins them. Inaccurate or incomplete data has the potential to skew the results produced by the model.
Ethical AI Decision-Making and Fairness Metrics in Patient Recruitment
If, as expected, AI becomes integral to decision-making in patient recruitment, the importance of ethical considerations only grows. Fairness metrics play a crucial role in identifying and mitigating biases in the recruitment process. Biotech and pharmaceutical companies must ensure that their algorithms are fair and avoid discrimination. Implementing rigorous fairness metrics helps to create an inclusive and unbiased patient pool, ensuring that clinical trials reflect the diversity of the population and produce the most useful and accurate results.
Such protocols have a two-fold benefit: they solidify the ethical foundation for including AI in the trial, and they make the research findings more likely to be useful.
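As a concrete illustration of the kind of fairness metric described above, the sketch below computes per-group selection rates for recruitment decisions and the ratio between the lowest and highest rate (sometimes called a disparate impact ratio, where 1.0 indicates parity). The grouping labels and the sample data are invented for the example; real monitoring would use the trial's actual demographic categories and an agreed threshold.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Selection rate per demographic group.

    `decisions` is an iterable of (group, was_selected) pairs, one per
    candidate screened for the trial.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    A value of 1.0 means every group is selected at the same rate;
    lower values indicate a larger disparity to investigate.
    """
    return min(rates.values()) / max(rates.values())

# Invented screening outcomes for two illustrative groups.
decisions = [("A", True), ("A", True), ("A", False), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)        # {"A": 0.5, "B": 0.25}
ratio = disparate_impact_ratio(rates)     # 0.5
```

A monitoring process might flag any ratio below a chosen threshold for human review; the point of the metric is to make disparities visible early, not to decide on its own what counts as acceptable.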
It can sometimes be overlooked, but the inability of AI systems to show the empathy expected of clinical staff can also affect their integration into clinical trials. AI is not yet likely to replicate much of the human-to-human interaction required in clinical trials, and proper patient care by staff will be essential alongside AI technologies.
Regulatory Interventions for AI in Enhancing Clinical Trials
Strong regulation is essential to maintain standards, ensure safety, and uphold ethical principles. Regulatory bodies, such as the Medicines and Healthcare products Regulatory Agency (MHRA), must adapt to the evolving landscape, developing guidance that encompasses the unique challenges posed by the possible integration of AI applications in clinical trials. As a minimum, this guidance should include: validation and approval processes for AI algorithms, standards for data security, and mechanisms for monitoring and auditing AI systems in clinical trials.
A proactive regulatory approach is crucial to fostering innovation while safeguarding the integrity of clinical research. The MHRA and NHS England are already exploring and developing AI technologies, but mostly in the medical devices space. The UK National AI Strategy, published in September 2021, sets out the priorities for AI in healthcare in the UK, following the August 2019 announcement of the NHS AI Lab. The National Strategy makes no mention of clinical trials.
The UK Government issued a consultation in March 2023 on legislative changes for clinical trials, following the introduction of the Medicines and Medical Devices Act 2021, but notably no proposals on AI were put forward. Given the opportunities available to the organisers of clinical trials, this may well change in the near future.
In summary, the potential integration of AI into clinical trials holds immense promise, but responsible implementation is key. Any successful implementation will have to strike a balance between advancement and patient data security, ensuring ethical AI decision-making and robust security frameworks throughout.