Yerramalli Subramaniam is CTO of CliniOps, a technology leader in decentralized clinical trials (DCT) and unified platform solutions.
This year, President Biden called on some of the biggest artificial intelligence (AI) companies to voluntarily commit to “help drive safe, secure, and trustworthy development of AI technology.” Although risks to safety and security are challenges for AI deployment, there is optimism that with mitigation strategies in place, the benefits may outweigh the risks.
In the context of research, it is crucial to strike a balance between the application of AI for the enhancement of clinical trials (CTs) and ensuring the safety and security of patient data. The U.S. Food and Drug Administration (FDA) has started to provide guidance on the use of AI in healthcare and clinical research.
The FDA actively encourages AI development in medical devices and software, as well as the use of biomedical data. However, the full potential and associated risks of AI in drug development, specifically within the context of CTs, remain subject to considerable scrutiny.
AI technology involves a variety of advanced tools and networks designed to mimic human intelligence. These technologies possess the capability to interpret and learn from data inputs or training data, enabling them to make independent decisions and achieve stated objectives.
In our current data-driven, patient-centered approaches to healthcare and research, AI and machine learning (ML) are powerful instruments to streamline clinical research, particularly in two crucial areas: clinical operations and clinical outcomes.
Optimism For Enhanced Clinical Operations
In my experience, CTs, marked by heavy financial commitment, can take about six to seven years to test the safety and efficacy of a drug in humans for specific diseases. Delays and failures are common. AI can serve as a useful tool for enhancing the efficiency of clinical operations, with a particular focus on tasks like identifying clinical sites, streamlining patient recruitment and closely monitoring patient protocol compliance.
Site identification is often a laborious and complex process, and AI could streamline it by matching study requirements to each site's capabilities and previous track record. For example, suppose a sponsor wants to run a CT in the United States for a treatment for an infectious disease prevalent in tropical areas. How does one find a suitable site? With AI, candidate sites could be quickly surveyed based on claims data related to that infectious disease. There will always be room for expert opinion to weigh in after AI has done its exhaustive search.
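As a rough illustration of this kind of matching, a simple scoring approach might rank candidate sites by relevant claims volume and prior trial performance. The sketch below is hypothetical: the data fields, weights and site records are assumptions for illustration, not drawn from any real system.

```python
# Hypothetical sketch: rank candidate sites for an infectious-disease trial.
# Field names, weights and site records are illustrative only.

sites = [
    {"name": "Site A", "relevant_claims": 420, "prior_trials": 3, "on_time_enrollment_rate": 0.85},
    {"name": "Site B", "relevant_claims": 150, "prior_trials": 7, "on_time_enrollment_rate": 0.92},
    {"name": "Site C", "relevant_claims": 900, "prior_trials": 1, "on_time_enrollment_rate": 0.60},
]

def site_score(site):
    # Weight claims volume, trial experience and past enrollment performance.
    return (0.5 * site["relevant_claims"] / 1000
            + 0.3 * min(site["prior_trials"], 10) / 10
            + 0.2 * site["on_time_enrollment_rate"])

for site in sorted(sites, key=site_score, reverse=True):
    print(site["name"], round(site_score(site), 3))
```

In practice, an expert would review the ranked shortlist rather than accept the scores outright.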
A majority of CT failures relate to patient recruitment. Once clinical sites have been identified, AI can expedite the selection of patients from extensive data records, including demographics and health profiles, at any phase of the study.
As part of CT design, AI tools can perform automated eligibility analysis, matching participants to trials. For certain phases of CTs, specifically Phase II and III, AI can assist in the selection of patients from disease-specific populations by employing patient-specific genome-exposome profile analysis. This approach can also contribute to the early prediction of available drug targets in the selected patients.
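To make the eligibility-matching idea concrete, a minimal rule-based screen might look like the sketch below. The criteria, field names and patient records are assumptions for illustration; real trials use far richer profiles, such as genome-exposome data, and require clinician review.

```python
# Hypothetical sketch: automated eligibility screening against simple inclusion/exclusion criteria.

criteria = {
    "min_age": 18,
    "max_age": 65,
    "required_diagnosis": "type_2_diabetes",
    "excluded_conditions": {"pregnancy", "renal_failure"},
}

patients = [
    {"id": "P001", "age": 54, "diagnoses": {"type_2_diabetes", "hypertension"}},
    {"id": "P002", "age": 71, "diagnoses": {"type_2_diabetes"}},
    {"id": "P003", "age": 40, "diagnoses": {"type_2_diabetes", "renal_failure"}},
]

def is_eligible(patient):
    # Check age range, required diagnosis and absence of excluded conditions.
    return (criteria["min_age"] <= patient["age"] <= criteria["max_age"]
            and criteria["required_diagnosis"] in patient["diagnoses"]
            and not (criteria["excluded_conditions"] & patient["diagnoses"]))

print([p["id"] for p in patients if is_eligible(p)])  # ['P001']
```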
Patient dropout is another cause of CT failure. Protocol designs that incorporate AI-powered sensors and mobile and wearable solutions to monitor patient compliance could reduce dropouts. By implementing AI in the monitoring and analysis of patients’ responses to therapies, the process of collecting data, measuring outcomes and interpreting results follows clearer and faster pathways.
Optimism For Safe Clinical Outcomes
Adverse events (AEs), such as chest pain and nausea, need to be promptly identified and flagged. However, the current approach relies mainly on reactive intervention by the CT study team or healthcare professionals in close proximity to the patient, which carries risk: symptoms may be reported late, and interventions may therefore be delayed or insufficient.
To counter this risk, AI technology can be employed, particularly through smart device monitoring and biomarker checks. Even in cases where patients may not actively report symptoms, AI has the capability to predict the likelihood of an AE occurring, such as chest pain within 10 hours of medication ingestion, based on continuous monitoring of specific biomarkers. Furthermore, AI can enhance patient safety by assisting clinicians in the early detection of potential AEs, utilizing both biological and digital biomarkers that might otherwise be overlooked during manual reviews.
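A minimal sketch of this kind of biomarker-based risk flagging is shown below. The model, features, training values and alert threshold are all illustrative assumptions; a real system would be trained and validated on clinical data and would support, not replace, clinician judgment.

```python
# Hypothetical sketch: flag elevated adverse-event risk from continuously monitored biomarkers.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [heart_rate, troponin_level, hours_since_dose] -> AE occurred (1) or not (0)
X_train = np.array([
    [72, 0.01, 2], [80, 0.02, 5], [110, 0.30, 8], [120, 0.45, 9],
    [68, 0.01, 3], [95, 0.05, 6], [115, 0.40, 7], [75, 0.02, 4],
])
y_train = np.array([0, 0, 1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# New reading streamed from a wearable or smart device.
reading = np.array([[112, 0.35, 8]])
risk = model.predict_proba(reading)[0, 1]
if risk > 0.7:  # illustrative alert threshold
    print(f"Alert study team: predicted AE risk {risk:.2f}")
```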
Primary Risks To Safety And Security
The primary risks of AI, particularly in clinical trials, are inadvertent exposure of patient data and data generation bias.
The largest source of patient data is electronic medical records, which include laboratory results, imaging, disease-specific information, genetics and various other data. Patient recruitment for CTs begins with this data.
Throughout the study, patient outcomes are tracked. The possibility of security breaches or compromised privacy is always present, so anonymizing data is paramount. Protecting patient data is possible with currently available measures such as differential privacy, which adds “statistical noise,” and homomorphic encryption, which permits calculations on encrypted data.
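As a concrete illustration of the “statistical noise” idea, the Laplace mechanism adds calibrated noise to aggregate queries so that the presence or absence of any single patient cannot be inferred from the result. The cohort count and epsilon values below are illustrative; this is a minimal sketch, not a production privacy implementation.

```python
# Sketch of the Laplace mechanism behind differential privacy: add calibrated noise
# to an aggregate count so no single patient's presence can be inferred.

import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count, epsilon, sensitivity=1.0):
    # A count query changes by at most 1 when one patient is added or removed,
    # so noise is drawn from Laplace(scale = sensitivity / epsilon).
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_count = 128  # e.g., patients in a cohort matching a query
print(dp_count(true_count, epsilon=0.5))  # noisier, stronger privacy
print(dp_count(true_count, epsilon=5.0))  # closer to the true value, weaker privacy
```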
If AI is working on input data that is biased against particular patient profiles, whether in terms of gender, economic resources or geography, its algorithms may perpetuate this bias, which is unethical. Patient recruitment could be skewed, and certain profiles could be automatically rejected based on data bias. To ensure ethical and unbiased decision-making, fairness metrics such as disparate impact, equal opportunity and predictive equality need to be in place.
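The sketch below shows how these three checks can be computed on a recruitment model's decisions: disparate impact compares selection rates across groups, equal opportunity compares true positive rates and predictive equality compares false positive rates. The predictions, outcomes and group labels are toy values for illustration only.

```python
# Sketch of common fairness checks on a recruitment model's decisions (toy data).

import numpy as np

group  = np.array(["A", "A", "A", "B", "B", "B", "A", "B"])  # protected attribute
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # model: recruit (1) / reject (0)
y_true = np.array([1, 0, 1, 1, 1, 0, 0, 0])                  # actually suitable for the trial

def selection_rate(mask):
    return y_pred[mask].mean()

# Disparate impact: selection rate of one group relative to the other (closer to 1 is fairer).
di = selection_rate(group == "B") / selection_rate(group == "A")

# Equal opportunity: compare true positive rates across groups.
tpr = {g: y_pred[(group == g) & (y_true == 1)].mean() for g in ("A", "B")}

# Predictive equality: compare false positive rates across groups.
fpr = {g: y_pred[(group == g) & (y_true == 0)].mean() for g in ("A", "B")}

print(f"Disparate impact: {di:.2f}, TPR by group: {tpr}, FPR by group: {fpr}")
```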
AI: Here To Stay
Although AI is on its way to ubiquity, there is a need to regulate its use. In the clinical research industry, the hope is to achieve a more efficient process for delivering essential therapies and improving patient outcomes, all while being cognizant of the need for regulations that safeguard the security and privacy of patient data.