Traditionally, the clinical trial industry has evolved slowly (e.g., managing stacks and stacks of physical documents), punctuated by discrete moments of revolution (e.g., using social media and claims data to expedite connecting sponsor studies with potential investigators).
Our Kevin Rooney explores how artificial intelligence impacts clinical trials:
Interested in learning more about how to incorporate machine learning into your clinical trial? Please feel free to reach out to us:
Brad Haby, Sr. Director of Data Science
Kevin Rooney, Data Scientist
Nathan Smith, Data Scientist
AI stands as the instigator of the next revolution in clinical trial development. It is the automobile replacing the horse of decision making. The problem, however, isn’t buy-in on the impact of AI, but rather the waterfall of change management that needs to happen in response to AI-based decisions. Here you can see slow evolution confronting inevitable revolution, and why AI is only now entering the clinical trial conversation.
At this point, I would like to replace the term AI with machine learning (ML). The two are virtually synonymous with one distinction: To borrow from Zunger (2017), “artificial intelligence has historically been defined as whatever computers can’t do yet.” For the Data Science team at PRA, ML refers to the AI we are doing today.
How can machine learning help the drug development process?
At PRA, the Data Science team engages mostly on two levels. One is the clinical operations side. The other is real world evidence.
For an example of clinical operations, the Data Science team is engaged with the Project Management Office to predict the health of studies along their timeline (i.e., time, cost, and quality). An analogy might be the National Oceanic and Atmospheric Administration dropping sensors into the ocean and developing predictive ocean climate models to better assist tanker activity. For PRA, developing critical sensors in conjunction with historical data is the foundation for predicting clinical trial success and avoiding potential risk – all of which can speed up the timeline from protocol submission to FDA approval by identifying potential risk earlier and correcting sooner.
With real world evidence, we are able to tap into medical data across the United States, including electronic medical records, insurance claims data, and prescription data. With these combined data, PRA can compare patient populations, such as those with non-Hodgkin lymphoma, to current patients enrolled in clinical trials targeting the same disease. The benefit is early targeting of potential adverse events that may be correlated with certain unknown subgroup populations.
Additionally, ML can help drug development by predicting which patients are likely to transition to a particular disease (Rai, Hu, Rooney, Quach, & Quigley, forthcoming). With predicted disease transitions, drug makers can assess the market potential for new treatments, possibly getting them to market sooner. For potential patients, it means receiving beneficial treatments sooner after diagnosis.
Is ML a game changer?
Without a doubt. It is the hope of every decision maker to answer this one simple question: “How can I define the impact my decision will have on the future, with confidence?”
For “confidence,” if you have ever been forced to make a decision with a potentially large impact on those who rely on you, you know that having scientifically grounded confidence can help determine which option you choose.
For “defining the impact,” ML can show you things you might never have noticed as influential variables to your decision. This helps elevate the robustness of how a person decides to act. Defining impact needs to mature a bit in the minds of decision makers. They need to see this in action to understand that the pattern of variables might influence the future better than one or two discrete traditional variables. On the flip side, data scientists need to clearly hear what decision makers say are traditionally important variables to ensure these are tested in the model. In the end, comparisons between traditional ideas and ML-discovered ideas can show which interrelated variables are better at predicting. Even if models confirm the traditional, intuitive ways, they can affirm business traditions with the added clarity of scientific confidence and nuance.
What are some of the challenges?
There are two distinct challenges when leveraging ML:
- Having reasonably clean and informative data
- Actioning change management in response to ML results
Regarding the first issue, many people have the impression that the hard part of ML is writing the code. It is hard, but it is structured. You can rely on a large community of developers to help get you past issues when building an ML model. However, any ML model is only as good as the data available. If the data is not clean and informative, then the ML model will not be valuable.
As the second issue illustrates, the truly hard part is convincing people that the current collection of data is incomplete, inaccurate, or misleading, and that they need to change their habits of collecting data so that the ML models can run more effectively.
How might our industry confront these two challenges?
The value of ML needs to be clear at the individual level and needs to address how it will help each person today. This makes the future reward for changing current data collection habits greater than going about business as usual. At the enterprise level, setting up an internal “third-party” team dedicated to enforcing high-impact change provides the persistence needed to ensure that change is indeed happening.
Both issues illustrate the problem of bridging hope with machine learning. ML models often challenge the accuracy of people’s intuition and many applaud this challenge as a much-needed common source of truth to replace individual bias.
However, it’s a fallacy to assume that intuition is comparable with predictive ML models. Intuition is predictive in nature, but it draws on what we hope the future will be. ML models have no hope. They simply say, “given history, here’s the most likely future.” If a company’s success relies on the culture of its customers as well as the culture of its employees, then hope becomes critical to the health of the business. Because hope is always a response to current inadequacies, a business finds value in pivoting to fulfill the hope of its stakeholders. Without intuition, ML models would only echo past inadequacies as predictions for making decisions that support them. The resolution is to distinguish what is predicting (i.e., the ML models) from what is desired (i.e., the hope of those supporting the business). Taken together, a business can intuitively decide in what way current predicted trends should change or not.
What are some of the successes?
For our team, many of the successes come from combining information across business units to illuminate how things are interrelated and what their combined predicted trajectory is. Whether we are tying the clinical trial management system together with our planning and forecast data and site recruitment modeling, or tying real world evidence patient data to trial lab data, the result is a broader assessment of the impact the data might be suggesting.
One particular case study combined a variety of data, including bid data, resource availability data, and site recruitment data, to assist PRA’s resourcing group. Their question was, “Can we build a better forecasting model for potential resource demand early in the RFP process?”
Our answer was, “We can predict a bid’s win or loss with a total accuracy of greater than 75 percent” (Moss, Smith, & Juday, 2018). With better predictions, the resourcing group could plan at a greater level of detail than with their previous methods.
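As an illustration only, a bid win/loss classifier of this kind can be sketched as a simple logistic regression. Everything below is synthetic and hypothetical — the feature names, data, and outcome rule are invented for the sketch and are not the model described in Moss, Smith, & Juday (2018):

```python
import math
import random

def train_logistic(features, labels, lr=0.1, epochs=500):
    """Train a plain-Python logistic regression with per-sample gradient descent."""
    n = len(features[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of the log loss with respect to z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (win) if the predicted probability is at least 0.5, else 0 (loss)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical features per bid: [bid size, resource availability, recruitment score],
# each normalized to [0, 1].
random.seed(0)
bids = [[random.random(), random.random(), random.random()] for _ in range(200)]
# Synthetic ground truth: bids with high availability plus recruitment tend to win.
outcomes = [1 if x[1] + x[2] > 1.0 else 0 for x in bids]

w, b = train_logistic(bids, outcomes)
accuracy = sum(predict(w, b, x) == y for x, y in zip(bids, outcomes)) / len(outcomes)
```

On real bid data, the features, model family, and validation scheme would of course be chosen and evaluated against held-out data rather than training accuracy.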
Another case study came from the study start-up team, which asked, “Can we use historical clearance rates on regulatory documents and clinical trial agreements to predict likely over burn in hours for each current study?”
By combining three separate data sources in our model, we were able to produce predictions for over burn with accuracies of 85% and 80%, respectively, for regulatory documents and clinical trial agreements on any given study (Lockee, Wise, & Grissom, 2017). By showing the hours likely to over burn, along with the strength of accuracy for each study, we enabled the study start-up team to easily see which studies were high in hours and high in predictive strength. Our ML model quickly brought attention to the studies most likely to over burn by the greatest amount.
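To make the idea of joining sources concrete, here is a minimal sketch of combining three study-level data sources and ranking studies by a toy over-burn estimate. The study IDs, clearance rates, and scoring rule are all invented for illustration and bear no relation to the model in the case study:

```python
# Three hypothetical data sources, each keyed by study ID.
reg_doc_clearance = {"S1": 0.92, "S2": 0.55, "S3": 0.71}  # regulatory document clearance rate
cta_clearance     = {"S1": 0.88, "S2": 0.48, "S3": 0.65}  # clinical trial agreement clearance rate
budgeted_hours    = {"S1": 1200, "S2": 900,  "S3": 1500}  # planned hours per study

def predict_over_burn(study_id):
    """Toy rule: slower document clearance implies more hours over budget."""
    slowness = 2 - reg_doc_clearance[study_id] - cta_clearance[study_id]
    return round(slowness * 0.25 * budgeted_hours[study_id])

# Rank studies so the ones most likely to over burn surface first.
ranked = sorted(reg_doc_clearance, key=predict_over_burn, reverse=True)
```

The point of the sketch is the join-and-rank pattern: once disparate sources share a key, even a simple score can prioritize attention across studies.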
What is the impact of ML? What will it mean for researchers and ultimately patients?
By its nature, ML embodies the search for efficiency, to understand what is happening in the world and suggest what is likely to happen. The impact is the reduction of inefficiency. In the clinical trial world that applies to reduced study timelines, more precise patient engagement, and better disease comprehension.
One efficiency we addressed on a human abuse study was decreasing the screen fail rate by allowing our ML models to suggest adjusted analyte ranges when defining the inclusion criteria for patients. For instance, if the protocol from our sponsor provides an albumin serum range of 3.4 – 5.5 g/dL, our ML model might suggest adjusting that range to 4.3 – 5.1 g/dL in order to hit the sponsor’s screen success rate of 78% (Webster et al., 2017).
This is just one instance. The ML model can handle multiple analytes providing best case ranges in combination to maximize the screen success rate. Ultimately, the therapeutic expert and the sponsor need to make the final decision on what ranges are necessary for the goals of the protocol, but the ML model gives alternative parameters for them to consider in order to achieve a specific screen rate goal.
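As a simplified, single-analyte illustration of that idea — the actual approach handles multiple analytes in combination — a search for a candidate range that meets a target screen success rate might look like the following. All values here are synthetic; this is not the model from Webster et al. (2017):

```python
import random

def screen_success_rate(values, low, high):
    """Fraction of historical patient values that would pass a [low, high] criterion."""
    return sum(low <= v <= high for v in values) / len(values)

def suggest_range(values, target_rate, step=0.1):
    """Grid-search the tightest (low, high) range whose pass rate meets target_rate."""
    lo_bound, hi_bound = min(values), max(values)
    best = None  # (width, low, high)
    low = lo_bound
    while low <= hi_bound:
        high = low
        while high <= hi_bound:
            if screen_success_rate(values, low, high) >= target_rate:
                if best is None or high - low < best[0]:
                    best = (high - low, low, high)
                break  # widening further only loosens this candidate
            high += step
        low += step
    return (best[1], best[2]) if best else None

# Synthetic albumin serum values (g/dL) for previously screened patients.
random.seed(1)
albumin = [round(random.gauss(4.4, 0.4), 2) for _ in range(500)]

# Ask for a range that would have passed at least 78% of these patients.
suggested = suggest_range(albumin, target_rate=0.78)
```

A production version would optimize over several analytes jointly and, as the text notes, the therapeutic expert and sponsor would still decide which ranges fit the protocol’s goals.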
For the patient considering participation in a study, knowing that doctors have greater resources at hand to track, assess, and predict likely outcomes during the trial should be reassuring. In my opinion, it’s becoming increasingly unfair to expect any single physician to understand, in real time, the complex shifts in drug development and to efficiently assess all available supportive data. I believe ML does not take away the doctor’s ability to question their engagement with patients. Rather, ML allows doctors to assess their questions more robustly by leveraging large data sources, where ML can illustrate trends and potential predictions so doctors may better engage their patients in real time. Again, returning to the car and horse analogy, the goal is still to get from point A to point B, but the speed, comfort, and safety of getting there are greatly increased.
What is PRA currently doing in this space? Vision for the future?
I can tell you from my vantage point that the revolutionary moment with ML has already begun at PRA. The Data Science team is executing on a multitude of requests coming from the highest levels of PRA to build ML models. We are currently building out these prototypes, with a variety of them set to run in production within months.
The vision for the future is to create greater collaboration between decision makers and data scientists to increase the impact and confidence of predicting business objectives. Data science teams need to be included at the outset of specifying business objectives so that they may guide how data is collected for greater predictive power. My suggestion to decision makers is to consider what their highest level of impact could be if they only had better confidence in the way information is trending. My suggestion to data science teams is to help decision makers connect the dots across business units for broader impact. The win-win happens as high impact change is felt across studies, sponsors, and the company rather than at the individual or project level.
By collaborating more closely and earlier on, we can build the bridge between the hope of business stakeholders and our ability to use ML to increase the impact and confidence for their objectives.