As patients struggle with the rising costs of their medications, life science companies battle the ever-growing expense of conducting clinical trials. Short of radical changes in regulatory requirements, the only way towards meaningful cost savings is to identify and implement efficiencies in the way we conduct research.
eSource, data initially recorded in electronic format, is one such efficiency solution that has been generating a lot of interest. There are many flavors of eSource, but the variation with the most potential is the use of Electronic Health Records (EHR) data, an initiative that has only recently become possible with the move from paper charts to electronic records.
The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 allocated approximately $27 billion to incentivize the adoption of Electronic Health Record (EHR) systems. It also mandated that EHR systems include key functionality such as interoperability. Though the road to adoption has been bumpy and ease-of-use has significant room for improvement, EHR systems are now widely implemented across the US, from major hospitals to small physician practices. Surveys in 2015 showed EHR systems in use at 96% of non-Federal acute care hospitals and by 60% of independent physicians.
To facilitate communication between different systems, you need standards. The standards that enable data exchange between EHRs and clinical data systems were developed through a joint collaboration of Integrating the Healthcare Enterprise (IHE), the Clinical Data Interchange Standards Consortium (CDISC), and the FDA. These standards are grouped together under the CDISC Healthcare Link Initiative. (Nextrials is the only CDISC Registered Solution Provider for the Healthcare Link Initiative.)
The FDA published the guidance document Electronic Source Data in Clinical Investigations in 2013, and the agency's enthusiastic support for eSource is evident in the document's introduction: "In an effort to streamline and modernize clinical investigations this guidance promotes capturing source data in electronic form…." The FDA followed up with a draft guidance in 2016, Use of Electronic Health Record Data in Clinical Investigations.
So, all the pieces are in place. The required infrastructure, EHR systems, is widely adopted. The enabling standards have been written and validated. The regulatory guidance and support have been established. And the necessary motivation is there - sponsors want to save money, sites want to reduce workload, and the FDA wants to streamline research. What are we waiting for? I was at a strategy session last year where former FDA Commissioner Dr. Robert Califf asked that very question.
People have offered multiple excuses for the slow adoption, but the root cause is really the conservative nature of our industry. Companies want proof it will work before they take a risk. Will eSource actually deliver the efficiencies promised by this new paradigm?
There have been multiple eSource pilots conducted over the years. (Nextrials ran one of the first pilots back in 2009, with more than 75% of data obtained directly from the EHR systems of four sites and positive experiences reported by the study coordinators.) But these pilots have not rigorously documented the efficiencies of eSource. Our industry, like the proverbial hammer, expects everything to look like a nail: we want to see results documented in a controlled trial against the established standard.
Well, the Office of Informatics at the Duke University School of Medicine has done just that! In an article published earlier this year in the International Journal of Medical Informatics, they report on the outcome of a controlled study that compared eSource with the conventional data flow of abstracting EHR data and re-entering the data into an Electronic Data Capture (EDC) system. The results are impressive:
- The eSource workflow resulted in a 37% reduction in time compared to the conventional workflow
- The eSource workflow required one less full-time employee
- eSource data had a 0% error rate compared to a 9% error rate seen with manual entry
Which brings us back to the question posed by Dr. Califf: What are we waiting for?