Researchers find that a machine learning system could have saved more than one million dollars and prevented hundreds, if not thousands, of adverse drug events
Support for artificial intelligence (AI) and machine learning (ML) in healthcare has been mixed among anatomic pathologists and clinical laboratory leaders. Nevertheless, there’s increasing evidence that diagnostic systems based on AI and ML can be as accurate or more accurate at detecting disease than systems without them.
Dark Daily has covered the development of artificial intelligence and machine learning systems and their ability to accurately detect disease in many e-briefings over the years. Now, a recent study conducted at Brigham and Women’s Hospital (BWH) and Massachusetts General Hospital (MGH) suggests machine learning can be more accurate than existing clinical decision support (CDS) systems at detecting prescription medication errors as well.
The study was partially retrospective: the researchers compiled past alerts generated by the CDS systems at BWH and MGH between 2009 and 2011 and added them to alerts generated during the active part of the study, which ran from January 1, 2012, to December 31, 2013, for a total of five years’ worth of CDS alerts.
They then sent the same patient-encounter data that generated those CDS alerts to a machine learning platform called MedAware, an AI-enabled software system developed in Ra’anana, Israel.
MedAware was created for the “identification and prevention
of prescription errors and adverse drug effects,” notes the study, which goes
on to state, “This system identifies medication issues based on machine
learning using a set of algorithms with different complexity levels, ranging
from statistical analysis to deep learning with neural networks. Different
algorithms are used for different types of medication errors. The data elements
used by the algorithms include demographics, encounters, lab test results,
vital signs, medications, diagnosis, and procedures.”
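To make the description above concrete, here is a minimal Python sketch of the simplest end of that spectrum: a purely statistical “clinical outlier” check that flags an ordered dose far outside what comparable patients have received. MedAware’s actual algorithms are proprietary and far more sophisticated; the function name, reference data, and threshold below are hypothetical.

```python
# Toy sketch of a statistical "clinical outlier" check, assuming we can pull
# doses given to comparable patients (same drug, similar diagnosis) from the
# EHR. All names and thresholds are hypothetical, not MedAware's.
from statistics import mean, stdev

def is_clinical_outlier(ordered_dose_mg, reference_doses_mg, z_threshold=3.0):
    """Return True if the ordered dose is a statistical outlier relative to
    doses given to comparable patients."""
    if len(reference_doses_mg) < 30:          # too little history to judge
        return False
    mu = mean(reference_doses_mg)
    sigma = stdev(reference_doses_mg)
    if sigma == 0:
        return ordered_dose_mg != mu
    return abs(ordered_dose_mg - mu) / sigma > z_threshold

# Example: 500 mg ordered where comparable patients received roughly 50 mg
history = [45, 50, 55, 48, 52, 50, 47, 53, 49, 51] * 3   # 30 reference doses
print(is_clinical_outlier(500, history))                  # True -> raise an alert
```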
The researchers then compared the alerts produced by MedAware to the existing CDS alerts from that five-year period. The results were astonishing.
According to the study:
“68.2% of the alerts generated were unique to
the MedAware system and not generated by the institutions’ CDS alerting system.
“Clinical outlier alerts were the type least
likely to be generated by the institutions’ CDS—99.2% of these alerts were
unique to the MedAware system.
“The largest overlap was with dosage alerts,
with only 10.6% unique to the MedAware system.
“68% of the time-dependent alerts were unique to
the MedAware system.”
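For readers curious how overlap figures like those above are derived, the sketch below shows one way to compute the share of alerts unique to one system, assuming each alert can be keyed by encounter, alert type, and medication. The key fields are hypothetical; the study does not describe its matching logic in the passage quoted here.

```python
# Minimal sketch of an alert-overlap comparison between two systems.
# Each alert is keyed by (encounter ID, alert type, medication) -- a
# hypothetical matching scheme chosen only for illustration.
def percent_unique_to_medaware(medaware_alerts, cds_alerts):
    """Share of MedAware alerts with no matching alert from the legacy CDS."""
    medaware, cds = set(medaware_alerts), set(cds_alerts)
    unique = medaware - cds
    return 100.0 * len(unique) / len(medaware)

medaware = {("enc-1", "dosage", "warfarin"),
            ("enc-2", "clinical_outlier", "metformin"),
            ("enc-3", "time_dependent", "digoxin")}
cds      = {("enc-1", "dosage", "warfarin")}    # legacy system caught only this one
print(f"{percent_unique_to_medaware(medaware, cds):.1f}% unique")   # 66.7% unique
```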
Perhaps even more important were the results of the cost analysis, which found:
“The average cost of an adverse event
potentially prevented by an alert was $60.67 (range: $5.95–$115.40).
“The average adverse event cost per type of
alert varied from $14.58 (range: $2.99–$26.18) for dosage outliers to $19.14
(range: $1.86–$36.41) for clinical outliers and $66.47 (range: $6.47–$126.47)
for time-dependent alerts.”
The researchers concluded that, “Potential savings of $60.67 per alert was mainly derived from the prevention of ADEs [adverse drug events]. The prevention of ADEs could result in savings of $60.63 per alert, representing 99.93% of the total potential savings. Potential savings related to averted calls between pharmacists and clinicians could save an average of $0.047 per alert, representing 0.08% of the total potential savings.
“Extrapolating the results of the analysis to the 747,985
BWH and MGH patients who had at least one outpatient encounter during the
two-year study period from 2012 to 2013, the alerts that would have been fired
over five years of their clinical care by the machine learning medication
errors identification system could have resulted in potential savings of
$1,294,457.”
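The extrapolation itself is simple arithmetic: total potential savings equals the average savings per alert multiplied by the number of alerts the system would have fired. The quoted passage does not state that alert count, so the sketch below merely back-solves an implied figure from the two reported numbers, purely to illustrate the calculation; it is not a figure reported by the authors.

```python
# Back-of-the-envelope illustration of the extrapolation arithmetic.
# Both dollar figures come from the study; the alert count is implied.
AVG_SAVINGS_PER_ALERT = 60.67        # dollars per alert, from the study
TOTAL_POTENTIAL_SAVINGS = 1_294_457  # dollars over five years, from the study

implied_alert_count = TOTAL_POTENTIAL_SAVINGS / AVG_SAVINGS_PER_ALERT
print(f"Implied alerts over five years: ~{implied_alert_count:,.0f}")   # ~21,336
print(f"Check: {implied_alert_count * AVG_SAVINGS_PER_ALERT:,.0f}")     # ~1,294,457
```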
Savings of more than one million dollars, plus the prevention of patient harm or death from thousands of adverse drug events, make a strong argument for machine learning platforms in diagnostics and prescription drug monitoring.
Researchers Say Current Clinical Decision Support Systems Are Limited
Machine learning is not the same as artificial intelligence. ML is a “discipline of AI” that aims at “enhancing accuracy,” while AI’s objective is “increasing probability of success,” explained Tech Differences.
Healthcare needs the help. Prescription medication errors cause patient harm or deaths that cost more than $20 billion annually, states a Joint Commission news release.
CDS alerting systems are widely used to improve patient
safety and quality of care. However, the BWH-MGH researchers say the current
CDS systems “have a variety of limitations.” According to the study:
“One limitation is that current CDS systems are rule-based and can thus identify only the medication errors that have been previously identified and programmed into their alerting logic.
“Further, most have high alerting rates with many false positives, resulting in alert fatigue.”
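The first limitation is easy to see in code: a rule-based checker can fire only on errors someone has anticipated and encoded. The toy sketch below uses hypothetical drugs and dose ranges to show how an unprogrammed error passes silently.

```python
# Toy sketch of a rule-based dose check. The rule table is hand-maintained,
# so anything not in it is never flagged. Drugs and ranges are hypothetical.
DOSE_RULES_MG = {                     # drug -> (min daily dose, max daily dose)
    "warfarin":  (1, 10),
    "metformin": (500, 2550),
}

def rule_based_dose_alert(drug, daily_dose_mg):
    """Fire an alert only if an explicit rule exists and is violated."""
    rule = DOSE_RULES_MG.get(drug)
    if rule is None:
        return None                    # no rule -> no alert, even if the order is dangerous
    low, high = rule
    return "dose out of range" if not (low <= daily_dose_mg <= high) else None

print(rule_based_dose_alert("warfarin", 50))    # "dose out of range" (covered by a rule)
print(rule_based_dose_alert("digoxin", 5.0))    # None -> error type never programmed, silently missed
```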
Commenting on the value of adding machine learning
medication alerts software to existing CDS hospital systems, the BWH-MGH
researchers wrote, “This kind of approach can complement traditional rule-based
decision support, because it is likely to find additional errors that would not
be identified by usual rule-based approaches.”
However, they concluded, “The true value of such alerts is
highly contingent on whether and how clinicians respond to such alerts and
their potential to prevent actual patient harm.”
Future research based on real-time data is needed before machine
learning systems will be ready for use in clinical settings, HealthITAnalytics
noted.
However, medical laboratory leaders and pathologists will
want to keep an eye on developments in machine learning and artificial
intelligence that help physicians reduce medication errors and adverse drug
events. Implementation of AI-ML systems in healthcare will certainly affect
clinical laboratory workflows.
Clinical laboratories working with AI should be aware of ethical challenges being pointed out by industry experts and legal authorities
Experts are voicing concerns that using artificial
intelligence (AI) in healthcare could present ethical challenges that need
to be addressed. They say databases and algorithms may introduce bias into the
diagnostic process, and that AI may not perform as intended, posing a potential
for patient harm.
If true, the issues raised by these experts would have major
implications for how clinical
laboratories and anatomic
pathology groups might use artificial intelligence. For that reason,
medical laboratory executives and pathologists should be aware of possible
drawbacks to the use of AI and machine-learning
algorithms in the diagnostic process.
Is AI Underperforming?
AI’s ability to improve diagnoses, precisely target
therapies, and leverage healthcare data is predicted to be a boon to precision medicine and personalized
healthcare.
For example, Accenture
(NYSE:ACN) says that hospitals will spend $6.6 billion on AI by 2021. This
represents an annual growth rate of 40%, according
to a report from the Dublin, Ireland-based consulting firm, which states,
“when combined, key clinical health AI applications can potentially create $150
billion in annual savings for the United States healthcare economy by 2026.”
But are healthcare providers too quick to adopt AI?
Accenture defines AI as a “constellation of technologies
from machine learning to natural
language processing that allows machines to sense, comprehend, act, and
learn.” However, some experts say AI is not performing as intended, and that it
introduces biases in healthcare worthy of investigation.
What Goes in Limits What Comes Out
Could machine learning lead to machine decision-making that puts patients at risk? Some legal authorities say yes, especially when computer algorithms are based on limited data sources and questionable methods.
How can AI provide accurate medical insights for people when the information going into databases is limited in the first place? Pilar Ossorio, PhD, JD, Professor of Law and Bioethics at the University of Wisconsin-Madison, pointed to the lack of diversity in genomic data. “There are still large groups of people for whom we have almost no genomic data. This is another way in which the datasets that you might use to train your algorithms are going to exclude certain groups of people altogether,” she told Health Data Management (HDM).
She also sounded the alarm about decisions affecting women’s health that are driven by data from studies in which women have been “under-treated compared with men.”
“This leads to poor treatment, and that’s going to be
reflected in essentially all healthcare data that people are using when they
train their algorithms,” Ossorio said during a Machine Learning for Healthcare (MLHC) conference
covered by HDM.
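Ossorio’s point can be illustrated with a small synthetic experiment: a model trained on data in which one group is nearly absent can perform markedly worse for that group. The sketch below is purely illustrative; the groups, lab values, and simple threshold “model” are invented for the example and are not drawn from any real dataset.

```python
# Synthetic demonstration of training-data bias: group B is nearly absent
# from the training set, and its healthy baseline differs from group A's,
# so the learned threshold misclassifies many healthy group-B patients.
import numpy as np

rng = np.random.default_rng(0)

def sample(group, diseased, n):
    """Synthetic lab values: group B's healthy baseline sits higher than group A's."""
    base = 100 if group == "A" else 130
    mean = base + (40 if diseased else 0)
    return rng.normal(mean, 10, n)

# Training data: 95% group A, 5% group B
train_vals = np.concatenate([sample("A", False, 475), sample("A", True, 475),
                             sample("B", False, 25),  sample("B", True, 25)])
train_sick = np.concatenate([np.zeros(475), np.ones(475), np.zeros(25), np.ones(25)])

# "Model": a threshold halfway between the class means seen in training
threshold = (train_vals[train_sick == 0].mean() + train_vals[train_sick == 1].mean()) / 2

def accuracy(group):
    healthy, sick = sample(group, False, 1000), sample(group, True, 1000)
    correct = (healthy < threshold).sum() + (sick >= threshold).sum()
    return correct / 2000

print(f"threshold ~{threshold:.0f}")
print(f"accuracy, group A: {accuracy('A'):.2%}")   # high
print(f"accuracy, group B: {accuracy('B'):.2%}")   # markedly lower: healthy B patients get flagged
```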
How Bias Happens
Bias can enter healthcare data in three forms: by humans, by
design, and in its usage. That’s according to David Magnus, PhD, Director
of the Stanford Center for
Biomedical Ethics (SCBE) and Senior Author of a paper published in the New England
Journal of Medicine (NEJM) titled, “Implementing Machine
Learning in Health Care—Addressing Ethical Challenges.”
The paper’s authors wrote, “Physician-researchers are
predicting that familiarity with machine-learning tools for analyzing big data
will be a fundamental requirement for the next generation of physicians and
that algorithms might soon rival or replace physicians in fields that involve
close scrutiny of images, such as radiology and anatomical pathology.”
In a news
release, Magnus said, “You can easily imagine that the algorithms being
built into the healthcare system might be reflective of different, conflicting
interests. What if the algorithm is designed around the goal of making money?
What if different treatment decisions about patients are made depending on
insurance status or their ability to pay?”
In addition to the possibility of algorithm bias, the
authors of the NEJM paper have other concerns about AI affecting
healthcare providers:
“Physicians must adequately understand how
algorithms are created, critically assess the source of the data used to create
the statistical models designed to predict outcomes, understand how the models
function and guard against becoming overly dependent on them.
“Data gathered about patient health, diagnostics,
and outcomes become part of the ‘collective knowledge’ of published literature
and information collected by healthcare systems and might be used without
regard for clinical experience and the human aspect of patient care.
“Machine-learning-based clinical guidance may
introduce a third-party ‘actor’ into the physician-patient relationship, challenging
the dynamics of responsibility in the relationship and the expectation of
confidentiality.”
Acknowledge Healthcare’s Differences
Still, the Stanford researchers acknowledge that AI can benefit patients and that healthcare leaders can learn from other industries, such as car companies, that have already test-driven AI.
“Artificial intelligence will be pervasive in healthcare in a
few years,” said
Nigam Shah, PhD, co-author of the NEJM paper and Associate Professor of Medicine at Stanford, in the news release. He added that healthcare leaders need to be aware of the “pitfalls” that have happened in other industries and be cognizant of data.
“Be careful about knowing the data from which you learn,” he
warned.
AI’s ultimate role in healthcare diagnostics is not yet fully known. Nevertheless, it behooves clinical laboratory leaders and anatomic pathologists who are considering using AI to address issues of quality and accuracy in the lab data they generate, and to be aware of potential biases in the data collection process.
Softened FDA regulation of both clinical-decision-support and patient-decision-support software applications could present opportunities for clinical laboratory developers of such tools
Physician decision-support software utilizes medical laboratory test data as a significant part of a full dataset used to guide caregivers. Thus, if the FDA makes it easier for developers to get regulatory clearance for these types of products, that could positively impact medical labs’ ability to service their client physicians.
Additionally, clinical pathologists have unique training in diagnosing diseases and in understanding the capabilities and limitations of medical laboratory tests that support physicians’ diagnostic and treatment decisions. For that reason, FDA actions that make it easier for developers to create software algorithms incorporating clinical laboratory data and anatomic pathology images, with the goal of improving diagnoses, treatment decisions, and patient monitoring, have the potential to bring great benefit to the nation’s medical laboratories.
FDA Clarifies Role in Regulating CDS/PDS Applications
The new guidelines clarified items specified in the 21st Century Cures Act, which was enacted by Congress in December 2016. This Act authorized $6.3 billion in funding for the discovery, development, and delivery of advanced, state-of-the-art medical cures.
“Today, we’re announcing three new guidances—two draft and one final—that address, in part, important provisions of the 21st Century Cures Act, that offer additional clarity about where the FDA sees its role in digital health, and importantly, where we don’t see a need for FDA involvement,” noted Scott Gottlieb, MD, FDA Commissioner of Food and Drugs, in a statement. “We’ve taken the instructions Congress gave us under the Cures Act and [we] are building on these provisions to make sure that we’re adopting the full spirit of the goals we were entrusted with by Congress.”
Helping Doctors’ Decision-Making
The first guideline concerns clinical decision support systems that are designed to help doctors make data-driven decisions about patient care. The new guidelines make it easier for software developers to get regulatory clearance, which, the FDA hopes, will spark innovation and make regulation more efficient.
“CDS has many uses, including helping providers, and ultimately patients, identify the most appropriate treatment plan for their disease or condition,” Gottlieb said in the FDA’s statement. “For example, such software can include programs that compare patient-specific signs, symptoms, or results with available clinical guidelines to recommend diagnostic tests, investigations or therapy.
“This type of technology has the potential to enable providers and patients to fully leverage digital tools to improve decision making,” Gottlieb continued. “We want to encourage developers to create, adapt, and expand the functionalities of their software to aid providers in diagnosing and treating old and new medical maladies.”
Identifying Digital Health Applications That Receive/Don’t Receive FDA Oversight
The second guideline discusses and delineates which digital health applications are considered low risk and, thus, will not fall under FDA regulations.
Products that are not intended to be used for the diagnosis, cure, mitigation, prevention, or treatment of a condition will not be regulated by the FDA. These technologies are not considered medical devices and may include gadgets such as weight management and mindfulness tools. They can provide value to consumers and the healthcare industry while posing a low risk to patients.
“Similarly, the CDS draft guidance also proposes to not enforce regulatory requirements for lower-risk decision support software that’s intended to be used by patients or caregivers—known as patient-decision-support software (PDS)—when such software allows a patient or a caregiver to independently review the basis of the treatment recommendation,” Gottlieb noted in the statement.
Scott Gottlieb, MD (above), FDA Commissioner of Food and Drugs, noted in a statement, “We believe our proposals for regulating CDS and PDS not only fulfill the provisions of the Cures Act, but also strike the right balance between ensuring patient safety and promoting innovation.” Clinical laboratories may find opportunities to work with CDS/PDS developers and support their client physicians. (Photo copyright: FDA.)
However, products that are intended to be used for the diagnosis, cure, mitigation, prevention, or treatment of a condition are considered medical devices and will fall under FDA regulations.
“The FDA will continue to enforce oversight of software programs that are intended to process or analyze medical images, signals from in vitro diagnostic devices, or patterns acquired from a processor like an electrocardiogram that use analytical functionalities to make treatment recommendations, as these remain medical devices under the Cures Act,” noted Gottlieb.
Items such as mobile apps that are utilized to maintain and encourage a healthy lifestyle are not deemed to be medical devices and will fall outside FDA regulations. The guidelines also defined that Office of the National Coordinator for Health Information Technology (ONC)-certified electronic health record (EHR) systems are not medical devices and, thus, will not be regulated by the FDA.
Software as a Medical Device Gets FDA Oversight
The third guidance document deals with the assessment of the safety, performance, and effectiveness of Software as a Medical Device (SaMD).
“This final guidance provides globally recognized principles for analyzing and assessing SaMD, based on the overall risk of the product. The agency’s adoption of these principles provides us with an initial framework when further developing our own specific regulatory approaches and expectations for regulatory oversight and is another important piece in our overarching policy framework for digital health,” Gottlieb noted in the statement.
SaMD is defined by the International Medical Device Regulators Forum (IMDRF) as “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.”
Gottlieb noted that the three important guidance documents being issued would continue to expand the FDA’s efforts to encourage innovation in the ever-changing field of digital health. “Our aim is to provide more clarity on, and innovative changes to, our risk-based approach to digital health products, so that innovators know where they stand relative to the FDA’s regulatory framework. Our interpretation of the Cures Act is creating a bright line to define those areas where we do not require premarket review,” he concluded.
What remains to be seen is how the new FDA regulations will impact clinical laboratories and anatomic pathology groups. With the expanding interest in artificial intelligence (AI) and self-learning software systems, healthcare futurists are predicting a rosy future for informatics products that incorporate these technologies. Hopefully, with these new guidelines in place, innovative clinical laboratories will have the opportunity to develop new digital products for their clients.