News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel

Machine Learning System Catches Two-Thirds More Prescription Medication Errors than Existing Clinical Decision Support Systems at Two Major Hospitals

Researchers find that a machine learning system could have saved the hospitals more than one million dollars and prevented hundreds, if not thousands, of adverse drug events

Support for artificial intelligence (AI) and machine learning (ML) in healthcare has been mixed among anatomic pathologists and clinical laboratory leaders. Nevertheless, there’s increasing evidence that diagnostic systems based on AI and ML can be as accurate or more accurate at detecting disease than systems without them.

Dark Daily has covered the development of artificial intelligence and machine learning systems and their ability to accurately detect disease in many e-briefings over the years. Now, a recent study conducted at Brigham and Women’s Hospital (BWH) and Massachusetts General Hospital (MGH) suggests machine learning can be more accurate than existing clinical decision support (CDS) systems at detecting prescription medication errors as well.

The researchers published their findings in the Joint Commission Journal on Quality and Patient Safety, titled, “Using a Machine Learning System to Identify and Prevent Medication Prescribing Errors: A Clinical and Cost Analysis Evaluation.”

A Retrospective Study

The study was partially retrospective in that the researchers compiled past alerts generated by the CDS systems at BWH and MGH between 2009 and 2011 and added them to alerts generated during the active part of the study, which took place from January 1, 2012 to December 31, 2013, for a total of five years’ worth of CDS alerts.

They then sent the same patient-encounter data that generated those CDS alerts to a machine learning platform called MedAware, an AI-enabled software system developed in Ra’anana, Israel.

MedAware was created for the “identification and prevention of prescription errors and adverse drug effects,” notes the study, which goes on to state, “This system identifies medication issues based on machine learning using a set of algorithms with different complexity levels, ranging from statistical analysis to deep learning with neural networks. Different algorithms are used for different types of medication errors. The data elements used by the algorithms include demographics, encounters, lab test results, vital signs, medications, diagnosis, and procedures.”
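
MedAware’s algorithms are proprietary, so the study’s description is as close as outsiders get. Still, the “statistical analysis” end of its range maps onto a familiar pattern: flag an order that is a statistical outlier relative to clinically similar patients. The minimal Python sketch below illustrates that idea only; the function name, threshold, and logic are hypothetical and are not MedAware’s actual implementation.

```python
from statistics import mean, stdev

def outlier_dose_alert(new_dose_mg, peer_doses_mg, z_threshold=3.0):
    """Flag a prescribed dose that is a statistical outlier relative to doses
    given to clinically similar patients (same drug, comparable demographics,
    lab results, and diagnoses). Hypothetical illustration only."""
    if len(peer_doses_mg) < 30:        # too few peers for a stable estimate
        return False                   # abstain rather than alert
    mu, sigma = mean(peer_doses_mg), stdev(peer_doses_mg)
    if sigma == 0:
        return new_dose_mg != mu
    z = (new_dose_mg - mu) / sigma
    return abs(z) > z_threshold        # alert only on extreme deviations

# Example: a 500 mg order for a drug that similar patients receive at ~50 mg
peers = [48, 50, 52, 49, 51] * 6       # 30 peer doses clustered around 50 mg
print(outlier_dose_alert(500, peers))  # True -> raise an alert
```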

The researchers then compared the alerts produced by MedAware to the existing CDS alerts from that five-year period. The results were astonishing.

According to the study:

  • “68.2% of the alerts generated were unique to the MedAware system and not generated by the institutions’ CDS alerting system.
  • “Clinical outlier alerts were the type least likely to be generated by the institutions’ CDS—99.2% of these alerts were unique to the MedAware system.
  • “The largest overlap was with dosage alerts, with only 10.6% unique to the MedAware system.
  • “68% of the time-dependent alerts were unique to the MedAware system.”

Perhaps even more important were the results of the cost analysis, which found:

  • “The average cost of an adverse event potentially prevented by an alert was $60.67 (range: $5.95–$115.40).
  • “The average adverse event cost per type of alert varied from $14.58 (range: $2.99–$26.18) for dosage outliers to $19.14 (range: $1.86–$36.41) for clinical outliers and $66.47 (range: $6.47–$126.47) for time-dependent alerts.”

The researchers concluded that, “Potential savings of $60.67 per alert was mainly derived from the prevention of ADEs [adverse drug events]. The prevention of ADEs could result in savings of $60.63 per alert, representing 99.93% of the total potential savings. Potential savings related to averted calls between pharmacists and clinicians could save an average of $0.047 per alert, representing 0.08% of the total potential savings.

“Extrapolating the results of the analysis to the 747,985 BWH and MGH patients who had at least one outpatient encounter during the two-year study period from 2012 to 2013, the alerts that would have been fired over five years of their clinical care by the machine learning medication errors identification system could have resulted in potential savings of $1,294,457.”
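
Those figures are internally consistent, and they also imply an approximate alert volume the study does not state directly. A quick arithmetic check (the alert count in the last line is our inference, not a number the researchers report):

```python
# Per-alert savings reported in the BWH-MGH study
ade_savings  = 60.63    # from prevented adverse drug events (ADEs)
call_savings = 0.047    # from averted pharmacist-clinician phone calls

total_per_alert = ade_savings + call_savings
print(round(total_per_alert, 2))            # 60.68 -> consistent with the reported $60.67
print(round(100 * ade_savings / 60.67, 2))  # 99.93 -> ADEs drive nearly all savings

# Implied alert volume behind the $1,294,457 extrapolation (our inference,
# not a figure the study reports)
print(round(1_294_457 / 60.67))             # ~21,336 alerts over five years
```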

Savings of more than one million dollars plus the prevention of potential patient harm or deaths caused by thousands of adverse drug events is a strong argument for machine learning platforms in diagnostics and prescription drug monitoring.

“There’s huge promise for machine learning in healthcare. If clinicians use the technology on the front lines, it could lead to improved clinical decision support and new information at the point of care,” Raj Ratwani, PhD (above), Vice President of Scientific Affairs at MedStar Health Research Institute (MHRI), Director of MedStar Health’s National Center for Human Factors in Healthcare, and Associate Professor of Emergency Medicine at Georgetown University School of Medicine, told HealthITAnalytics. (Photo copyright: MedStar Institute for Innovation.)

Researchers Say Current Clinical Decision Support Systems are Limited

Machine learning is not the same as artificial intelligence. ML is a “discipline of AI” that aims at “enhancing accuracy,” while AI’s objective is “increasing probability of success,” explained Tech Differences.

Healthcare needs the help. Prescription medication errors cause patient harm or deaths that cost more than $20 billion annually, states a Joint Commission news release.

CDS alerting systems are widely used to improve patient safety and quality of care. However, the BWH-MGH researchers say the current CDS systems “have a variety of limitations.” According to the study:

  • “One limitation is that current CDS systems are rule-based and can thus identify only the medication errors that have been previously identified and programmed into their alerting logic.
  • “Further, most have high alerting rates with many false positives, resulting in alert fatigue.”
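
The first of these limitations is easy to see in code form: a rule-based system can fire only on error patterns someone anticipated and encoded in advance. Below is a minimal, hypothetical sketch of that gap; the drug names and dose ranges are illustrative, not drawn from any real formulary or CDS product.

```python
# Rule-based CDS: fires only on errors someone anticipated and encoded.
DOSE_RULES = {"warfarin": (1.0, 10.0)}     # hypothetical allowed mg range

def rule_based_alert(drug, dose_mg):
    if drug not in DOSE_RULES:
        return False                       # no rule for this drug: no alert
    low, high = DOSE_RULES[drug]
    return not (low <= dose_mg <= high)

print(rule_based_alert("warfarin", 50.0))     # True: a rule exists and fires
print(rule_based_alert("metformin", 50_000))  # False: no rule, error missed
# An ML system instead asks "is this order unusual for patients like this
# one?" and so can flag errors no rule writer anticipated.
```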

The second limitation compounds the first: alert fatigue leads to physician burnout, which is a big problem in large healthcare systems using multiple health information technology (HIT) systems that generate large numbers of alerts, such as electronic health record (EHR) systems, hospital information systems (HIS), laboratory information systems (LIS), and others.

Commenting on the value of adding machine learning medication alerts software to existing CDS hospital systems, the BWH-MGH researchers wrote, “This kind of approach can complement traditional rule-based decision support, because it is likely to find additional errors that would not be identified by usual rule-based approaches.”

However, they concluded, “The true value of such alerts is highly contingent on whether and how clinicians respond to such alerts and their potential to prevent actual patient harm.”

Future research based on real-time data is needed before machine learning systems will be ready for use in clinical settings, HealthITAnalytics noted. 

However, medical laboratory leaders and pathologists will want to keep an eye on developments in machine learning and artificial intelligence that help physicians reduce medication errors and adverse drug events. Implementation of AI-ML systems in healthcare will certainly affect clinical laboratory workflows.

—Donna Marie Pocius

Related Information:

AI and Healthcare: A Giant Opportunity

Using a Machine Learning System to Identify and Prevent Medication Prescribing Errors: A Clinical and Cost Analysis Evaluation

Machine Learning System Accurately Identifies Medication Errors

Journal Study Evaluates Success of Automated Machine Learning System to Prevent Medication Prescribing Errors

Differences Between Machine Learning and Artificial Intelligence

Machining a New Layer of Drug Safety

Harvard and Beth Israel Deaconess Researchers Use Machine Learning Software Plus Human Intelligence to Improve Accuracy and Speed of Cancer Diagnoses

XPRIZE Founder Diamandis Predicts Tech Giants Amazon, Apple, and Google Will Be Doctors of The Future

Hospitals Worldwide Are Deploying Artificial Intelligence and Predictive Analytics Systems for Early Detection of Sepsis in a Trend That Could Help Clinical Laboratories, Microbiologists

Could Biases in Artificial Intelligence Databases Present Health Risks to Patients and Financial Risks to Healthcare Providers, including Medical Laboratories?

Clinical laboratories working with AI should be aware of ethical challenges being pointed out by industry experts and legal authorities

Experts are voicing concerns that using artificial intelligence (AI) in healthcare could present ethical challenges that need to be addressed. They say databases and algorithms may introduce bias into the diagnostic process, and that AI may not perform as intended, posing a potential for patient harm.

If true, the issues raised by these experts would have major implications for how clinical laboratories and anatomic pathology groups might use artificial intelligence. For that reason, medical laboratory executives and pathologists should be aware of possible drawbacks to the use of AI and machine-learning algorithms in the diagnostic process.

Is AI Underperforming?

AI’s ability to improve diagnoses, precisely target therapies, and leverage healthcare data is predicted to be a boon to precision medicine and personalized healthcare.

For example, Accenture (NYSE:ACN) says that hospitals will spend $6.6 billion on AI by 2021. This represents an annual growth rate of 40%, according to a report from the Dublin, Ireland-based consulting firm, which states, “when combined, key clinical health AI applications can potentially create $150 billion in annual savings for the United States healthcare economy by 2026.”

But are healthcare providers too quick to adopt AI?

Accenture defines AI as a “constellation of technologies from machine learning to natural language processing that allows machines to sense, comprehend, act, and learn.” However, some experts say AI is not performing as intended, and that it introduces biases in healthcare worthy of investigation.

Keith Dreyer, DO, PhD, is Chief Data Science Officer at Partners Healthcare and Vice Chairman of Radiology at Massachusetts General Hospital (MGH). At a World Medical Innovation Forum on Artificial Intelligence covered by HealthITAnalytics, he said, “There are currently no measures to indicate that a result is biased or how much it might be biased. We need to explain the dataset these answers came from, how accurate we can expect them to be, where they work, and where they don’t work. When a number comes back, what does it really mean? What’s the difference between a seven and an eight or a two?” (Photo copyright: Healthcare in Europe.)

What Goes in Limits What Comes Out

Could machine learning lead to machine decision-making that puts patients at risk? Some legal authorities say yes, especially when computer algorithms are based on limited data sources and questionable methods.

Pilar Ossorio, PhD, JD, Professor of Law and Bioethics at the University of Wisconsin Law School (UW), told Health Data Management (HDM) that genomics databases, such as the Genome-Wide Association Studies (GWAS), house data predominantly about people of Northern European descent, and that could be a problem.

How can AI provide accurate medical insights for people when the information going into databases is limited in the first place? Ossorio pointed to lack of diversity in genomic data. “There are still large groups of people for whom we have almost no genomic data. This is another way in which the datasets that you might use to train your algorithms are going to exclude certain groups of people altogether,” she told HDM.

She also sounded the alarm about making decisions about women’s health when data driving them are based on studies where women have been “under-treated compared with men.”

“This leads to poor treatment, and that’s going to be reflected in essentially all healthcare data that people are using when they train their algorithms,” Ossorio said during a Machine Learning for Healthcare (MLHC) conference covered by HDM.
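
Ossorio’s concern can be demonstrated with a toy experiment: a model trained almost entirely on one group can look accurate overall while quietly failing the underrepresented group. The sketch below uses synthetic data and a deliberately simple one-threshold “model”; it illustrates the mechanism only and does not represent any real genomic dataset or algorithm.

```python
import random
random.seed(0)

# Synthetic cohort: the disease cutoff on a biomarker differs by group,
# but group B supplies only 5% of the training data.
def make_patients(n, cutoff):
    # returns (biomarker value, has_disease) pairs
    return [(x, x >= cutoff) for x in (random.uniform(0, 10) for _ in range(n))]

CUTOFF_A, CUTOFF_B = 5.0, 8.0
train = make_patients(950, CUTOFF_A) + make_patients(50, CUTOFF_B)

def accuracy(data, threshold):
    return sum((x >= threshold) == y for x, y in data) / len(data)

# "Training": grid-search the single threshold that best fits the pooled data.
learned = max((t / 10 for t in range(101)), key=lambda t: accuracy(train, t))

test_a = make_patients(1000, CUTOFF_A)
test_b = make_patients(1000, CUTOFF_B)
print(f"learned threshold: {learned:.1f}")                   # lands near group A's 5.0
print(f"group A accuracy: {accuracy(test_a, learned):.0%}")  # high
print(f"group B accuracy: {accuracy(test_b, learned):.0%}")  # substantially lower
```

Because group B contributes only 5% of the training data, the fitted threshold tracks group A, and group B pays the accuracy penalty, which is exactly the failure mode Ossorio describes for underrepresented populations.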

How Bias Happens 

Bias can enter healthcare data in three forms: by humans, by design, and in its usage. That’s according to David Magnus, PhD, Director of the Stanford Center for Biomedical Ethics (SCBE) and Senior Author of a paper published in the New England Journal of Medicine (NEJM) titled, “Implementing Machine Learning in Health Care—Addressing Ethical Challenges.”

The paper’s authors wrote, “Physician-researchers are predicting that familiarity with machine-learning tools for analyzing big data will be a fundamental requirement for the next generation of physicians and that algorithms might soon rival or replace physicians in fields that involve close scrutiny of images, such as radiology and anatomical pathology.”

In a news release, Magnus said, “You can easily imagine that the algorithms being built into the healthcare system might be reflective of different, conflicting interests. What if the algorithm is designed around the goal of making money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”

In addition to the possibility of algorithm bias, the authors of the NEJM paper have other concerns about AI affecting healthcare providers:

  • “Physicians must adequately understand how algorithms are created, critically assess the source of the data used to create the statistical models designed to predict outcomes, understand how the models function and guard against becoming overly dependent on them.
  • “Data gathered about patient health, diagnostics, and outcomes become part of the ‘collective knowledge’ of published literature and information collected by healthcare systems and might be used without regard for clinical experience and the human aspect of patient care.
  • “Machine-learning-based clinical guidance may introduce a third-party ‘actor’ into the physician-patient relationship, challenging the dynamics of responsibility in the relationship and the expectation of confidentiality.”

“We need to be cautious about caring for people based on what algorithms are showing us. The one thing people can do that machines can’t do is step aside from our ideas and evaluate them critically,” said Danton Char, MD, Lead Author and Assistant Professor of Anesthesiology, Perioperative, and Pain Medicine at Stanford, in the news release. “I think society has become very breathless in looking for quick answers,” he added. (Photo copyright: Stanford Medicine.)

Acknowledge Healthcare’s Differences

Still, the Stanford researchers acknowledge that AI can benefit patients, and that healthcare leaders can learn from other industries, such as car companies, that have already test-driven AI.

“Artificial intelligence will be pervasive in healthcare in a few years,” said Nigam Shah, PhD, co-author of the NEJM paper and Associate Professor of Medicine at Stanford, in the news release. He added that healthcare leaders need to be aware of the “pitfalls” that have happened in other industries and be cognizant of data.

“Be careful about knowing the data from which you learn,” he warned.

AI’s ultimate role in healthcare diagnostics is not yet fully known. Nevertheless, it behooves clinical laboratory leaders and anatomic pathologists who are considering using AI to address issues of quality and accuracy in the lab data they generate, and to be aware of potential biases in the data collection process.

—Donna Marie Pocius

Related Information:

Accenture: Healthcare Artificial Intelligence

Could Artificial Intelligence Do More Harm than Good in Healthcare?

AI Machine Learning Algorithms Are Susceptible to Biased Data

Implementing Machine Learning in Healthcare—Addressing Ethical Challenges

Researchers Say Use of AI in Medicine Raises Ethical Questions

CMS Missed 96 Hospitals with Suspected HAI Reporting Due to Limited Use of Analytics, OIG Report Reveals

OIG suggests better use of analytics by CMS could prevent gaming of the system by providers; clinical laboratories can help through test utilization management technology

It may come as a surprise to many hospital-based pathologists and clinical laboratory managers that the Centers for Medicare and Medicaid Services (CMS) has reason to suspect that some hospitals are “gaming” the system in how they report hospital-acquired infections (HAIs).

In 2015, CMS implemented the Hospital-Acquired Condition Reduction Program (HACRP) as part of the Patient Protection and Affordable Care Act (ACA). HACRP incentivizes hospitals to lower their HAI rates by adjusting reimbursements according to the inpatient quality reporting (Hospital IQR) data provided by the healthcare providers. Hospital IQR data is the basis on which CMS validates a hospital’s HAI rate (among other things CMS is tracking) to determine the hospital’s reimbursement rate for that year.

However, an April 2017 report from the Office of Inspector General (OIG) at the US Department of Health and Human Services noted that CMS was not doing enough to identify and target hospitals with abnormal reporting of HAIs.

The OIG reported:

  • In 2016, CMS met its regulatory requirement to validate inpatient quality reporting data; and
  • It reviewed data from 400 randomly selected hospitals, as well as 49 hospitals targeted for failing to report half of their HAIs or for low scores in the prior year’s validation process.

However, OIG also reported that CMS did not include hospitals that displayed abnormal data patterns in its targeted sample. Targeting those hospitals, according to the OIG, could identify inaccurate reporting.

CMS staff had identified 96 hospitals with aberrant data patterns, but did not target them for validation—even though the agency can select up to 200 targeted hospitals for review, Becker’s Hospital Review pointed out.

Dollars More Important than Deaths

According to the OIG report, Medicare excluded from its investigation dozens of hospitals with suspect HAI reporting. This is odd, since CMS and the Centers for Disease Control and Prevention (CDC) apparently are aware that some healthcare providers have manipulated data to improve their quality measure scores and thus increase their reimbursement rates.

“Collecting and analyzing quality data is increasingly central to Medicare programs that link payments to quality and value. Therefore, it is important for CMS to ensure that hospitals are not gaming [manipulating data to improve scores] their reporting of quality data,” the OIG report noted.

“There are greater requirements for what a company says about a washing machine’s performance than there is for a hospital on quality of care. And this needs to change,” stated Peter Pronovost, MD, PhD, in a Kaiser Health News article. “We require auditing of financial data, but we don’t require auditing of healthcare quality data, and that implies that dollars are more important than deaths.” Pronovost is Senior Vice President for Patient Safety and Quality at Johns Hopkins University School of Medicine.

 

Peter Pronovost, MD, PhD (above) testifying on preventable deaths before the Senate Subcommittee on Primary Health and Aging in 2014. He is Senior Vice President for Patient Safety and Quality at Johns Hopkins University School of Medicine in Baltimore. Pronovost told Kaiser Health News that there are no uniform standards for reviewing data that hospitals report to Medicare. (Photo copyright: US Senate Committee on Health, Education, Labor and Pensions.)

Medicare Missed Hospitals with Suspected HAI Data

CMS should have done an in-depth review of many hospitals that submitted “aberrant data patterns” in 2013 and 2014, the OIG stated in its report. According to a Kaiser Health News article, such patterns could include:

  • A rapid change in results;
  • Improbably low infection rates; and
  • Assertions that infections nearly always struck before patients arrived at the hospital.

“There’s a certain amount of blind faith that hospitals are going to tell the truth. It’s a bit much to expect that if they had a bad record they are going to fess up to it,” noted Lisa McGiffert, Director of the Safe Patient Project at Consumers Union, in the Kaiser Health News article.

CMS Needs Better Data Analytics

So, what does the OIG advise CMS to do? The agency called for “better use of analytics to ensure the integrity of hospital-reported quality data.” Specifically, OIG suggested CMS:

  • Identify hospitals with abnormal percentages of patients who had infections on admission;
  • Apply risk scores to identify hospitals with high propensity to manipulate reporting;
  • Use experiences to create and improve models that identify hospitals most likely to game their reporting.
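
Each of these suggestions amounts to routine anomaly detection applied to hospital-reported data. A minimal sketch of the kind of screen the OIG describes follows; the field names and thresholds are hypothetical and do not represent CMS’s actual methodology.

```python
from statistics import mean, stdev

def flag_aberrant_hospitals(reports, z_threshold=2.5):
    """Screen hospital-reported infection rates for the patterns OIG
    describes: improbably low rates, sudden drops, and unusually high
    shares of infections claimed as 'present on admission'.
    Hypothetical sketch, not CMS's actual methodology."""
    rates = [r["infection_rate"] for r in reports]
    mu, sigma = mean(rates), stdev(rates)
    flagged = []
    for r in reports:
        z = (r["infection_rate"] - mu) / sigma if sigma else 0.0
        if (z < -z_threshold                       # improbably low rate
                or r["yoy_change"] < -0.5           # rate dropped >50% in a year
                or r["present_on_admission_share"] > 0.9):
            flagged.append(r["hospital_id"])
    return flagged

reports = [
    {"hospital_id": "H1", "infection_rate": 0.031,
     "yoy_change": -0.04, "present_on_admission_share": 0.40},
    {"hospital_id": "H2", "infection_rate": 0.001,
     "yoy_change": -0.70, "present_on_admission_share": 0.95},
    {"hospital_id": "H3", "infection_rate": 0.029,
     "yoy_change": 0.02, "present_on_admission_share": 0.35},
]
print(flag_aberrant_hospitals(reports))   # ['H2']
```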

CMS Administrator Seema Verma reportedly responded, “We will continue to evaluate the use of better analytics as feasible, based on Medicare’s operational capabilities.”

Medical Laboratory Diagnostic Testing Part of Gaming the System

A 2015 CMS/CDC joint statement noted “three ways that hospitals may be deviating from CDC’s definitions for reportable HAIs,” and two involve diagnostic test ordering. According to the OIG report, they include:

  • Overculturing: Diagnostic tests may be overutilized by providers in absence of clinical symptoms. Hospitals may use positive results to game their data by claiming infections that appeared days later were present on admission and thus not reportable.
  • Underculturing: Hospitals underculture when they do not order diagnostic tests in the presence of clinical symptoms. By not ordering the test, the hospital does not learn whether the patient truly has an infection and, therefore, the hospital does not have to report it.
  • Adjudication: Hospital administrative staff may inappropriately overrule those who report infections. HAIs are, therefore, not shared.

Clinical Laboratories Can Help

On any given day, about one in 25 hospital patients has at least one HAI, the CDC estimates. The OIG findings should be a reminder to medical laboratories and pathology groups that quality measures and patient outcomes are often transparent to media, patients, and the public.

One way medical laboratories in hospitals and health systems can help is by investing in utilization management technology and protocols that ensure appropriate lab test utilization. Informing doctors about the availability of appropriate diagnostic tests based on patients’ existing conditions, unique physiologies, or medical histories could help prevent hospitals from inadvertently or deliberately gaming the system.
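
In practice, utilization management technology often takes the form of order-entry checks that surface guidance at the moment a test is ordered or omitted. The sketch below is a hypothetical illustration keyed to the overculturing and underculturing patterns described above; the symptom list and test names are invented for the example.

```python
# Hypothetical order-entry check: surface guidance when a culture order is
# missing, or questionable, relative to the patient's documented symptoms.
SYMPTOMS_SUGGESTING_CULTURE = {"fever", "elevated_wbc", "wound_drainage"}

def utilization_advice(symptoms, ordered_tests):
    symptomatic = bool(SYMPTOMS_SUGGESTING_CULTURE & set(symptoms))
    cultured = "blood_culture" in ordered_tests
    if symptomatic and not cultured:
        return "Symptoms suggest infection: consider ordering a blood culture."
    if not symptomatic and cultured:
        return "No documented symptoms of infection: review culture order."
    return None

print(utilization_advice({"fever"}, set()))          # flags underculturing
print(utilization_advice(set(), {"blood_culture"}))  # flags overculturing
```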

Clearly, transparency in healthcare is increasing. That means there will be more news stories revealing federal agencies’ failures to respond to healthcare data in ways that could have protected patients and the public. Clinical laboratories don’t want to be included in negative reporting.

—Donna Marie Pocius

Related Content:

CMS Validated Hospital Inpatient Quality Reporting Program Data, But Should Use Additional Tools to Identify Gaming

Medicare Failed to Investigate Suspicious Infection Cases from 96 Hospitals

CMS Can Do More to Validate Hospital-Reported Infection Data, OIG Report Finds

Study Suggests Medical Errors Now Third Leading Cause of Death in the US

Research Study at Johns Hopkins University Reveals CDC Does Not Record Medical Errors in Annual Mortality Report, Yet Such Errors Are Third Leading Cause of Death

Biggest Opportunity for Clinical Laboratory Industry is Utilization Management of Lab Tests, But Only If It Is Done Well

Lessons from the Pioneers: Reporting Healthcare-Associated Infections

Webinar: Simple, Swift Approaches to Lab Test Utilization Management: Proven Ways for Your Clinical Laboratory to Use Data and Collaborations to Add Value 

Federal Appeals Court Rules Yelp Not Responsible for Bad Reviews; Labs Advised to Examine Their Online Presence

Clinical laboratories and pathology groups can benefit from developing a strategy for addressing negative Yelp reviews

In today’s wired world, clinical laboratories and anatomic pathology groups have a new challenge: what to do when unhappy patients go to social networking sites and post comments about their negative experience with their lab. A lab can have a sterling reputation for service and it can all unravel if a vociferous and angry patient posts rants on the Internet.

Today’s reality is that, like them or not, online reviews posted on websites such as Yelp are here to stay. That is why medical lab managers and pathologists should know about a recent court ruling that protects websites that feature consumer reviews about businesses.

One business owner who sued such a website learned this the hard way—in court. A locksmith in Redmond, Wash., reportedly filed a libel lawsuit, claiming he lost 95% of his business after receiving a negative 1-star review on Yelp. Regardless, a federal appeals court ruled that Yelp’s star rating system, which is based on user input, does not make Yelp responsible for negative reviews of businesses, the Chicago Tribune reported. (more…)

Research Study at Johns Hopkins University Reveals CDC Does Not Record Medical Errors in Annual Mortality Report, Yet Such Errors Are Third Leading Cause of Death

An earlier Johns Hopkins study looked at diagnostic errors and determined that such errors were the leading cause of malpractice payouts. Can clinical laboratories help?

At a time of heightened transparency in healthcare outcomes, a Johns Hopkins University School of Medicine (Johns Hopkins) study makes a startling conclusion: medical errors are an under-recognized cause of patients’ deaths in the United States. In fact, medical errors rank third—after heart disease and cancer—in causing patients’ deaths, according to a Johns Hopkins statement.

This finding has many implications for pathologists and clinical laboratory managers. Often, medical errors are associated with the failure of physicians to order correct medical laboratory tests at critical junctures. Alternatively, a medical error can result if the physician fails to take appropriate action after getting an accurate lab test result. Thus, any effort within the health system to reduce medical errors will probably bring pathologists and medical laboratory scientists into closer consultation with clinicians.

What the researchers at Johns Hopkins also learned during their study is that medical error is not reported as a cause of death on death certificates. Further, the Centers for Disease Control and Prevention (CDC) has no “medical error” category in its annual report on deaths and mortality, The New York Times (NYT) reported. (more…)
