News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel


Machine Learning System Catches Two-Thirds More Prescription Medication Errors than Existing Clinical Decision Support Systems at Two Major Hospitals

Researchers find that a machine learning system could have saved more than one million dollars and prevented hundreds, if not thousands, of adverse drug events

Support for artificial intelligence (AI) and machine learning (ML) in healthcare has been mixed among anatomic pathologists and clinical laboratory leaders. Nevertheless, there’s increasing evidence that diagnostic systems based on AI and ML can be as accurate or more accurate at detecting disease than systems without them.

Dark Daily has covered the development of artificial intelligence and machine learning systems and their ability to accurately detect disease in many e-briefings over the years. Now, a recent study conducted at Brigham and Women’s Hospital (BWH) and Massachusetts General Hospital (MGH) suggests machine learning can be more accurate than existing clinical decision support (CDS) systems at detecting prescription medication errors as well.

The researchers published their findings in the Joint Commission Journal on Quality and Patient Safety, titled, “Using a Machine Learning System to Identify and Prevent Medication Prescribing Errors: A Clinical and Cost Analysis Evaluation.”

A Retrospective Study

The study was partially retrospective in that the researchers compiled past alerts generated by the CDS systems at BWH and MGH between 2009 and 2011 and added them to alerts generated during the active part of the study, which ran from January 1, 2012, to December 31, 2013, for a total of five years’ worth of CDS alerts.

They then sent the same patient-encounter data that generated those CDS alerts to a machine learning platform called MedAware, an AI-enabled software system developed in Ra’anana, Israel.

MedAware was created for the “identification and prevention of prescription errors and adverse drug effects,” notes the study, which goes on to state, “This system identifies medication issues based on machine learning using a set of algorithms with different complexity levels, ranging from statistical analysis to deep learning with neural networks. Different algorithms are used for different types of medication errors. The data elements used by the algorithms include demographics, encounters, lab test results, vital signs, medications, diagnosis, and procedures.”
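The study does not publish MedAware’s algorithms, but the simplest technique it names, statistical outlier detection, can be sketched in a few lines of code. The check below is purely illustrative; the function name, feature choice, and threshold are hypothetical and do not represent MedAware’s implementation.

```python
# Hypothetical sketch of a statistical "clinical outlier" check, the simplest
# of the algorithm types the study describes. NOT MedAware's implementation;
# field names and the z-score threshold are illustrative only.
import statistics

def dose_outlier_alert(prescribed_dose_mg, historical_doses_mg, z_threshold=3.0):
    """Flag a prescription whose dose falls far outside the distribution
    of doses previously given to comparable patients."""
    if len(historical_doses_mg) < 30:
        return False  # too little history to judge reliably
    mean = statistics.mean(historical_doses_mg)
    stdev = statistics.stdev(historical_doses_mg)
    if stdev == 0:
        return prescribed_dose_mg != mean
    return abs(prescribed_dose_mg - mean) / stdev > z_threshold

# Example: 500 mg ordered where comparable patients received roughly 50 mg
history = [45, 50, 55, 50, 48, 52, 47, 53, 49, 51] * 3  # 30 observations
print(dose_outlier_alert(500, history))  # True -> raise an alert
```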

The researchers then compared the alerts produced by MedAware to the existing CDS alerts from that five-year period. The results were astonishing.

According to the study:

  • “68.2% of the alerts generated were unique to the MedAware system and not generated by the institutions’ CDS alerting system.
  • “Clinical outlier alerts were the type least likely to be generated by the institutions’ CDS—99.2% of these alerts were unique to the MedAware system.
  • “The largest overlap was with dosage alerts, with only 10.6% unique to the MedAware system.
  • “68% of the time-dependent alerts were unique to the MedAware system.”

Perhaps even more important were the results of the cost analysis, which found:

  • “The average cost of an adverse event potentially prevented by an alert was $60.67 (range: $5.95–$115.40).
  • “The average adverse event cost per type of alert varied from $14.58 (range: $2.99–$26.18) for dosage outliers to $19.14 (range: $1.86–$36.41) for clinical outliers and $66.47 (range: $6.47–$126.47) for time-dependent alerts.”

The researchers concluded that, “Potential savings of $60.67 per alert was mainly derived from the prevention of ADEs [adverse drug events]. The prevention of ADEs could result in savings of $60.63 per alert, representing 99.93% of the total potential savings. Potential savings related to averted calls between pharmacists and clinicians could save an average of $0.047 per alert, representing 0.08% of the total potential savings.

“Extrapolating the results of the analysis to the 747,985 BWH and MGH patients who had at least one outpatient encounter during the two-year study period from 2012 to 2013, the alerts that would have been fired over five years of their clinical care by the machine learning medication errors identification system could have resulted in potential savings of $1,294,457.”
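As a quick sanity check on that extrapolation (the intermediate figure below is not reported in the paper; it is derived here only for illustration), dividing the total potential savings by the average savings per alert implies roughly 21,000 alerts fired over the five years:

```python
# Back-of-the-envelope check on the study's extrapolation.
savings_per_alert = 60.67            # average potential savings per alert, USD
total_potential_savings = 1_294_457  # extrapolated five-year savings, USD

implied_alerts = total_potential_savings / savings_per_alert
print(f"Implied alerts over five years: {implied_alerts:,.0f}")  # about 21,336
```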

Savings of more than one million dollars, plus the prevention of patient harm or deaths from thousands of potential adverse drug events, make a strong argument for machine learning platforms in diagnostics and prescription drug monitoring.

“There’s huge promise for machine learning in healthcare. If clinicians use the technology on the front lines, it could lead to improved clinical decision support and new information at the point of care,” Raj Ratwani, PhD (above), Vice President of Scientific Affairs at MedStar Health Research Institute (MHRI), Director of MedStar Health’s National Center for Human Factors in Healthcare, and Associate Professor of Emergency Medicine at Georgetown University School of Medicine, told HealthITAnalytics. (Photo copyright: MedStar Institute for Innovation.)

Researchers Say Current Clinical Decision Support Systems are Limited

Machine learning is not the same as artificial intelligence. ML is a “discipline of AI” aimed at “enhancing accuracy,” while AI’s broader objective is “increasing probability of success,” Tech Differences explained.

Healthcare needs the help. Prescription medication errors cause patient harm or deaths that cost more than $20 billion annually, states a Joint Commission news release.

CDS alerting systems are widely used to improve patient safety and quality of care. However, the BWH-MGH researchers say the current CDS systems “have a variety of limitations.” According to the study:

  • “One limitation is that current CDS systems are rule-based and can thus identify only the medication errors that have been previously identified and programmed into their alerting logic.
  • “Further, most have high alerting rates with many false positives, resulting in alert fatigue.”

Alert fatigue leads to physician burnout, which is a big problem in large healthcare systems that use multiple health information technology (HIT) systems generating high volumes of alerts, such as: electronic health record (EHR) systems, hospital information systems (HIS), laboratory information systems (LIS), and others.

Commenting on the value of adding machine learning medication alerts software to existing CDS hospital systems, the BWH-MGH researchers wrote, “This kind of approach can complement traditional rule-based decision support, because it is likely to find additional errors that would not be identified by usual rule-based approaches.”

However, they concluded, “The true value of such alerts is highly contingent on whether and how clinicians respond to such alerts and their potential to prevent actual patient harm.”

Future research based on real-time data is needed before machine learning systems will be ready for use in clinical settings, HealthITAnalytics noted. 

However, medical laboratory leaders and pathologists will want to keep an eye on developments in machine learning and artificial intelligence that help physicians reduce medication errors and adverse drug events. Implementation of AI-ML systems in healthcare will certainly affect clinical laboratory workflows.

—Donna Marie Pocius

Related Information:

AI and Healthcare: A Giant Opportunity

Using a Machine Learning System to Identify and Prevent Medication Prescribing Errors: A Clinical and Cost Analysis Evaluation

Machine Learning System Accurately Identifies Medication Errors

Journal Study Evaluates Success of Automated Machine Learning System to Prevent Medication Prescribing Errors

Differences Between Machine Learning and Artificial Intelligence

Machining a New Layer of Drug Safety

Harvard and Beth Israel Deaconess Researchers Use Machine Learning Software Plus Human Intelligence to Improve Accuracy and Speed of Cancer Diagnoses

XPRIZE Founder Diamandis Predicts Tech Giants Amazon, Apple, and Google Will Be Doctors of The Future

Hospitals Worldwide Are Deploying Artificial Intelligence and Predictive Analytics Systems for Early Detection of Sepsis in a Trend That Could Help Clinical Laboratories, Microbiologists

Harvard and Beth Israel Deaconess Researchers Use Machine Learning Software Plus Human Intelligence to Improve Accuracy and Speed of Cancer Diagnoses

Machine learning software may help pathologists make earlier and more accurate diagnoses

In Boston, two major academic centers are teaming up to apply big data and machine learning to the problem of diagnosing cancers earlier and with more accuracy. It is research that might have major implications for the anatomic pathology profession.

A collaborative effort between teams at Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School (HMS) has produced an innovation that could lead to more accurate diagnoses in the pathology laboratory. The teams have been working on machine learning software intended to function, eventually, as an artificial intelligence (AI) that improves diagnostic accuracy. They hope to someday build AI-powered computer systems that can accurately and quickly interpret pathology images.

What is Swarm Learning and Might It Come to a Clinical Laboratory Near You?

The international research team that developed swarm learning believes it could ‘significantly promote and accelerate collaboration and information exchange in research, especially in the field of medicine’

“Swarm Learning” is a technology that enables cross-site analysis of population health data while maintaining patient privacy protocols to generate improvements in precision medicine. That’s the goal described by an international team of scientists who used this approach to develop artificial intelligence (AI) algorithms that seek out and identify lung disease, blood cancer, and COVID-19 data stored in disparate databases.

Since 80% of patient records feature clinical laboratory test results, there’s no doubt this protected health information (PHI) would be curated by the swarm learning algorithms. 

Researchers with DZNE (German Center for Neurodegenerative Diseases), the University of Bonn, and Hewlett Packard Enterprise (HPE) who developed the swarm learning algorithms published their findings in the journal Nature, titled, “Swarm Learning for Decentralized and Confidential Clinical Machine Learning.”

In their study they wrote, “Fast and reliable detection of patients with severe and heterogeneous illnesses is a major goal of precision medicine. … However, there is an increasing divide between what is technically possible and what is allowed, because of privacy legislation. Here, to facilitate the integration of any medical data from any data owner worldwide without violating privacy laws, we introduce Swarm Learning—a decentralized machine-learning approach that unites edge computing, blockchain-based peer-to-peer networking, and coordination while maintaining confidentiality without the need for a central coordinator, thereby going beyond federated learning.”

What is Swarm Learning?

Swarm Learning is a way to collaborate and share medical research toward a goal of advancing precision medicine, the researchers stated.

The technology blends AI with blockchain-based peer-to-peer networking to create information exchange across a network, the DZNE news release explained. The machine learning algorithms are “trained” to detect data patterns “and recognize the learned patterns in other data as well,” the news release noted. 


“Medical research data are a treasure. They can play a decisive role in developing personalized therapies that are tailored to each individual more precisely than conventional treatments,” said Joachim Schultze, MD (above), Director, Systems Medicine at DZNE and Professor, Life and Medical Sciences Institute at the University of Bonn, in the news release. “It’s critical for science to be able to use such data as comprehensively and from as many sources as possible,” he added. This, of course, would include clinical laboratory test results data. (Photo copyright: University of Bonn.)
 

Since, as Dark Daily has reported many times, clinical laboratory test data comprises as much as 80% of patients’ medical records, such a treasure trove of information would most likely include medical laboratory test results alongside patient diagnoses, demographics, and medical histories. Swarm learning that incorporates laboratory test results may inform medical researchers in their population health analyses.

“The key is that all participants can learn from each other without the need of sharing confidential information,” said Eng Lim Goh, PhD, Senior Vice President and Chief Technology Officer for AI at Hewlett Packard Enterprise (HPE), which developed base technology for swarm learning, according to the news release.

An HPE blog post notes that “Using swarm learning, the hospital can combine its data with that of hospitals serving different demographics in other regions and then use a private blockchain to learn from a global average, or parameter, of results—without sharing actual patient information.

“Under this model,” the blog continues, “‘each hospital is able to predict, with accuracy and with reduced bias, as though [it has] collected all the patient data globally in one place and learned from it,’ Goh says.”
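Conceptually, the parameter averaging Goh describes can be reduced to the sketch below: each node trains on its own private data, then all nodes adopt the average of one another’s model weights, so only parameters, never patient records, cross the network. This is a bare-bones illustration; the real system layers blockchain-based peer coordination and node enrollment on top, none of which is modeled here.

```python
# Minimal sketch of decentralized parameter averaging, the core idea behind
# swarm learning. Illustrative only: real swarm learning adds blockchain-based
# peer-to-peer coordination, which is not modeled here.
import numpy as np

def local_training_step(weights, X, y, lr=0.1):
    """One gradient step of logistic regression on a node's private data."""
    preds = 1 / (1 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def swarm_round(node_weights, node_data):
    """Each node trains locally, then every node adopts the parameter average.
    Only model weights cross the network -- never patient-level records."""
    updated = [local_training_step(w, X, y)
               for w, (X, y) in zip(node_weights, node_data)]
    merged = np.mean(updated, axis=0)
    return [merged.copy() for _ in node_weights]

# Three "hospitals," each holding private data from different demographics
rng = np.random.default_rng(0)
data = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]
weights = [np.zeros(5) for _ in range(3)]
for _ in range(50):
    weights = swarm_round(weights, data)
```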

Swarm Learning Applied in Study

The researchers studied four infectious and non-infectious diseases: leukemia, tuberculosis, COVID-19, and lung disease.

They used 16,400 transcriptomes from 127 clinical studies and assessed 95,000 X-ray images.

  • Data for transcriptomes were distributed over three to 32 blockchain nodes and across three nodes for X-rays.
  • The researchers “fed their algorithms with subsets of the respective data set” (such as those coming from people with disease versus healthy individuals), the news release noted.

Findings included:

  • Approximately 90% algorithm accuracy in distinguishing healthy people from those diagnosed with disease, based on transcriptomes.
  • 76% to 86% algorithm accuracy in classifying the X-ray data.
  • The methodology worked best for leukemia.
  • Accuracy also was “very high” for tuberculosis and COVID-19.
  • The lower accuracy for X-ray data was attributed to smaller amounts of available data or to image quality, the researchers said.

“Our study thus proves that swarm learning can be successfully applied to very different data. In principle, this applies to any type of information for which pattern recognition by means of artificial intelligence is useful. Be it genome data, X-ray images, data from brain imaging, or other complex data,” Schultze said in the DZNE news release.

The researchers plan to conduct additional studies aimed at exploring swarm learning’s implications for Alzheimer’s disease and other neurodegenerative diseases.

Is Swarm Learning Coming to Your Lab?

The scientists say hospitals as well as research institutions may join or form swarms. So, hospital-based medical laboratory leaders and pathology groups may have an opportunity to contribute to swarm learning. According to Schultze, sharing information can go a long way toward “making the wealth of experience in medicine more accessible worldwide.”

—Donna Marie Pocius

Related Information:

AI With Swarm Intelligence: A Novel Technology for Cooperative Analysis of Big Data

Swarm Learning for Decentralized and Confidential Clinical Machine Learning

Swarm Learning

HPE’s Dr. Goh on Harnessing the Power of Swarm Learning

Swarm Learning: This Artificial Intelligence Can Detect COVID-19, Other Diseases

Researchers in Five Countries Use AI, Deep Learning to Analyze and Monitor the Quality of Donated Red Blood Cells Stored for Transfusions

By training a computer to analyze blood samples and then automating the expert assessment process, researchers enabled the AI to process months’ worth of blood samples in a single day

New technologies and techniques for acquiring and transporting biological samples for clinical laboratory testing receive much attention. But what of the quality of the samples themselves? Blood products are expensive, as hospital medical laboratories that manage blood banks know all too well. Thus, any improvement to how labs store blood products and confidently determine their viability for transfusion is useful.

One such improvement is coming out of Canada. Researchers at the University of Alberta (U of A), in collaboration with scientists and academic institutions in five countries, are looking into ways artificial intelligence (AI) and deep learning can be used to efficiently and quickly analyze red blood cells (RBCs). The results of the study may alter the way donated blood is evaluated and selected for transfusion to patients, according to an article in Folio, a U of A publication, titled, “AI Could Lead to Faster, Better Analysis of Donated Blood, Study Shows.”

The study, which uses AI and imaging flow cytometry (IFC) to scrutinize the shape of RBCs, assess the quality of the stored blood, and remove human subjectivity from the process, was published in Proceedings of the National Academy of Sciences (PNAS), titled, “Objective Assessment of Stored Blood Quality by Deep Learning.”

Improving Blood Diagnostics through Precision Medicine and Deep Learning

“This project is an excellent example of how we are using our world-class expertise in precision health to contribute to the interdisciplinary work required to make fundamental changes in blood diagnostics,” said Jason Acker, PhD, a senior scientist at Canadian Blood Services’ Centre for Innovation, Professor of Laboratory Medicine and Pathology at the University of Alberta, and one of the lead authors of the study, in the Folio article.

The research took more than three years to complete and involved 19 experts from 12 academic institutions and blood collection facilities located in Canada, Germany, Switzerland, the United Kingdom, and the US.

“Our study shows that artificial intelligence gives us better information about the red blood cell morphology, which is the study of how these cells are shaped, much faster than human experts,” said Jason Acker, PhD (above), Senior Research Scientist, Canadian Blood Services, and Professor of Laboratory Medicine and Pathology at the University of Alberta, in an article published on the Canadian Blood Services website. “We anticipate this technology will improve diagnostics for clinicians as well as quality assurance for blood operators such as Canadian Blood Services in the coming years,” he added. Clinical laboratories in the US may also benefit from this new blood viability process. (Photo copyright: University of Alberta.)

To perform the study, the scientists first collected and manually categorized 52,000 red blood cell images. Those images were then used to train an algorithm that mimics the way a human mind works. The computer system was next tasked with analyzing the shape of RBCs for quality purposes. 
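An algorithm “that mimics the way a human mind works” here means a neural network trained on those labeled images. The sketch below shows how such an image classifier is typically set up; the architecture and class count are illustrative and are not the model published in the study.

```python
# Generic sketch of an image classifier for RBC morphology classes.
# The published model's architecture, preprocessing, and label set differ;
# the class count below is illustrative, not the study's taxonomy.
import tensorflow as tf
from tensorflow.keras import layers

NUM_SHAPE_CLASSES = 6  # hypothetical: smooth disc through smooth sphere

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),         # single-channel cytometry image
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_SHAPE_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # the 52,000 annotated images
```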

Removing Human Bias from RBC Classification

“I was happy to collaborate with a group of people with diverse backgrounds and expertise,” said Tracey Turner, a senior research assistant in Acker’s laboratory and one of the authors of the study, in a Canadian Blood Services (CBS) article. “Annotating and reviewing over 52,000 images took a long time, however, it allowed me to see firsthand how much bias there is in manual classification of cell shape by humans and the benefit machine classification could bring.”

According to the CBS article, a red blood cell lasts about 115 days in the human body and the shape of the RBC reveals its age. Newer, healthier RBCs are shaped like discs with smooth edges. As they age, those edges become jagged and the cell eventually transforms into a sphere and loses the ability to perform its duty of transporting oxygen throughout the body. 

Blood donations are processed, packed, and stored for later use. Once outside the body, the RBCs begin to change their shape and deteriorate. RBCs can only be stored for a maximum of 42 days before they lose the ability to function properly when transfused into a patient. 

Scientists routinely examine the shape of RBCs to assess the quality of the cell units for transfusion to patients and, in some cases, diagnose and assess individuals with certain disorders and diseases. Typically, microscope examinations of red blood cells are performed by experts in medical laboratories to determine the quality of the stored blood. The RBCs are classified by shape and then assigned a morphology index score. This can be a complex, time-consuming, and laborious process.
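A morphology index of this kind is typically a weighted average over shape classes. The weights below are hypothetical, included only to show the shape of the calculation; the scoring schemes blood operators actually use may differ.

```python
# Hypothetical morphology index: each shape class carries a weight from
# 1.0 (healthy smooth disc) down to 0.0 (smooth sphere), and the index is
# the weighted share of cells observed in each class. Weights illustrative.
CLASS_WEIGHTS = {
    "smooth_disc": 1.0,
    "crenated_disc": 0.8,
    "crenated_discoid": 0.6,
    "crenated_spheroid": 0.4,
    "crenated_sphere": 0.2,
    "smooth_sphere": 0.0,
}

def morphology_index(class_counts):
    """class_counts: mapping of shape class -> number of cells observed."""
    total = sum(class_counts.values())
    return sum(CLASS_WEIGHTS[c] * n for c, n in class_counts.items()) / total

print(morphology_index({"smooth_disc": 70, "crenated_disc": 20, "smooth_sphere": 10}))  # 0.86
```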

“One of the amazing things about machine learning is that it allows us to see relationships we wouldn’t otherwise be able to see,” Acker said. “We categorize the cells into the buckets we’ve identified, but when we categorize, we take away information.”

Human analysis is inherently subjective: different professionals can arrive at different results after examining the same blood samples.

“Machines are naive of bias, and AI reveals some characteristics we wouldn’t have identified and is able to place red blood cells on a more nuanced spectrum of change in shape,” Acker explained.

The researchers discovered that the AI could accurately analyze and categorize the quality of the red blood cells. This ability to perform RBC morphology assessment could have critical implications for transfusion medicine.

“The computer actually did a better job than we could, and it was able to pick up subtle differences in a way that we can’t as humans,” Acker said.

“It’s not surprising that the red cells don’t just go from one shape to another. This computer showed that there’s actually a gradual progression of shape in samples from blood products, and it’s able to better classify these changes,” he added. “It radically changes the speed at which we can make these assessments of blood product quality.”

More Precision Matching Blood Donors to Recipients

According to the World Health Organization (WHO), approximately 118.5 million blood donations are collected globally each year. There is a considerable contrast in the level of access to blood products between high- and low-income nations, which makes accurate assessment of stored blood even more critical. About 40% of all blood donations are collected in high-income countries that are home to only about 16% of the world’s population.

More studies and clinical trials will be necessary to determine if U of A’s approach to using AI to assess the quality of RBCs can safely transfer to clinical use. But these early results promise much in future precision medicine treatments.

“What this research is leading us to is the fact that we have the ability to be much more precise in how we match blood donors and recipients based on specific characteristics of blood cells,” Acker stated. “Through this study we have developed machine learning tools that are going to help inform how this change in clinical practice evolves.”

The AI tools being developed at the U of A could ultimately benefit patients as well as blood collection centers and hospitals, where clinical laboratories typically manage the blood banking services, by making the process of matching transfusion recipients to donors more precise and ultimately safer.

—JP Schlingman

Related Information:

Objective Assessment of Stored Blood Quality by Deep Learning

Machines Rival Expert Analysis of Stored Red Blood Cell Quality

Breakthrough Study Uses AI to Analyze Red Blood Cells

Machine Learning Opens New Frontiers in Red Blood Cell Research

AI Could Lead to Faster, Better Analysis of Donated Blood, Study Shows

Blood Safety and Availability

NIH Researchers Identify Biomarkers Associated with Consumption of Ultra Processed Foods

Findings could reduce the need for self-reporting in future nutritional studies and lead to new clinical laboratory testing

Clinical laboratory testing may one day influence whether a person snacks on a bag of chips every day or chooses to eat healthy foods instead.

Researchers at the National Institutes of Health (NIH) reported that they have identified biomarkers in blood and urine that can indicate an individual’s consumption of ultra-processed foods (UPF).

The scientists discovered a signature predictive of a dietary pattern high in ultra-processed food, study leader Erikka Loftfield, PhD, MPH, epidemiologist and principal investigator with the NIH, told the Associated Press (AP).

Using data on the biomarkers—metabolites left after the body breaks down food—the researchers devised a “poly-metabolite score” that could potentially “reduce the reliance on, or complement the use of, self-reported dietary data in large population studies,” according to an NIH press release.

This will be helpful because, according to the AP, “Typical nutrition studies rely on recall: asking people what they ate during a certain period. But such reports are notoriously unreliable because people don’t remember everything they ate, or they record it inaccurately.”

“Limitations of self-reported diet are well known. Metabolomics provides an exciting opportunity to not only improve our methods for objectively measuring complex exposures like diet and intake of ultra-processed foods, but also to understand the mechanisms by which diet might be impacting health,” said Loftfield in the press release.

Thus, it’s conceivable that one day clinical laboratory testing could affect people’s food choices and help to improve their health.

The researchers published their study in the journal PLOS Medicine titled, “Identification and Validation of Poly-Metabolite Scores for Diets High in Ultra-Processed Food: An Observational Study and Post-Hoc Randomized Controlled Crossover-Feeding Trial.”

“There’s a need for both a more objective measure and potentially also a more accurate measure,” Erikka Loftfield, PhD, MPH, epidemiologist and principal investigator with the NIH, told the Associated Press. (Photo copyright: National Cancer Institute.)

Study Methodology

The findings were based in part on an earlier study of 718 AARP members, aged 50-74, who agreed to submit blood and urine samples. The participants also completed dietary recall reports.

“The researchers found hundreds of metabolites that correlated with the percentage of energy from ultra-processed foods in the diet,” the NIH press release noted. “Using machine learning, researchers identified metabolic patterns associated with high intake of ultra-processed foods and calculated poly-metabolite scores for blood and urine separately.”
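The press release does not spell out the modeling pipeline. One common way to build such a score, sketched below purely as an illustration, is a penalized regression that selects the metabolites most predictive of UPF intake; the fitted weighted sum of a subject’s metabolite levels then serves as the poly-metabolite score. The study’s actual method may differ.

```python
# Illustrative sketch of deriving a poly-metabolite score with penalized
# regression. The study's actual pipeline may differ; names are hypothetical.
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_poly_metabolite_model(metabolite_levels, pct_energy_from_upf):
    """metabolite_levels: (n_subjects, n_metabolites) array of measured
    metabolites; target is % of energy from UPF per dietary recall."""
    pipeline = make_pipeline(StandardScaler(), LassoCV(cv=5))
    pipeline.fit(metabolite_levels, pct_energy_from_upf)
    return pipeline

# Usage: higher predicted values suggest a diet higher in ultra-processed food.
# scores = fit_poly_metabolite_model(X_train, y_train).predict(X_new)
```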

To test their findings, the researchers referred to a 2019 NIH study involving 20 adults aged 18 to 50. Under carefully controlled conditions, these participants spent two weeks consuming high levels of ultra-processed foods, followed by two weeks consuming no ultra-processed foods. As with the AARP cohort, they also submitted blood and urine samples. The poly-metabolite score proved to be an accurate measure of which diets they had consumed, the researchers reported.

The researchers acknowledged limitations in the study that will necessitate further research. “Study participants were older US adults whose diets may vary from other populations,” the authors noted. “Poly-metabolite scores should be evaluated and iteratively improved in populations with diverse diets and a wide range of UPF intake.”

Ultra-Processed Foods Defined

The NIH defines ultra-processed foods as “ready-to-eat or ready-to-heat, industrially manufactured products, typically high in calories and low in essential nutrients.” Diets high in these foods have been associated with “increased risk of obesity and related chronic diseases, including some types of cancer,” the press release noted.

In identifying these foods, the researchers cited a 2019 paper published in the journal Public Health Nutrition (PHN). The paper relied on the NOVA classification system, which makes a distinction between “processed” and “ultra-processed” foods. The latter typically contain “food substances never or rarely used in kitchens,” or cosmetic additives “whose function is to make the final product palatable or more appealing,” the PHN paper noted.

“From sugary cereals at breakfast to frozen pizzas at dinner, plus in-between snacks of potato chips, sodas and ice cream, ultra-processed foods make up about 60% of the US diet,” the AP reported in an earlier story. “For kids and teens, it’s even higher—about two-thirds of what they eat.”            

—Stephen Beale
