Cedars-Sinai Researchers Determine Smartphone App Can Assess Stool Form as Well as Gastroenterologists and Better than IBS Patients

Artificial intelligence performs BSS assessments with higher sensitivity and specificity than patients’ self-reports, matching evaluations by trained gastroenterologists

In a recent study conducted by scientists at Cedars-Sinai Medical Center in Los Angeles, researchers evaluated a smartphone application (app) that uses artificial intelligence (AI) to assess and characterize digital images of stool samples. The app, it turns out, matched the accuracy of participating gastroenterologists and exceeded the accuracy of study patients’ self-reports of stool specimens, according to a news release.

Though smartphone apps are technically not clinical laboratory tools, anatomic pathologists and medical laboratory scientists (MLSs) may be interested to learn how health information technology (HIT), machine learning, and smartphone apps are being used to assess different aspects of individuals’ health, independent of trained healthcare professionals.

The issue the Cedars-Sinai researchers were investigating is the accuracy of patient self-reporting. Because poop can be more complicated than meets the eye, patients often find it difficult to be specific when asked to describe their bowel movements. Thus, a smartphone app that enables patients to accurately assess their stools, in cases where the function of the digestive tract is relevant to diagnosis and treatment, would be a boon to precision medicine treatments for gastroenterological diseases.

The scientists published their findings in the American Journal of Gastroenterology, titled, “A Smartphone Application Using Artificial Intelligence Is Superior to Subject Self-Reporting when Assessing Stool Form.”

Mark Pimentel, MD

“This app takes out the guesswork by using AI—not patient input—to process the images (of bowel movements) taken by the smartphone,” said gastroenterologist Mark Pimentel, MD (above), Executive Director of Cedars-Sinai’s Medically Associated Science and Technology (MAST) program and principal investigator of the study, in a news release. “The mobile app produced more accurate and complete descriptions of constipation, diarrhea, and normal stools than a patient could, and was comparable to specimen evaluations by well-trained gastroenterologists in the study.” (Photo copyright: Cedars-Sinai.)

Pros and Cons of Bristol Stool Scale

In their paper, the scientists discussed the Bristol Stool Scale (BSS), a traditional diagnostic tool that classifies stool forms into seven categories. The seven types of stool are:

  • Type 1: Separate hard lumps, like nuts (difficult to pass).
  • Type 2: Sausage-shaped, but lumpy.
  • Type 3: Like a sausage, but with cracks on its surface.
  • Type 4: Like a sausage or snake, smooth and soft (average stool).
  • Type 5: Soft blobs with clear cut edges.
  • Type 6: Fluffy pieces with ragged edges, a mushy stool (diarrhea).
  • Type 7: Watery, no solid pieces, entirely liquid (diarrhea). 

In an industry guidance report on irritable bowel syndrome (IBS) and associated drugs for treatment, the US Food and Drug Administration (FDA) said the BSS is “an appropriate instrument for capturing stool consistency in IBS.”

But even with the BSS, things can get murky for patients. Inaccurate self-reporting of stool forms by people with IBS and diarrhea can make proper diagnoses difficult.

“The problem is that whenever you have a patient reporting an outcome measure, it becomes subjective rather than objective. This can impact the placebo effect,” Pimentel told Healio.

Thus, according to the researchers, AI algorithms can help with diagnosis by systematically performing the assessments for patients, News Medical reported.

30,000 Stool Images Train New App

To conduct their study, the Cedars-Sinai researchers tested an AI smartphone app developed by Dieta Health. According to Health IT Analytics, employing AI trained on 30,000 annotated stool images, the app characterizes digital images of bowel movements using five parameters:

  • BSS,
  • Consistency,
  • Edge fuzziness,
  • Fragmentation, and
  • Volume.

“The app used AI to train the software to detect the consistency of the stool in the toilet based on the five parameters of stool form. We then compared that with doctors who know what they are looking at,” Pimentel told Healio.
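For readers curious how an image model can report five parameters at once, below is a minimal, hypothetical sketch: one shared convolutional backbone feeding five output heads. The architecture, layer sizes, and names are illustrative assumptions, not Dieta Health’s actual design.

```python
# Hypothetical multi-output image model: shared backbone, five heads
# (BSS type, consistency, edge fuzziness, fragmentation, volume).
# Illustrative only; not the app's real implementation.
import torch
import torch.nn as nn

class StoolFormModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional backbone shared by all five outputs.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.bss_head = nn.Linear(32, 7)           # Bristol types 1-7
        self.consistency_head = nn.Linear(32, 1)   # continuous scores below
        self.edge_fuzziness_head = nn.Linear(32, 1)
        self.fragmentation_head = nn.Linear(32, 1)
        self.volume_head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.backbone(x)
        return {
            "bss": self.bss_head(z),
            "consistency": self.consistency_head(z),
            "edge_fuzziness": self.edge_fuzziness_head(z),
            "fragmentation": self.fragmentation_head(z),
            "volume": self.volume_head(z),
        }

# Example: score one dummy 224x224 RGB image.
outputs = StoolFormModel()(torch.randn(1, 3, 224, 224))
print(outputs["bss"].softmax(dim=1))  # probabilities over the 7 BSS types
```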

AI Assessments Comparable to Doctors, Better than Patients

According to Health IT Analytics, the researchers found that:

  • AI assessments of stool were comparable to gastroenterologists’ assessments on BSS, consistency, fragmentation, and edge fuzziness scores.
  • AI and gastroenterologists had moderate-to-good agreement on volume.
  • AI outperformed study participant self-reports based on the BSS with 95% accuracy, compared to patients’ 89% accuracy.

Additionally, the AI outperformed the patients’ self-reports in both specificity and sensitivity (a worked example of these metrics appears after the list):

  • Specificity (ability to correctly report a negative result) was 27% higher.
  • Sensitivity (ability to correctly report a positive result) was 23% higher.
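As a concrete illustration of how those two metrics are computed, here is a short worked example. The confusion-matrix counts are made up for illustration; the study’s underlying counts are not given in the article.

```python
# Worked arithmetic for sensitivity and specificity as defined above,
# using hypothetical counts (not the study's data).
def sensitivity(tp, fn):
    """True-positive rate: share of actual positives correctly reported."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of actual negatives correctly reported."""
    return tn / (tn + fp)

# Hypothetical counts: 80 true positives, 20 false negatives,
# 90 true negatives, 10 false positives.
print(f"sensitivity = {sensitivity(80, 20):.0%}")  # 80%
print(f"specificity = {specificity(90, 10):.0%}")  # 90%
```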

“A novel smartphone application can determine BSS and other visual stool characteristics with high accuracy compared with the two expert gastroenterologists. Moreover, trained AI was superior to subject self-reporting of BSS. AI assessments could provide more objective outcome measures for stool characterization in gastroenterology,” the Cedars-Sinai researchers wrote in their paper.

“In addition to improving a physician’s ability to assess their patients’ digestive health, this app could be advantageous for clinical trials by reducing the variability of stool outcome measures,” said gastroenterologist Ali Rezaie, MD, study co-author and Medical Director of Cedars-Sinai’s GI Motility Program in the news release.

The researchers plan to seek FDA review of the mobile app.

Opportunity for Clinical Laboratories

Anatomic pathologists and clinical laboratory leaders may want to reach out to referring gastroenterologists to find out how they can help to better serve gastro patients. As the Cedars-Sinai study suggests, AI smartphone apps can perform BSS assessments as well as or better than humans and may be useful tools in the pursuit of precision medicine treatments for patients suffering from painful gastrointestinal disorders.

—Donna Marie Pocius

Related Information:

Smartphone Application Using Artificial Intelligence is Superior to Subject Self-Reporting When Assessing Stool Form

Study: App More Accurate than Patient Evaluation of Stool Samples

Industry Guidance Report: Irritable Bowel Syndrome—Clinical Evaluation of Drugs

Artificial Intelligence-based Smartphone App for Characterizing Stool Form

AI Mobile App Improves on “Subjective” Patient-Reported Stool Assessment in IBS

Artificial Intelligence App Outperforms Patient-Reported Stool Assessments

Hackensack Meridian Health and Hologic Tap Google Cloud’s New Medical Imaging Suite for Cancer Diagnostics

Google designed the suite to ease radiologists’ workload and enable easy and secure sharing of critical medical imaging; technology may eventually be adapted to pathologists’ workflow

Clinical laboratory and pathology group leaders know that Google is doing extensive research and development in the field of cancer diagnostics. For several years, the Silicon Valley giant has been focused on digital imaging and the use of artificial intelligence (AI) algorithms and machine learning to detect cancer.

Now, Google Cloud has announced it is launching a new medical imaging suite for radiologists that is aimed at making healthcare data for the diagnosis and care of cancer patients more accessible. The new suite “promises to make medical imaging data more interoperable and useful by leveraging artificial intelligence,” according to MedCity News.

In a press release, medical technology company Hologic and New Jersey healthcare provider Hackensack Meridian Health announced they were the first customers to use Google Cloud’s new suite of medical imaging products.

“Hackensack Meridian Health has begun using it to detect metastasis in prostate cancer patients earlier, and Hologic is using it to strengthen its diagnostic platform that screens women for cervical cancer,” MedCity News reported.

Alissa Hsu Lynch

“Google pioneered the use of AI and computer vision in Google Photos, Google Image Search, and Google Lens, and now we’re making our imaging expertise, tools, and technologies available for healthcare and life sciences enterprises,” said Alissa Hsu Lynch (above), Global Lead of Google Cloud’s MedTech Strategy and Solutions, in a press release. “Our Medical Imaging Suite shows what’s possible when tech and healthcare companies come together.” Clinical laboratory companies may find Google’s Medical Imaging Suite worth investigating. (Photo copyright: Influencive.)


Easing the Burden on Radiologists

Clinical laboratory leaders and pathologists know that laboratory data drives most healthcare decision-making. And medical images make up 90% of all healthcare data, noted an article in Proceedings of the IEEE (Institute of Electrical and Electronics Engineers).

More importantly, medical images are growing in size and complexity. So, radiologists and medical researchers need a way to quickly interpret them and keep up with the increased workload, Google Cloud noted.

“The size and complexity of these images is huge, and, often, images stay sitting in data siloes across an organization,” Hsu Lynch told MedCity News. “In order to make imaging data useful for AI, we have to address interoperability and standardization. This suite is designed to help healthcare organizations accelerate the development of AI so that they can enable faster, more accurate diagnosis and ease the burden for radiologists,” she added.

According to the press release, Google Cloud’s Medical Imaging Suite features include (a brief usage sketch follows the list):

  • Imaging Storage: Easy and secure data exchange using the international DICOM (digital imaging and communications in medicine) standard for imaging. A fully managed, highly scalable, enterprise-grade development environment that includes automated DICOM de-identification. Seamless cloud data management via a cloud-native enterprise imaging PACS (picture archiving and communication system) in clinical use by radiologists.
  • Imaging Lab: AI-assisted annotation tools that help automate the highly manual and repetitive task of labeling medical images, and Google Cloud native integration with any DICOMweb viewer.
  • Imaging Datasets and Dashboards: Ability to view and search petabytes of imaging data to perform advanced analytics and create training datasets with zero operational overhead.
  • Imaging AI Pipelines: Accelerated development of AI pipelines to build scalable machine learning models, with 80% fewer lines of code required for custom modeling.
  • Imaging Deployment: Flexible options for cloud, on-prem (on-premises software), or edge deployment to allow organizations to meet diverse sovereignty, data security, and privacy requirements—while providing centralized management and policy enforcement with Google Distributed Cloud.
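To give a sense of what working with the suite’s DICOM storage looks like in practice, below is a minimal sketch of a standard DICOMweb (QIDO-RS) study search against a Google Cloud Healthcare API DICOM store. The project, location, dataset, and store names are placeholders, and the snippet assumes Application Default Credentials are already configured; it is an illustrative sketch, not an official Google example.

```python
# Minimal QIDO-RS study search against a Google Cloud Healthcare API
# DICOM store. Resource names below are placeholders.
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

BASE = "https://healthcare.googleapis.com/v1"
dicomweb = (
    f"{BASE}/projects/my-project/locations/us-central1"
    "/datasets/my-dataset/dicomStores/my-dicom-store/dicomWeb"
)

# QIDO-RS: search for CT studies; the response is DICOM JSON metadata.
resp = session.get(
    f"{dicomweb}/studies",
    params={"ModalitiesInStudy": "CT", "limit": "10"},
    headers={"Accept": "application/dicom+json"},
)
resp.raise_for_status()
for study in resp.json():
    # Tag 0020000D is StudyInstanceUID in DICOM JSON.
    print(study["0020000D"]["Value"][0])
```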

First Customers Deploy Suite

Hackensack Meridian Health hopes Google’s imaging suite will, eventually, enable the healthcare provider to predict factors affecting variance in prostate cancer outcomes.

“We are working toward building AI capabilities that will support image-based clinical diagnosis across a range of imaging and be an integral part of our clinical workflow,” said Sameer Sethi, Senior Vice President and Chief Data and Analytics Officer at Hackensack, in a news release.

The New Jersey healthcare network said in a statement that its work with Google Cloud includes use of AI and machine learning to enable notification of newborn congenital disorders and to predict sepsis risk in real-time.

Hologic, a medical technology company focused on women’s health, said its collaboration integrates Google Cloud AI with the company’s Genius Digital Diagnostics System.

“By complementing our expertise in diagnostics and AI with Google Cloud’s expertise in AI, we’re evolving our market-leading technologies to improve laboratory performance, healthcare provider decision making, and patient care,” said Michael Quick, Vice President of Research and Development and Innovation at Hologic, in the press release.

Hologic says its Genius Digital Diagnostics System combines AI with volumetric medical imaging to find pre-cancerous lesions and cancer cells. From a Pap test digital image, the system narrows “tens of thousands of cells down to an AI-generated gallery of the most diagnostically relevant,” according to the company website.

Hologic plans to work with Google Cloud on storage and “to improve diagnostic accuracy for those cancer images,” Hsu Lynch told MedCity News.

Medical image storage and sharing technologies like Google Cloud’s Medical Imaging Suite provide an opportunity for radiologists, researchers, and others to share critical image studies with anatomic pathologists and physicians providing care to cancer patients.   

One key observation is that the primary function of the service Google has begun to deploy is to aid radiology workflow and productivity and to improve the accuracy of cancer diagnoses by radiologists. Meanwhile, Google continues to employ pathologists within its medical imaging research and development teams.

Assuming that the first radiologists find the Google suite of tools effective in support of patient care, it may not be too long before Google moves to introduce an imaging suite of tools designed to aid the workflow of surgical pathologists as well.

—Donna Marie Pocius

Related Information:

Google Cloud Delivers on the Promise of AI and Data Interoperability with New Medical Imaging Suite

Review of Deep Learning in Medical Imaging: Imaging Traits, Technology Trends, Case Studies with Progress Highlights, and Future Promises

Google Cloud Unveils Medical Imaging Suite with Hologic, Hackensack Meridian as First Customers

Google Cloud Medical Imaging Suite and its Deep Insights

Hackensack Meridian Health and Google Expand Relationship to Improve Patient Care

Google Cloud Introduces New AI-Powered Medical Imaging Suite

Stanford Medicine Scientists Sequence Patient’s Whole Genome in Just Five Hours Using Nanopore Genome Sequencing, AI, and Cloud Computing

And in less than eight hours, they had diagnosed a child with a rare genetic disorder, results that would take clinical laboratory testing weeks to return, demonstrating the clinical value of the genomic process

In another major genetic sequencing advancement, scientists at Stanford University School of Medicine have developed a method for rapid sequencing of a patient’s whole genome in as little as five hours. The researchers then used their breakthrough to diagnose rare genetic diseases in under eight hours, according to a Stanford Medicine news release. Their new “ultra-rapid genome sequencing approach” could lead to significantly faster diagnostics and improved clinical laboratory treatments for cancer and other diseases.

The Stanford Medicine researchers used nanopore sequencing and artificial intelligence (AI) technologies in a “mega-sequencing approach” that has redefined “rapid” for genetic diagnostics. The sequence for one study participant—completed in just five hours and two minutes—set the first Guinness World Record for the fastest DNA sequencing to date, the news release states.

The Stanford scientists described their new method for rapid diagnosis of genetic diseases in the New England Journal of Medicine (NEJM) titled, “Ultrarapid Nanopore Genome Sequencing in a Critical Care Setting.”

Euan Ashley, MD, PhD

“A few weeks is what most clinicians call ‘rapid’ when it comes to sequencing a patient’s genome and returning results,” said cardiovascular disease specialist Euan Ashley, MD, PhD (above), professor of medicine, genetics, and biomedical data science at Stanford University, in the news release. “The right people suddenly came together to achieve something amazing. We really felt like we were approaching a new frontier.” Their results could lead to faster diagnostics and clinical laboratory treatments. (Photo copyright: Stanford Medicine.)


Need for Fast Genetic Diagnosis 

In their NEJM paper, the Stanford scientists argue that rapid genetic diagnosis is key to clinical management, improved prognosis, and critical care cost savings.

“Although most critical care decisions must be made in hours, traditional testing requires weeks and rapid testing requires days. We have found that nanopore genome sequencing can accurately and rapidly provide genetic diagnoses,” the authors wrote.

To complete their study, the researchers sequenced the genomes of 12 patients from two hospitals in Stanford, Calif. They used nanopore genome sequencing, cloud computing-based bioinformatics, and a “custom variant prioritization.”

Their findings included:

  • Five people received a genetic diagnosis from the sequencing information in about eight hours.
  • Diagnostic rate of 42%, about 12% higher than the average rate for diagnosis of genetic disorders (the researchers noted that not all conditions are genetically based and appropriate for sequencing).
  • Five hours and two minutes to sequence a patient’s genome in one case.
  • Seven hours and 18 minutes to sequence and diagnose that case.

How the Nanopore Process Works

To advance sequencing speed, the researchers used equipment by Oxford Nanopore Technologies with 48 sequencing units called “flow cells”—enough to sequence a person’s whole genome at one time.

The Oxford Nanopore PromethION Flow Cell generates more than 100 gigabases of data per hour, AI Time Journal reported. The team used a cloud-based storage system to enable computational power for real-time analysis of the data. AI algorithms scanned the genetic code for errors and compared the patients’ gene variants to variants associated with diseases found in research data, Stanford explained.

According to an NVIDIA blog post, “The researchers accelerated both base calling and variant calling using NVIDIA GPUs on Google Cloud. Variant calling, the process of identifying the millions of variants in a genome, was also sped up with NVIDIA Clara Parabricks, a computational genomics application framework.”
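Some back-of-the-envelope arithmetic puts those throughput figures in context. The per-hour output comes from the article; the run time and genome size below are rounded, illustrative values, not the study’s exact parameters.

```python
# Rough throughput/coverage arithmetic for the figures cited above.
GIGABASES_PER_HOUR = 100   # reported PromethION flow-cell output
RUN_HOURS = 5              # roughly the record-setting sequencing time
HUMAN_GENOME_GB = 3.1      # approximate haploid human genome, in gigabases

total_gb = GIGABASES_PER_HOUR * RUN_HOURS
coverage = total_gb / HUMAN_GENOME_GB
print(f"{total_gb} Gb sequenced, roughly {coverage:.0f}x genome coverage")
# -> 500 Gb sequenced, roughly 161x genome coverage
```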

Rapid Genetic Test Produces Clinical Benefits

“Together with our collaborators and some of the world’s leaders in genomics, we were able to develop a rapid sequencing analysis workflow that has already shown tangible clinical benefits,” said Mehrzad Samadi, PhD, NVIDIA Senior Engineering Manager and co-author of the NEJM paper, in the blog post. “These are the kinds of high-impact problems we live to solve.”

In their paper, the Stanford researchers described their use of the rapid genetic test to diagnose and treat an infant who was experiencing epileptic seizures on arrival at Stanford’s pediatric emergency department. In just eight hours, their diagnostic test traced the infant’s convulsions to a mutation in the gene CSNK2B, “a variant and gene known to cause a neurodevelopmental disorder with early-onset epilepsy,” the researchers wrote.

“By accelerating every step of this process—from collecting a blood sample to sequencing the whole genome to identifying variants linked to diseases—[the Stanford] research team took just hours to find a pathogenic variant and make a definitive diagnosis in a three-month-old infant with a rare seizure-causing genetic disorder. A traditional gene panel analysis ordered at the same time took two weeks to return results,” AI Time Journal reported.

New Benchmarks

The Stanford research team wants to cut the sequencing time in half. But for now, clinical laboratory leaders, pathologists, and research scientists can consider the five-hour rapid whole genome sequence a new benchmark in genetic sequencing for diagnostic purposes.

Stories like Stanford’s rapid diagnosis of the three-month-old patient with epileptic seizures point to the ultimate value of advances in genomic sequencing technologies.

—Donna Marie Pocius

Related Information:

Fastest DNA Sequencing Technique Helps Undiagnosed Patients Find Answers in Mere Hours

Ultrarapid Nanopore Genome Sequencing in a Critical Care Setting

Stanford Researchers Use AI to Sequence and Analyze DNA in Five Hours

World Record-Setting DNA Sequencing Technique Helps Clinicians Rapidly Diagnose Critical Care Patients

Ultima Genomics Delivers the $100 Genome

Diagnosing Ovarian Cancer Using Perception-based Nanosensors and Machine Learning

Two studies show the accuracy of perception-based systems in detecting disease biomarkers without needing molecular recognition elements, such as antibodies

Researchers from multiple academic and research institutions have collaborated to develop a non-conventional machine learning-based technology for identifying and measuring biomarkers to detect ovarian cancer without the need for molecular identification elements, such as antibodies.

Traditional clinical laboratory methods for detecting biomarkers of specific diseases require a “molecular recognition molecule,” such as an antibody, to match with each disease’s biomarker. However, according to a Lehigh University news release, for ovarian cancer “there’s not a single biomarker—or analyte—that indicates the presence of cancer.

“When multiple analytes need to be measured in a given sample, which can increase the accuracy of a test, more antibodies are required, which increases the cost of the test and the turnaround time,” the news release noted.

The multi-institutional team included scientists from Memorial Sloan Kettering Cancer Center, Weill Cornell Medicine, the University of Maryland, the National Institute of Standards and Technology, and Lehigh University.

Unveiled in two sequential studies, the new method for detecting ovarian cancer uses machine learning to examine the spectral signatures of carbon nanotubes and thereby detect and recognize the disease’s biomarkers.

Daniel Heller, PhD
 
“Carbon nanotubes have interesting electronic properties,” said Daniel Heller, PhD (above), in the Lehigh University news release. “If you shoot light at them, they emit a different color of light, and that light’s color and intensity can change based on what’s sticking to the nanotube. We were able to harness the complexity of so many potential binding interactions by using a range of nanotubes with various wrappings. And that gave us a range of different sensors that could all detect slightly different things, and it turned out they responded differently to different proteins.” This method differs greatly from traditional clinical laboratory methods for identifying disease biomarkers. (Photo copyright: Memorial Sloan-Kettering Cancer Center.)

Perception-based Nanosensor Array for Detecting Disease

The researchers published their findings from the two studies in the journals Science Advances, titled, “A Perception-based Nanosensor Platform to Detect Cancer Biomarkers,” and Nature Biomedical Engineering, titled, “Detection of Ovarian Cancer via the Spectral Fingerprinting of Quantum-Defect-Modified Carbon Nanotubes in Serum by Machine Learning.”

In the Science Advances paper, the researchers described their development of “a perception-based platform based on an optical nanosensor array that leverages machine learning algorithms to detect multiple protein biomarkers in biofluids.

“Perception-based machine learning (ML) platforms, modeled after the complex olfactory system, can isolate individual signals through an array of relatively nonspecific receptors. Each receptor captures certain features, and the overall ensemble response is analyzed by the neural network in our brain, resulting in perception,” the researchers wrote.

“This work demonstrates the potential of perception-based systems for the development of multiplexed sensors of disease biomarkers without the need for specific molecular recognition elements,” the researchers concluded.

In the Nature Biomedical Engineering paper, the researchers described a fine-tuned toolset that could accurately differentiate biomarkers of ovarian cancer from biomarkers in individuals who are cancer-free.

“Here we show that a ‘disease fingerprint’—acquired via machine learning from the spectra of near-infrared fluorescence emissions of an array of carbon nanotubes functionalized with quantum defects—detects high-grade serous ovarian carcinoma in serum samples from symptomatic individuals with 87% sensitivity at 98% specificity (compared with 84% sensitivity at 98% specificity for the current best [clinical laboratory] screening test, which uses measurements of cancer antigen 125 and transvaginal ultrasonography),” the researchers wrote.

“We demonstrated that a perception-based nanosensor platform could detect ovarian cancer biomarkers using machine learning,” said Yoona Yang, PhD, a postdoctoral research associate in Lehigh’s Department of Chemical and Biomolecular Engineering and co-first author of the Science Advances article, in the news release.

How Perception-based Machine Learning Platforms Work

According to Yang, perception-based sensing functions like the human brain.

“The system consists of a sensing array that captures a certain feature of the analytes in a specific way, and then the ensemble response from the array is analyzed by the computational perceptive model. It can detect various analytes at once, which makes it much more efficient,” Yang said.

The “array” the researchers are referring to are DNA strands wrapped around single-wall carbon nanotubes (DNA-SWCNTs).

“SWCNTs have unique optical properties and sensitivity that make them valuable as sensor materials. SWCNTs emit near-infrared photoluminescence with distinct narrow emission bands that are exquisitely sensitive to the local environment,” the researchers wrote in Science Advances.

Daniel Heller, PhD, Head of the Cancer Nanotechnology Laboratory at Memorial Sloan Kettering Cancer Center and Associate Professor in the Department of Pharmacology at Weill Cornell Medicine of Cornell University, explained in the Lehigh University news release (see his quote above) that the color and intensity of the light the nanotubes emit change based on what is sticking to them, and that using a range of nanotubes with various DNA wrappings yielded a range of sensors that respond differently to different proteins.
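To make the “ensemble response” idea concrete, here is a minimal sketch of a perception-based classifier trained on synthetic stand-ins for nanotube emission features. The data, model, and printed numbers are illustrative assumptions, not the study’s spectra or code.

```python
# Perception-based sensing sketch: many nonspecific sensors each respond
# slightly differently, and a learned model reads the ensemble response.
# All data here are synthetic stand-ins for nanotube emission features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n_samples, n_sensors = 600, 20   # e.g., 20 DNA-SWCNT sensor variants

# Each class shifts every sensor's response slightly (weak, overlapping signals).
labels = rng.integers(0, 2, n_samples)       # 0 = cancer-free, 1 = cancer
class_shift = rng.normal(0, 0.5, n_sensors)  # per-sensor response to disease
X = rng.normal(0, 1, (n_samples, n_sensors)) + np.outer(labels, class_shift)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

# Report sensitivity at ~98% specificity, mirroring how the paper states results.
fpr, tpr, _ = roc_curve(y_te, model.predict_proba(X_te)[:, 1])
sens_at_98_spec = tpr[fpr <= 0.02].max()
print(f"sensitivity at 98% specificity: {sens_at_98_spec:.0%}")
```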

The researchers put their technology to a practical test in the second study. They wanted to learn if it could differentiate symptomatic patients with high-grade ovarian cancer from cancer-free individuals.

The research team used 269 serum samples. This time, the nanotubes were bound with a specific molecule providing “an extra signal in terms of data and richer data from every nanotube-DNA combination,” said Anand Jagota, PhD, Professor of Bioengineering and Chemical and Biomolecular Engineering at Lehigh University, in the news release.

This year, 19,880 women will be diagnosed with ovarian cancer and 12,810 will die from the disease, according to American Cancer Society data. While more research and clinical trials are needed, the above studies are compelling and suggest the possibility that one day clinical laboratories may detect ovarian cancer faster and more accurately than with current methods.   

—Donna Marie Pocius

Related Information:

Perception-Based Nanosensor Platform Could Advance Detection of Ovarian Cancer

Perception-Based Nanosensor Platform to Detect Cancer Biomarkers

Detection of Ovarian Cancer via the Spectral Fingerprinting of Quantum-Defect-Modified Carbon Nanotubes in Serum by Machine Learning

Machine Learning Nanosensor Platform Detects Early Cancer Biomarkers

What is Swarm Learning and Might It Come to a Clinical Laboratory Near You?

The international research team that developed swarm learning believes it could ‘significantly promote and accelerate collaboration and information exchange in research, especially in the field of medicine’

“Swarm Learning” is a technology that enables cross-site analysis of population health data while maintaining patient privacy protocols to generate improvements in precision medicine. That’s the goal described by an international team of scientists who used this approach to develop artificial intelligence (AI) algorithms that seek out and identify lung disease, blood cancer, and COVID-19 data stored in disparate databases.

Since 80% of patient records feature clinical laboratory test results, there’s no doubt this protected health information (PHI) would be curated by the swarm learning algorithms. 

Researchers with DZNE (German Center for Neurodegenerative Diseases), the University of Bonn, and Hewlett Packard Enterprise (HPE) who developed the swarm learning algorithms published their findings in the journal Nature, titled, “Swarm Learning for Decentralized and Confidential Clinical Machine Learning.”

In their study they wrote, “Fast and reliable detection of patients with severe and heterogeneous illnesses is a major goal of precision medicine. … However, there is an increasing divide between what is technically possible and what is allowed, because of privacy legislation. Here, to facilitate the integration of any medical data from any data owner worldwide without violating privacy laws, we introduce Swarm Learning—a decentralized machine-learning approach that unites edge computing, blockchain-based peer-to-peer networking, and coordination while maintaining confidentiality without the need for a central coordinator, thereby going beyond federated learning.”

What is Swarm Learning?

Swarm Learning is a way to collaborate and share medical research toward a goal of advancing precision medicine, the researchers stated.

The technology blends AI with blockchain-based peer-to-peer networking to create information exchange across a network, the DZNE news release explained. The machine learning algorithms are “trained” to detect data patterns “and recognize the learned patterns in other data as well,” the news release noted. 

Joachim Schultze, MD

“Medical research data are a treasure. They can play a decisive role in developing personalized therapies that are tailored to each individual more precisely than conventional treatments,” said Joachim Schultze, MD (above), Director, Systems Medicine at DZNE and Professor, Life and Medical Sciences Institute at the University of Bonn, in the news release. “It’s critical for science to be able to use such data as comprehensively and from as many sources as possible,” he added. This, of course, would include clinical laboratory test results data. (Photo copyright: University of Bonn.)
 

Since, as Dark Daily has reported many times, clinical laboratory test data comprises as much as 80% of patients’ medical records, such a treasure trove of information will most likely include medical laboratory test data as well as reports on patient diagnoses, demographics, and medical history. Swarm learning incorporating laboratory test results may inform medical researchers in their population health analyses.

“The key is that all participants can learn from each other without the need of sharing confidential information,” said Eng Lim Goh, PhD, Senior Vice President and Chief Technology Officer for AI at Hewlett Packard Enterprise (HPE), which developed base technology for swarm learning, according to the news release.

An HPE blog post notes that “Using swarm learning, the hospital can combine its data with that of hospitals serving different demographics in other regions and then use a private blockchain to learn from a global average, or parameter, of results—without sharing actual patient information.

“Under this model,” the blog continues, “‘each hospital is able to predict, with accuracy and with reduced bias, as though [it has] collected all the patient data globally in one place and learned from it,’ Goh says.”
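A conceptual sketch of that “global average of parameters” idea appears below, in plain numpy: each simulated hospital trains on its own private data, and only model weights are merged across peers. HPE’s actual swarm learning platform additionally uses blockchain-based peer-to-peer coordination, which is omitted here; this is an illustration of the principle, not the product.

```python
# Conceptual swarm/parameter-averaging sketch: local training on private
# data, periodic averaging of model weights, no raw data exchanged.
import numpy as np

rng = np.random.default_rng(42)

def local_gradient_step(weights, X, y, lr=0.1):
    """One logistic-regression gradient step on a node's private data."""
    preds = 1 / (1 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

# Three "hospitals," each with its own private dataset (never shared).
n_features = 5
true_w = rng.normal(size=n_features)
nodes = []
for _ in range(3):
    X = rng.normal(size=(200, n_features))
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    nodes.append((X, y))

weights = [np.zeros(n_features) for _ in nodes]
for round_ in range(50):
    # Local training on each node's own data.
    weights = [local_gradient_step(w, X, y) for w, (X, y) in zip(weights, nodes)]
    # Swarm merge: peers average parameters; no raw patient data moves.
    merged = np.mean(weights, axis=0)
    weights = [merged.copy() for _ in nodes]

accuracy = np.mean([(1 / (1 + np.exp(-(X @ weights[0]))) > 0.5) == y
                    for X, y in nodes])
print(f"average accuracy across nodes: {accuracy:.0%}")
```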

Swarm Learning Applied in Study

The researchers studied four infectious and non-infectious diseases: leukemia, tuberculosis, COVID-19, and lung disease. They used 16,400 transcriptomes from 127 clinical studies and assessed 95,000 X-ray images.

  • Transcriptome data were distributed over three to 32 blockchain nodes, and X-ray data across three nodes.
  • The researchers “fed their algorithms with subsets of the respective data set” (such as those coming from people with disease versus healthy individuals), the news release noted.

Findings included:

  • 90% algorithm accuracy in reporting on healthy people versus those diagnosed with diseases for transcriptomes.
  • 76% to 86% algorithm accuracy in reporting of X-ray data.
  • Methodology worked best for leukemia.
  • Accuracy also was “very high” for tuberculosis and COVID-19.
  • X-ray data accuracy rate was lower, researchers said, due to less available data or image quality.

“Our study thus proves that swarm learning can be successfully applied to very different data. In principle, this applies to any type of information for which pattern recognition by means of artificial intelligence is useful. Be it genome data, X-ray images, data from brain imaging, or other complex data,” Schultze said in the DZNE news release.

The researchers plan to conduct additional studies exploring swarm learning’s implications for Alzheimer’s disease and other neurodegenerative diseases.

Is Swarm Learning Coming to Your Lab?

The scientists say hospitals as well as research institutions may join or form swarms. So, hospital-based medical laboratory leaders and pathology groups may have an opportunity to contribute to swarm learning. According to Schultze, sharing information can go a long way toward “making the wealth of experience in medicine more accessible worldwide.”

—Donna Marie Pocius

Related Information:

AI With Swarm Intelligence: A Novel Technology for Cooperative Analysis of Big Data

Swarm Learning for Decentralized and Confidential Clinical Machine Learning

Swarm Learning

HPE’s Dr. Goh on Harnessing the Power of Swarm Learning

Swarm Learning: This Artificial Intelligence Can Detect COVID-19, Other Diseases
