News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel


Research Consortium Identifies 188 New CRISPR Gene-Editing Systems, Some More Accurate than CRISPR

New gene-editing systems could provide markedly improved accuracy for DNA and RNA editing leading to new precision medicine tools and genetic therapies

In what may turn out to be a significant development in genetic engineering, researchers from three institutions have identified nearly 200 new systems that can be used for editing genes. A number of these new systems are believed to provide comparable or better accuracy than CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), currently the most widely used gene-editing method.

CRISPR-Cas9 has been the standard tool for CRISPR gene editing and genetic engineering. However, publication of these new research findings is expected to give scientists better, more precise tools to edit genes. In turn, these developments could lead to new clinical laboratory tests and precision medicine therapies for patients with inherited genetic diseases.

Researchers from Broad Institute, Massachusetts Institute of Technology (MIT), and the federal National Institutes of Health (NIH) have uncovered 188 new CRISPR systems “in their native habitat of bacteria” with some showing superior editing capabilities, New Atlas reported.

“Best known as a powerful gene-editing tool, CRISPR actually comes from an inbuilt defense system found in bacteria and simple microbes called archaea. CRISPR systems include pairs of ‘molecular scissors’ called Cas enzymes, which allow microbes to cut up the DNA of viruses that attack them. CRISPR technology takes advantage of these scissors to cut genes out of DNA and paste new genes in,” according to Live Science.

In its article, New Atlas noted that the researchers looked to bacteria because “In nature, CRISPR is a self-defense tool used by bacteria.” They developed an algorithm—called FLSHclust—to conduct “a deep dive into three databases of bacteria, found in environments as diverse as Antarctic lakes, breweries, and dog saliva.”

The research team published their findings in the journal Science titled, “Uncovering the Functional Diversity of Rare CRISPR-Cas Systems with Deep Terascale Clustering.”

In their paper, the researchers wrote, “We developed fast locality-sensitive hashing–based clustering (FLSHclust), a parallelized, deep clustering algorithm with linearithmic scaling based on locality-sensitive hashing. FLSHclust approaches MMseqs2, a gold-standard quadratic-scaling algorithm, in clustering performance. We applied FLSHclust in a sensitive CRISPR discovery pipeline and identified 188 previously unreported CRISPR-associated systems, including many rare systems.”
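The core idea behind FLSHclust, grouping similar sequences via locality-sensitive hashing instead of all-vs-all comparison, can be sketched in a few lines of Python. This is a toy MinHash illustration under simplified assumptions, not the published algorithm, and the protein sequences below are invented for the example:

```python
import hashlib
from collections import defaultdict

def kmers(seq, k=4):
    """Break a sequence into its set of overlapping k-mer features."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(features, num_hashes=16):
    """MinHash: for each seeded hash function, keep the minimum hash over
    the feature set. Similar sets tend to produce similar signatures."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{f}".encode()).hexdigest(), 16)
            for f in features)
        for seed in range(num_hashes)
    )

def lsh_buckets(seqs, k=4, num_hashes=16, band_size=2):
    """Group sequences whose signatures agree on any band of hashes.
    Bucketing is linear in the number of sequences, which is the point:
    no quadratic all-vs-all comparison is needed."""
    buckets = defaultdict(set)
    for name, seq in seqs.items():
        sig = minhash_signature(kmers(seq, k), num_hashes)
        for b in range(0, num_hashes, band_size):
            buckets[(b, sig[b:b + band_size])].add(name)
    return [members for members in buckets.values() if len(members) > 1]

proteins = {
    "A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "B": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA",  # near-duplicate of A
    "C": "GWTLNSAGYLLGPHAVGNHRSFSDKNGLTS",     # unrelated sequence
}
candidates = lsh_buckets(proteins)
```

Sequences sharing many k-mers (A and B) are likely to collide in at least one band, while an unrelated sequence (C) shares no k-mers and so lands in its own buckets.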

“In lab tests [the newfound CRISPR systems] demonstrated a range of functions, and fell into both known and brand new categories,” New Atlas reported.

Soumya Kannan, PhD

“Some of these microbial systems were exclusively found in water from coal mines,” Soumya Kannan, PhD (above), a Graduate Fellow at MIT’s Zhang Lab and co-first author of the study, told New Atlas. “If someone hadn’t been interested in that, we may never have seen those systems.” These new gene-editing systems could lead to new clinical laboratory genetic tests and therapeutics for chronic diseases. (Photo copyright: MIT McGovern Institute.)

Deeper Look at Advancement

CRISPR-Cas9 made an enormous impact when it was announced in 2012, and its developers went on to earn the 2020 Nobel Prize in Chemistry.

Though CRISPR-Cas9 brought huge benefits to genetic research, the team noted in their Science paper that “existing methods for sequence mining lag behind the exponentially growing databases that now contain billions of proteins, which restricts the discovery of rare protein families and associations.

“We sought to comprehensively enumerate CRISPR-linked gene modules in all existing publicly available sequencing data,” the scientist continued. “Recently, several previously unknown biochemical activities have been linked to programmable nucleic acid recognition by CRISPR systems, including transposition and protease activity. We reasoned that many more diverse enzymatic activities may be associated with CRISPR systems, many of which could be of low abundance in existing [gene] sequence databases.”

Among the previously unknown gene-editing systems the researchers found were some belonging to the Type 1 CRISPR systems class. These “have longer guide RNA sequences than Cas9. They can be directed to their targets more precisely, reducing the risk of off-target edits—one of the main problems with CRISPR gene editing,” New Atlas reported.

“The authors also identified a CRISPR-Cas enzyme, Cas14, which cuts RNA precisely. These discoveries may help to further improve DNA- and RNA-editing technologies, with wide-ranging applications in medicine and biotechnology,” the Science paper noted.

Testing also showed these systems were able to edit human cells, meaning “their size should allow them to be delivered in the same packages currently used for CRISPR-Cas9,” New Atlas added.

Another newfound gene-editing system demonstrated “collateral activity, breaking down nucleic acids after binding to the target,” New Atlas reported. This activity was previously harnessed by SHERLOCK, a tool that analyzes single samples of RNA or DNA to diagnose disease.

Additionally, New Atlas noted, “a type VII system was found to target RNA, which could unlock a range of new tools through RNA editing. Others could be adapted to record when certain genes are expressed, or as sensors for activity in cells.”

Looking Ahead

The scientific strides made with CRISPR-Cas9 hint at what may come from these new discoveries. “Not only does this study greatly expand the field of possible gene editing tools, but it shows that exploring microbial ecosystems in obscure environments could pay off with potential human benefits,” New Atlas noted.

“This study introduces FLSHclust as a tool to cluster millions of sequences quickly and efficiently, with broad applications in mining large sequence databases. The CRISPR-linked systems that we discovered represent an untapped trove of diverse biochemical activities linked to RNA-guided mechanisms, with great potential for development as biotechnologies,” the researchers wrote in Science.

How these newfound gene-editing tools and the new FLSHclust algorithm will eventually lead to new clinical laboratory tests and precision medicine diagnostics is not yet clear. But the discoveries are likely to improve DNA/RNA editing, and that may eventually lead to new clinical and biomedical applications.

—Kristin Althea O’Connor

Related Information:

Algorithm Identifies 188 New CRISPR Gene-Editing Systems

188 New Types of CRISPR Revealed by Algorithm

FLSHclust, a New Algorithm, Reveals Rare and Previously Unknown CRISPR-Cas Systems

Uncovering the Functional Diversity of Rare CRISPR-Cas Systems with Deep Terascale Clustering

Questions and Answers about CRISPR

Annotation and Classification of CRISPR-Cas Systems

SHERLOCK: Nucleic Acid Detection with CRISPR Nucleases

Cedars-Sinai Researchers Determine Smartphone App Can Assess Stool Form as Well as Gastroenterologists and Better than IBS Patients

Artificial intelligence performs BSS assessments with higher sensitivity and specificity than human diagnosticians

In a recent study conducted by scientists at Cedars-Sinai Medical Center in Los Angeles, researchers evaluated a smartphone application (app) that uses artificial intelligence (AI) to assess and characterize digital images of stool samples. The app, it turns out, matched the accuracy of participating gastroenterologists and exceeded the accuracy of study patients’ self-reports of stool specimens, according to a news release.

Though smartphone apps are technically not clinical laboratory tools, anatomic pathologists and medical laboratory scientists (MLSs) may be interested to learn how health information technology (HIT), machine learning, and smartphone apps are being used to assess different aspects of individuals’ health, independent of trained healthcare professionals.

The issue the Cedars-Sinai researchers were investigating is the accuracy of patient self-reporting. Because stool can be more complicated than meets the eye, patients often find it difficult to be specific when asked to describe their bowel movements. Thus, a smartphone app that enables patients to accurately assess their stools, in cases where the function of the digestive tract is relevant to diagnosis and treatment, would be a boon to precision medicine treatments of gastroenterology diseases.

The scientists published their findings in the American Journal of Gastroenterology, titled, “A Smartphone Application Using Artificial Intelligence Is Superior to Subject Self-Reporting when Assessing Stool Form.”

Mark Pimentel, MD

“This app takes out the guesswork by using AI—not patient input—to process the images (of bowel movements) taken by the smartphone,” said gastroenterologist Mark Pimentel, MD (above), Executive Director of Cedars-Sinai’s Medically Associated Science and Technology (MAST) program and principal investigator of the study, in a news release. “The mobile app produced more accurate and complete descriptions of constipation, diarrhea, and normal stools than a patient could, and was comparable to specimen evaluations by well-trained gastroenterologists in the study.” (Photo copyright: Cedars-Sinai.)

Pros and Cons of Bristol Stool Scale

In their paper, the scientists discussed the Bristol Stool Scale (BSS), a traditional diagnostic tool that classifies stool forms into seven categories. The seven types of stool are:

  • Type 1: Separate hard lumps, like nuts (difficult to pass).
  • Type 2: Sausage-shaped, but lumpy.
  • Type 3: Like a sausage, but with cracks on its surface.
  • Type 4: Like a sausage or snake, smooth and soft (average stool).
  • Type 5: Soft blobs with clear cut edges.
  • Type 6: Fluffy pieces with ragged edges, a mushy stool (diarrhea).
  • Type 7: Watery, no solid pieces, entirely liquid (diarrhea). 
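In common clinical usage the seven types collapse into three coarse groups, which is how the study's outcome categories (constipation, normal, diarrhea) map back to the scale. A minimal sketch of that mapping (the 1-2/3-5/6-7 grouping reflects conventional interpretation, not a rule stated in the study):

```python
def interpret_bss(bss_type: int) -> str:
    """Map a Bristol Stool Scale type (1-7) to its coarse clinical group.
    Grouping follows common usage: types 1-2 suggest constipation,
    3-5 are within the normal range, and 6-7 indicate diarrhea."""
    if bss_type in (1, 2):
        return "constipation"
    if bss_type in (3, 4, 5):
        return "normal"
    if bss_type in (6, 7):
        return "diarrhea"
    raise ValueError("BSS types run from 1 to 7")
```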

In an industry guidance report on irritable bowel syndrome (IBS) and associated drugs for treatment, the US Food and Drug Administration (FDA) said the BSS is “an appropriate instrument for capturing stool consistency in IBS.”

But even with the BSS, things can get murky for patients. Inaccurate self-reporting of stool forms by people with IBS and diarrhea can make proper diagnoses difficult.

“The problem is that whenever you have a patient reporting an outcome measure, it becomes subjective rather than objective. This can impact the placebo effect,” gastroenterologist Mark Pimentel, MD, Executive Director of Cedars-Sinai’s Medically Associated Science and Technology (MAST) program and principal investigator of the study, told Healio.

Thus, according to the researchers, AI algorithms can help with diagnosis by systematically doing the assessments for the patients, News Medical reported.

30,000 Stool Images Train New App

To conduct their study, the Cedars-Sinai researchers tested an AI smartphone app developed by Dieta Health. According to Health IT Analytics, employing AI trained on 30,000 annotated stool images, the app characterizes digital images of bowel movements using five parameters:

  • BSS,
  • Consistency,
  • Edge fuzziness,
  • Fragmentation, and
  • Volume.

“The app used AI to train the software to detect the consistency of the stool in the toilet based on the five parameters of stool form. We then compared that with doctors who know what they are looking at,” Pimentel told Healio.

AI Assessments Comparable to Doctors, Better than Patients

According to Health IT Analytics, the researchers found that:

  • AI assessed stool comparably to gastroenterologists’ assessments on BSS, consistency, fragmentation, and edge fuzziness scores.
  • AI and gastroenterologists had moderate-to-good agreement on volume.
  • AI outperformed study participant self-reports based on the BSS with 95% accuracy, compared to patients’ 89% accuracy.

Additionally, the AI outperformed humans in specificity and sensitivity as well:

  • Specificity (ability to correctly report a negative result) was 27% higher.
  • Sensitivity (ability to correctly report a positive result) was 23% higher.
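Accuracy, sensitivity, and specificity are all derived from the same confusion-matrix counts. A minimal sketch of the arithmetic behind figures like these (the counts below are invented for illustration, not taken from the Cedars-Sinai study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute the standard screening metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    sensitivity = tp / (tp + fn)          # share of positives correctly flagged
    specificity = tn / (tn + fp)          # share of negatives correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only:
sens, spec, acc = diagnostic_metrics(tp=90, fp=5, tn=95, fn=10)
# sens = 0.90, spec = 0.95, acc = 0.925
```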

“A novel smartphone application can determine BSS and other visual stool characteristics with high accuracy compared with the two expert gastroenterologists. Moreover, trained AI was superior to subject self-reporting of BSS. AI assessments could provide more objective outcome measures for stool characterization in gastroenterology,” the Cedars-Sinai researchers wrote in their paper.

“In addition to improving a physician’s ability to assess their patients’ digestive health, this app could be advantageous for clinical trials by reducing the variability of stool outcome measures,” said gastroenterologist Ali Rezaie, MD, study co-author and Medical Director of Cedars-Sinai’s GI Motility Program in the news release.

The researchers plan to seek FDA review of the mobile app.

Opportunity for Clinical Laboratories

Anatomic pathologists and clinical laboratory leaders may want to reach out to referring gastroenterologists to find out how they can better serve gastro patients. As the Cedars-Sinai study suggests, AI smartphone apps can perform BSS assessments as well as or better than humans and may be useful tools in the pursuit of precision medicine treatments for patients suffering from painful gastrointestinal disorders.

—Donna Marie Pocius

Related Information:

Smartphone Application Using Artificial Intelligence is Superior to Subject Self-Reporting When Assessing Stool Form

Study: App More Accurate than Patient Evaluation of Stool Samples

Industry Guidance Report: Irritable Bowel Syndrome—Clinical Evaluation of Drugs

Artificial Intelligence-based Smartphone App for Characterizing Stool Form

AI Mobile App Improves on “Subjective” Patient-Reported Stool Assessment in IBS

Artificial Intelligence App Outperforms Patient-Reported Stool Assessments

Hackensack Meridian Health and Hologic Tap Google Cloud’s New Medical Imaging Suite for Cancer Diagnostics

Google designed the suite to ease radiologists’ workload and enable easy and secure sharing of critical medical imaging; technology may eventually be adapted to pathologists’ workflow

Clinical laboratory and pathology group leaders know that Google is doing extensive research and development in the field of cancer diagnostics. For several years, the Silicon Valley giant has been focused on digital imaging and the use of artificial intelligence (AI) algorithms and machine learning to detect cancer.

Now, Google Cloud has announced it is launching a new medical imaging suite for radiologists that is aimed at making healthcare data for the diagnosis and care of cancer patients more accessible. The new suite “promises to make medical imaging data more interoperable and useful by leveraging artificial intelligence,” according to MedCity News.

In a press release, medical technology company Hologic, and healthcare provider Hackensack Meridian Health in New Jersey, announced they were the first customers to use Google Cloud’s new suite of medical imaging products.

“Hackensack Meridian Health has begun using it to detect metastasis in prostate cancer patients earlier, and Hologic is using it to strengthen its diagnostic platform that screens women for cervical cancer,” MedCity News reported.

Alissa Hsu Lynch

“Google pioneered the use of AI and computer vision in Google Photos, Google Image Search, and Google Lens, and now we’re making our imaging expertise, tools, and technologies available for healthcare and life sciences enterprises,” said Alissa Hsu Lynch (above), Global Lead of Google Cloud’s MedTech Strategy and Solutions, in a press release. “Our Medical Imaging Suite shows what’s possible when tech and healthcare companies come together.” Clinical laboratory companies may find Google’s Medical Imaging Suite worth investigating. (Photo copyright: Influencive.)


Easing the Burden on Radiologists

Clinical laboratory leaders and pathologists know that laboratory data drives most healthcare decision-making. And medical images make up 90% of all healthcare data, noted an article in Proceedings of the IEEE (Institute of Electrical and Electronics Engineers).

More importantly, medical images are growing in size and complexity. So, radiologists and medical researchers need a way to quickly interpret them and keep up with the increased workload, Google Cloud noted.

“The size and complexity of these images is huge, and, often, images stay sitting in data siloes across an organization,” Alissa Hsu Lynch, Global Lead, MedTech Strategy and Solutions at Google, told MedCity News. “In order to make imaging data useful for AI, we have to address interoperability and standardization. This suite is designed to help healthcare organizations accelerate the development of AI so that they can enable faster, more accurate diagnosis and ease the burden for radiologists,” she added.

According to the press release, Google Cloud’s Medical Imaging Suite features include:

  • Imaging Storage: Easy and secure data exchange using the international DICOM (digital imaging and communications in medicine) standard for imaging. A fully managed, highly scalable, enterprise-grade development environment that includes automated DICOM de-identification. Seamless cloud data management via a cloud-native enterprise imaging PACS (picture archiving and communication system) in clinical use by radiologists.
  • Imaging Lab: AI-assisted annotation tools that help automate the highly manual and repetitive task of labeling medical images, and Google Cloud native integration with any DICOMweb viewer.
  • Imaging Datasets and Dashboards: Ability to view and search petabytes of imaging data to perform advanced analytics and create training datasets with zero operational overhead.
  • Imaging AI Pipelines: Accelerated development of AI pipelines to build scalable machine learning models, with 80% fewer lines of code required for custom modeling.
  • Imaging Deployment: Flexible options for cloud, on-prem (on-premises software), or edge deployment to allow organizations to meet diverse sovereignty, data security, and privacy requirements—while providing centralized management and policy enforcement with Google Distributed Cloud.
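The automated DICOM de-identification mentioned under Imaging Storage amounts to blanking patient-identifying attributes before images leave the clinical environment. A simplified pure-Python sketch over a tag dictionary (real pipelines operate on binary DICOM objects with libraries such as pydicom, and the DICOM standard's confidentiality profile lists far more attributes than this small subset):

```python
# A small, illustrative subset of DICOM attributes that carry
# protected health information (PHI) and are blanked on export.
PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate", "PatientAddress"}

def deidentify(header: dict) -> dict:
    """Return a copy of a tag->value mapping with PHI attributes blanked,
    leaving clinically relevant metadata (modality, dates of study) intact."""
    return {tag: ("" if tag in PHI_TAGS else value)
            for tag, value in header.items()}

image_header = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "Modality": "MR",
    "StudyDate": "20230101",
}
clean = deidentify(image_header)
```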

First Customers Deploy Suite

Hackensack Meridian Health hopes Google’s imaging suite will, eventually, enable the healthcare provider to predict factors affecting variance in prostate cancer outcomes.

“We are working toward building AI capabilities that will support image-based clinical diagnosis across a range of imaging and be an integral part of our clinical workflow,” said Sameer Sethi, Senior Vice President and Chief Data and Analytics Officer at Hackensack, in a news release.

The New Jersey healthcare network said in a statement that its work with Google Cloud includes use of AI and machine learning to enable notification of newborn congenital disorders and to predict sepsis risk in real-time.

Hologic, a medical technology company focused on women’s health, said its collaboration integrates Google Cloud AI with the company’s Genius Digital Diagnostics System.

“By complementing our expertise in diagnostics and AI with Google Cloud’s expertise in AI, we’re evolving our market-leading technologies to improve laboratory performance, healthcare provider decision making, and patient care,” said Michael Quick, Vice President of Research and Development and Innovation at Hologic, in the press release.

Hologic says its Genius Digital Diagnostics System combines AI with volumetric medical imaging to find pre-cancerous lesions and cancer cells. From a Pap test digital image, the system narrows “tens of thousands of cells down to an AI-generated gallery of the most diagnostically relevant,” according to the company website.

Hologic plans to work with Google Cloud on storage and “to improve diagnostic accuracy for those cancer images,” Hsu Lynch told MedCity News.

Medical image storage and sharing technologies like Google Cloud’s Medical Imaging Suite provide an opportunity for radiologists, researchers, and others to share critical image studies with anatomic pathologists and physicians providing care to cancer patients.   

One key observation: the primary function of the service Google has begun to deploy is to aid radiology workflow and productivity, and to improve the accuracy of cancer diagnoses by radiologists. Meanwhile, Google continues to employ pathologists within its medical imaging research and development teams.

Assuming that the first radiologists find the Google suite of tools effective in support of patient care, it may not be too long before Google moves to introduce an imaging suite of tools designed to aid the workflow of surgical pathologists as well.

—Donna Marie Pocius

Related Information:

Google Cloud Delivers on the Promise of AI and Data Interoperability with New Medical Imaging Suite

Review of Deep Learning in Medical Imaging: Imaging Traits, Technology Trends, Case Studies with Progress Highlights, and Future Promises

Google Cloud Unveils Medical Imaging Suite with Hologic, Hackensack Meridian as First Customers

Google Cloud Medical Imaging Suite and its Deep Insights

Hackensack Meridian Health and Google Expand Relationship to Improve Patient Care

Google Cloud Introduces New AI-Powered Medical Imaging Suite

Stanford Medicine Scientists Sequence Patient’s Whole Genome in Just Five Hours Using Nanopore Genome Sequencing, AI, and Cloud Computing

And in less than eight hours, they had diagnosed a child with a rare genetic disorder, results that would take clinical laboratory testing weeks to return, demonstrating the clinical value of the genomic process

In another major genetic sequencing advancement, scientists at Stanford University School of Medicine have developed a method for rapid sequencing of a patient’s whole genome in as little as five hours. The researchers then used their breakthrough to diagnose rare genetic diseases in under eight hours, according to a Stanford Medicine news release. Their new “ultra-rapid genome sequencing approach” could lead to significantly faster diagnostics and improved clinical laboratory treatments for cancer and other diseases.

The Stanford Medicine researchers used nanopore sequencing and artificial intelligence (AI) technologies in a “mega-sequencing approach” that has redefined “rapid” for genetic diagnostics. The sequence for one study participant—completed in just five hours and two minutes—set the first Guinness World Record for the fastest DNA sequencing to date, the news release states.

The Stanford scientists described their new method for rapid diagnosis of genetic diseases in the New England Journal of Medicine (NEJM) titled, “Ultrarapid Nanopore Genome Sequencing in a Critical Care Setting.”

Euan Ashley, MD, PhD

“A few weeks is what most clinicians call ‘rapid’ when it comes to sequencing a patient’s genome and returning results,” said cardiovascular disease specialist Euan Ashley, MD, PhD (above), professor of medicine, genetics, and biomedical data science, at Stanford University in the news release. “The right people suddenly came together to achieve something amazing. We really felt like we were approaching a new frontier.” Their results could lead to faster diagnostics and clinical laboratory treatments. (Photo copyright: Stanford Medicine.)


Need for Fast Genetic Diagnosis 

In their NEJM paper, the Stanford scientists argue that rapid genetic diagnosis is key to clinical management, improved prognosis, and critical care cost savings.

“Although most critical care decisions must be made in hours, traditional testing requires weeks and rapid testing requires days. We have found that nanopore genome sequencing can accurately and rapidly provide genetic diagnoses,” the authors wrote.

To complete their study, the researchers sequenced the genomes of 12 patients from two hospitals in Stanford, Calif. They used nanopore genome sequencing, cloud computing-based bioinformatics, and a “custom variant prioritization.”

Their findings included:

  • Five people received a genetic diagnosis from the sequencing information in about eight hours.
  • Diagnostic rate of 42%, about 12% higher than the average rate for diagnosis of genetic disorders (the researchers noted that not all conditions are genetically based and appropriate for sequencing).
  • Five hours and two minutes to sequence a patient’s genome in one case.
  • Seven hours and 18 minutes to sequence and diagnose that case.
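The “custom variant prioritization” step is what turns millions of called variants into a short list a clinician can act on in hours. A hypothetical Python sketch of the filtering idea (the real pipeline draws on clinical databases and far richer annotations; the variant records and phenotype gene list here are invented, though CSNK2B is the gene identified in the study's infant case):

```python
# Illustrative variant records: gene, predicted impact, population frequency.
variants = [
    {"gene": "CSNK2B", "impact": "missense",   "frequency": 0.00001},
    {"gene": "BRCA2",  "impact": "synonymous", "frequency": 0.12},
    {"gene": "TTN",    "impact": "missense",   "frequency": 0.30},
]

# Hypothetical curated list of genes linked to the patient's phenotype
# (e.g., early-onset epilepsy) and impact classes treated as damaging.
PHENOTYPE_GENES = {"CSNK2B", "SCN1A", "KCNQ2"}
DAMAGING = {"missense", "nonsense", "frameshift"}

def prioritize(variants, max_frequency=0.001):
    """Keep rare, potentially damaging variants in phenotype-linked genes.
    Common variants and genes unrelated to the phenotype are filtered out."""
    return [v for v in variants
            if v["gene"] in PHENOTYPE_GENES
            and v["impact"] in DAMAGING
            and v["frequency"] <= max_frequency]

top = prioritize(variants)
```

Each filter is cheap, which is why prioritization can run as soon as variant calling finishes rather than waiting for manual review.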

How the Nanopore Process Works

To advance sequencing speed, the researchers used equipment by Oxford Nanopore Technologies with 48 sequencing units called “flow cells”—enough to sequence a person’s whole genome at one time.

The Oxford Nanopore PromethION Flow Cell generates more than 100 gigabases of data per hour, AI Time Journal reported. The team used a cloud-based storage system to enable computational power for real-time analysis of the data. AI algorithms scanned the genetic code for errors and compared the patients’ gene variants to variants associated with diseases found in research data, Stanford explained.

According to an NVIDIA blog post, “The researchers accelerated both base calling and variant calling using NVIDIA GPUs on Google Cloud. Variant calling, the process of identifying the millions of variants in a genome, was also sped up with NVIDIA Clara Parabricks, a computational genomics application framework.”

Rapid Genetic Test Produces Clinical Benefits

“Together with our collaborators and some of the world’s leaders in genomics, we were able to develop a rapid sequencing analysis workflow that has already shown tangible clinical benefits,” said Mehrzad Samadi, PhD, NVIDIA Senior Engineering Manager and co-author of the NEJM paper, in the blog post. “These are the kinds of high-impact problems we live to solve.”

In their paper, the Stanford researchers described their use of the rapid genetic test to diagnose and treat an infant who was experiencing epileptic seizures on arrival to Stanford’s pediatric emergency department. In just eight hours, their diagnostic test found that the infant’s convulsions were attributed to a mutation in the gene CSNK2B, “a variant and gene known to cause a neurodevelopmental disorder with early-onset epilepsy,” the researchers wrote.

“By accelerating every step of this process—from collecting a blood sample to sequencing the whole genome to identifying variants linked to diseases—[the Stanford] research team took just hours to find a pathogenic variant and make a definitive diagnosis in a three-month-old infant with a rare seizure-causing genetic disorder. A traditional gene panel analysis ordered at the same time took two weeks to return results,” AI Time Journal reported.

New Benchmarks

The Stanford research team wants to cut the sequencing time in half. But for now, the five-hour rapid whole genome sequence can be considered by clinical laboratory leaders, pathologists, and research scientists a new benchmark in genetic sequencing for diagnostic purposes.

Stories like Stanford’s rapid diagnosis of the three-month-old patient with epileptic seizures point to the ultimate value of advances in genomic sequencing technologies.

—Donna Marie Pocius

Related Information:

Fastest DNA Sequencing Technique Helps Undiagnosed Patients Find Answers in Mere Hours

Ultrarapid Nanopore Genome Sequencing in a Critical Care Setting

Stanford Researchers Use AI to Sequence and Analyze DNA in Five Hours

World Record-Setting DNA Sequencing Technique Helps Clinicians Rapidly Diagnose Critical Care Patients

Ultima Genomics Delivers the $100 Genome

Diagnosing Ovarian Cancer Using Perception-based Nanosensors and Machine Learning

Two studies show the accuracy of perception-based systems in detecting disease biomarkers without needing molecular recognition elements, such as antibodies

Researchers from multiple academic and research institutions have collaborated to develop a non-conventional machine learning-based technology for identifying and measuring biomarkers to detect ovarian cancer without the need for molecular identification elements, such as antibodies.

Traditional clinical laboratory methods for detecting biomarkers of specific diseases require a “molecular recognition molecule,” such as an antibody, to match with each disease’s biomarker. However, according to a Lehigh University news release, for ovarian cancer “there’s not a single biomarker—or analyte—that indicates the presence of cancer.

“When multiple analytes need to be measured in a given sample, which can increase the accuracy of a test, more antibodies are required, which increases the cost of the test and the turnaround time,” the news release noted.

The multi-institutional team included scientists from Memorial Sloan Kettering Cancer Center, Weill Cornell Medicine, the University of Maryland, the National Institute of Standards and Technology, and Lehigh University.

Unveiled in two sequential studies, the new method for detecting ovarian cancer uses machine learning to examine spectral signatures of carbon nanotubes to detect and recognize the disease biomarkers in a very non-conventional fashion.

Daniel Heller, PhD
 
“Carbon nanotubes have interesting electronic properties,” said Daniel Heller, PhD (above), in the Lehigh University news release. “If you shoot light at them, they emit a different color of light, and that light’s color and intensity can change based on what’s sticking to the nanotube. We were able to harness the complexity of so many potential binding interactions by using a range of nanotubes with various wrappings. And that gave us a range of different sensors that could all detect slightly different things, and it turned out they responded differently to different proteins.” This method differs greatly from traditional clinical laboratory methods for identifying disease biomarkers. (Photo copyright: Memorial Sloan-Kettering Cancer Center.)

Perception-based Nanosensor Array for Detecting Disease

The researchers published their findings from the two studies in the journals Science Advances, titled, “A Perception-based Nanosensor Platform to Detect Cancer Biomarkers,” and Nature Biomedical Engineering, titled, “Detection of Ovarian Cancer via the Spectral Fingerprinting of Quantum-Defect-Modified Carbon Nanotubes in Serum by Machine Learning.”

In the Science Advances paper, the researchers described their development of “a perception-based platform based on an optical nanosensor array that leverages machine learning algorithms to detect multiple protein biomarkers in biofluids.

“Perception-based machine learning (ML) platforms, modeled after the complex olfactory system, can isolate individual signals through an array of relatively nonspecific receptors. Each receptor captures certain features, and the overall ensemble response is analyzed by the neural network in our brain, resulting in perception,” the researchers wrote.

“This work demonstrates the potential of perception-based systems for the development of multiplexed sensors of disease biomarkers without the need for specific molecular recognition elements,” the researchers concluded.

In the Nature Biomedical Engineering paper, the researchers described a fine-tuned toolset that could accurately differentiate ovarian cancer biomarkers from those of individuals who are cancer-free.

“Here we show that a ‘disease fingerprint’—acquired via machine learning from the spectra of near-infrared fluorescence emissions of an array of carbon nanotubes functionalized with quantum defects—detects high-grade serous ovarian carcinoma in serum samples from symptomatic individuals with 87% sensitivity at 98% specificity (compared with 84% sensitivity at 98% specificity for the current best [clinical laboratory] screening test, which uses measurements of cancer antigen 125 and transvaginal ultrasonography),” the researchers wrote.

“We demonstrated that a perception-based nanosensor platform could detect ovarian cancer biomarkers using machine learning,” said Yoona Yang, PhD, a postdoctoral research associate in Lehigh’s Department of Chemical and Biomolecular Engineering and co-first author of the Science Advances article, in the news release.

How Perception-based Machine Learning Platforms Work

According to Yang, perception-based sensing functions like the human brain.

“The system consists of a sensing array that captures a certain feature of the analytes in a specific way, and then the ensemble response from the array is analyzed by the computational perceptive model. It can detect various analytes at once, which makes it much more efficient,” Yang said.
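The perception idea, no single sensor is specific, but the ensemble response pattern identifies the analyte, can be sketched with a toy nearest-fingerprint classifier. Everything here is invented for illustration: the sensor fingerprints are made-up numbers, and while CA-125 (cancer antigen 125) is mentioned in the study, HE4 and albumin are simply plausible serum analytes, not data from the papers:

```python
import math
import random

# Hypothetical fingerprints: response of six nonspecific nanosensors
# to each analyte (arbitrary units). No single sensor identifies an
# analyte; the pattern across all six does.
FINGERPRINTS = {
    "CA-125":  [0.9, 0.1, 0.4, 0.7, 0.2, 0.5],
    "HE4":     [0.2, 0.8, 0.6, 0.1, 0.9, 0.3],
    "albumin": [0.5, 0.5, 0.1, 0.4, 0.4, 0.8],
}

def perceive(response):
    """'Perception': match the ensemble sensor response to the closest
    known fingerprint by Euclidean distance."""
    return min(FINGERPRINTS,
               key=lambda name: math.dist(response, FINGERPRINTS[name]))

random.seed(0)
# A noisy measurement of the CA-125 pattern: small perturbations on
# every sensor still leave the ensemble pattern recognizable.
measured = [x + random.uniform(-0.05, 0.05) for x in FINGERPRINTS["CA-125"]]
label = perceive(measured)
```

In the published work, a trained neural network plays the role of this distance comparison, learning the fingerprints rather than having them hand-specified.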

The “array” the researchers are referring to are DNA strands wrapped around single-wall carbon nanotubes (DNA-SWCNTs).

“SWCNTs have unique optical properties and sensitivity that make them valuable as sensor materials. SWCNTS emit near-infrared photoluminescence with distinct narrow emission bands that are exquisitely sensitive to the local environment,” the researchers wrote in Science Advances.

“Carbon nanotubes have interesting electronic properties,” said Daniel Heller, PhD, Head of the Cancer Nanotechnology Laboratory at Memorial Sloan Kettering Cancer Center and Associate Professor in the Department of Pharmacology at Weill Cornell Medicine of Cornell University, in the Lehigh University news release.

“If you shoot light at them, they emit a different color of light, and that light’s color and intensity can change based on what’s sticking to the nanotube. We were able to harness the complexity of so many potential binding interactions by using a range of nanotubes with various wrappings. And that gave us a range of different sensors that could all detect slightly different things, and it turned out they responded differently to different proteins,” he added.

The researchers put their technology to a practical test in the second study. They wanted to learn whether it could differentiate symptomatic patients with high-grade ovarian cancer from cancer-free individuals.

The research team used 269 serum samples. This time, nanotubes were bound with a specific molecule providing “an extra signal in terms of data and richer data from every nanotube-DNA combination,” said Anand Jagota, PhD, Professor of Bioengineering and Chemical and Biomolecular Engineering at Lehigh University, in the news release.

This year, 19,880 women will be diagnosed with ovarian cancer and 12,810 will die from the disease, according to American Cancer Society data. While more research and clinical trials are needed, the above studies are compelling and suggest the possibility that one day clinical laboratories may detect ovarian cancer faster and more accurately than with current methods.   

—Donna Marie Pocius

Related Information:

Perception-Based Nanosensor Platform Could Advance Detection of Ovarian Cancer

Perception-Based Nanosensor Platform to Detect Cancer Biomarkers

Detection of Ovarian Cancer via the Spectral Fingerprinting of Quantum-Defect-Modified Carbon Nanotubes in Serum by Machine Learning

Machine Learning Nanosensor Platform Detects Early Cancer Biomarkers
