News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel


A Dark Daily Extra!

This is the third of a three-part series on revenue cycle management for molecular testing laboratories and pathology practices, produced in collaboration with XiFin Inc.

Automation and AI-Powered Workflow Paves the Way for Consistent, Optimized Molecular Diagnostics and Pathology RCM

Third in a three-part series, this article will discuss how sophisticated revenue cycle management technology, including artificial intelligence (AI) capabilities, drives faster, more efficient revenue reimbursement for molecular and pathology testing.

Financial and operational leaders of molecular testing laboratories and pathology groups are under pressure to maximize the revenue collected from their services rendered. This is no easy task. Molecular claims, in particular, can be especially complex. This article outlines the specific areas in which automation and artificial intelligence (AI)-based workflows can improve revenue cycle management (RCM) for molecular diagnostic and pathology organizations so they can better meet their operational and financial goals.

AI can play a number of important roles in business. When it comes to RCM for diagnostic organizations, first and foremost, AI can inform decision-making processes by generating new or derived data, which can be used in reporting and analytics. It can also help understand likely outcomes based on historical data, such as an organization’s current outstanding accounts receivable (AR) and what’s likely to happen with that AR based on historic performance.

AI is also deployed to accelerate the creation of configurations and workflows. For example, generated or derived data can be used to create configurations within a revenue cycle workflow that address changes or shifts in likely outcomes, such as denial rates. If an organization uses AI to analyze historical denial data and predict denial rates, changes in those predicted rates can be used to modify the workflow to prevent denials upfront or to automate appeals on the backend. This helps organizations adapt to change more quickly and accelerates time to reimbursement.

“Furthermore, AI is used to automate workflows by providing or informing decisions directly,” says Clarisa Blattner, XiFin Senior Director of Revenue and Payor Optimization. “In this case, when the AI sees shifts or changes, it knows what to do to address them. This enables an organization to take a process in the revenue cycle workflow that is very human-oriented and automate it.”

AI is also leveraged to validate data and identify outcomes that are anomalous, or that lie outside of the norm. This helps an organization:

  • Ensure that the results achieved meet the expected performance
  • Understand whether the appropriate configurations are in place
  • Identify if an investigation is required to uncover the reason behind any anomalies so that they can be addressed

Finally, AI can be employed to generate content, such as letters or customer support materials.

Everything AI starts with data

Everything AI-related starts with the data. Without good-quality data, organizations can’t generate AI models that will move a business forward. To build effective AI models, an organization must understand the data landscape, monitor and measure performance and progress, and adjust the activities being driven as necessary.

Dirty, unstructured data leads to unintelligent AI. AI embodies the old adage, “garbage in, garbage out.” The quality of the AI decision or prediction is entirely based on the historical data that it’s seen. If that data is faulty, flawed, or incomplete, it can lead to bad decisions or the inability to predict or make a decision at all. Purposeful data modeling is critical to AI success, and having people and processes that can understand the complicated RCM data and structure it so it can be effectively analyzed is vital to success.

The next step is automation. Having effective AI models that generate strong predictions is only as valuable as the ability to get that feedback into the revenue cycle system effectively. If not, that value is minimal, because the organization must expend a lot of human energy to try to reconfigure or act on the AI predictions being generated.

There is a typical transformation path that organizations go through to get from having data stored in individual silos to fully embedded AI. If an organization is struggling with aggregating data to build AI models, it’s at stage one. The goal is stage five, where an organization uses AI as a key differentiator and AI is a currency driving activity.

The transformation starts with structuring data with an underlying data approach that keeps it future-ready. It is this foundation that allows organizations to realize the benefits of AI in a cost-effective and efficient way. Getting the automation embedded in the workflow is the key to getting to the full potential of AI in improving the RCM process.

Real-world examples of how AI and automation improve RCM

One example of how AI can improve the RCM process is using AI to discover complex payer information. One significant challenge for diagnostic service providers is ensuring that the right third-party insurance information for patients is captured. This is essential for clean claims submission. Often, the diagnostic provider is not the organization that actually sees the patient, in which case it doesn’t have the ability to collect that information directly. The organization must rely on the referring physician or direct outreach to the patient for this data when it’s incorrect or incomplete.

Diagnostic providers are sensitive to not burdening referring clients or patients with requests for demographic or payer information. It’s important to make this experience as simple and smooth as possible. Also, insurance information is complicated. A lot of data must be collected or corrected if the diagnostic provider doesn’t have the correct information.

Automating this process is difficult. Frequently, understanding who the payer is and how that payer translates into contracts and mapping within the revenue cycle process requires an agent to be on the phone with the patient. It can be very difficult for a patient to get precise payer plan information from their insurance card without the help of a customer service representative.

This is where AI can help. The goal is to require the smallest amount of information from a patient and be able to verify eligibility through electronic means with the payer. Using optical character recognition (OCR), an organization can take an image of the front and back of a patient’s insurance card, isolate the relevant text, and use an AI model to get the information needed in order to generate an eligibility request and confirm eligibility with that payer.
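As a rough sketch of that extraction step: once OCR has converted the card image to text, the relevant fields can be pulled out and assembled into an eligibility request. The field labels and ID formats below are illustrative assumptions only; real insurance cards vary widely by payer, which is why production systems rely on trained models rather than fixed patterns.

```python
import re

def parse_insurance_card(ocr_text):
    """Pull candidate member ID and group number out of raw OCR text.

    The labels and formats matched here are illustrative assumptions;
    real cards differ widely by payer.
    """
    fields = {}
    # Member/subscriber IDs are often labeled and alphanumeric.
    m = re.search(r"(?:Member|Subscriber)\s*ID[:\s]+([A-Z0-9-]+)", ocr_text, re.I)
    if m:
        fields["member_id"] = m.group(1)
    # Group numbers follow a similar labeled pattern.
    g = re.search(r"Group\s*(?:No|Number|#)?[:\s]+([A-Z0-9-]+)", ocr_text, re.I)
    if g:
        fields["group_number"] = g.group(1)
    return fields

card_text = "ACME HEALTH PLAN\nMember ID: ABC123456\nGroup No: 98765\nPPO"
print(parse_insurance_card(card_text))
# {'member_id': 'ABC123456', 'group_number': '98765'}
```

The extracted fields would then feed an electronic eligibility request (for example, an X12 270 transaction) to confirm coverage with the payer.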

In the event that taking an image of the insurance card is problematic for a patient, the organization can have the patient walk through a simplified online process, for example, through a patient portal, and provide just a few pieces of data to be able to run eligibility verification and get to confirmed eligibility with the payer.

AI can help with this process too. For example, the patient can provide only high-level payer information, such as the name of the commercial payer or whether the coverage is Medicare or Medicaid, the state the patient resides in, and the subscriber ID. AI can then use this high-level data to get an eligibility response and confirmed eligibility.

Once the eligibility response is received, the more detailed payer information can be presented back to the patient for confirmation. AI can map the eligibility response to the appropriate contract or payer plan within the RCM system.

Now that the patient’s correct insurance information is captured, the workflow moves on to collecting the patient’s financial responsibility payment. To do that, the organization needs to be able to calculate the patient’s financial responsibility estimate. The RCM system has accurate pricing information and now has detailed payer and plan information, a real-time eligibility response, as well as test or procedure information. This data can be used to estimate patient financial responsibility.
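As a simplified illustration of that estimate, assuming the eligibility response supplies the remaining deductible, the coinsurance rate, and any copay (the benefit values below are hypothetical):

```python
def estimate_patient_responsibility(allowed_amount, remaining_deductible,
                                    coinsurance_rate, copay=0.0):
    """Estimate out-of-pocket cost from eligibility-response benefit data.

    Simplified model: the patient pays any copay, then the remaining
    deductible, then coinsurance on what is left of the allowed amount.
    """
    responsibility = copay
    balance = max(allowed_amount - copay, 0.0)
    # Apply whatever deductible the patient still owes.
    deductible_portion = min(balance, remaining_deductible)
    responsibility += deductible_portion
    balance -= deductible_portion
    # Coinsurance applies to the remainder.
    responsibility += balance * coinsurance_rate
    return round(responsibility, 2)

# Example: $500 allowed amount, $200 of deductible left, 20% coinsurance.
print(estimate_patient_responsibility(500.0, 200.0, 0.20))
# 260.0
```

Real benefit designs add wrinkles (out-of-pocket maximums, carve-outs, secondary coverage), but the core arithmetic follows this shape.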

AI can also be used to address and adapt to changes in ordering patterns, payer responses, and payer reimbursement behavior. The RCM process can be designed to incorporate AI to streamline claims, denials, and appeals management, as well as to assign work queues and prioritize exception processing (EP) work based on the likelihood of reimbursement, which improves efficiency.

One other way AI can help is in understanding and maintaining “expect” prices—what an organization can expect to collect from particular payers for particular procedures. For contracted payers, contracted rates are loaded into the RCM system. It’s important to track whether payers are paying those contracted rates and whether the organization is receiving the level of reimbursement expected. For non-contracted payers, it’s harder to know what the reimbursement rate will be. Historical data and AI can provide a good understanding of what can be expected. AI can also be used to determine if a claim is likely to be rejected because of incorrect or incomplete payer information or patient ineligibility, in which case automation can be applied to resolve most issues.
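A minimal sketch of that expect-price check, with hypothetical payer and procedure data, compares each posted payment against the expected rate and flags shortfalls beyond a tolerance:

```python
def flag_underpayments(payments, expect_prices, tolerance=0.05):
    """Flag remittances paying below the expected rate for a payer/procedure.

    payments: list of (payer, cpt_code, paid_amount) tuples.
    expect_prices: {(payer, cpt_code): expected_amount} -- contracted rates
    for in-network payers, historical or model-derived estimates otherwise.
    """
    flagged = []
    for payer, cpt, paid in payments:
        expected = expect_prices.get((payer, cpt))
        if expected is None:
            continue  # no expect price on file for this combination
        if paid < expected * (1 - tolerance):
            flagged.append((payer, cpt, paid, expected))
    return flagged

# Hypothetical example data.
expect = {("PayerA", "81479"): 300.0}
remits = [("PayerA", "81479", 240.0), ("PayerA", "81479", 295.0)]
print(flag_underpayments(remits, expect))
# [('PayerA', '81479', 240.0, 300.0)]
```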

Another AI benefit relates to quickly determining the probability of reimbursement and assigning how claims are prioritized if a claim requires intervention that cannot be automated. With AI, these claims that require EP are directed to the best available team member, based on that particular team member’s past success with resolving a particular error type.

The goal with EP is to ensure that the claims are prioritized to optimize reimbursement. This starts with understanding the probability of the claim being reimbursed. An AI model can be designed to assess the likelihood of the claim being reimbursed and the likely amount of reimbursement for those expected to be paid. This helps prioritize activities and optimize labor resources. The AI model can also take important factors such as timely filing dates into account. If a claim is less likely to be collected than another procedure but is close to its timely filing deadline, it can be escalated. The algorithms can be run nightly to produce a prioritized list of claims with assignments to the specific team member best suited to address each error.
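The nightly prioritization described above can be sketched as a simple scoring pass. The field names and the 14-day escalation window below are assumptions for illustration, not the actual model:

```python
from datetime import date

def prioritize_claims(claims, today, deadline_window=14):
    """Rank EP claims by expected recovery, escalating near timely filing.

    claims: list of dicts with 'id', 'prob_paid' (model output, 0-1),
    'expected_amount', and 'filing_deadline' (a date). Claims within
    `deadline_window` days of their deadline jump the queue.
    """
    def sort_key(claim):
        days_left = (claim["filing_deadline"] - today).days
        urgent = days_left <= deadline_window
        expected_value = claim["prob_paid"] * claim["expected_amount"]
        # Urgent claims first, then highest expected value.
        return (not urgent, -expected_value)

    return sorted(claims, key=sort_key)

claims = [
    {"id": "A", "prob_paid": 0.9, "expected_amount": 400.0,
     "filing_deadline": date(2024, 6, 30)},
    {"id": "B", "prob_paid": 0.4, "expected_amount": 500.0,
     "filing_deadline": date(2024, 3, 20)},
]
ranked = prioritize_claims(claims, today=date(2024, 3, 10))
print([c["id"] for c in ranked])
# ['B', 'A']
```

Claim B has the lower expected value but sits 10 days from its filing deadline, so it is escalated ahead of claim A.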

AI can also be used to create a comprehensive list of activities and the order in which those activities should be performed to optimize reimbursement. The result is a prioritized list for each team member indicating which claims should be worked on first and which specific activities need to be accomplished for each claim.

Summing it all up, organizations need an RCM partner with a solid foundation in data and data modeling. This is essential to being able to effectively harness the power of AI. In addition, the RCM partner must offer the supporting infrastructure to interface with referring clients, patients, and payers. This is necessary to maximize automation and smoothly coordinate RCM activities across the various stakeholders in the process.

Having good AI and insight into data and trends is important, but the ability to add automation to the RCM process based on the AI really solidifies the benefits and delivers a return on investment (ROI). Analytics are also essential for measuring and tracking performance over time and identifying opportunities for further improvement.

Diagnostic executives looking to maximize reimbursement and keep the cost of collection low will want to explore how to better leverage data, AI, automation, and analytics across their RCM process.

This is the third of a three-part series on revenue cycle management for molecular testing laboratories and pathology practices, produced in collaboration with XiFin Inc. Missed the first two articles? www.darkdaily.com

— Leslie Williams


UK Study Claims AI Reading of CT Scans Almost Twice as Accurate at Grading Some Cancers as Clinical Laboratory Testing of Sarcoma Biopsies

Radiological method using AI algorithms to detect, locate, and identify cancer could negate the need for invasive, painful clinical laboratory testing of tissue biopsies

Clinical laboratory testing of cancer biopsies has been the standard in oncology diagnosis for decades. But a recent study by the Institute of Cancer Research (ICR) and the Royal Marsden NHS Foundation Trust in the UK has found that, for some types of sarcomas (malignant tumors), artificial intelligence (AI) can grade the aggressiveness of tumors nearly twice as accurately as lab tests, according to an ICR news release.

This will be of interest to histopathologists and radiologic technologists who are working to develop AI deep learning algorithms that read computed tomography (CT) scans to speed diagnosis and treatment of cancer patients.

“Researchers used the CT scans of 170 patients treated at The Royal Marsden with the two most common forms of retroperitoneal sarcoma (RPS)—leiomyosarcoma and liposarcoma—to create an AI algorithm, which was then tested on nearly 90 patients from centers across Europe and the US,” the news release notes.

The researchers then “used a technique called radiomics to analyze the CT scan data, which can extract information about the patient’s disease from medical images, including data which can’t be distinguished by the human eye,” the news release states.

The scientists published their findings in The Lancet Oncology titled, “A CT-based Radiomics Classification Model for the Prediction of Histological Type and Tumor Grade in Retroperitoneal Sarcoma (RADSARC-R): A Retrospective Multicohort Analysis.”

The research team sought to make improvements in this type of cancer because, as they wrote, these tumors have “a poor prognosis, upfront characterization of the tumor is difficult, and under-grading is common.” The fact that AI reading of CT scans is a non-invasive procedure is a major benefit, they added.

Christina Messiou, MD

“This is the largest and most robust study to date that has successfully developed and tested an AI model aimed at improving the diagnosis and grading of retroperitoneal sarcoma using data from CT scans,” said the study’s lead oncology radiologist Christina Messiou, MD, (above), Consultant Radiologist at The Royal Marsden NHS Foundation Trust and Professor in Imaging for Personalized Oncology at The Institute of Cancer Research, London, in a news release. Invasive medical laboratory testing of cancer biopsies may eventually become a thing of the past if this research becomes clinically available for oncology diagnosis. (Photo copyright: The Royal Marsden.)

Study Details

RPS is a relatively difficult cancer to spot, let alone diagnose. It is a rare form of soft-tissue cancer “with approximately 8,600 new cases diagnosed annually in the United States—less than 1% of all newly diagnosed malignancies,” according to Brigham and Women’s Hospital.

In their published study, the UK researchers noted that, “Although more than 50 soft tissue sarcoma radiomics studies have been completed, few include retroperitoneal sarcomas, and the majority use single-center datasets without independent validation. The limited interpretation of the quantitative radiological phenotype in retroperitoneal sarcomas and its association with tumor biology is a missed opportunity.”

According to the ICR news release, “The [AI] model accurately graded the risk—or how aggressive a tumor is likely to be—[in] 82% of the tumors analyzed, while only 44% were correctly graded using a biopsy.”

Additionally, “The [AI] model also accurately predicted the disease type [in] 84% of the sarcomas tested—meaning it can effectively differentiate between leiomyosarcoma and liposarcoma—compared with radiologists who were not able to diagnose 35% of the cases,” the news release states.

“There is an urgent need to improve the diagnosis and treatment of patients with retroperitoneal sarcoma, who currently have poor outcomes,” said the study’s first author Amani Arthur, PhD, Clinical Research Fellow at The Institute of Cancer Research, London, and Registrar at The Royal Marsden NHS Foundation Trust, in the ICR news release.

“The disease is very rare—clinicians may only see one or two cases in their career—which means diagnosis can be slow. This type of sarcoma is also difficult to treat as it can grow to large sizes and, due to the tumor’s location in the abdomen, involve complex surgery,” she continued. “Through this early research, we’ve developed an innovative AI tool using imaging data that could help us more accurately and quickly identify the type and grade of retroperitoneal sarcomas than current methods. This could improve patient outcomes by helping to speed up diagnosis of the disease, and better tailor treatment by reliably identifying the risk of each patient’s disease.

“In the next phase of the study, we will test this model in clinic on patients with potential retroperitoneal sarcomas to see if it can accurately characterize their disease and measure the performance of the technology over time,” Arthur added.

Importance of Study Findings

Speed of detection is key to successful cancer diagnoses, noted Richard Davidson, Chief Executive of Sarcoma UK, a bone and soft tissue cancer charity.

“People are more likely to survive sarcoma if their cancer is diagnosed early—when treatments can be effective and before the sarcoma has spread to other parts of the body. One in six people with sarcoma cancer wait more than a year to receive an accurate diagnosis, so any research that helps patients receive better treatment, care, information and support is welcome,” he told The Guardian.

According to the World Health Organization, cancer kills about 10 million people worldwide every year. Acquisition and medical laboratory testing of tissue biopsies is both painful to patients and time consuming. Thus, a non-invasive method of diagnosing deadly cancers quickly, accurately, and early would be a boon to oncology practices worldwide and could save thousands of lives each year.

—Kristin Althea O’Connor

Related Information:

AI Twice as Accurate as a Biopsy at Grading Aggressiveness of Some Sarcomas

AI Better than Biopsy at Assessing Some Cancers, Study Finds

AI Better than Biopsies for Grading Rare Cancer, New Research Suggests

A CT-based Radiomics Classification Model for the Prediction of Histological Type and Tumor Grade in Retroperitoneal Sarcoma (RADSARC-R): A Retrospective Multicohort Analysis

Stanford Researchers Use Text and Images from Pathologists’ Twitter Accounts to Train New Pathology AI Model

Researchers intend their new AI image retrieval tool to help pathologists locate similar case images to reference for diagnostics, research, and education

Researchers at Stanford University turned to an unusual source—the X social media platform (formerly known as Twitter)—to train an artificial intelligence (AI) system that can look at clinical laboratory pathology images and then retrieve similar images from a database. This is an indication that pathologists are increasingly collecting and storing images of representative cases in their social media accounts. They then consult those libraries when working on new cases that have unusual or unfamiliar features.

The Stanford Medicine scientists trained their AI system—known as Pathology Language and Image Pretraining (PLIP)—on the OpenPath pathology dataset, which contains more than 200,000 images paired with natural language descriptions. The researchers collected most of the data by retrieving tweets in which pathologists posted images accompanied by comments.

“It might be surprising to some folks that there is actually a lot of high-quality medical knowledge that is shared on Twitter,” said researcher James Zou, PhD, Assistant Professor of Biomedical Data Science and senior author of the study, in a Stanford Medicine SCOPE blog post, which added that “the social media platform has become a popular forum for pathologists to share interesting images—so much so that the community has widely adopted a set of 32 hashtags to identify subspecialties.”

“It’s a very active community, which is why we were able to curate hundreds of thousands of these high-quality pathology discussions from Twitter,” Zou said.

The Stanford researchers published their findings in the journal Nature Medicine titled, “A Visual-Language Foundation Model for Pathology Image Analysis Using Medical Twitter.”

James Zou, PhD

“The main application is to help human pathologists look for similar cases to reference,” James Zou, PhD (above), Assistant Professor of Biomedical Data Science, senior author of the study, and his colleagues wrote in Nature Medicine. “Our approach demonstrates that publicly shared medical information is a tremendous resource that can be harnessed to develop medical artificial intelligence for enhancing diagnosis, knowledge sharing, and education.” Leveraging pathologists’ use of social media to store case images for future reference has worked out well for the Stanford Medicine study. (Photo copyright: Stanford University.)

Retrieving Pathology Images from Tweets

“The lack of annotated publicly-available medical images is a major barrier for innovations,” the researchers wrote in Nature Medicine. “At the same time, many de-identified images and much knowledge are shared by clinicians on public forums such as medical Twitter.”

In this case, the goal “is to train a model that can understand both the visual image and the text description,” Zou said in the SCOPE blog post.

Because X is popular among pathologists, the United States and Canadian Academy of Pathology (USCAP) and the Pathology Hashtag Ontology project have recommended a standard series of hashtags, including 32 for subspecialties, the study authors noted.

Examples include:

“Pathology is perhaps even more suited to Twitter than many other medical fields because for most pathologists, the bulk of our daily work revolves around the interpretation of images for the diagnosis of human disease,” wrote Jerad M. Gardner, MD, a dermatopathologist and section head of bone/soft tissue pathology at Geisinger Medical Center in Danville, Pa., in a blog post about the Pathology Hashtag Ontology project. “Twitter allows us to easily share images of amazing cases with one another, and we can also discuss new controversies, share links to the most cutting edge literature, and interact with and promote the cause of our pathology professional organizations.”

The researchers used the 32 subspecialty hashtags to retrieve English-language tweets posted from 2006 to 2022. Images in the tweets were “typically high-resolution views of cells or tissues stained with dye,” according to the SCOPE blog post.

The researchers collected a total of 232,067 tweets and 243,375 image-text pairs across the 32 subspecialties, they reported. They augmented this with 88,250 replies that received the highest number of likes and had at least one keyword from the ICD-11 codebook. The SCOPE blog post noted that the rankings by “likes” enabled the researchers to screen for high-quality replies.

They then refined the dataset by removing duplicates, retweets, non-pathology images, and tweets marked by Twitter as being “sensitive.” They also removed tweets containing question marks, as this was an indicator that the practitioner was asking a question about an image rather than providing a description, the researchers wrote in Nature Medicine.

They cleaned the text by removing hashtags, Twitter handles, HTML tags, emojis, and links to websites, the researchers noted.
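A few of those cleaning steps can be sketched with ordinary regular expressions. This is a simplified illustration, not the researchers’ actual pipeline, and it omits steps such as emoji removal and de-duplication:

```python
import re

def clean_tweet_text(text):
    """Strip HTML tags, links, handles, and hashtags from a tweet caption,
    mirroring cleaning steps described for the OpenPath dataset."""
    text = re.sub(r"<[^>]+>", " ", text)       # HTML tags
    text = re.sub(r"https?://\S+", " ", text)  # links
    text = re.sub(r"[@#]\w+", " ", text)       # handles and hashtags
    return " ".join(text.split())              # collapse whitespace

def keep_tweet(text):
    """Drop tweets phrased as questions (likely asking about an image
    rather than describing it)."""
    return "?" not in text

raw = "Spindle cell proliferation, low grade #dermpath @somepath https://t.co/x"
print(clean_tweet_text(raw))
# Spindle cell proliferation, low grade
```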

The final OpenPath dataset included:

  • 116,504 image-text pairs from Twitter posts,
  • 59,869 from replies, and
  • 32,041 image-text pairs scraped from the internet or obtained from the LAION dataset.

The latter is an open-source database from Germany that can be used to train text-to-image AI software such as Stable Diffusion.

Training the PLIP AI Platform

Once they had the dataset, the next step was to train the PLIP AI model. This required a technique known as contrastive learning, the researchers wrote, in which the AI learns to associate features from the images with portions of the text.

As explained in Baeldung, an online technology publication, contrastive learning is based on the idea that “it is easier for someone with no prior knowledge, like a kid, to learn new things by contrasting between similar and dissimilar things instead of learning to recognize them one by one.”

“The power of such a model is that we don’t tell it specifically what features to look for. It’s learning the relevant features by itself,” Zou said in the SCOPE blog post.
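In miniature, the idea can be shown with a symmetric contrastive (InfoNCE-style) loss over an image-text similarity matrix. This toy sketch assumes precomputed similarity scores rather than actual image and text encoders, so it illustrates the objective only, not the PLIP implementation:

```python
import math

def contrastive_loss(similarity):
    """Symmetric InfoNCE loss over an image-text similarity matrix.

    similarity[i][j] is the score between image i and caption j; matched
    pairs sit on the diagonal. Training pushes diagonal scores up and
    off-diagonal scores down, in both directions (image->text and
    text->image).
    """
    def cross_entropy_rows(matrix):
        loss = 0.0
        for i, row in enumerate(matrix):
            log_denom = math.log(sum(math.exp(s) for s in row))
            loss += log_denom - row[i]  # -log softmax at the true pair
        return loss / len(matrix)

    transposed = [list(col) for col in zip(*similarity)]
    return 0.5 * (cross_entropy_rows(similarity) + cross_entropy_rows(transposed))

# Well-aligned pairs (strong diagonal) yield a lower loss than misaligned ones.
aligned = [[5.0, 0.0], [0.0, 5.0]]
shuffled = [[0.0, 5.0], [5.0, 0.0]]
print(contrastive_loss(aligned) < contrastive_loss(shuffled))
# True
```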

The resulting AI PLIP tool will enable “a clinician to input a new image or text description to search for similar annotated images in the database—a sort of Google Image search customized for pathologists,” SCOPE explained.

“Maybe a pathologist is looking at something that’s a bit unusual or ambiguous,” Zou told SCOPE. “They could use PLIP to retrieve similar images, then reference those cases to help them make their diagnoses.”

The Stanford University researchers continue to collect pathology images from X. “The more data you have, the more it will improve,” Zou said.

Pathologists will want to keep an eye on the Stanford Medicine research team’s progress. The PLIP AI tool may be a boon to diagnostics and improve patient outcomes and care.

—Stephen Beale

Related Information:

New AI Tool for Pathologists Trained by Twitter (Now Known as X)

A Visual-Language Foundation Model for Pathology Image Analysis Using Medical Twitter

AI + Twitter = Foundation Visual-Language AI for Pathology

Pathology Foundation Model Leverages Medical Twitter Images, Comments

A Visual-Language Foundation Model for Pathology Image Analysis Using Medical Twitter (Preprint)

Pathology Language and Image Pre-Training (PLIP)

Introducing the Pathology Hashtag Ontology

Rice University Researchers Are Developing an Implantable Cancer Therapeutic Device That May Reduce Cancer Deaths by Half

Immunotherapy device could also enable clinical laboratories to receive in vivo biomarker data wirelessly

Researchers from Rice University in Houston, working with collaborators in seven other US states, are developing a new oncotherapy sense-and-respond implant that could dramatically improve cancer outcomes. Called Targeted Hybrid Oncotherapeutic Regulation (THOR), the technology is intended primarily for the delivery of therapeutic drugs by monitoring specific cancer biomarkers in vivo.

Through a $45 million federal grant from the Advanced Research Projects Agency for Health (ARPA-H), the researchers set out to develop an immunotherapy implantable device that monitors a patient’s cancer and adjusts antibody treatment dosages in real time in response to the biomarkers it measures.

It’s not a stretch to envision future versions of the THOR platform also being used diagnostically to measure biomarker data and transmit it wirelessly to clinical laboratories and anatomic pathologists.

ARPA-H is a federal funding agency that was established in 2022 to support the development of high-impact research to drive biomedical and health breakthroughs. THOR is the second program to receive funding under its inaugural Open Broad Agency Announcement solicitation for research proposals.

“By integrating a self-regulated circuit, the THOR technology can adjust the dose of immunotherapy reagents based on a patient’s responses,” said Weiyi Peng, MD, PhD (above), Assistant Professor of Biology and Biochemistry at the University of Houston and co-principal investigator on the research, in a UH press release. “With this new feature, THOR is expected to achieve better efficacy and minimize immune-related toxicity. We hope this personalized immunotherapy will revolutionize treatments for patients with peritoneal cancers that affect the liver, lungs, and other organs.” If anatomic pathologists and clinical laboratories could receive biometric data from the THOR device, that would be a boon to cancer diagnostics. (Photo copyright: University of Houston.)

Antibody Therapy on Demand

Omid Veiseh, PhD, Associate Professor of Bioengineering at Rice University and principal investigator on the project, described the THOR device as a “living drug factory” inside the body. The device is a rod-like gadget that contains onboard electronics and a wireless rechargeable battery. It is three inches long and has a miniaturized bioreactor that contains human epithelial cells that have been engineered to produce immune modulating therapies.

“Instead of tethering patients to hospital beds, IV bags, and external monitors, we’ll use a minimally invasive procedure to implant a small device that continuously monitors their cancer and adjusts their immunotherapy dose in real time,” said Veiseh in a Rice University press release. “This kind of ‘closed-loop therapy’ has been used for managing diabetes, where you have a glucose monitor that continuously talks to an insulin pump. But for cancer immunotherapy, it’s revolutionary.”

The team believes the THOR device will have the ability to monitor biomarkers and produce an antibody on demand that will trigger the immune system to fight cancer locally. They hope the sensor within THOR will be able to monitor biomarkers of toxicity for the purpose of fine-tuning therapies to a patient immediately in response to signals from a tumor. 

“Today, cancer is treated a bit like a static disease, which it’s not,” Veiseh said. “Clinicians administer a therapy and then wait four to six weeks to do radiological measurements to see if the therapy is working. You lose quite a lot of time if it’s not the right therapy. The tumor may have evolved into a more aggressive form.”

The THOR device lasts 60 days and can be removed after that time. It is designed to educate the immune system to recognize a cancer and prevent it from recurring. If the cancer is not fully eradicated after the first implantation, the patient can be implanted with THOR again. 

Use of AI in THOR Therapy

The researchers plan to spend the next two and a half years building prototypes of the THOR device, testing them in rodents, and refining the list of biomarkers to be utilized in the device. Then, they intend to take an additional year to establish protocols for the US Food and Drug Administration’s (FDA) good manufacturing practices requirements, and to test the final prototype on large animals. The researchers estimate the first human clinical trials for the device will begin in about four years. 

“The first clinical trial will focus on refractory recurrent ovarian cancer, and the benefit of that is that we have an ongoing trial for ovarian cancer with our encapsulated cytokine ‘drug factory’ technology,” said Veiseh in the UH press release. 

The group is starting with ovarian cancer because research in this area is lacking and it will provide the opportunity for THOR to activate the immune system against ovarian cancer, which is typically challenging to fight with immunotherapy approaches. If successful in ovarian cancer, the researchers hope to test THOR in other cancers that metastasize within the abdomen, such as:

All control and decision-making will initially be performed by a healthcare provider based on signals transmitted by THOR using a computer or smartphone. However, Veiseh sees the device ultimately being powered by artificial intelligence (AI) algorithms that could independently make therapeutic decisions.

“As we treat more and more patients [with THOR], the devices are going to learn what type of biomarker readout better predicts efficacy and toxicity and make adjustments based on that,” he predicted. “Between the information you have from the first patient versus the millionth patient you treat, the algorithm is just going to get better and better.”
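The sense-and-respond behavior Veiseh describes is, at its core, closed-loop control: measure a biomarker, compare it with a target, and adjust dosing accordingly. A minimal sketch of that loop follows; the function names, target value, step size, and dose-adjustment rule are all invented for illustration and are not details of the THOR project.

```python
# Hypothetical sketch of a closed-loop sense-and-respond cycle.
# Thresholds and the adjustment rule are illustrative assumptions only.

def adjust_dose(current_dose, biomarker_level, target=1.0, step=0.1,
                min_dose=0.0, max_dose=2.0):
    """Nudge the dose up when the biomarker is above target, down when below."""
    if biomarker_level > target:
        current_dose = min(max_dose, current_dose + step)
    elif biomarker_level < target:
        current_dose = max(min_dose, current_dose - step)
    return current_dose

def run_loop(readings, initial_dose=1.0):
    """Apply the rule to a stream of biomarker readings; return the dose history."""
    dose = initial_dose
    history = []
    for level in readings:
        dose = adjust_dose(dose, level)
        history.append(dose)
    return history
```

The AI-driven version Veiseh envisions would replace the fixed rule with one learned from accumulated patient data, but the measure-compare-adjust structure of the loop stays the same.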

Moving Forward

In addition to UH and Rice University, the scientists working on the project come from several other institutions.

More research and clinical trials are needed before THOR can be used in the clinical treatment of cancer patients. If the device reaches the commercialization stage, Veiseh plans to either form a new company or license the technology to an existing company for further development.

“We know that the further we advance it in terms of getting that human data, the more likely it is that this could then be transferred to another entity,” he told Precision Medicine Online.

Pathologists and clinical laboratories will want to monitor the progress of the THOR technology’s ability to sense changes in cancer biomarkers and deliver controlled dosages of immunotherapy treatments.

—JP Schlingman

Related Information:

UH Researcher on Team Developing Sense-and-Respond Cancer Implant Technology

Feds Fund $45M Rice-Led Research That Could Slash US Cancer Deaths by 50%

$45M Awarded to Develop Sense-and-Respond Implant Technology for Cancer Treatment

Implantable Oncotherapeutic Bioreactor Device Lands $45M Government Funding

ARPA-H Fast Tracks Development of New Cancer Implant Tech

ARPA-H Announces Funding for Programs to Support Cancer Moonshot Objectives

Feds Investing Nearly $115 Million in Three New Cancer Technology Research Projects

Hopkins Engineers Join $45M Project to Develop Sense-and-Respond Cancer Implant Technology

ARPA-H Projects Aim to Develop Novel Cancer Technologies

Closed-Loop Insulin Delivery Systems: Past, Present, and Future Directions

Researchers Create Artificial Intelligence Tool That Accurately Predicts Outcomes for 14 Types of Cancer

Scientists in Italy Develop Hierarchical Artificial Intelligence System to Analyze Bacterial Species in Culture Plates

New artificial intelligence model agrees with interpretations of human medical technologists and microbiologists with extraordinary accuracy

Microbiology laboratories will be interested in news from the University of Brescia in Italy, where researchers reportedly have developed a deep learning model that can visually identify and analyze bacterial species in culture plates with a high level of agreement with interpretations made by medical technologists.

They initially trained and tested the system to digitally identify pathogens associated with urinary tract infections (UTIs), which account for a large volume of clinical laboratory microbiological testing.

The system, known as DeepColony, uses hierarchical artificial intelligence technology. The researchers say hierarchical AI is better suited to complex decision-making than other approaches, such as generative AI.

The researchers published their findings in Nature Communications in a paper titled, “Hierarchical AI Enables Global Interpretation of Culture Plates in the Era of Digital Microbiology.”

In their paper, the researchers explained that microbiologists conventionally examine culture plates visually, form a hypothesis about which species of bacteria are present, and then confirm that hypothesis “by regrowing samples from each colony separately and then employing mass spectroscopy techniques.”

However, DeepColony—which was designed for use with clinical laboratory automation systems—looks at high-resolution digital scans of cultured plates and attempts to identify the bacterial strains and analyze them in much the same way a microbiologist would. For example, it can identify species based on their appearance and determine which colonies are suitable for analysis, the researchers explained.

“Working on a large stream of clinical data, and a complete set of 32 pathogens, the proposed system is capable of effectively assisting plate interpretation with a surprising degree of accuracy in the widespread and demanding framework of urinary tract infections,” the study authors wrote. “Moreover, thanks to the rich species-related generated information, DeepColony can be used for developing trustworthy clinical decision support services in laboratory automation ecosystems from local to global scale.”

Alberto Signoroni, PhD

“Compared to the most common solutions based on single convolutional neural networks (CNN), multi-network architectures are attractive in our case because of their ability to fit into contexts where decision-making processes are stratified into a complex structure,” wrote the study’s lead author Alberto Signoroni, PhD (above), Associate Professor of Computer Science, University of Brescia, and his research team. “The system must be designed to generate useful and easily interpretable information and to support expert decisions according to safety-by-design and human-in-the-loop policies, aiming at achieving cost-effectiveness and skill-empowerment respectively.” Microbiologists and clinical laboratory managers will want to follow the further development of this technology. (Photo copyright: University of Brescia.)

How Hierarchical AI Works

Writing in LinkedIn, patent attorney and self-described technology expert David Cain, JD, of Hauptman Ham, LLP, explained that hierarchical AI systems “are structured in layers, each with its own distinct role yet interconnected in a way that forms a cohesive whole. These systems are significant because they mirror the complexity of human decision-making processes, incorporating multiple levels of analysis and action. This multi-tiered approach allows for nuanced problem-solving and decision-making, akin to a seasoned explorer deftly navigating through a multifaceted terrain.”

DeepColony, the researchers wrote, consists of multiple convolutional neural networks (CNNs) that exchange information and cooperate with one another. The system is structured into five levels—labeled 0 through 4—each handling a different part of the analysis:

  • At level 0, the system determines the number of bacterial colonies and their locations on the plate.
  • At level 1, the system identifies “good colonies,” meaning those suitable for further identification and analysis.
  • At level 2, the system assigns each good colony to a bacterial species “based on visual appearance and growth characteristics,” the researchers wrote, referring to the determination as being “pathogen aware, similarity agnostic.”

The CNN used at this stage was trained by using images of 26,213 isolated colonies comprising 32 bacterial species, the researchers wrote in their paper. Most came from clinical laboratories, but some were obtained from the American Type Culture Collection (ATCC), a repository of biological materials and information resources available to researchers.

  • At level 3, the system attempts to improve accuracy by looking at the larger context of the plate. The goal here is to “determine if observed colonies are similar (pure culture) or different (mixed cultures),” the researchers wrote, describing this step as “similarity aware, pathogen agnostic.” This enables the system to recognize variants of the same strain, the researchers noted, and has the effect of reducing the number of strains identified by the system.

At this level, the system uses two “Siamese CNNs,” which were trained with a dataset of 200,000 image pairs.

Then, at level 4, the system “assesses the clinical significance of the entire plate,” the researchers added. Each plate is labeled as:

  • “Positive” (significant bacterial growth),
  • “No significant growth” (negative), or
  • “Contaminated,” meaning it has three or more “different colony morphologies without a particular pathogen that is prevalent over the others,” the researchers wrote.

If a plate is labeled as “positive,” it can be “further evaluated for possible downstream steps,” using MALDI-TOF mass spectrometry or antimicrobial susceptibility testing, the researchers stated.

“This decision-making process takes into account not only the identification results but also adheres to the specific laboratory guidelines to ensure a proper supportive interpretation in the context of use,” the researchers wrote.
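The level 0 through 4 cascade described above can be sketched as a simplified decision pipeline. The data layout, the stubbed species classifier, and the contamination threshold below are illustrative assumptions for readability; the actual DeepColony system uses trained convolutional neural networks at each level.

```python
# Illustrative sketch of a hierarchical, multi-level plate-interpretation
# cascade in the spirit of DeepColony. Stubs stand in for the trained CNNs.

def analyze_plate(colonies, classify_species, min_species_for_contamination=3):
    """Run a plate through a simplified level 0-4 cascade.

    `colonies` is a list of dicts with a 'suitable' flag (levels 0-1 output);
    `classify_species` maps a colony to a species label (level 2 stand-in).
    """
    # Level 0: colony detection and locations are assumed already done.
    # Level 1: keep only colonies suitable for identification and analysis.
    good = [c for c in colonies if c["suitable"]]
    if not good:
        return {"label": "no significant growth", "species": set()}

    # Level 2: assign each good colony a species from its visual appearance.
    species = {classify_species(c) for c in good}

    # Level 3: plate-level context -- pure culture vs. mixed cultures.
    # Level 4: clinical significance of the entire plate.
    if len(species) >= min_species_for_contamination:
        return {"label": "contaminated", "species": species}
    return {"label": "positive", "species": species}
```

The point of the hierarchy is visible even in this toy version: each level consumes the previous level’s output, and the final plate-level label depends on context no single-colony classifier could see on its own.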

Nearly 100% Agreement with Medical Technologists

To gauge DeepColony’s accuracy, the researchers tested it on a dataset of more than 5,000 urine cultures from a US laboratory. They then compared its analyses with those of human medical technologists who had analyzed the same samples.

Agreement was 99.2% for no-growth cultures, 95.6% for positive cultures, and 77.1% for contaminated or mixed growth cultures, the researchers wrote.

The lower agreement for contaminated cultures was due to “a deliberately precautionary behavior, which is related to ‘safety by design’ criteria,” the researchers noted.
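Per-category agreement figures like the ones above are straightforward to compute from paired labels: for each category the technologists assigned, count how often the system assigned the same one. A minimal sketch, with invented labels rather than the study’s data:

```python
# Minimal sketch of per-category agreement between AI plate labels and
# technologist labels. The label values here are invented for illustration.
from collections import defaultdict

def per_category_agreement(human_labels, ai_labels):
    """For each human-assigned category, the fraction of plates where the AI agreed."""
    totals = defaultdict(int)
    matches = defaultdict(int)
    for human, ai in zip(human_labels, ai_labels):
        totals[human] += 1
        if ai == human:
            matches[human] += 1
    return {cat: matches[cat] / totals[cat] for cat in totals}
```

Grouping by the human-assigned category, as here, is what lets a system report high agreement on no-growth plates while still flagging weaker agreement on contaminated ones.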

Many of the plates identified by medical technologists as “contaminated” were labeled as “positive” by DeepColony, Signoroni noted. “We maximized true negatives while allowing for some false positives, so that DeepColony [can] focus on the most relevant or critical cases,” he said.

Will DeepColony replace medical technologists in clinical laboratories any time soon? Not likely. But the University of Brescia study indicates the direction AI in healthcare is headed: higher accuracy at increasing speed. The day may not be far off when pathologists and microbiologists regularly employ AI algorithms to diagnose disease.

—Stephen Beale

Related Information:

Hierarchical AI Enables Global Interpretation of Culture Plates in the Era of Digital Microbiology

Hierarchical Deep Learning Neural Network (HiDeNN): An Artificial Intelligence (AI) Framework for Computational Science and Engineering

An AI System Helps Microbiologists Identify Bacteria

This AI Research Helps Microbiologists to Identify Bacteria

Deep Learning Meets Clinical Microbiology: Unveiling DeepColony for Automated Culture Plates Interpretation
