News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel

Preparing for Z-Codes as DEX Genetic Testing Registry Rolls Out to Commercial Health Plans

Palmetto GBA’s Chief Medical Officer will cover how clinical laboratories billing for genetic testing should prepare for Z-Codes at the upcoming Executive War College in New Orleans

After multiple delays, UnitedHealthcare (UHC) commercial plans will soon require clinical laboratories to use Z-Codes when submitting claims for certain molecular diagnostic tests. Several private insurers, including UHC, already require use of Z-Codes in their Medicare Advantage plans, but beginning June 1, UHC will be the first to mandate use of the codes in its commercial plans as well. Molecular, anatomic, and clinical pathologist Gabriel Bien-Willner, MD, PhD, who oversees the coding system and is Chief Medical Officer at Palmetto GBA, expects that other private payers will follow.

“A Z-Code is a random string of characters that’s used, like a barcode, to identify a specific service by a specific lab,” Bien-Willner explained in an interview with Dark Daily. By themselves, he said, the codes don’t have much value. Their utility comes from the DEX Diagnostics Exchange registry, “where the code defines a specific genetic test and everything associated with it: The lab that is performing the test. The test’s intended use. The analytes that are being measured.”

The registry also contains qualitative information, such as, “Is this a good test? Is it reasonable and necessary?” he said.
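
For illustration, the relationship between a Z-Code and its DEX entry can be pictured as a simple keyed record. The sketch below, in Python, is hypothetical; the field names and the code value are invented for clarity and are not Palmetto GBA's actual schema.

```python
# Hypothetical illustration of a DEX Diagnostics Exchange registry entry.
# All field names and values are invented for clarity; this is not
# Palmetto GBA's actual schema.
dex_entry = {
    "z_code": "ZB999",                      # placeholder identifier
    "lab": "Example Genomics Laboratory",
    "test_name": "Hereditary Cancer NGS Panel",
    "intended_use": "Assess hereditary cancer risk in at-risk patients",
    "analytes": ["BRCA1", "BRCA2", "TP53"],
    "methodology": "Next-generation sequencing",
    "meets_coverage_criteria": True,        # qualitative assessment result
}
```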

Bien-Willner will answer those questions and more at the upcoming annual Executive War College on Diagnostics, Clinical Laboratory, and Pathology Management in New Orleans on April 30-May 1. Lab professionals still have time to register and attend this important presentation.

Molecular, anatomic, and clinical pathologist Gabriel Bien-Willner, MD, PhD (above), Palmetto GBA’s Chief Medical Officer, will speak about Z-Codes and the MolDX program during several sessions at the upcoming Executive War College on Diagnostics, Clinical Laboratory, and Pathology Management taking place in New Orleans on April 30-May 1. Clinical laboratories involved in genetic testing will want to attend these critical sessions. (Photo copyright: Bien-Willner Physicians Association.)

Palmetto GBA Takes Control

Palmetto’s involvement with Z-Codes goes back to 2011, when the company established the MolDX program on behalf of the federal Centers for Medicare and Medicaid Services (CMS). The purpose was to handle processing of Medicare claims involving genetic tests. The coding system was originally developed by McKesson, and Palmetto adopted it as a more granular way to track use of the tests.

In 2017, McKesson merged its information technology business with Change Healthcare Holdings LLC to form Change Healthcare. Palmetto GBA acquired the Z-Codes and DEX registry from Change in 2020. Palmetto GBA had already been using the codes in MolDX and “we felt we needed better control of our own operations,” Bien-Willner explained.

In addition to administering MolDX, Palmetto is one of four regional Medicare contractors that require Z-Codes in claims for genetic tests. Collectively, these contractors handle Medicare claims submissions in 28 states.

Benefits of Z-Codes

Why require use of Z-Codes? Bien-Willner explained that the system addresses several fundamental issues with molecular diagnostic testing.

“Payers interact with labs through claims,” he said. “A claim will often have a CPT code [Current Procedural Terminology code] that doesn’t really explain what was done or why.”

In addition, “molecular diagnostic testing is mostly done with laboratory developed tests (LDTs), not FDA-approved tests,” he said. “We don’t see LDTs as a problem, but there’s no standardization of the services. Two services could be described similarly, or with the same CPT codes. But they could have different intended uses with different levels of sophistication and different methodologies, quality, and content. So, how does the payer know what they’re paying for and whether it’s any good?”

When the CPT code is accompanied by a Z-Code, he said, “now we know exactly what test was done, who did it, who’s authorized to do it, what analytes are measured, and whether it meets coverage criteria under policy.”
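
A minimal sketch, assuming a registry like the hypothetical entry shown earlier, of how a payer's claims system could act on the CPT/Z-Code pairing; this is illustrative only, not an actual payer workflow. (CPT 81479, used in the example, is the generic "unlisted molecular pathology procedure" code under which many such tests are billed.)

```python
# Illustrative only: adjudicating a claim line once a Z-Code accompanies
# the CPT code. The registry contents are invented for this sketch.
REGISTRY = {
    "ZB999": {
        "test_name": "Hereditary Cancer NGS Panel",
        "lab": "Example Genomics Laboratory",
        "meets_coverage_criteria": True,
    },
}

def adjudicate(cpt_code: str, z_code: str) -> str:
    """Return a pay/deny decision for a single claim line."""
    entry = REGISTRY.get(z_code)
    if entry is None:
        return f"deny: Z-Code {z_code} not registered (CPT {cpt_code})"
    if not entry["meets_coverage_criteria"]:
        return "deny: test does not meet coverage policy"
    return f"pay: {entry['test_name']} performed by {entry['lab']}"

print(adjudicate("81479", "ZB999"))  # -> pay: Hereditary Cancer NGS Panel ...
```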

The process to obtain a code begins when the lab registers for the DEX system, he explained. “Then they submit information about the test. They describe the intended use, the analytes that are being measured, and the methodologies. When they’ve submitted all the necessary information, we give the test a Z-Code.”

Then, the test undergoes a technical assessment. Bien-Willner described this as a risk-based process where complex tests, such as those employing next-generation sequencing or gene expression profiling, get more scrutiny than less-complex methodologies such as a polymerase chain reaction (PCR) test.

The assessment could be as simple as a spreadsheet that asks the lab which cancer types were tested in validation, he said. On the other end of the scale, “we might want to see the entire validation summary documentation,” he said.

Commercial Potential

Bien-Willner joined Palmetto GBA in 2018 primarily to direct the MolDX program. But he soon saw the potential use of Z-Codes and the DEX registry for commercial plans. “It became instantly obvious that this is a problem for all payers, not just Medicare,” he said.

Over time, he said, “we’ve refined these processes to make them more reproducible, scalable, and efficient. Now commercial plans can license the DEX system, which Z-Codes are a part of, to better automate claims processing or pre-authorizations.”

In 2021, the company began offering the coding system for Medicare Advantage plans, with UHC the first to come aboard. “It was much easier to roll this out for Medicare Advantage, because those programs have to follow the same policies that Medicare does,” he explained.

As for UHC’s commercial plans, the insurer originally planned to require Z-Codes in claims beginning Aug. 1, 2023, then pushed that back to Oct. 1, according to Dark Daily’s sister publication The Dark Report. The date slipped again to April 1 of this year, and now to June 1.

“The implementation will be in a stepwise fashion,” Bien-Willner advised. “It’s difficult to take an entirely different approach to claims processing. There are something like 10 switches that have to be turned on for everything to work, and it’s going to be one switch at a time.”

For Palmetto GBA, the commercial plans represent “a whole different line of business that I think will have a huge impact in this industry,” he said. “They have the same issues that Medicare has. But for Medicare, we had to create automated solutions up front because it’s more of a pay and chase model,” where the claim is paid and CMS later goes after errors or fraudulent claims.

“Commercial plans in general just thought they could manually solve this issue on a claim-by-claim basis,” he said. “That worked well when there was just a handful of genetic tests. Now there are tens of thousands of tests and it’s impossible to keep up. They instituted programs to try to control these things, but I don’t believe they work very well.”

Bien-Willner is scheduled to speak about Palmetto GBA’s MolDX program, Z-Codes, and related topics during three sessions at the upcoming 29th annual Executive War College conference. Clinical laboratory and pathology group managers would be wise to attend his presentations. Visit here (or paste this URL into your browser: https://www.executivewarcollege.com/registration) to learn more and to secure your seat in New Orleans.

—Stephen Beale

Related Information:

Palmetto Issuing ‘Z-Codes’ to Track Molecular Dx Utilization, Gather Data CPT Codes Can’t Provide

McKesson and Change Healthcare Complete the Creation of New Healthcare Information Technology Company

UnitedHealthcare Commercial: Reimbursement Policy Update Bulletin: January 2024

UnitedHealthcare’s Z-Code Requirement for Genetic Testing Claims Impacts Laboratories and Payers

UHC Delays April 1st Z-Code Commercial Implementation to June 1, 2024

UHC Will Delay Enforcement of Z-Codes for Genetic Test Claims

Former FDA Director to Speak at Executive War College on FDA’s Coming Regulation of Laboratory Developed Tests

Tim Stenzel, MD, PhD, will discuss what clinical laboratories need to know about the draft LDT rule, FDA memo on assay reclassification, and ISO-13485 harmonization

Many clinical laboratories anxiously await a final rule from the US Food and Drug Administration (FDA) that is expected to establish federal policies under which the agency will regulate laboratory developed tests (LDTs). The agency released a proposed rule on Oct. 3, 2023, setting a Dec. 4 deadline for submission of comments. The White House’s Office of Management and Budget received a draft of the final rule less than three months later on March 1, 2024.

“Given how fast it moved through HHS, the final [rule] is likely pretty close” to the draft version, wrote former FDA commissioner Scott Gottlieb, MD, in a post on LinkedIn. Gottlieb and other regulatory experts expect the White House to submit the final rule to Congress no later than May 22, and perhaps as soon as this month.

But what will the final rule look like? Tim Stenzel, MD, PhD, former director of the FDA’s Office of In Vitro Diagnostics, suggests that it is too soon to tell.

Stenzel, who retired from the FDA last year, emphasized that he was not speaking on behalf of the federal agency and that he adheres to all FDA confidentiality requirements. He formed a new company—Grey Haven LLC—through which he is accepting speaking engagements in what he describes as a public service.

“I’m taking a wait and see approach,” said Tim Stenzel, MD, PhD (above), former director of the FDA’s Office of In Vitro Diagnostics, in an interview with Dark Daily. “The rule is not finalized. The FDA received thousands of comments. It’s my impression that the FDA takes those comments seriously. Until the rule is published, we don’t know what it will say, so I don’t think it does any good to make assumptions.” Clinical laboratory leaders will have an opportunity to learn how to prepare for FDA regulation of LDTs directly from Stenzel at the upcoming Executive War College in May. (Photo copyright: LinkedIn.)

FDA’s History of LDT Regulation

Prior to his five-year stint at the agency, Stenzel held high-level positions at diagnostics manufacturers Invivoscribe, Quidel Corporation, Asuragen, and Abbott Laboratories. He also directed the clinical molecular diagnostics laboratory at Duke University Medical Center in North Carolina. In the latter role, during the late 1990s, he oversaw development of numerous LDTs, he said.

The FDA, he observed, has long taken the position that it has authority to regulate LDTs. However, since the 1970s, when Congress passed the Medical Device Amendments to the federal Food, Drug, and Cosmetic Act, the agency has generally exercised “enforcement discretion,” he said, declining to regulate most of these tests.

At the time, “many LDTs were lower risk, small volume, and used for specialized needs of a local patient population,” the agency stated in a press release announcing the proposed rule. “Since then, due to changes in business practices and increasing ability to ship patient specimens across the country quickly, many LDTs are now used more widely, for a larger and more diverse population, with large laboratories accepting specimens from across the country.”

Clinical Labs Need a Plan for Submission of LDTs to FDA

The FDA proposed the new rule after Congress failed to vote on the VALID Act (Verifying Accurate Leading-edge IVCT Development Act of 2021), which would have established a statutory framework for FDA oversight of LDTs. Citing public comments from FDA officials, Stenzel believes the agency would have preferred the legislative approach. But when that failed, “they thought they needed to act, which left them with the rulemaking path,” he said.

The new rule, as proposed, would phase out enforcement discretion in five stages over four years, he noted. Labs would have to begin submitting high-risk tests for premarket review about three-and-a-half years from publication of the final rule, but not before Oct. 1, 2027. Premarket review requirements for moderate- or low-risk tests would follow about six months later.

While he suggested a “wait and see” approach to the final rule, he advises labs that might be affected to develop a plan for dealing with it.

Potential Lawsuits

Stenzel also noted the likelihood of litigation in which labs or other stakeholders will seek to block implementation of the rule. “It’s a fairly widespread belief that there will be a lawsuit or lawsuits that will take this issue through the courts,” he said. “That could take several years. There is no guarantee that the courts will ultimately side with the FDA.”

In “Perfect Storm of Clinical Lab and Pathology Practice Regulatory Changes to Be Featured in Discussions at 29th Annual Executive War College,” Dark Daily covers how the forces in play will directly impact the operations and financial stability of many of the nation’s clinical laboratories.

Stenzel is scheduled to speak about the LDT rule during three sessions at the upcoming Executive War College on Diagnostics, Clinical Laboratory, and Pathology Management conference taking place on April 30-May 1 in New Orleans.

He acknowledged that the rule is a controversial issue among clinical laboratories. Many labs have voiced opposition to the rule as well as the VALID Act.

Currently in retirement, Stenzel says he is making himself available as a resource through public speaking for laboratory professionals and other test developers who are seeking insights about the agency.

“The potential value that I bring is recent experience with the FDA and with stakeholders both inside and outside the FDA,” he said, adding that during his presentations he likes “to leave plenty of time for open-ended questions.”

In the case of his talks at the Executive War College, Stenzel said he anticipates “a robust conversation.”

He also expects to address other FDA-related issues, including:

  • A recent memo in which the agency said it would begin reclassifying most high-risk In Vitro Diagnostic (IVD) tests—those in class III (high risk)—into class II (moderate to high risk).
  • The emergence of multi-cancer detection (MCD) tests, which he described as a “hot topic in the LDT world.” The FDA has not yet approved any MCD tests, but some are available as LDTs.
  • A new voluntary pilot program in which the FDA will evaluate LDTs in situations where the agency has approved a treatment but has not authorized a corresponding companion diagnostic.
  • An FDA effort to harmonize ISO 13485—a set of international standards governing development of medical devices and diagnostics—with the agency’s own quality system regulations. Compliance with the ISO standards is necessary to market products in many countries outside the US, particularly in Europe, Stenzel noted. Harmonization will simplify product development, he said, because manufacturers won’t have to follow two or more sets of rules.

To learn how to prepare for the FDA’s future regulation of LDTs, clinical laboratory and pathology group managers would be wise to attend Stenzel’s presentations at this year’s Executive War College. Visit here to learn more and to secure your seat in New Orleans.

—Stephen Beale

Related Information:

FDA Proposes Rule Aimed at Helping to Ensure Safety and Effectiveness of Laboratory Developed Tests

Proposed Rule Webinar: Medical Devices; Laboratory Developed Tests (webinar transcript)

Proposed Rule Webinar: Medical Devices; Laboratory Developed Tests (slides)

FDA Proposed Rule on Medical Devices; Laboratory Developed Tests

CDRH Announces Intent to Initiate the Reclassification Process for Most High Risk IVDs

Questions and Answers about Multi-Cancer Detection Tests

Oncology Drug Products Used with Certain In Vitro Diagnostics Pilot Program

Scientists Close in on Elusive Goal of Adapting Nanopore Technology for Protein Sequencing

Technology could enable medical laboratories to deploy inexpensive protein sequencing with a handheld device at point of care and remote locations

Clinical laboratories engaged in protein testing will be interested in several recent studies suggesting that scientists may be close to adapting nanopore-sensing technology for use in protein identification and sequencing. The new proteomics techniques could lead to handheld devices capable of sequencing proteins at low cost and with a high degree of sensitivity, in contrast to current approaches based on mass spectrometry.

But there are challenges to overcome, not the least of which is getting the proteins to cooperate. Compact devices based on nanopore technology already exist that can sequence DNA and RNA. But “there are lots of challenges with proteins” that have made it difficult to adapt the technology, Aleksei Aksimentiev, PhD, Professor of Biological Physics at the University of Illinois at Urbana-Champaign, told ASBMB Today, a publication of the American Society for Biochemistry and Molecular Biology. “In particular, they’re not uniformly charged; they’re not linear, most of the time they’re folded; and there are 20 amino acids, plus a zoo of post-translational modifications,” he added.

The ASBMB story notes that nanopore technology depends on differences in charges on either side of the membrane to force DNA or RNA through the hole. This is one reason why proteins pose such a challenge.

Giovanni Maglia, PhD, a Full Professor at the University of Groningen in the Netherlands and researcher into the fundamental properties of membrane proteins and their applications in nanobiotechnology, says he has developed a technique that overcomes these challenges.

“Think of a cell as a miniature city, with proteins as its inhabitants. Each protein-resident has a unique identity, its own characteristics, and function. If there was a database cataloging the fingerprints, job profiles, and talents of the city’s inhabitants, such a database would undoubtedly be invaluable!” said Behzad Mehrafrooz, PhD (above), Graduate Research Assistant at University of Illinois at Urbana-Champaign in an article he penned for the university website. This research should be of interest to the many clinical laboratories that do protein testing. (Photo copyright: University of Illinois.)

How the Maglia Process Works

In a Groningen University news story, Maglia said protein is “like cooked spaghetti. These long strands want to be disorganized. They do not want to be pushed through this tiny hole.”

His technique, developed in collaboration with researchers at the University of Rome Tor Vergata, uses electrically charged ions to drag the protein through the hole.

“We didn’t know whether the flow would be strong enough,” Maglia stated in the news story. “Furthermore, these ions want to move both ways, but by attaching a lot of charge on the nanopore itself, we were able to make it directional.”

The researchers tested the technology on what Maglia described as a “difficult protein” with many negative charges that would tend to make it resistant to flow.

“Previously, only easy-to-thread proteins were analyzed,” he said in the news story. “But we gave ourselves one of the most difficult proteins as a test. And it worked!”

Maglia now says that he intends to commercialize the technology through a new startup called Portal Biotech.

The Groningen University scientists published their findings in the journal Nature Biotechnology, titled “Translocation of Linearized Full-Length Proteins through an Engineered Nanopore under Opposing Electrophoretic Force.”

Detecting Post-Translational Modifications in the UK

In another recent study, researchers at the University of Oxford reported that they have adapted nanopore technology to detect post-translational modifications (PTMs) in protein chains. The term refers to changes made to proteins after they have been translated, explained an Oxford news story.

“The ability to pinpoint and identify post-translational modifications and other protein variations at the single-molecule level holds immense promise for advancing our understanding of cellular functions and molecular interactions,” said contributing author Hagan Bayley, PhD, Professor of Chemical Biology at University of Oxford, in the news story. “It may also open new avenues for personalized medicine, diagnostics, and therapeutic interventions.”

Bayley is the founder of Oxford Nanopore Technologies, a genetic sequencing company in the UK that develops and markets nanopore sequencing products.

The news story notes that the new technique could be integrated into existing nanopore sequencing devices. “This could facilitate point-of-care diagnostics, enabling the personalized detection of specific protein variants associated with diseases including cancer and neurodegenerative disorders,” the story states.

The Oxford researchers published their study’s findings in the journal Nature Nanotechnology titled, “Enzyme-less Nanopore Detection of Post-Translational Modifications within Long Polypeptides.”

Promise of Nanopore Protein Sequencing Technology

In another recent study, researchers at the University of Washington reported that they have developed their own method for protein sequencing with nanopore technology.

“We hacked the [Oxford Nanopore] sequencer to read amino acids and PTMs along protein strands,” wrote Keisuke Motone, PhD, one of the study authors, in a post on X (formerly Twitter) following the study’s publication on the preprint server bioRxiv under the title “Multi-Pass, Single-Molecule Nanopore Reading of Long Protein Strands with Single-Amino Acid Sensitivity.”

“This opens up the possibility for barcode sequencing at the protein level for highly multiplexed assays, PTM monitoring, and protein identification!” Motone wrote.

In a commentary they penned for Nature Methods titled, “Not If But When Nanopore Protein Sequencing Meets Single-Cell Proteomics,” Motone and colleague Jeff Nivala, PhD, Principal Investigator at University of Washington, pointed to the promise of the technology.

Single-cell proteomics, enabled by nanopore protein sequencing technology, “could provide higher sensitivity and wider throughput, digital quantification, and novel data modalities compared to the current gold standard of protein MS [mass spectrometry],” they wrote. “The accessibility of these tools to a broader range of researchers and clinicians is also expected to increase with simpler instrumentation, less expertise needed, and lower costs.”

There are approximately 20,000 human genes. However, there are many more proteins. Thus, there is strong interest in understanding the human proteome and the role it plays in health and disease.

Technology that makes protein testing faster, more accurate, and less costly—especially with a handheld analyzer—would be a boon to the study of proteomics. And it would give clinical laboratories new diagnostic tools and bring some of that testing to point-of-care settings like doctor’s offices.

—Stephen Beale

Related Information:

Nanopores as the Missing Link to Next Generation Protein Sequencing

Nanopore Technology Achieves Breakthrough in Protein Variant Detection

The Scramble for Protein Nanopore Sequencing

The Emerging Landscape of Single-Molecule Protein Sequencing Technologies

ASU Researcher Advances the Science of Protein Sequencing with NIH Innovator Award          

The Missing Link to Make Easy Protein Sequencing Possible?

Engineered Nanopore Translocates Full Length Proteins

Not If But When Nanopore Protein Sequencing Meets Single-Cell Proteomics

Enzyme-Less Nanopore Detection of Post-Translational Modifications within Long Polypeptides

Unidirectional Single-File Transport of Full-Length Proteins through a Nanopore

Translocation of Linearized Full-Length Proteins through an Engineered Nanopore under Opposing Electrophoretic Force

Interpreting and Modeling Nanopore Ionic Current Signals During Unfoldase-Mediated Translocation of Single Protein Molecules

Multi-Pass, Single-Molecule Nanopore Reading of Long Protein Strands with Single-Amino Acid Sensitivity

Stanford Researchers Use Text and Images from Pathologists’ Twitter Accounts to Train New Pathology AI Model

Researchers intend their new AI image retrieval tool to help pathologists locate similar case images to reference for diagnostics, research, and education

Researchers at Stanford University turned to an unusual source—the X social media platform (formerly known as Twitter)—to train an artificial intelligence (AI) system that can look at clinical laboratory pathology images and then retrieve similar images from a database. This is an indication that pathologists are increasingly collecting and storing images of representative cases in their social media accounts. They then consult those libraries when working on new cases that have unusual or unfamiliar features.

The Stanford Medicine scientists trained their AI system—known as Pathology Language and Image Pretraining (PLIP)—on the OpenPath pathology dataset, which contains more than 200,000 images paired with natural language descriptions. The researchers collected most of the data by retrieving tweets in which pathologists posted images accompanied by comments.

“It might be surprising to some folks that there is actually a lot of high-quality medical knowledge that is shared on Twitter,” said researcher James Zou, PhD, Assistant Professor of Biomedical Data Science and senior author of the study, in a Stanford Medicine SCOPE blog post, which added that “the social media platform has become a popular forum for pathologists to share interesting images—so much so that the community has widely adopted a set of 32 hashtags to identify subspecialties.”

“It’s a very active community, which is why we were able to curate hundreds of thousands of these high-quality pathology discussions from Twitter,” Zou said.

The Stanford researchers published their findings in the journal Nature Medicine titled, “A Visual-Language Foundation Model for Pathology Image Analysis Using Medical Twitter.”

“The main application is to help human pathologists look for similar cases to reference,” James Zou, PhD (above), Assistant Professor of Biomedical Data Science, senior author of the study, and his colleagues wrote in Nature Medicine. “Our approach demonstrates that publicly shared medical information is a tremendous resource that can be harnessed to develop medical artificial intelligence for enhancing diagnosis, knowledge sharing, and education.” Leveraging pathologists’ use of social media to store case images for future reference has worked out well for the Stanford Medicine study. (Photo copyright: Stanford University.)

Retrieving Pathology Images from Tweets

“The lack of annotated publicly-available medical images is a major barrier for innovations,” the researchers wrote in Nature Medicine. “At the same time, many de-identified images and much knowledge are shared by clinicians on public forums such as medical Twitter.”

In this case, the goal “is to train a model that can understand both the visual image and the text description,” Zou said in the SCOPE blog post.

Because X is popular among pathologists, the United States and Canadian Academy of Pathology (USCAP) and the Pathology Hashtag Ontology project have recommended a standard series of hashtags, including 32 for subspecialties, the study authors noted.

Examples include #dermpath (dermatopathology), #gipath (gastrointestinal pathology), and #hemepath (hematopathology).

“Pathology is perhaps even more suited to Twitter than many other medical fields because for most pathologists, the bulk of our daily work revolves around the interpretation of images for the diagnosis of human disease,” wrote Jerad M. Gardner, MD, a dermatopathologist and section head of bone/soft tissue pathology at Geisinger Medical Center in Danville, Pa., in a blog post about the Pathology Hashtag Ontology project. “Twitter allows us to easily share images of amazing cases with one another, and we can also discuss new controversies, share links to the most cutting edge literature, and interact with and promote the cause of our pathology professional organizations.”

The researchers used the 32 subspecialty hashtags to retrieve English-language tweets posted from 2006 to 2022. Images in the tweets were “typically high-resolution views of cells or tissues stained with dye,” according to the SCOPE blog post.

The researchers collected a total of 232,067 tweets and 243,375 image-text pairs across the 32 subspecialties, they reported. They augmented this with 88,250 replies that received the highest number of likes and had at least one keyword from the ICD-11 codebook. The SCOPE blog post noted that the rankings by “likes” enabled the researchers to screen for high-quality replies.

They then refined the dataset by removing duplicates, retweets, non-pathology images, and tweets marked by Twitter as being “sensitive.” They also removed tweets containing question marks, as this was an indicator that the practitioner was asking a question about an image rather than providing a description, the researchers wrote in Nature Medicine.

They cleaned the text by removing hashtags, Twitter handles, HTML tags, emojis, and links to websites, the researchers noted.
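
The filtering and cleaning rules described above map naturally onto a few lines of code. Here is a minimal sketch in Python, assuming simple regex-based cleaning; it illustrates the described steps and is not the Stanford team's actual implementation.

```python
import re

def keep_tweet(text: str) -> bool:
    """Drop tweets containing question marks, which tend to ask about an
    image rather than describe it (per the researchers' filtering rule)."""
    return "?" not in text

def clean_text(text: str) -> str:
    """Strip links, HTML tags, handles, hashtags, and emojis from tweet text."""
    text = re.sub(r"https?://\S+", "", text)          # links
    text = re.sub(r"<[^>]+>", "", text)               # HTML tags
    text = re.sub(r"[@#]\w+", "", text)               # handles and hashtags
    text = text.encode("ascii", "ignore").decode()    # crude emoji removal
    return " ".join(text.split())                     # normalize whitespace
```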

The final OpenPath dataset included:

  • 116,504 image-text pairs from Twitter posts,
  • 59,869 from replies, and
  • 32,041 image-text pairs scraped from the internet or obtained from the LAION dataset.

The latter is an open-source database from Germany that can be used to train text-to-image AI software such as Stable Diffusion.

Training the PLIP AI Platform

Once they had the dataset, the next step was to train the PLIP AI model. This required a technique known as contrastive learning, the researchers wrote, in which the AI learns to associate features from the images with portions of the text.

As explained in Baeldung, an online technology publication, contrastive learning is based on the idea that “it is easier for someone with no prior knowledge, like a kid, to learn new things by contrasting between similar and dissimilar things instead of learning to recognize them one by one.”

“The power of such a model is that we don’t tell it specifically what features to look for. It’s learning the relevant features by itself,” Zou said in the SCOPE blog post.
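
In CLIP-style training of this kind, each image is pushed toward its own caption and away from every other caption in the batch. A minimal sketch of that symmetric contrastive objective in PyTorch follows; it is a simplification for illustration, not PLIP's actual training code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # Normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # logits[i, j] compares image i with caption j; matching pairs sit on the diagonal.
    logits = image_emb @ text_emb.T / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)    # image -> text direction
    loss_t2i = F.cross_entropy(logits.T, targets)  # text -> image direction
    return (loss_i2t + loss_t2i) / 2
```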

The resulting AI PLIP tool will enable “a clinician to input a new image or text description to search for similar annotated images in the database—a sort of Google Image search customized for pathologists,” SCOPE explained.

“Maybe a pathologist is looking at something that’s a bit unusual or ambiguous,” Zou told SCOPE. “They could use PLIP to retrieve similar images, then reference those cases to help them make their diagnoses.”
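
Once images and text share an embedding space, "a sort of Google Image search" reduces to a nearest-neighbor lookup. A short NumPy sketch, with a hypothetical helper name, illustrates the retrieval step:

```python
import numpy as np

def retrieve_similar(query_emb: np.ndarray, db_embs: np.ndarray,
                     k: int = 5) -> np.ndarray:
    """Return indices of the k database images most similar to the query,
    using cosine similarity over L2-normalized embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    scores = db @ q                      # cosine similarities
    return np.argsort(scores)[::-1][:k]  # top-k, highest first
```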

The Stanford University researchers continue to collect pathology images from X. “The more data you have, the more it will improve,” Zou said.

Pathologists will want to keep an eye on the Stanford Medicine research team’s progress. The PLIP AI tool may be a boon to diagnostics and improve patient outcomes and care.

—Stephen Beale

Related Information:

New AI Tool for Pathologists Trained by Twitter (Now Known as X)

A Visual-Language Foundation Model for Pathology Image Analysis Using Medical Twitter

AI + Twitter = Foundation Visual-Language AI for Pathology

Pathology Foundation Model Leverages Medical Twitter Images, Comments

A Visual-Language Foundation Model for Pathology Image Analysis Using Medical Twitter (Preprint)

Pathology Language and Image Pre-Training (PLIP)

Introducing the Pathology Hashtag Ontology

Separate Reports Shed Light on Why CDC SARS-CoV-2 Test Kits Failed During Start of COVID-19 Pandemic

HHS Office of Inspector General was the latest to examine the quality control problems that led to distribution of inaccurate test kits to clinical laboratories nationwide

Failure on the part of the Centers for Disease Control and Prevention (CDC) to produce accurate, dependable SARS-CoV-2 clinical laboratory test kits at the start of the COVID-19 pandemic continues to draw scrutiny and criticism of the actions taken by the federal agency.

In the early weeks of the COVID-19 pandemic, the CDC distributed faulty SARS-CoV-2 test kits to public health laboratories (PHLs), delaying the response to the outbreak at a critical juncture. That failure was widely publicized at the time. But within the past year, two reports have provided a more detailed look at the shortcomings that led to the snafu.

The most recent assessment came in an October 2023 report from the US Department of Health and Human Services Office of Inspector General (OIG), following an audit of the public health agency. The report was titled, “CDC’s Internal Control Weaknesses Led to Its Initial COVID-19 Test Kit Failure, but CDC Ultimately Created a Working Test Kit.”

“We identified weaknesses in CDC’s COVID-19 test kit development processes and the agencywide laboratory quality processes that may have contributed to the failure of the initial COVID-19 test kits,” the OIG stated in its report.

Prior to the outbreak, the agency had internal documents that were supposed to provide guidance for how to respond to public health emergencies. However, “these documents do not address the development of a test kit,” the OIG stated.

“If the CDC can’t change, [its] importance in health in the nation will decline,” said microbiologist Jill Taylor, PhD (above), Senior Adviser for the Association of Public Health Laboratories in Washington, DC. “The coordination of public health emergency responses in the nation will be worse off.” Clinical laboratories that were blocked from developing their own SARS-CoV-2 test during the pandemic would certainly agree. (Photo copyright: Columbia University.)

Problems at the CDC’s RVD Lab

Much of the OIG’s report focused on the CDC’s Respiratory Virus Diagnostic (RVD) lab, which was part of the CDC’s National Center for Immunization and Respiratory Diseases (NCIRD). The RVD lab had primary responsibility for developing, producing, and distributing the test kits. Because it was focused on research, it “was not set up to develop and manufacture test kits and therefore had no policies and procedures for developing and manufacturing test kits,” the report stated.

The RVD lab also lacked the staff and funding to handle test kit development in a public health emergency, the report stated. As a result, “the lead scientist not only managed but also participated in all test kit development processes,” the report stated. “In addition, when the initial test kit failed at some PHLs, the lead scientist was also responsible for troubleshooting and correcting the problem.”

To verify the test kit, the RVD lab needed samples of viral material from the agency’s Biotechnology Core Facility Branch (BCFB) CORE Lab, which also manufactured reagents for the kit.

“RVD Lab, which was under pressure to quickly create a test kit for the emerging health threat, insisted that CORE Lab deviate from its usual practices of segregating these two activities and fulfill orders for both reagents and viral material,” the report stated.

This increased the risk of contamination, the report said. An analysis by CDC scientists “did not determine whether a process error or contamination was at fault for the test kit failure; however, based on our interviews with CDC personnel, contamination could not be ruled out,” the report stated.

The report also cited the CDC’s lack of standardized systems for quality control and management of laboratory documents. Labs involved in test kit development used two different incompatible systems for tracking and managing documents, “resulting in staff being unable to distinguish between draft, obsolete, and current versions of laboratory procedures and forms.”

Outside Experts Weigh In

The OIG report followed an earlier review by the CDC’s Laboratory Workgroup (LW), which consists of 12 outside experts, including academics, clinical laboratory directors, state public health laboratory directors, and a science advisor from the Association of Public Health Laboratories. Members were appointed by the CDC Advisory Committee to the Director.

This group cited four major issues:

  • Lack of adequate planning: For the “rapid development, validation, manufacture, and distribution of a test for a novel pathogen.”
  • Ineffective governance: Three labs—the RVD Lab, CORE Lab, and Reagent and Diagnostic Services Branch—were involved in test kit development and manufacturing. “At no point, however, were these three laboratories brought together under unified leadership to develop the SARS-CoV-2 test,” the report stated.
  • Poor quality control and oversight: “Essentially, at the start of the pandemic, infectious disease clinical laboratories at CDC were not held to the same quality and regulatory standards that equivalent high-complexity public health, clinical and commercial reference laboratories in the United States are held,” the report stated.
  • Poor test design processes: The report noted that the test kit had three probes designed to bind to different parts of the SARS-CoV-2 nucleocapsid gene. The first two—N1 and N2—were designed to match SARS-CoV-2 specifically, whereas the third—N3—was designed to match all Sarbecoviruses, the family that includes SARS-CoV-2 as well as the coronavirus responsible for the 2002-2004 SARS outbreak.

The N1 probe was found to be contaminated, the group’s report stated, while the N3 probe was poorly designed. The report questioned the decision to include the N3 probe, which was not included in European tests.

Also lacking were “clearly defined pass/fail threshold criteria for test validation,” the report stated.

Advice to the CDC

Both reports made recommendations for changes at the CDC, but the LW’s were more far-reaching. For example, it advised the agency to establish a senior leader position “with major responsibility and authority for laboratories at the agency.” This individual would oversee a new Center that would “focus on clinical laboratory quality, laboratory safety, workforce training, readiness and response, and manufacturing.”

In addition, the CDC should consolidate its clinical diagnostic laboratories, the report advised, and “laboratories that follow a clinical quality management system should have separate technical staff and space from those that do not follow such a system, such as certain research laboratories.”

The report also called for collaboration with “high functioning public health laboratories, hospital and academic laboratories, and commercial reference laboratories.” For example, collaborating on test design and development “should eliminate the risk of a single point of failure for test design and validation,” the LW suggested.

CBS News reported in August that the CDC had already begun implementing some of the group’s suggestions, including agencywide quality standards and better coordination with state labs.

However, “recommendations for the agency to physically separate its clinical laboratories from its research laboratories, or to train researchers to uphold new quality standards, will be heavy lifts because they require continuous funding,” CBS News reported, citing an interview with Jim Pirkle, MD, PhD, Director, Division of Laboratory Sciences, National Center for Environmental Health, at the CDC.

—Stephen Beale

Related Information:

CDC’s Internal Control Weaknesses Led to Its Initial COVID-19 Test Kit Failure, but CDC Ultimately Created a Working Test Kit  

Review of the Shortcomings of CDC’s First COVID-19 Test and Recommendations for the Policies, Practices, and Systems to Mitigate Future Issues      

Collaboration to Improve Emergency Laboratory Response: Open Letter from the Association of Pathology Chairs to the Centers for Disease Control and Prevention    

The CDC Works to Overhaul Lab Operations after COVID Test Flop
