A top public hospital CEO says AI could soon take over radiology functions to cut costs—raising similar automation questions for clinical labs—though critics warn the technology is not ready to replace physicians.
The CEO of NYC Health + Hospitals says his system is prepared to begin replacing radiologists with artificial intelligence (AI) in certain use cases once regulatory barriers are addressed. For clinical laboratory professionals, the comments signal that health systems are actively evaluating where AI can reduce reliance on highly trained specialists while maintaining diagnostic throughput.
“We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge,” Katz said.
AI as a Cost and Workflow Strategy
Katz noted that AI could expand access to screening—particularly in breast cancer—while lowering operational costs. One proposed model would shift radiologists into a secondary review role, validating only abnormal findings flagged by AI.
For clinical laboratories, this mirrors ongoing discussions around digital pathology, AI-assisted test interpretation, and automated workflows in areas such as hematology, microbiology, and molecular diagnostics. If imaging adopts an “AI-first, specialist-second” model, similar expectations could follow in the lab.
This approach could deliver what Katz described as “major savings,” particularly for large systems facing staffing shortages and increasing test volumes.
“For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said, adding that the technology is “actually better than human beings.” (Photo credit: Westchester Medical Center Health Network)
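To put the quoted figure in context, a wrong negative result about 3 times in 10,000 corresponds to a negative predictive value (NPV) of roughly 99.97%. The back-of-the-envelope conversion below uses only the rate from the quote above; the calculation is illustrative, not a figure from the hospital system:

```python
# Convert the quoted error rate among negative screens ("wrong only
# about 3 times out of 10,000") into a negative predictive value (NPV).
false_negative_rate = 3 / 10_000   # errors per negative result, as quoted
npv = 1 - false_negative_rate      # fraction of negative results that are correct
print(f"NPV ≈ {npv:.2%}")          # roughly 99.97%
```

Whether that rate holds across scanners, sites, and patient populations is exactly the kind of validation question regulators would weigh.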
Katz also questioned whether regulations should evolve to allow AI to interpret imaging independently—potentially establishing a precedent that could influence how regulators approach AI in laboratory medicine.
Why Clinical Labs Should Pay Attention
While the discussion centers on radiology, the underlying drivers—cost containment, workforce shortages, and demand for faster turnaround times—are identical pressures facing clinical laboratories.
If regulators permit AI to operate with reduced physician oversight in imaging, labs could see accelerated adoption of AI-driven decision support, automated result interpretation, and even reduced hands-on review in certain testing workflows.
At the same time, the debate highlights a key challenge: balancing efficiency gains with diagnostic accuracy and patient safety.
Pushback Raises Safety Concerns
Not all healthcare professionals agree with the direction. Some radiologists warn that current AI tools are not ready for independent clinical use.
“Undeniable proof that confidently uninformed hospital administrators are a danger to patients: easily duped by AI companies that are nowhere near capable of providing patient care,” said Mohammed Suhail, MD, of North Coast Imaging.
“Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive.”
The debate signals what may be ahead for the broader diagnostics industry. As health systems test AI-driven models in radiology, clinical laboratories may soon face similar expectations to leverage automation for cost savings—while defending the continued role of expert oversight in ensuring quality and patient safety.
This article was created with the assistance of Generative AI and has undergone editorial review before publishing.
Accuracy gaps in pathology AI affecting nearly 30% of diagnostic tasks highlight risks for clinical decision-making and patient outcomes, according to new research.
A new study is raising important questions for pathologists as artificial intelligence (AI) becomes more embedded in diagnostic workflows. Researchers report that AI systems used to interpret pathology slides for cancer diagnosis do not perform equally across all patient populations, with accuracy varying by race, gender, and age. The findings highlight why pathologists, who rely on objective tissue evaluation to guide treatment decisions, need to understand how bias can enter AI tools designed to support their work.
The study, published in Cell Reports Medicine, shows that pathology AI models can extract demographic information directly from tissue images, even though such details are invisible to human experts. That capability can influence diagnostic performance and potentially reinforce disparities in cancer care if left unaddressed.
Testing Pathology AI Reveals Widespread Diagnostic Gaps
To assess the scope of the problem, Harvard Medical School’s Kun-Hsing Yu and his colleagues evaluated four commonly used deep-learning models under development for cancer diagnosis. These systems are trained on large collections of labeled pathology slides, learning visual patterns associated with disease that can then be applied to new samples. The team tested the models using a large, multi-institutional dataset spanning 20 cancer types.
Across all four models, the researchers found consistent performance gaps linked to patient demographics. Diagnostic accuracy was lower for certain groups defined by race, gender, and age. For example, the models struggled to distinguish lung cancer subtypes in African American patients and in male patients. They also showed reduced accuracy when classifying breast cancer subtypes in younger patients, and lower detection performance for breast, renal, thyroid, and stomach cancers in specific demographic groups. Overall, these disparities appeared in roughly 29% of the diagnostic tasks analyzed.
The findings were unexpected, Yu said, because pathology has long been considered one of the most objective areas of medicine. “Because we would expect pathology evaluation to be objective,” he said, “when evaluating images, we don’t necessarily need to know a patient’s demographics to make a diagnosis.” The results raised a fundamental question for the research team: why were AI systems failing to meet the same standard of objectivity expected of human pathologists?
Further analysis revealed three main contributors to bias in pathology AI. One factor is uneven training data. Pathology samples are often easier to obtain from some populations than others, resulting in imbalanced datasets that make accurate diagnosis more difficult for underrepresented groups. But Yu noted that data imbalance alone did not fully explain the observed disparities. “The problem turned out to be much deeper than that,” he said.
From Demographic Shortcuts to Fairer Diagnosis
Differences in disease incidence also play a role. Some cancers occur more frequently in certain populations, allowing AI models to become highly accurate for those groups while struggling in populations where those diseases are less common. In addition, the models appear capable of detecting subtle molecular and biological differences linked to demographics, such as mutations in cancer driver genes.
Kun-Hsing Yu (Photo credit: Harvard Medical School)
Kun-Hsing Yu, associate professor of biomedical informatics at Harvard Medical School and assistant professor of pathology at Brigham and Women’s Hospital, noted, “We found that because AI is so powerful, it can differentiate many obscure biological signals that cannot be detected by standard human evaluation.”
When models rely on these demographic-linked signals as shortcuts, accuracy can suffer across diverse patient groups.
To address these issues, the researchers developed a new framework called FAIR-Path. Built on a machine-learning approach known as contrastive learning, FAIR-Path trains models to focus on clinically meaningful differences—such as distinctions between cancer types—while minimizing attention to less relevant features, including demographic characteristics.
When applied to the tested models, FAIR-Path reduced diagnostic disparities by about 88%. “We show that by making this small adjustment, the models can learn robust features that make them more generalizable and fairer across different populations,” Yu said. Importantly, the improvement did not require perfectly balanced training datasets.
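The paper’s actual FAIR-Path implementation is not reproduced here, but the contrastive-learning idea it builds on can be sketched in a few lines of NumPy (the function name and toy data are illustrative assumptions). A supervised contrastive loss scores an embedding space as good when samples sharing a clinically meaningful label sit close together; minimizing such a loss, while keeping demographic attributes out of the positive-pair definition, steers a model toward disease-relevant features:

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """InfoNCE-style supervised contrastive loss: low when embeddings
    with the same label cluster together, high otherwise."""
    labels = np.asarray(labels)
    # cosine similarities between L2-normalized embeddings
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)  # never pair a sample with itself
    # numerically stable log-softmax over each row
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - (row_max + np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True)))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    per_anchor = [-log_prob[i, pos[i]].mean() for i in range(n) if pos[i].any()]
    return float(np.mean(per_anchor))

# Toy data: two tight clusters standing in for two tumor subtypes.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal([5.0, 0.0], 0.1, size=(4, 2)),   # "subtype A" embeddings
               rng.normal([0.0, 5.0], 0.1, size=(4, 2))])  # "subtype B" embeddings
subtype_labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # clinically meaningful grouping
noise_labels   = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # grouping uncorrelated with structure

loss_subtype = supervised_contrastive_loss(x, subtype_labels)
loss_noise   = supervised_contrastive_loss(x, noise_labels)
```

Here the loss is far lower for the subtype grouping than for the uncorrelated one; a trainer minimizing it would push the model to organize its representation around disease type rather than incidental attributes such as demographics.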
For pathologists, the findings underscore why careful evaluation of AI tools is essential as these technologies move closer to routine clinical use. The authors are now working with institutions worldwide to study pathology AI bias in different regions and clinical settings, and to adapt FAIR-Path for use in data-limited environments.
Finally, Yu said, the goal is not to replace human expertise, but to support it. “I think there’s hope that if we are more aware of and careful about how we design AI systems, we can build models that perform well in every population,” he said. For pathologists, the study reinforces the importance of remaining actively involved in how AI is developed, validated, and deployed, so that these tools enhance diagnostic confidence and equity, rather than introducing new sources of error into cancer care.
New ranking highlights the pathologists with the biggest social media reach—and what their influence means for diagnostics, research, and industry engagement.
A new analysis identifies the most influential US pathologists on Twitter/X, highlighting a growing link between social media reach and clinical impact.
The ranking, based on follower counts, spotlights pathologists who have built substantial audiences across subspecialties including dermatopathology, hematopathology, neuropathology, pulmonary pathology, and gastrointestinal pathology.
Topping the list is Jerad M. Gardner, MD (50.5K followers), a leading dermatopathologist known for his extensive educational content. He’s followed by Malak Althgafi, MBBS-MD, MBA (43.1K), Chair of Pathology at Tufts Medical Center, and Sanjay Mukhopadhyay, MD (34.6K), Director of Pulmonary Pathology at Cleveland Clinic.
The New Value of Visibility for Today’s Pathologists
While the list is not exhaustive, it illustrates the rising importance of digital visibility in pathology.
For pathologists, this list offers more than a snapshot of who’s popular online—it reflects how the profession itself is evolving. Social platforms have become powerful spaces for case sharing, rapid education, research dissemination, and community building across institutions and subspecialties.
As digital influence grows, so does the opportunity for pathologists to shape public understanding of diagnostics, mentor the next generation, and amplify their own work. Whether you’re looking to learn, collaborate, or simply see how peers are using these tools, knowing who leads the conversation can help you decide how and where to engage next.
Association for Molecular Pathology gathering also served up an advocacy push for RESULTS Act passage.
The Association for Molecular Pathology (AMP) 2025 Annual Meeting brought together just over 3,000 attendees, and an estimated 420 of them—an impressive 14% of total attendance—sought out information about cell-free DNA (cfDNA) testing.
Walking through the Boston convention center that hosted AMP 2025 earlier this month, it was hard to ignore the standing-room-only crowd that jammed into a session room to hear more about cfDNA diagnostics.
Cell-free DNA comprises fragments of DNA circulating in the blood, shed by dying cells or released during infection. For clinical laboratory professionals and pathologists, cfDNA testing sits at the forefront of innovation for detecting cancer.
“We want to find these cancers early,” said presenter Trevor Pugh, PhD, a senior scientist at Princess Margaret Cancer Centre in Toronto and director of genomics at the Ontario Institute for Cancer Research.
At the AMP 2025 Annual Meeting in Boston, a session about cfDNA testing attracted more than 400 attendees. (Photo credit: Scott Wallask)
Machine Learning Will Play a Role in cfDNA Research
Part of the effort to advance cfDNA testing will involve datasets and mining, Pugh said. For example, one of his graduate students is working with him on training a cfDNA foundation model, which could lead to the ability to reconstruct the complete cancer genome.
“This is a machine learning person’s dream,” Pugh explained.
Foundation models are artificial intelligence (AI) networks trained on large datasets. The models allow for more specialized applications, such as those that can analyze digital diagnostic images.
AMP Pushes for Passage of the RESULTS Act
Elsewhere at the AMP 2025 conference, the association provided an update on delayed lab test reimbursement cuts under the Protecting Access to Medicare Act of 2014 (PAMA). Organizers also addressed the latest attempt to reform PAMA, the proposed Reforming and Enhancing Sustainable Updates to Laboratory Testing Services (RESULTS) Act.
As Dark Daily previously reported, momentum for the RESULTS Act is growing. Congress delayed upcoming PAMA cuts from Jan. 1, 2026, to Jan. 30, 2026, and there is hope that during this brief extension the RESULTS Act can get a vote or at least greater support among lawmakers.
AMP endorses the RESULTS Act. “Congress needs to act,” said Jay Patel, MD, MBA, a member of AMP’s board of directors, during the PAMA update.
AMP has asked the Centers for Medicare and Medicaid Services to delay PAMA-related reporting requirements for labs until Congress can vote on the RESULTS Act, Patel added.
AI Featured in AMP 2025 Poster Sessions
More than 500 poster presentations were on display during AMP 2025. Housing that many posters took up nearly half of the exhibition hall allotted to AMP.
The association noted several poster sessions that centered on how AI is improving diagnostic processes and accuracy within molecular pathology:
Researchers from The Hospital for Sick Children developed a web-based AI platform to integrate RNA sequencing into clinical workflows. The model achieved 93% diagnostic accuracy on subtypes covered by the platform.
Scientists at Soonchunhyang University created two AI models to classify samples. Both models showed strong accuracy.
Researchers at Wake Forest University School of Medicine used an AI-trained algorithm to analyze chromosomal abnormalities in GATA2 deficiency syndrome-related leukemia. The technology can quickly review hundreds of images, improving detection.
Members of our sibling brand, The Dark Report, can read more about the state of AI in clinical labs in our three-part series.
A new analysis shows why models fall short in practice, how liability and equity issues slow adoption, and what lab leaders should consider as AI becomes a growing part of diagnostic workflows.
Artificial intelligence (AI) has made notable advances in medical imaging, but radiologists are not being displaced. For laboratory and diagnostic leaders, a recent analysis in Works in Progress highlights why AI has not replaced human expertise in radiology—and what this means for managing technology adoption in labs and hospitals.
In 2016, AI pioneer Geoffrey Hinton declared that “people should stop training radiologists now.” Since then, more than 700 FDA-cleared radiology AI models have entered the market, covering everything from stroke detection to lung cancer screening.
Companies such as Annalise.ai, Lunit, Aidoc, and Qure.ai offer tools that can identify dozens of diseases across modalities, reorder worklists, or generate structured draft reports. “On paper, radiology looks like the perfect target for automation,” the article noted, citing its reliance on digital images, pattern recognition, and quantitative benchmarks. Yet demand for radiologists has never been higher. In 2025, US residency programs offered a record 1,208 positions, and vacancy rates remain high as well.
Why Hasn’t AI Taken Over?
For leaders overseeing diagnostic services, three key factors explain why AI has not replaced radiologists.
First, models struggle in real-world deployment. “Performance can drop by as much as 20 percentage points” when systems trained on narrow datasets are applied across different scanners, imaging protocols, or patient populations, the article explained. What works in a benchmark test may falter in a hospital with diverse workflows.
Second, liability and regulatory hurdles remain high. Assistive models that require physician review face fewer barriers, but autonomous systems must self-abort on poor image quality, identify unfamiliar equipment, and withstand rigorous scrutiny. Insurers have also drawn hard lines: one malpractice policy states that “coverage applies solely to interpretations reviewed and authenticated by a licensed physician; no indemnity is afforded for diagnoses generated autonomously by software.” Another bluntly imposes an “Absolute AI Exclusion.” For labs, this underscores the importance of risk management before deploying AI tools.
Photo credit: “Artificial Intelligence – Resembling Human Brain” by deepakiqlect is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
Photo credit: “Cancer” by davis.steve32 is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
Third, radiologists do much more than read scans. “Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians,” the commentary pointed out. Oversight of imaging protocols, interdisciplinary consultations, and patient communication all fall outside the reach of algorithms. Even as AI improves, demand for imaging may increase rather than decrease—a version of the Jevons paradox where greater efficiency leads to higher use. “The better the machines, the busier radiologists have become,” the article observed.
For laboratory leaders, the takeaway is not to fear replacement but to prepare for integration. AI tools are proving valuable in triaging urgent cases, flagging abnormalities, and drafting reports, but they remain narrow in scope—stroke, lung cancer, and breast lesions account for about 60% of models, yet represent only a fraction of total imaging work. As the article concluded, “Models can lift productivity, but their implementation depends on behavior, institutions and incentives.”
The challenge for labs is to create environments where AI augments human expertise rather than attempts to replace it. That means aligning technology adoption with clinical needs, providing training for staff, and working with insurers and regulators to ensure coverage and compliance.
For now, radiologists and the labs that support them are not going away. They are adapting, and AI will be a partner in that evolution.
New program draws bipartisan criticism and concern from patients and doctors.
Shrewd labs will keep an eye on the latest Centers for Medicare &amp; Medicaid Services (CMS) prior authorization pilot, which leans on artificial intelligence (AI) to determine treatment options for Medicare patients. While the Wasteful and Inappropriate Service Reduction (WISeR) Model pilot doesn’t directly mention lab tests, staying on the pulse of this growing trend will help labs think ahead about how to minimize the impact on their bottom line, paperwork, and workflows if such pilots extend to lab testing.
An article from POLITICO reported that CMS will start a pilot version of the program as early as January 2026 in six states: Ohio, Texas, Oklahoma, Arizona, New Jersey, and Washington. Private AI companies will assist and focus on “services that have been vulnerable to fraud, waste and abuse in the past,” the article noted. The voluntary model is slated to span six years, through December 31, 2031, according to CMS.
Among the types of procedures covered by the pilot program are knee arthroscopy for osteoarthritis, skin and tissue substitutes, and electrical nerve stimulator implants, CMS noted. Inpatient-only and emergency services would be excluded, the agency added, as well as “services that would pose a substantial risk to patients if substantially delayed.”
“All recommendations for non-payment will be determined by appropriately licensed clinicians who will apply standardized, transparent, and evidence-based procedures to their review,” CMS added.
The premise of the pilot is to eliminate wasteful spending; CMS cites estimates that up to 25% of US healthcare spending falls into this category. “According to the Medicare Payment Advisory Commission, Medicare spent up to $5.8 billion in 2022 on unnecessary or inappropriate services with little to no clinical benefit,” the agency’s website noted.
A Sour Reception
The pilot program is receiving a less-than-warm welcome from lawmakers in both parties, doctors, and patients alike, POLITICO noted. “It’s been referred to as the AI death panel. You get more money if you’re that AI tech company if you deny more claims. That is going to lead to people getting hurt,” Rep. Greg Landsman (D-Ohio) said during a House subcommittee hearing.
Landsman noted in the POLITICO article that a bipartisan desire to halt the program exists amid growing concerns about patient harm. Landsman “called for the program to be shut down until an independent review board could be erected to review the liability questions and ensure the AI prior authorization pilot doesn’t harm patients.”
“I’m concerned that this AI model will result in denials of lifesaving care and incentivize companies to restrict care,” Rep. Frank Pallone (D-N.J.), ranking member of the House Energy and Commerce Committee, said at the subcommittee hearing on the use of AI in health care held on Sept. 3.
“We have pretty good evidence that prior authorization as a process itself is fraught,” said Michelle Mello, a Stanford University health law professor who testified at the hearing, adding that AI’s ability to improve the process for patients remains unproven.
Looking Ahead
AI’s involvement in healthcare will only continue to grow, and the industry is still learning which applications improve care and which cause harm.
Worth noting, two unrelated lawsuits, against UnitedHealthcare and Cigna, already challenge the use of AI to deny patient care, POLITICO noted.
Laboratory leaders should keep a close watch not only on this pilot but on all AI healthcare trends.