News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel

ADLM Urges Federal Action to Ensure Safe, Equitable AI in Clinical Labs

The association urges stronger regulations and data standards to keep AI safe and fair in clinical laboratories.

The Association for Diagnostics & Laboratory Medicine (ADLM) is calling on Congress and federal regulators to modernize laboratory oversight as artificial intelligence (AI) becomes more embedded in clinical testing, warning that without updated safeguards, AI tools could put patients—particularly those from historically marginalized groups—at risk.

In a recently released position statement, ADLM cautions that while AI has the potential to improve diagnostic accuracy, streamline laboratory workflows, and strengthen data-driven decision-making, poorly governed systems could amplify bias and undermine patient safety. For laboratory professionals evaluating or deploying AI-enabled tools, the message is clear: Oversight, validation, and data integrity must keep pace with innovation.

Bias Risks Drive Push to Update CLIA and Standardize AI Oversight in Clinical Labs

AI models are only as reliable as the data used to train them, the organization noted. When systems are built on limited, inconsistent, or historically skewed datasets, they may replicate societal inequities. In healthcare, that can translate into underestimating disease risk or misclassifying conditions in racial and ethnic minorities, older adults, and underserved populations. Because many AI health tools rely on historical datasets that underrepresent certain groups, laboratories could unknowingly implement algorithms that perform unevenly across patient demographics.

To address those risks, ADLM is urging federal policymakers to explicitly incorporate AI systems into existing laboratory regulations, including updates to the Clinical Laboratory Improvement Amendments (CLIA). The group also recommends that federal health agencies work with professional societies to convene laboratory medicine and informatics experts to establish consensus guidelines for validating and verifying AI tools used in test interpretation and clinical decision support.

As The Dark Report noted in its 2025 coverage, the call comes in the wake of the federal government’s decision last year to eliminate the Clinical Laboratory Improvement Advisory Committee (CLIAC), a key advisory body to CMS and CDC that could have served as a natural forum for advancing ADLM’s proposed updates to CLIA to address artificial intelligence oversight.

Clinical Labs Push for Data Standards and Clear AI Accountability

In addition, ADLM is calling for expanded federal efforts to harmonize laboratory test results and standardize data reporting—foundational steps, the organization argues, for reducing the variability that can compromise algorithm performance. The statement also presses AI developers to increase data diversity, minimize bias in training datasets, and ensure laboratories have access to the technical information needed to independently evaluate algorithm performance.

“Clinical laboratories are uniquely positioned to help develop and assess the integration of AI health tools into testing workflows and, most importantly, how they influence patient test results and health outcomes,” said ADLM President Paul J. Jannetto, PhD.

Jannetto added, “We therefore urge the federal government to draw on the expertise of laboratory medicine professionals in order to develop AI regulations that support innovation, as well as transparent, consistent performance monitoring of this potentially revolutionary technology.”

For lab executives and medical directors, the position statement reinforces a growing reality: AI governance is quickly becoming a core operational and compliance issue. As adoption accelerates, laboratories may face heightened expectations from regulators, payers, and health system partners to demonstrate that AI-driven tools are analytically sound, clinically validated, and equitable across patient populations.

—Janette Wider
