Recent laws in California, Utah, and Texas define new compliance standards for clinical laboratories employing AI in diagnostic and clinical messaging.
When it comes to oversight of artificial intelligence (AI) use in the clinical laboratory, it behooves lab leaders to watch what is happening at the state level. In several states, disclosure of AI use is the threshold regulators are monitoring.
For example, California Assembly Bill 3030, which went into effect Jan. 1, 2025, mandates transparency when generative AI is used in healthcare. Any health facility, laboratory, clinic, physician’s office, or group practice that employs generative AI to create patient communications about clinical information must include:
- A prominent disclaimer stating the content was AI-generated.
- Clear instructions that inform patients how to speak directly with a human clinician.
If a licensed provider reviews and approves the AI-generated communication, these requirements are waived. AB 3030 applies only to clinical—not administrative—messages. Non‑compliance can result in disciplinary actions from state regulators.
Laboratories using AI in patient-facing contexts should ensure their workflows include AI‑disclaimers, human‑review triggers, and clear ways for patients to contact providers.

AI Disclosure in Utah
Meanwhile, Utah Senate Bill 226 updates its Artificial Intelligence Policy Act, tightening rules around how healthcare entities—including clinical labs—use generative AI in patient interactions. The rules went into effect May 7, 2025.
Under the state’s law, labs must disclose AI use only when:
- A patient explicitly asks whether they’re interacting with AI, or
- The lab uses AI in high-risk communications, such as delivering test interpretations, diagnostic results, or clinical advice.
Routine AI use in back-end operations or non-clinical messaging does not require disclosure.
A safe harbor provision protects labs from penalties if the AI system clearly identifies itself as non-human at the beginning and throughout the interaction.
Labs that use AI-generated content in patient portals, chatbots, or outreach must ensure compliance or face consumer protection penalties.
New Texas Law on AI
Texas passed a law in June, taking effect Sept. 1, 2025, that regulates how AI is used within electronic health records (EHRs).
According to the law, providers that use AI for recommendations on diagnosis or treatment based on a patient’s medical record must review all information obtained through AI to ensure its accuracy before entering the information into a patient’s EHR.
The law also “imposes a strict data localization mandate, prohibiting the physical offshoring of electronic medical records,” law firm Holland & Knight noted. “This requirement applies not only to records stored directly by healthcare providers but also to those maintained by third-party vendors or cloud service providers.”
—Scott Wallask


