News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel


AMA Issues Proposal to Help Counter False and Misleading Information When Using Artificial Intelligence in Medicine

Pathologists and clinical laboratory managers will want to stay alert to concerns voiced by tech experts about the need for caution when using generative AI to assist with medical diagnoses

Even as many companies push to introduce GPT-powered (generative pre-trained transformer) solutions into various healthcare services, the American Medical Association (AMA), the World Health Organization (WHO), and healthcare professionals urge caution regarding the use of AI-powered technologies in the practice of medicine.

In June, the AMA House of Delegates adopted a proposal introduced by the American Society for Surgery of the Hand (ASSH) and the American Association for Hand Surgery (AAHS) titled, “Regulating Misleading AI Generated Advice to Patients.” The proposal is intended to help protect patients from false and misleading medical information derived from artificial intelligence (AI) tools such as GPTs.

GPTs are a core component of generative artificial intelligence systems that create text, images, and other media using generative models. These neural network models learn the patterns and structure of their training data and then generate new data with similar characteristics.

Through the proposal, the AMA has developed principles and recommendations addressing the benefits and potentially harmful consequences of relying on AI-generated medical advice and content in diagnosis.

Alexander Ding, MD

“We’re trying to look around the corner for our patients to understand the promise and limitations of AI,” said Alexander Ding, MD (above), AMA Trustee and Associate Vice President for Physician Strategy and Medical Affairs at Humana, in a press release. “There is a lot of uncertainty about the direction and regulatory framework for this use of AI that has found its way into the day-to-day practice of medicine.” Clinical laboratory professionals following advances in AI may want to remain informed on the use of generative AI solutions in healthcare. (Photo copyright: American Medical Association.)

Preventing Spread of Mis/Disinformation

GPTs are “a family of neural network models that uses the transformer architecture and is a key advancement in artificial intelligence (AI) powering generative AI applications such as ChatGPT,” according to Amazon Web Services.

In addition to creating human-like text and content, GPTs can answer questions conversationally. They analyze language queries and predict high-quality responses based on their understanding of language, a capability they acquire by being trained, with billions of parameters, on massive language datasets. They can generate long passages, not just the next word in a sequence.
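To make that next-token mechanism concrete, below is a minimal Python sketch. It assumes the open-source Hugging Face transformers library and uses the small, publicly available GPT-2 model as a stand-in for far larger commercial GPT models; the prompt is invented for illustration.

from transformers import pipeline

# Load a small, publicly available GPT-style model. Commercial GPTs are
# far larger, but the underlying autoregressive mechanism is the same.
generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt, invented for this illustration.
prompt = "A complete blood count (CBC) is a laboratory test that"

# The model repeatedly predicts the most probable next token, extending
# the prompt into a longer passage rather than stopping at one word.
output = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(output[0]["generated_text"])

Output from a model this small reads fluently but may be factually wrong, which is exactly the kind of plausible-sounding error the AMA proposal warns about.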

“AI holds the promise of transforming medicine,” said Ding, a diagnostic and interventional radiologist, in the AMA press release.

“We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation,” he added.

The AMA plans to work with the federal government and other appropriate organizations to advise policymakers on the optimal ways to use AI in healthcare to protect patients from misleading AI-generated data that may or may not be validated, accurate, or relevant.

Advantages and Risks of AI in Medicine

The AMA’s proposal was prompted by AMA-affiliated organizations that raised concerns about the lack of regulatory oversight of GPTs. Those organizations are encouraging healthcare professionals to educate patients about the advantages and risks of AI in medicine.

“AI took a huge leap with large language model tools and generative models, so all of the work that has been done up to this point in terms of regulatory and governance frameworks will have to be treated or at least reviewed with this new lens,” Sha Edathumparampil, Corporate Vice President, Digital and Data, Baptist Health South Florida, told Healthcare Brew.

According to the AMA press release, “the current limitations create potential risks for physicians and patients, and [these tools] should be used with appropriate caution at this time. AI-generated fabrications, errors, or inaccuracies can harm patients, and physicians need to be acutely aware of these risks and added liability before they rely on unregulated machine-learning algorithms and tools.”

The press release also states that the organization will propose state and federal regulations for AI tools at next year’s annual meeting in Chicago.

In a July AMA podcast, AMA President Jesse Ehrenfeld, MD, stressed that more must be done through regulation and development to bolster trust in these new technologies.

“There’s a lot of discomfort around the use of these tools among Americans with the idea of AI being used in their own healthcare,” Ehrenfeld said. “There was a 2023 Pew Research Center poll [that said] 60% of Americans would feel uncomfortable if their own healthcare provider relied on AI to do things like diagnose disease or recommend a treatment.”

WHO Issues Cautions about Use of AI in Healthcare

In May, the World Health Organization (WHO) issued a statement advocating caution when incorporating large language model (LLM) tools, such as GPTs, into healthcare.

A current example of such a GPT is ChatGPT, a large language model (LLM) that enables users to refine and steer conversations toward a desired length, format, style, level of detail, and language. Organizations across industries now use GPT models for customer question-and-answer bots, text summarization, content generation, and search features.
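As one illustration of the text-summarization use case just mentioned, the following minimal Python sketch assumes the open-source Hugging Face transformers library and the publicly available facebook/bart-large-cnn model; the passage being summarized is invented for the example.

from transformers import pipeline

# Load a publicly available summarization model.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Invented passage standing in for a longer document.
document = (
    "Generative AI tools can draft patient-facing explanations of "
    "laboratory results and answer routine questions conversationally. "
    "However, their output can include fabrications and errors, so "
    "professional organizations recommend that a qualified clinician "
    "review any AI-generated medical content before it reaches patients."
)

summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])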

“Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” commented WHO in the statement.

WHO’s concerns regarding the need for prudence and oversight in the use of AI technologies include:

  • Data used to train AI may be biased, which could pose risks to health, equity, and inclusiveness.
  • LLMs generate responses that can appear authoritative and plausible, but which may be completely incorrect or contain serious errors.
  • LLMs may be trained on data for which consent may not have been given.
  • LLMs may not be able to protect sensitive data that is provided to an application to generate a response.
  • LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video that may be difficult for people to differentiate from reliable health content.

Tech Experts Recommend Caution

Generative AI will continue to evolve. Therefore, clinical laboratory professionals may want to keep a keen eye on advances in AI technology and GPTs in healthcare diagnosis.

“While generative AI holds tremendous potential to transform various industries, it also presents significant challenges and risks that should not be ignored,” wrote Edathumparampil in an article he penned for CXOTECH Magazine. “With the right strategy and approach, generative AI can be a powerful tool for innovation and differentiation, helping businesses to stay ahead of the competition and better serve their customers.”

GPTs may eventually be a boon to healthcare providers, including clinical laboratories and pathology groups. But for the moment, caution is recommended.

JP Schlingman

Related Information:

AMA Adopts Proposal to Protect Patients from False and Misleading AI-generated Medical Advice

Regulating Misleading AI Generated Advice to Patients

AMA to Develop Recommendations for Augmented Intelligence

What is GPT?

60% of Americans Would Be Uncomfortable with Provider Relying on AI in Their Own Health Care

Navigating the Risks of Generative AI: A Guide for Businesses

Contributed: Top 10 Use Cases for AI in Healthcare

Anatomic Pathology at the Tipping Point? The Economic Case for Adopting Digital Technology and AI Applications Now

ChatGPT, AI in Healthcare and the future of Medicine with AMA President Jesse Ehrenfeld, MD, MPH

What is Generative AI? Everything You Need to Know

WHO Calls for Safe and Ethical AI for Health

GPT-3

Unstructured Data Is a Target for New Collaboration Involving IBM’s Watson Health and Others; Could Help Pathologists and Radiologists Generate New Revenue

If this medical imaging collaborative develops a way to use the unstructured data in radiology images and anatomic pathology reports, it could create a new revenue stream for pathologists

Unstructured data has long been recognized as an Achilles heel of the anatomic pathology profession. Invaluable information about the cancers and other diseases diagnosed by surgical pathologists is “locked up,” making it difficult to access that information in efforts to advance population health management (PHM) or to conduct clinical studies.

Similarly, medical imaging plays an essential role in the diagnosis of cancer and other diseases. And, like most anatomic pathology reports, medical imaging data is considered “unstructured” by data experts because it is not easily accessible to computers, Fortune magazine reported.

Unstructured Data in Anatomic Pathology and Radiology

Now one of the world’s largest information technology companies wants to tackle the challenge of unstructured data in radiology images. IBM (NYSE: IBM) Watson Health launched a global initiative involving 16 health systems, radiology providers, and imaging technology companies.

The Watson Health medical imaging collaborative is working to apply cognitive computing of radiology images to clinical practice. IBM aims to transform how physicians use radiology images to diagnose and monitor patients.
