News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel


Boston Children’s Hospital Hires a Prompt Engineer to Help the Healthcare Organization Deploy and Use Artificial Intelligence Applications

This may be a new ‘sign of the times’ as hospitals, clinical laboratories, and other healthcare providers working with AI find they also need to hire their own prompt engineers

Boston Children’s Hospital last year hired a “prompt engineer” to propel the hospital forward in using artificial intelligence (AI) as part of its business model. But what is AI prompting? It’s a relatively new term and may not be familiar to clinical laboratory and pathology leaders.

AI “prompting,” according to Florida State University, “refers to the process of interacting with an AI system by providing specific instructions or queries to achieve a desired outcome.”

According to workable.com, prompt engineers specialize “in developing, refining, and optimizing AI-generated text prompts to ensure they are accurate, engaging, and relevant for various applications. They also collaborate with different teams to improve the prompt generation process and overall AI system performance.” 

Healthcare institutions are getting more serious about using AI to improve daily workflows and clinical care, including in the clinical laboratory and pathology departments. But adopting the new technology can be disruptive. To ensure the implementation goes smoothly, hospitals are now seeking prompt engineers to guide the organization’s strategy for using AI. 

When Boston Children’s Hospital leaders set out to find such a person, they looked for an individual who had “a clinical background [and] who knows how to use these tools. Someone who had experience coding for large language models and natural language processing, but who could also understand clinical language,” according to MedPage Today.

“We got many, many applications, some really impressive people, but we were looking for a specific set of skills and background,” John Brownstein, PhD, Chief Innovation Officer at Boston Children’s Hospital and Professor of Biomedical Informatics at Harvard Medical School, told MedPage Today.

“It was not easy to find [someone]—a bit of a unicorn-type candidate,” noted Brownstein, who is also a medical contributor to ABC News.

After a four-month search, the hospital hired Dinesh Rai, MD, emergency room physician and AI engineer, for the position. According to Brownstein, Rai had “actually practiced medicine, lived in a clinical environment,” and had “successfully launched many [AI] applications on top of large language models,” MedPage Today reported.

“Some of the nuances I bring to the table in terms of being a physician and having worked clinically and understanding really deeply the clinical workflows and how we can implement the [AI] technology—where its limits are, where it can excel, and the quickest way to get things [done],” Dinesh Rai, MD (above), told MedPage Today. “I’m happy to be able to help with all of that.” Hospital clinical laboratory and pathology managers may soon be engaging with prompt engineers to ensure the smooth use of AI in their departments. (Photo copyright: LinkedIn.)

Prompt Engineers are like F1 Drivers

“It’s kind of like driving a car, where basically anyone can drive an automatic car, and anyone can go onto ChatGPT, write some text, and get a pretty solid response,” said Rai, describing the act of AI prompting to MedPage Today.

Then, there are “people who know how to drive manual, and there are people who will know different prompting techniques, like chain-of-thought or zero-shot prompting,” he added. “Then you have those F1 drivers who are very intimate with the mechanics of their car, and how to use it most optimally.”
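The two techniques Rai names differ mainly in how the prompt text is framed. Here is a minimal sketch in Python; the clinical question and the template wording are hypothetical illustrations, not prompts from the article:

```python
# Illustrative prompt templates for the two techniques named in the article.
# The clinical question and the exact template wording are made-up examples.

question = "A CBC shows WBC 18,000/uL with a left shift. What should be considered?"

# Zero-shot prompting: the model is asked directly, with no worked examples
# and no instructions about how to reason.
zero_shot_prompt = (
    f"Answer the following clinical question.\n\n"
    f"Question: {question}\nAnswer:"
)

# Chain-of-thought prompting: the model is asked to reason step by step
# before answering, which tends to improve multi-step reasoning tasks.
chain_of_thought_prompt = (
    f"Answer the following clinical question. Work through the relevant "
    f"findings step by step before stating your conclusion.\n\n"
    f"Question: {question}\nLet's think step by step:"
)

print(zero_shot_prompt)
print(chain_of_thought_prompt)
```

Either string would then be sent to a large language model; only the framing changes, which is why prompt engineering is less about code and more about knowing which framing elicits the best behavior from a given model.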

The American Hospital Association (AHA) believes that AI “holds great promise in helping healthcare providers gain insights and improve health outcomes.” In an article titled, “How AI Is Improving Diagnostics, Decision-Making and Care,” the AHA noted that, “Although many questions remain regarding its safety, regulation, and impact, the use of AI in clinical care is no longer in its infancy and is expected to experience exponential growth in the coming years.

“AI is improving data processing, identifying patterns, and generating insights that otherwise might elude discovery from a physician’s manual effort. The next five years will be critical for hospitals and health systems to build the infrastructure needed to support AI technology, according to the recently released Futurescan 2023,” the AHA wrote.

The graphic above is taken from the American Hospital Association’s article about Futurescan’s 2023 survey results on AI in healthcare. “Healthcare executives from across the nation were asked how likely it is that by 2028 a federal regulatory body will determine that AI for clinical care delivery augmentation (e.g., assisted diagnosis and prescription, personalized medication and care) is safe for use by our hospital or health systems,” AHA stated. This would include the use of AI in clinical laboratories and pathology group practices. (Graphic copyright: American Hospital Association.)

The AHA listed the top three opportunities for AI in clinical care as:

  • Clinical Decision Tools: “AI algorithms analyze a vast amount of patient data to assist medical professionals in making more informed decisions about care.”
  • Diagnostic and Imaging: The use of AI “allows healthcare professionals to structure, index, and leverage diagnostic and imaging data for more accurate diagnoses.”
  • Patient Safety: The use of AI improves decision making and optimizes health outcomes by evaluating patient data. “Systems that incorporate AI can improve error detection, stratify patients, and manage drug delivery.”

The hiring of a prompt engineer by Boston Children’s Hospital is another example of how AI is gaining traction in clinical healthcare. According to the Futurescan 2023 survey, nearly half of hospital CEOs and strategy leaders believe that health systems will have the infrastructure in place by 2028 to successfully utilize AI in clinical decision making. 

“I’m lucky to [be] in an organization that has recognized the importance of AI as part of the future practice of medicine,” Rai told MedPage Today.

Pathologists and managers of clinical laboratories and genetic testing companies will want to track further advancements in artificial intelligence. At some point, the capabilities of future generations of AI solutions may encourage labs to hire their own prompt engineers.

—JP Schlingman

Related Information:

Why One Hospital Hired an AI Prompt Engineer

This Children’s Hospital is Integrating AI with Healthcare

How Five Healthcare Organizations Are Investing in AI for Patient Care

What is a Large Language Model (LLM)?

How AI is Improving Diagnostics, Decision-making and Care

Prompt Engineer Job Description

Chain-of-Thought Prompting

Zero-Shot Prompting

Futurescan 2023: Health Care Trends and Implications

Artificial Intelligence in the Operating Room: Dutch Scientists Develop AI Application That Informs Surgical Decision Making during Cancer Surgery

UK Study Claims AI Reading of CT Scans Almost Twice as Accurate at Grading Some Cancers as Clinical Laboratory Testing of Sarcoma Biopsies

Stanford Researchers Use Text and Images from Pathologists’ Twitter Accounts to Train New Pathology AI Model

AMA Issues Proposal to Help Circumvent False and Misleading Information When Using Artificial Intelligence in Medicine

Pathologists and clinical laboratory managers will want to stay alert to the concerns voiced by tech experts about the need to exercise caution when using generative AI to assist medical diagnoses

Even as many companies push to introduce GPT-powered (generative pre-trained transformer) solutions into various healthcare services, the American Medical Association (AMA), the World Health Organization (WHO), and many healthcare professionals are urging caution in the use of AI-powered technologies in the practice of medicine.

In June, the AMA House of Delegates adopted a proposal introduced by the American Society for Surgery of the Hand (ASSH) and the American Association for Hand Surgery (AAHS) titled, “Regulating Misleading AI Generated Advice to Patients.” The proposal is intended to help protect patients from false and misleading medical information derived from artificial intelligence (AI) tools such as GPTs.

GPTs are a core component of generative artificial intelligence, which creates text, images, and other media using generative models. These neural network models learn the patterns and structure of their input data and then produce new data with similar characteristics.

Through its proposal, the AMA has developed principles and recommendations that weigh the benefits against the potentially harmful consequences of relying on AI-generated medical advice and content in making diagnoses.

Alexander Ding, MD

“We’re trying to look around the corner for our patients to understand the promise and limitations of AI,” said Alexander Ding, MD (above), AMA Trustee and Associate Vice President for Physician Strategy and Medical Affairs at Humana, in a press release. “There is a lot of uncertainty about the direction and regulatory framework for this use of AI that has found its way into the day-to-day practice of medicine.” Clinical laboratory professionals following advances in AI may want to remain informed on the use of generative AI solutions in healthcare. (Photo copyright: American Medical Association.)

Preventing Spread of Mis/Disinformation

GPTs are “a family of neural network models that uses the transformer architecture and is a key advancement in artificial intelligence (AI) powering generative AI applications such as ChatGPT,” according to Amazon Web Services.

In addition to creating human-like text and content, GPTs can answer questions in a conversational manner. They analyze language queries and then predict high-quality responses based on their understanding of the language. GPTs can perform this task after being trained with billions of parameters on massive language datasets, generating long responses rather than just the next word in a sequence.
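The next-word mechanism described above can be sketched with a toy bigram model. Real GPTs learn these statistics with transformer networks over massive datasets, but the autoregressive loop (predict a token, append it, predict again) works the same way. Everything here, the tiny corpus included, is an illustrative stand-in:

```python
from collections import Counter, defaultdict

# Toy stand-in for a language model: a bigram table counting which token
# follows which in a tiny "training corpus." Real GPTs learn far richer
# statistics, but generation is the same loop: predict, append, repeat.
corpus = "the lab ran the test and the lab reported the result".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, max_tokens=5):
    """Autoregressively extend `start`, one most-likely token at a time."""
    tokens = [start]
    for _ in range(max_tokens):
        followers = bigrams.get(tokens[-1])
        if not followers:
            break  # no known continuation for this token
        tokens.append(followers.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the"))
```

The loop stops when it reaches a token the model has never seen followed by anything, which is the toy analogue of a model emitting an end-of-sequence token.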

“AI holds the promise of transforming medicine,” said diagnostic and interventional radiologist Alexander Ding, MD, AMA Trustee and Associate Vice President for Physician Strategy and Medical Affairs at Humana, in an AMA press release.

“We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines, and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation,” he added.

The AMA plans to work with the federal government and other appropriate organizations to advise policymakers on the optimal ways to use AI in healthcare to protect patients from misleading AI-generated data that may or may not be validated, accurate, or relevant.

Advantages and Risks of AI in Medicine

The AMA’s proposal was prompted by AMA-affiliated organizations that raised concerns about the lack of regulatory oversight for GPTs. These organizations are encouraging healthcare professionals to educate patients about the advantages and risks of AI in medicine.

“AI took a huge leap with large language model tool and generative models, so all of the work that has been done up to this point in terms of regulatory and governance frameworks will have to be treated or at least reviewed with this new lens,” Sha Edathumparampil, Corporate Vice President, Digital and Data, Baptist Health South Florida, told Healthcare Brew.

According to the AMA press release, “the current limitations create potential risks for physicians and patients and should be used with appropriate caution at this time. AI-generated fabrications, errors, or inaccuracies can harm patients, and physicians need to be acutely aware of these risks and added liability before they rely on unregulated machine-learning algorithms and tools.”

The press release also states that the organization will propose state and federal regulations for AI tools at next year’s annual meeting in Chicago.

In a July AMA podcast, AMA’s President, Jesse Ehrenfeld, MD, stressed that more must be done through regulation and development to bolster trust in these new technologies.

“There’s a lot of discomfort around the use of these tools among Americans with the idea of AI being used in their own healthcare,” Ehrenfeld said. “There was a 2023 Pew Research Center poll [that said] 60% of Americans would feel uncomfortable if their own healthcare provider relied on AI to do things like diagnose disease or recommend a treatment.”

WHO Issues Cautions about Use of AI in Healthcare

In May, the World Health Organization (WHO) issued a statement advocating for caution when implementing AI-generated large language GPT models into healthcare.

A current example of such a GPT is ChatGPT, a large language model (LLM) that enables users to refine and steer conversations toward a desired length, format, style, level of detail, and language. Organizations across industries are now utilizing GPT models for question-and-answer bots for customers, text summarization, and content generation and search features.

“Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” commented WHO in the statement.

WHO’s concerns regarding the need for prudence and oversight in the use of AI technologies include:

  • Data used to train AI may be biased, which could pose risks to health, equity, and inclusiveness.
  • LLMs generate responses that can appear authoritative and plausible, but which may be completely incorrect or contain serious errors.
  • LLMs may be trained on data for which consent may not have been given.
  • LLMs may not be able to protect sensitive data that is provided to an application to generate a response.
  • LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video that may be difficult for people to differentiate from reliable health content.

Tech Experts Recommend Caution

Generative AI will continue to evolve. Therefore, clinical laboratory professionals may want to keep a keen eye on advances in AI technology and GPTs in healthcare diagnosis.

“While generative AI holds tremendous potential to transform various industries, it also presents significant challenges and risks that should not be ignored,” wrote Edathumparampil in an article he penned for CXOTECH Magazine. “With the right strategy and approach, generative AI can be a powerful tool for innovation and differentiation, helping businesses to stay ahead of the competition and better serve their customers.”

GPTs may eventually be a boon to healthcare providers, including clinical laboratories and pathology groups. But for the moment, caution is recommended.

—JP Schlingman

Related Information:

AMA Adopts Proposal to Protect Patients from False and Misleading AI-generated Medical Advice

Regulating Misleading AI Generated Advice to Patients

AMA to Develop Recommendations for Augmented Intelligence

What is GPT?

60% of Americans Would Be Uncomfortable with Provider Relying on AI in Their Own Health Care

Navigating the Risks of Generative AI: A Guide for Businesses

Contributed: Top 10 Use Cases for AI in Healthcare

Anatomic Pathology at the Tipping Point? The Economic Case for Adopting Digital Technology and AI Applications Now

ChatGPT, AI in Healthcare and the future of Medicine with AMA President Jesse Ehrenfeld, MD, MPH

What is Generative AI? Everything You Need to Know

WHO Calls for Safe and Ethical AI for Health

GPT-3
