News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel


Florida Hospital Utilizes Machine Learning Artificial Intelligence Platform to Reduce Clinical Variation in Its Healthcare, with Implications for Medical Laboratories

Pathologists and clinical laboratory scientists may find one hospital’s use of a machine-learning platform to help improve utilization of lab tests both an opportunity and a threat

Variation in how individual physicians order, interpret, and act upon clinical laboratory test results is regularly shown by studies in peer-reviewed medical journals to be one reason why some patients get great outcomes and other patients get less-than-desirable outcomes. That is why many healthcare providers are initiating efforts to improve how physicians utilize clinical laboratory tests and other diagnostic procedures.

At Flagler Hospital, a 335-bed not-for-profit healthcare facility in St. Augustine, Fla., a new tool is being used to address variability in clinical care. It is a machine learning platform called Symphony AyasdiAI for Clinical Variation Management (AyasdiAI) from Silicon Valley-based SymphonyAI Group. Flagler is using this system to develop its own clinical order set built from clinical data contained within the hospital’s electronic health record (EHR) and financial systems.

This effort came about after clinical and administrative leadership at Flagler Hospital realized that only about one-third of its physicians regularly followed certain medical decision-making guidelines or clinical order sets. Armed with these insights, staff members decided to find a solution that reduced or removed variability from their healthcare delivery.

Reducing Variability Improves Care, Lowers Cost

Variability in physician care has been linked to increased healthcare costs and lower quality outcomes, as studies published in JAMA and JAMA Internal Medicine indicate. Such results do not bode well for healthcare providers in today’s value-based reimbursement system, which rewards increased performance and lowered costs.

“Fundamentally, what these technologies do is help us recognize important patterns in the data,” Douglas Fridsma, PhD, an expert in health informatics, standards, interoperability, and health IT strategy, and CEO of the American Medical Informatics Association (AMIA), told Modern Healthcare.

Clinical order sets are designed to be used as part of clinical decision support systems (CDSS) installed by hospitals for physicians to standardize care and support sound clinical decision making and patient safety.

However, when doctors don’t adhere to those pre-defined standards, the results can be disadvantageous, ranging from unnecessary services and tests being performed to preventable complications for patients, which may increase treatment costs.

“Over the past few decades we’ve come to realize clinical variation plays an important part in the overuse of medical care and the waste that occurs in healthcare, making it more expensive than it should be,” Michael Sanders, MD (above), Flagler’s Chief Medical Information Officer, told Modern Healthcare. “Every time we’re adding something that adds cost, we have to make sure that we’re adding value.” (Photo copyright: Modern Healthcare.)

Flagler’s AI project involved uploading clinical, demographic, billing, and surgical information to the AyasdiAI platform, which then employed machine learning to analyze the data and identify trends. Flagler’s physicians are now provided with a fuller picture of their patients’ conditions, which helps identify patients at highest risk, ensuring timely interventions that produce positive outcomes and lower costs.

How Symphony AyasdiAI Works

The AyasdiAI application utilizes a category of mathematics called topological data analysis (TDA) to cluster similar patients together and locate parallels between those groups. “We then have the AI tools generate a carepath from this group, showing all events which should occur in the emergency department, at admission, and throughout the hospital stay,” Sanders told Healthcare IT News. “These events include all medications, diagnostic tests, vital signs, IVs, procedures and meals, and the ideal timing for the occurrence of each so as to replicate the results of this group.”

Caregivers then examine the data to determine the optimal plan of care for each patient. Cost savings are figured into the overall equation when choosing a treatment plan. 

Flagler first used the AI program to examine trends among its pneumonia patients. The hospital determined that nebulizer treatments should be started as soon as possible for pneumonia patients who also have chronic obstructive pulmonary disease (COPD).

“Once we have the data loaded, we use [an] unsupervised learning AI algorithm to generate treatment groups,” Sanders told Healthcare IT News. “In the case of our pneumonia patient data, Ayasdi produced nine treatment groups. Each group was treated similarly, and statistics were given to us to understand that group and how it differed from the other groups.”
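Ayasdi’s topological data analysis pipeline is proprietary, but the general idea Sanders describes (letting an unsupervised algorithm partition patients into groups that were treated similarly) can be illustrated with a much simpler method. The following is a toy k-means sketch over invented “patient” features, not the AyasdiAI algorithm:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: assign each row of X to one of k clusters."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distance from every point to every cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Invented feature matrix: [length of stay, cost ($000s), lab-test count].
# Two obviously distinct patient populations for illustration.
rng = np.random.default_rng(1)
patients = np.vstack([
    rng.normal([3, 8, 5], 0.5, size=(40, 3)),    # short, low-cost stays
    rng.normal([9, 30, 20], 0.5, size=(40, 3)),  # long, high-cost stays
])
groups = kmeans(patients, k=2)
```

In a real deployment the features would come from the EHR and financial systems, and a clinical team would then review each discovered group, as Flagler’s physicians did, to decide which group’s carepath to standardize on.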

Armed with this information, the hospital achieved an 80% greater physician adherence to order sets for pneumonia patients. This resulted in a savings of $1,350 per patient and reduced the readmission rates for pneumonia patients from 2.9% to 0.4%, reported Modern Healthcare.

The development of a machine-learning platform designed to reduce variation in care (by helping physicians become more consistent at following accepted clinical care guidelines) can be considered a warning shot across the bow of the pathology profession.

This is a system that could become interposed between the pathologist in the medical laboratory and the physicians who refer specimens to the lab. Were that to happen, the deep experience and knowledge that have long made pathologists the “doctor’s doctor” would be bypassed. Physicians would stop making that first call to their pathologists, clinical chemists, and laboratory scientists to discuss a patient’s condition, consult on which test to order and how to interpret the results, and get guidance on selecting therapies and monitoring the patient’s progress.

Instead, a “smart software solution” will be inserted into the clinical workflow of physicians. This solution will automatically guide the physician to follow the established care protocol. In turn, this will give the medical laboratory the simple role of accepting a lab test order, performing the analysis, and reporting the results.

If that scenario plays out, laboratory tests could come to be seen as commodities, and hospitals, physicians, and payers would push to buy these commodity lab tests at the cheapest price.

—JP Schlingman

Related Information:

Flagler Hospital Combines AI, Physician Committee to Minimize Clinical Variation

Flagler Hospital Uses AI to Create Clinical Pathways That Enhance Care and Slash Costs

Case Study: Flagler Hospital, How One of America’s Oldest Cities Became Home to One of America’s Most Innovative Hospitals

How Using Artificial Intelligence Enabled Flagler Hospital to Reduce Clinical Variation

Florida Hospital to Save $20M Through AI-enabled Clinical Variation

The Journey from Volume to Value-Based Care Starts Here

The Science of Clinical Carepaths

Hospitals Worldwide Are Deploying Artificial Intelligence and Predictive Analytics Systems for Early Detection of Sepsis in a Trend That Could Help Clinical Laboratories, Microbiologists

Though medical laboratory testing is key to confirming sepsis, predictive analytics systems can identify early indications and alert caregivers, potentially saving lives

Medical laboratory testing has long been the key element in hospitals’ fight to reduce deaths caused by sepsis, a life-threatening complication in which the body’s response to infection injures its own organs. But clinical laboratory testing takes time, particularly if infectious agents must be cultured in the microbiology lab. And sepsis progresses so quickly that, by the time the condition is diagnosed, it is often too late to prevent the patient’s death.

To speed detection and diagnosis, several large healthcare providers are adding predictive analytics, artificial intelligence (AI), and machine-learning technologies to their efforts to reduce sepsis-related mortality.

One example is HCA Healthcare (NYSE:HCA), the for-profit corporation with 185 hospitals, 119 freestanding surgery centers, and approximately 2,000 sites of care in 21 US states and in the United Kingdom.

HCA employs an electronic information and alert system called SPOT (Sepsis Prediction and Optimization of Therapy), which is embedded in each hospital patient’s electronic health record (EHR).

SPOT receives clinical data in real time directly from monitoring equipment at the patient’s bedside and uses predictive analytics to examine the data, including medical laboratory test results. If the data indicate that sepsis is present, SPOT alerts physicians and other caregivers.

With SPOT, HCA’s physicians have been detecting sepsis in its earliest stages and saving lives. This lends support to the growing belief that AI and machine learning can improve speed to diagnosis and diagnostic accuracy, which Dark Daily has covered in multiple e-briefings.

SPOT displays its data on screens that are monitored 24/7 (shown above). The clinical data include the patient’s vital signs as well as medical laboratory test results and nursing reports. HCA says the system has been used on about 2.5 million patients and has helped save up to 8,000 lives, Business Wire reported. (Photo copyright: HCA.)

Code Sepsis

HCA began developing the software in 2016. It was initially deployed in 2018 at TriStar Centennial Medical Center, HCA’s flagship hospital in Nashville, The Tennessean reported. It is now installed in most of the hospitals owned or operated by HCA.

Michael Nottidge, MD, is ICC Division Medical Director for Critical Care at HCA Healthcare Physician Services Group, and a critical care physician at TriStar Centennial. Nottidge told The Tennessean that unlike a heart attack or stroke, “sepsis begins quietly, then builds into a dangerous crescendo.”

Since its implementation, “[SPOT] has alerted clinicians to a septic patient nearly every day, often hours sooner than they would have been detected otherwise,” Nottidge told The Tennessean.

HCA’s SPOT system uses machine learning to ingest “millions of data points on which patients do and do not develop sepsis,” according to an HCA blog post. “Those computers monitor clinical data every second of a patient’s hospitalization. When a pattern of data consistent with sepsis risk occurs, it will signal with an alert to trained technicians who call a ‘code sepsis.’”
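HCA has not published SPOT’s model, so code can only gesture at the idea of “a pattern of data consistent with sepsis risk” triggering an alert. As a stand-in, here is a minimal rule-based screen built on the classic SIRS criteria (the cutoffs below are textbook SIRS values, not SPOT’s proprietary logic):

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int
    temp_c: float
    resp_rate: int
    wbc_k: float   # white-cell count, thousands/uL (a lab result)

def sepsis_score(v: Vitals) -> int:
    """Crude SIRS-style screen: count how many readings are abnormal."""
    score = 0
    score += v.heart_rate > 90
    score += v.temp_c > 38.0 or v.temp_c < 36.0
    score += v.resp_rate > 20
    score += v.wbc_k > 12.0 or v.wbc_k < 4.0
    return score

def monitor(stream):
    """Yield an alert whenever two or more SIRS criteria are met."""
    for v in stream:
        if sepsis_score(v) >= 2:
            yield f"CODE SEPSIS: {v}"
```

A production system like SPOT differs in kind, not just degree: it is trained on millions of patient outcomes, runs continuously against live monitor feeds and lab results, and routes alerts to trained technicians rather than raising them raw.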

More Accurate than Clinicians

The federal Centers for Disease Control and Prevention (CDC) estimates that more than 250,000 Americans die from sepsis each year. The Sepsis Alliance describes the life-threatening complication as the “leading cause of death in US hospitals.”

Like most health systems, HCA has been battling sepsis for many years using guidelines and educational tools provided by the Surviving Sepsis Campaign (SSC), a joint initiative of the Society of Critical Care Medicine (SCCM) and the European Society of Intensive Care Medicine (ESICM), Modern Healthcare reported.

Early detection and treatment are key to reducing sepsis mortalities. However, a study in the journal Clinical Medicine reported that, despite recent advances in identifying at-risk patients, “there is still no molecular signature able to diagnose sepsis.”

And according to a study published in Critical Care Medicine, the survival rate is about 80% when treatment is administered in the first hour, but each hour of delay in treatment decreases the average survival rate by 7.6%.
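Read literally (one hedged interpretation: treating the 7.6% figure as percentage points lost per hour after the first), those numbers imply a steep decline:

```python
def expected_survival(delay_hours: float) -> float:
    """Illustrative reading of the Critical Care Medicine figures:
    ~80% survival with treatment in the first hour, falling by ~7.6
    percentage points for each additional hour of delay."""
    delay_past_first_hour = max(0.0, delay_hours - 1)
    return max(0.0, 80.0 - 7.6 * delay_past_first_hour)

# e.g., a six-hour delay implies roughly 80 - 7.6 * 5 = 42% survival
```

On this reading, an eight-to-ten-hour head start of the kind Perlin describes below would span most of the difference between a likely recovery and a likely death.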

In an interview with Becker’s Hospital Review, HCA’s Chief Medical Officer and President of Clinical Services, Jonathan Perlin, MD, PhD, touted SPOT’s reliability, having “very few false positives. In fact, it is more than 50% more accurate at excluding patients who don’t have sepsis than even the best clinician.”

Perlin also told The Tennessean that SPOT can detect sepsis “about eight to 10 hours before clinicians ever could.”

“It’s no coincidence that we call the technology ‘SPOT’—a common name for a child’s dog—because it really does act as our sepsis sniffer,” said Jonathan Perlin, MD, PhD (above), in the HCA blog post. “The whole point is for it to sniff smoke and put the ‘fire’ out before it becomes catastrophic. With SPOT, we’re identifying at least one-third more cases of sepsis that would not previously have come to caregivers’ attention until it was too late.” (Photo copyright: Nashville Business Journal.)

Other Healthcare Providers Using AI-Enabled Early-Warning Tools

In November 2018, the emergency department at Duke University Hospital in Durham, N.C., began a pilot program to test an AI-enabled system dubbed Sepsis Watch, reported Health Data Management. The software, developed by the Duke Institute for Health Innovation, “was trained via deep learning to identify cases based on dozens of variables, including vital signs, medical laboratory test results, and medical histories,” reported IEEE Spectrum. “In operation, it pulls information from patients’ medical records every five minutes to evaluate their conditions, offering intensive real-time analysis that human doctors can’t provide.”

Earlier this year, Sentara Norfolk General Hospital in Norfolk, Va., installed an AI-enabled sepsis-alert system developed by Jvion, a maker of predictive analytics software. “The new AI tool grabs about 4,500 pieces of data about a patient that live in the electronic record—body temperature, heart rate, blood tests, past medical history, gender, where they live and so on—and runs it all through an algorithm that assesses risk for developing sepsis,” reported The Virginian Pilot.
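Jvion’s algorithm is proprietary, but the shape of such a risk model (thousands of EHR-derived features combined into a single probability) is commonly a logistic function over weighted inputs. A hypothetical two-feature sketch, with invented weights and features purely for illustration:

```python
import math

def logistic_risk(features, weights, bias=0.0):
    """Map a weighted sum of patient features to a 0-1 risk probability.
    Real products use thousands of features and learned weights."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Invented features: [temperature deviation from 37 C, beats/min above 90]
low = logistic_risk([0.0, 0.0], weights=[1.2, 0.05])    # baseline patient
high = logistic_risk([2.5, 40.0], weights=[1.2, 0.05])  # febrile, tachycardic
```

The clinically hard part is not this arithmetic but choosing and cleaning the ~4,500 inputs, learning the weights from outcome data, and setting an alert threshold that catches sepsis early without drowning caregivers in false alarms.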

Geisinger Health System, which operates 13 hospitals in Pennsylvania and New Jersey, is working on its own system to identify sepsis risk. It announced in a September news release that it had teamed with IBM to develop a predictive model using a decade’s worth of data from thousands of Geisinger patients.

“The model helped researchers identify clinical biomarkers associated with higher rates of mortality from sepsis by predicting death or survival of patients in the test data,” Geisinger stated in the news release. “The project revealed descriptive and clinical features such as age, prior cancer diagnosis, decreased blood pressure, number of hospital transfers, and time spent on vasopressor medicines, and even the type of pathogen, all key factors linked to sepsis deaths.”

So, can artificial intelligence and predictive analytics added to medical laboratory test results help prevent sepsis-related deaths in all hospitals? Perhaps so. Systems like SPOT, Sepsis Watch, and others certainly are logging impressive results.

It may not be long before similar technologies are helping pathologists, microbiologists, and clinical laboratories achieve improved diagnostic and test accuracy as well.

—Stephen Beale

Related Information:

HCA Healthcare Using Algorithm Driven Technology to Detect Sepsis Early and Help Save 8,000 Lives

Surviving Sepsis: Young Mother and Caregivers Raise Awareness of ‘Silent Killer’

HCA Healthcare Technology Saved Nurse’s Life by Spotting Deadly Sepsis Signs

HCA Uses Predictive Analytics to Spot Sepsis Early

Duration of Hypotension before Initiation of Effective Antimicrobial Therapy Is the Critical Determinant of Survival in Human Septic Shock

SPOT: How HCA is “Sniffing Out” Sepsis Early

HCA Hospitals to Expand Computer Algorithm That Detects Sepsis and Saves Lives

Diagnosis and Management of Sepsis

Meet SPOT: HCA Healthcare’s ‘Smoke Detector’ for Sepsis

SPOT On: New Decision Support Tool Reduces Sepsis Mortality by 22.9%

HCA Healthcare Says Analytics System Can Detect Sepsis Quickly

HCA Develops Artificial Intelligence Tool for Early Sepsis Detection

To Catch A Killer: Electronic Sepsis Alert Tools Reaching A Fever Pitch?

Detecting Sepsis Without Alert Fatigue

Could Biases in Artificial Intelligence Databases Present Health Risks to Patients and Financial Risks to Healthcare Providers, including Medical Laboratories?

Clinical laboratories working with AI should be aware of ethical challenges being pointed out by industry experts and legal authorities

Experts are voicing concerns that using artificial intelligence (AI) in healthcare could present ethical challenges that need to be addressed. They say databases and algorithms may introduce bias into the diagnostic process, and that AI may not perform as intended, posing a potential for patient harm.

If true, the issues raised by these experts would have major implications for how clinical laboratories and anatomic pathology groups might use artificial intelligence. For that reason, medical laboratory executives and pathologists should be aware of possible drawbacks to the use of AI and machine-learning algorithms in the diagnostic process.

Is AI Underperforming?

AI’s ability to improve diagnoses, precisely target therapies, and leverage healthcare data is predicted to be a boon to precision medicine and personalized healthcare.

For example, Accenture (NYSE:ACN) says that hospitals will spend $6.6 billion on AI by 2021. This represents an annual growth rate of 40%, according to a report from the Dublin, Ireland-based consulting firm, which states, “when combined, key clinical health AI applications can potentially create $150 billion in annual savings for the United States healthcare economy by 2026.”

But are healthcare providers too quick to adopt AI?

Accenture defines AI as a “constellation of technologies from machine learning to natural language processing that allows machines to sense, comprehend, act, and learn.” However, some experts say AI is not performing as intended, and that it introduces biases in healthcare worthy of investigation.

Keith Dreyer, DO, PhD, is Chief Data Science Officer at Partners Healthcare and Vice Chairman of Radiology at Massachusetts General Hospital (MGH). At a World Medical Innovation Forum on Artificial Intelligence covered by HealthITAnalytics, he said, “There are currently no measures to indicate that a result is biased or how much it might be biased. We need to explain the dataset these answers came from, how accurate we can expect them to be, where they work, and where they don’t work. When a number comes back, what does it really mean? What’s the difference between a seven and an eight or a two?” (Photo copyright: Healthcare in Europe.)

What Goes in Limits What Comes Out

Could machine learning lead to machine decision-making that puts patients at risk? Some legal authorities say yes, especially when computer algorithms are built on limited data sources and questionable methods, lawyers warn.

Pilar Ossorio, PhD, JD, Professor of Law and Bioethics at the University of Wisconsin Law School (UW), told Health Data Management (HDM) that genomics databases, such as those built from Genome-Wide Association Studies (GWAS), house data predominantly about people of Northern European descent, and that could be a problem.

How can AI provide accurate medical insights for people when the information going into databases is limited in the first place? Ossorio pointed to lack of diversity in genomic data. “There are still large groups of people for whom we have almost no genomic data. This is another way in which the datasets that you might use to train your algorithms are going to exclude certain groups of people altogether,” she told HDM.

She also sounded the alarm about making decisions about women’s health when data driving them are based on studies where women have been “under-treated compared with men.”

“This leads to poor treatment, and that’s going to be reflected in essentially all healthcare data that people are using when they train their algorithms,” Ossorio said during a Machine Learning for Healthcare (MLHC) conference covered by HDM.
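Ossorio’s point — that a model trained on data from one population can fail quietly on another — is easy to demonstrate with a synthetic example. Here a decision threshold learned only from “group A” works well for A but flags many healthy members of “group B,” whose healthy baseline simply differs (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical biomarker: disease raises it by +2 in every group,
# but group B's healthy baseline is higher than group A's.
def sample(n, baseline):
    healthy = rng.normal(baseline, 0.5, n)
    sick = rng.normal(baseline + 2, 0.5, n)
    return healthy, sick

a_healthy, a_sick = sample(500, baseline=1.0)
b_healthy, b_sick = sample(500, baseline=2.0)

# "Train" on group A only: midpoint threshold between its class means.
threshold = (a_healthy.mean() + a_sick.mean()) / 2   # about 2.0

def error_rate(healthy, sick):
    false_pos = (healthy > threshold).mean()
    false_neg = (sick <= threshold).mean()
    return (false_pos + false_neg) / 2

# The A-trained threshold is accurate for A, but roughly half of
# healthy group-B patients fall above it and would be misclassified.
```

The failure is invisible if the model is only ever validated on data resembling its training set, which is exactly the scenario Ossorio warns about for underrepresented populations.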

How Bias Happens 

Bias can enter healthcare data in three forms: by humans, by design, and in its usage. That’s according to David Magnus, PhD, Director of the Stanford Center for Biomedical Ethics (SCBE) and Senior Author of a paper published in the New England Journal of Medicine (NEJM) titled, “Implementing Machine Learning in Health Care—Addressing Ethical Challenges.”

The paper’s authors wrote, “Physician-researchers are predicting that familiarity with machine-learning tools for analyzing big data will be a fundamental requirement for the next generation of physicians and that algorithms might soon rival or replace physicians in fields that involve close scrutiny of images, such as radiology and anatomical pathology.”

In a news release, Magnus said, “You can easily imagine that the algorithms being built into the healthcare system might be reflective of different, conflicting interests. What if the algorithm is designed around the goal of making money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”

In addition to the possibility of algorithm bias, the authors of the NEJM paper have other concerns about AI affecting healthcare providers:

  • “Physicians must adequately understand how algorithms are created, critically assess the source of the data used to create the statistical models designed to predict outcomes, understand how the models function and guard against becoming overly dependent on them.
  • “Data gathered about patient health, diagnostics, and outcomes become part of the ‘collective knowledge’ of published literature and information collected by healthcare systems and might be used without regard for clinical experience and the human aspect of patient care.
  • “Machine-learning-based clinical guidance may introduce a third-party ‘actor’ into the physician-patient relationship, challenging the dynamics of responsibility in the relationship and the expectation of confidentiality.”    
“We need to be cautious about caring for people based on what algorithms are showing us. The one thing people can do that machines can’t do is step aside from our ideas and evaluate them critically,” said Danton Char, MD, Lead Author and Assistant Professor of Anesthesiology, Perioperative, and Pain Medicine at Stanford, in the news release. “I think society has become very breathless in looking for quick answers,” he added. (Photo copyright: Stanford Medicine.)

Acknowledge Healthcare’s Differences

Still, the Stanford researchers acknowledge that AI can benefit patients, and that healthcare leaders can learn from other industries, such as car companies, which have already test-driven AI.

“Artificial intelligence will be pervasive in healthcare in a few years,” said Nigam Shah, PhD, co-author of the NEJM paper and Associate Professor of Medicine at Stanford, in the news release. He added that healthcare leaders need to be aware of the “pitfalls” that have happened in other industries and be cognizant of data.

“Be careful about knowing the data from which you learn,” he warned.

AI’s ultimate role in healthcare diagnostics is not yet fully known. Nevertheless, it behooves clinical laboratory leaders and anatomic pathologists who are considering using AI to address issues of quality and accuracy in the lab data they are generating, and to be aware of potential biases in the data collection process.

—Donna Marie Pocius

Related Information:

Accenture: Healthcare Artificial Intelligence

Could Artificial Intelligence Do More Harm than Good in Healthcare?

AI Machine Learning Algorithms Are Susceptible to Biased Data

Implementing Machine Learning in Healthcare—Addressing Ethical Challenges

Researchers Say Use of AI in Medicine Raises Ethical Questions

Artificial Intelligence Systems, Like IBM’s Watson, Continue to Underperform When Compared to Oncologists and Anatomic Pathologists

Though the field of oncology has some AI-driven tools, overall, physicians report the reality isn’t living up to the hype

Artificial intelligence (AI) has been heavily touted as the next big thing in healthcare for nearly a decade. Much ink has been devoted to the belief that AI would revolutionize how doctors treat patients, bring about a new age of point-of-care clinical decision support tools and clinical laboratory diagnostic tests, and enable remote telemedicine to render the distance between provider and patient inconsequential.

But nearly 10 years after IBM’s Watson defeated two human contestants on the game show Jeopardy, some experts believe AI has under-delivered on the promise of a brave new world in medicine, noted IEEE Spectrum, a website and magazine dedicated to applied sciences and engineering.

In the years since Watson’s victory on Jeopardy, IBM (NYSE:IBM) has announced almost 50 partnerships, collaborations, and projects intended to develop AI-enabled tools for medical purposes. Most of these projects did not bear fruit.

However, IBM’s most publicized medical partnerships revolved around the field of oncology and the expectation that Watson could analyze data and patients’ records and help oncologists devise personalized and effective cancer treatment plans. Success in helping physicians more accurately diagnose different types of cancer would require anatomic pathologists to understand this new role for Watson and how the pathology profession should respond to it, strategically and tactically.

But Watson and other AI systems often struggled to understand the finer points of medical text. “The information that physicians extract from an article, that they use to change their care, may not be the major point of the study,” Mark Kris, MD, Medical Oncologist at Memorial Sloan Kettering Cancer Center, told IEEE Spectrum. “Watson’s thinking is based on statistics, so all it can do is gather statistics about main outcomes. But doctors don’t work that way.” 

Ultimately, IEEE Spectrum reported, “even today’s best AI struggles to make sense of complex medical information.”

“Reputationally, I think they’re in some trouble,” Robert Wachter, MD, Professor and Chair, Department of Medicine, University of California, San Francisco, told IEEE Spectrum. “They came in with marketing first, product second, and got everybody excited. Then the rubber hit the road. This is an incredibly hard set of problems, and IBM, by being first out, has demonstrated that for everyone else.”

“It’s a difficult task to inject AI into healthcare, and it’s a challenge. But we’re doing it,” John Kelly III, PhD, (above), Executive Vice President, IBM, who previously oversaw IBM’s Watson platform as Senior Vice President, Cognitive Solutions and IBM Research, told IEEE Spectrum. “We’re continuing to learn, so our offerings change as we learn.” (Photo copyright: IBM.)

Over Promises and Under Deliveries

In 2016, MD Anderson Cancer Center canceled a project with IBM Watson after spending $62 million on it, Becker’s Hospital Review reported. That project was supposed to use natural language processing (NLP) to develop personalized treatment plans for cancer patients by comparing databases of treatment options with patients’ electronic health records.

“We’re doing incredibly better with NLP than we were five years ago, yet we’re still incredibly worse than humans,” Yoshua Bengio, PhD, Professor of Computer Science at the University of Montreal, told IEEE Spectrum.

The researchers hoped that Watson would be able to examine variables in patient records and keep current on new information by scanning and interpreting articles about new discoveries and clinical trials. But Watson was unable to interpret the data as humans can.

IEEE Spectrum reported that “The realization that Watson couldn’t independently extract insights from breaking news in the medical literature was just the first strike. Researchers also found that it couldn’t mine information from patients’ electronic health records as they’d expected.”

Researchers Lack Confidence in Watson’s Results

In 2018, the team at MD Anderson published a paper in The Oncologist outlining their experiences with Watson and cancer care. They found that their Watson-powered tool, called Oncology Expert Advisor, had “variable success in extracting information from text documents in medical records. It had accuracy scores ranging from 90% to 96% when dealing with clear concepts like diagnosis, but scores of only 63% to 65% for time-dependent information like therapy timelines.”

A team of researchers at the University of Nebraska Medical Center (UNMC) has experimented with Watson for genomic analysis of breast cancer patients. After treating the patients, the scientists identify mutations using their own tools, then enter that data into Watson, which can quickly pick out some of the mutations that have drug treatments available.

“But the unknown thing here is how good are the results,” Babu Guda, PhD, Professor and Chief Bioinformatics and Research Computing Officer at UNMC, told Gizmodo. “There is no way to validate what we’re getting from IBM is accurate unless we test the real patients in an experiment.” 

Guda added that IBM needs to publish the results of studies and tests performed on thousands of patients if they want scientists to have confidence in Watson tools.

“Otherwise it’s very difficult for researchers,” he said. “Without publications, we can’t trust anything.”

Computer Technology Evolving Faster than AI Can Utilize It

The inability of Watson to produce results for medical uses may be exacerbated by the fact that the cognitive computing technology that was cutting edge back in 2011 is no longer state of the art today.

IEEE Spectrum noted that professionals in both computer science and medicine believe that AI has massive potential for improving and enhancing the field of medicine. To date, however, most of AI’s successes have occurred in controlled experiments with only a few AI-based medical tools being approved by regulators. IBM’s Watson has only had a few successful ventures and more research and testing is needed for Watson to prove its value to medical professionals.

“As a tool, Watson has extraordinary potential,” Kris told IEEE Spectrum. “I do hope that the people who have the brainpower and computer power stick with it. It’s a long haul, but it’s worth it.”

Meanwhile, the team at IBM Watson Health continues to forge ahead. In February 2019, Healthcare IT News interviewed Kyu Rhee, MD, Vice President and Chief Health Officer at IBM Corp. and IBM Watson Health. He outlined the directions IBM Watson Health would emphasize at the upcoming annual meeting of the Healthcare Information and Management Systems Society (HIMSS).

IBM Watson Health is “using our presence at HIMSS19 this year to formally unveil the work we’ve been doing over the past year to integrate AI technology and smart, user-friendly analytics into the provider workflow, with a particular focus on real-world solutions for providers to start tackling these types of challenges head-on,” stated Rhee. “We will tackle these challenges by focusing our offerings in three core areas. First, is management decision support. These are the back-office capabilities that improve operational decisions.”

Clinical laboratory leaders and anatomic pathologists may or may not agree about how Watson is able to support clinical care initiatives. But it’s important to note that, though AI’s progress toward its predicted potential has been slow, it continues nonetheless and is worth watching.

—JP Schlingman

Related Information:

How IBM Watson Overpromised and Underdelivered on AI Health Care

Why Everyone is Hating on IBM Watson – Including the People Who Helped Make It

Memorial Sloan Kettering Trains IBM Watson to Help Doctors Make Better Cancer Treatment Choices

4 Reasons MD Anderson Put IBM Watson On Hold

IBM Watson Health’s Chief Health Officer Talks Healthcare Challenges and AI

Applying Artificial Intelligence to Address the Knowledge Gaps in Cancer Care

After Taking on Jeopardy Contestants, IBM’s Watson Supercomputer Might Be a Resource for Pathologists

Will IBM’s ‘Watson on Oncology’ Give Oncologists and Pathologists a Useful Tool for Diagnosing and Treating Various Cancers?

IBM’s Watson Not Living Up to Hype, Wall Street Journal and Other Media Report; ‘Dr. Watson’ Has Yet to Show It Can Improve Patient Outcomes or Accurately Diagnose Cancer

Can Artificial Intelligence Diagnose Skin Cancers More Accurately than Anatomic Pathologists? Heidelberg University Researchers Say “Yes”

A new study conducted by an international team of researchers suggests that artificial intelligence (AI) may be better than highly trained humans at detecting certain skin cancers

Artificial intelligence (AI) has been working its way into health technology for several years and, so far, AI tools have been a boon to physicians and health networks. Until now, though, the general view was that it was a supplemental tool for diagnosticians, not a replacement for them. But what if the AI was better at detecting disease than humans, including anatomic pathologists?

Researchers in the Department of Dermatology at Heidelberg University in Germany have concluded that AI can be more accurate at identifying certain cancers. The challenge they designed for their study involved skin biopsies and dermatologists.

They pitted a deep-learning convolutional neural network (CNN) against 58 dermatologists from 17 countries to determine which was more accurate at detecting malignant melanomas—humans or AI. A CNN is an artificial neural network modeled on the biological processes by which neurons in the brain connect to one another and respond to what the eye sees.
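The core operation a CNN repeats many times over, with filters learned from training data, can be sketched in a few lines of plain Python. This is an illustrative toy only, not the network used in the study; the image, kernel, and function name are invented for the example.

```python
def conv2d(image, kernel):
    """Slide a small kernel across an image, taking a weighted sum at
    each position. A CNN layer applies many such learned kernels, so
    early layers respond to edges and later layers to larger patterns."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

# A tiny 4x4 "image" with a dark left half and a bright right half,
# convolved with a hand-built vertical-edge detector.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1]]
response = conv2d(image, kernel)
# The response peaks exactly where dark pixels meet bright ones.
```

In a real CNN the kernel values are not hand-built like this edge detector; they are adjusted during training so that the network's final output discriminates, for example, melanoma from benign nevi.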

The CNN won.

“For the first time we compared a CNN’s diagnostic performance with a large international group of 58 dermatologists, including 30 experts. Most dermatologists were outperformed by the CNN. Irrespective of any physicians’ experience, they may benefit from assistance by a CNN’s image classification,” the report noted.

The researchers published their report in the Annals of Oncology, a peer-reviewed medical journal published by Oxford University Press that is the official journal of the European Society for Medical Oncology.

“I expected only a performance on an even level with the physicians. The outperformance even of the average experienced and trained dermatologists was a major surprise,” Holger Haenssle, PhD, Professor of Dermatology at Heidelberg University and one of the authors of the study, told Healthline. Anatomic pathologists will want to follow the further development of this research and its associated diagnostic technologies. (Photo copyright: University of Heidelberg.)

Does AI Tech Have Superior Visual Acuity Compared to Human Eyes?

The dermatologists who participated in the study had varying degrees of experience in dermoscopy, also known as dermatoscopy. Thirty of the doctors had more than five years’ experience and were considered expert level. Eleven were considered “skilled,” with two to five years’ experience. The remaining 17 were termed beginners, with less than two years’ experience.

To perform the study, the researchers first compiled a set of 100 dermoscopic images showing melanomas and benign moles called nevi. Dermoscopes (or dermatoscopes) create images using a magnifying glass and a light source pressed against the skin. The resulting magnified, high-resolution images allow for easier, more accurate diagnoses than inspection with the naked eye.

During the first stage of the research, the dermatologists were asked to diagnose whether a lesion was melanoma or benign by looking at the images with their naked eyes. They also were asked to render their opinions for any needed action, such as surgery and follow-up care based on their diagnoses.

In this first stage, the dermatologists identified, on average, 86.6% of the melanomas and 71.3% of the benign moles. The more experienced doctors identified 89% of the melanomas, slightly higher than the group average.

The researchers also showed 300 images of malignant and benign skin lesions to the CNN. The AI accurately identified 95% of the melanomas by analyzing the images.

“The CNN missed fewer melanomas, meaning it had a higher sensitivity than the dermatologists, and it misdiagnosed fewer benign moles as malignant melanoma, which means it had a higher specificity. This would result in less unnecessary surgery,” Haenssle told CBS News.
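The sensitivity and specificity Haenssle describes reduce to simple ratios over a confusion matrix. The sketch below is illustrative only; the counts are hypothetical, chosen to roughly match the dermatologists’ first-stage averages reported above, and are not data from the study.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity: fraction of true melanomas flagged as malignant
    # (melanomas missed show up as false negatives, fn).
    # Specificity: fraction of benign moles correctly left alone
    # (benign moles called malignant show up as false positives, fp,
    # which is what leads to unnecessary surgery).
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical reader scoring 100 melanomas and 100 benign nevi,
# roughly matching the dermatologists' averages (86.6% / 71.3%).
sens, spec = sensitivity_specificity(tp=87, fn=13, tn=71, fp=29)
```

On these numbers sensitivity is 0.87 and specificity 0.71; the CNN’s advantage in the study was that it pushed both ratios higher at the same time, rather than trading one for the other.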

In a later part of the research, the dermatologists were shown the images a second time, along with clinical information about the patients, including age, gender, and location of the lesion. They were again instructed to make diagnoses and projected care decisions. With the additional information, the doctors’ average detection of melanomas increased to 88.9%, and their recognition of benign moles increased to 75.7%, still below the results of the CNN.

These findings suggest that the visual pattern recognition of AI technology could be a meaningful tool to help physicians and researchers diagnose certain cancers.

“In the future, I think AI will be integrated into practice as a diagnostic aide, particularly in primary care, to support the decision to excise a lesion, refer, or otherwise to reassure that it is benign,” Victoria Mar, PhD, an Adjunct Senior Lecturer in the Department of Public Health and Preventative Medicine at Australia’s Monash University, told Healthline.

“There is the potential for AI technology to be integrated with 2D or 3D skin imaging systems, which means that the majority of benign lesions would be already filtered by the machine, so that we can spend more time concentrating on the difficult or more concerning lesions,” she said. “To me, this means a more productive interaction with the patient, where we can focus on appropriate management and provide more streamlined care.”

AI Performs Well in Other Studies Involving Skin Biopsies

This study is not the only research suggesting that machines may rival humans at diagnosing some cancers from images. Last year, computer scientists at Stanford University performed similar research and found comparable results. For that study, the researchers created and trained an algorithm to visually diagnose potential skin cancers from a database of skin images. They then showed photos of skin lesions to 21 dermatologists and asked for diagnoses based on the images. The accuracy of their AI matched the performance of the doctors in diagnosing skin cancer from the viewed images.

And in 2017, Dark Daily reported on three genomic companies developing AI/facial recognition software that could help anatomic pathologists diagnose rare genetic disorders. (See, “Genomic Companies Collaborate to Develop Facial Analysis Technology Pathologists Might Eventually Use to Diagnose Rare Genetic Disorders,” August 7, 2017.)

While many dermatologists read patient biopsies on their own, they also refer high volumes of skin biopsies to anatomic pathologists. A technology that can accurately diagnose skin cancers could potentially impact the workload received by clinical laboratories and anatomic pathology groups.

—JP Schlingman

Related Information:

Dermatologists Hate Him! Meet the Skin-cancer Detecting Robot

Man Against Machine: Diagnostic Performance of a Deep Learning Convolutional Neural Network for Dermoscopic Melanoma Recognition in Comparison to 58 Dermatologists

AI Better than Dermatologists at Detecting Skin Cancer, Study Finds

AI May Be Better at Detecting Skin Cancer than Your Derm

Deep Learning Algorithm Does as Well as Dermatologists in Identifying Skin Cancer

Genomic Companies Collaborate to Develop Facial Analysis Technology Pathologists Might Eventually Use to Diagnose Rare Genetic Disorders

 
