As the cancer registry expands, it will become increasingly useful to anatomic pathologists, histopathologists, oncologists, and even clinical laboratories
Oncologists, histopathologists, anatomic pathologists, and other cancer physicians now have a powerful new Wikipedia-style tumor registry to help them with their diagnoses and in educating patients on their specific types of cancer. Clinical laboratory managers may find it useful to understand the value of this searchable database, and it can help their staff pathologists as well.
Free to use by both physicians and patients, the World Tumor Registry (WTR) is designed “to minimize diagnostic errors by giving doctors a searchable online database of cancers that have been collected and categorized with cellular images collected from around the world,” the Pittsburgh Post-Gazette reported.
Prompt, accurate cancer diagnoses offer cancer patients the best chance for optimal treatment outcomes. However, many medical professionals around the globe do not have the training and resources to offer superior cancer diagnoses. That deficiency can translate to inferior treatment options and lower survival rates among cancer patients.
To help improve cancer diagnoses, pathologist Yuri E. Nikiforov, MD, PhD, Division Director, Molecular and Genomic Pathology, Vice Chair of the Department of Pathology, and Professor of Pathology, University of Pittsburgh, developed the WTR to provide educational and practical resources for individuals and organizations involved in cancer research.
Officially announced at the United States and Canadian Academy of Pathology (USCAP) annual convention, the WTR is an open-access catalog of digital microscopic images of human cancer types and subtypes.
Lower technology costs and faster internet access are among the developments enabling this effort.
“We are creating sort of a Wikipedia for cancer images,” said Alyaksandr V. Nikitski, MD, PhD (above), Research Assistant Professor of Pathology, Division of Molecular and Genomic Pathology at the University of Pittsburgh School of Medicine and Administrative Director of the WTR, in an exclusive interview with Dark Daily. “Anyone in the world, if they can access the internet, can look at the well-annotated, diagnostic digital slides of cancer,” said Nikitski. Clinical laboratories may also find this new pathology tool useful. (Photo copyright: Alyaksandr V. Nikitski)
Minimizing Diagnostic Errors
Based in Pittsburgh, the WTR is freely available to anyone for viewing digital pathology slides of known cancer tumors as well as borderline and questionable cases. On the website, individuals can search for pictures of tumors in the registry by diagnosis, specific cohorts, and by microscopic features. Individuals may search further by tumor type and subtype to view images of related tumors.
According to the WTR website, the mission of the nonprofit “is to minimize diagnostic errors, eliminate inequality in cancer recognition, diagnosis, and treatment in diverse populations, and improve outcomes by increasing access to the diagnostic pathology expertise and knowledge of microscopic characteristics of cancers that occur in different geographic, environmental, and socio-economic settings.”
This new comprehensive initiative will eventually encompass cancer images from all over the world.
“Let’s assume that I am a pathologist or a trainee who has little experience, or I don’t have access to collections of atypical tumors,” Nikitski explained. “I can view tumor collections online [in the WTR database] and check how typical and rare tumors look in various geographic regions and environmental settings.”
Once an image of a slide is selected, users will then receive a brief case history of the tumor in addition to such data as the age of the patient, their geographic location, sex, family history of the disease, and the size and stage of the tumor.
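For readers who think in data terms, here is a minimal sketch of what such a case record could look like as a structured object. The field names are illustrative assumptions based on the data listed above, not the WTR’s actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TumorCaseRecord:
    """Hypothetical structure mirroring the metadata the WTR displays with each slide."""
    diagnosis: str
    case_history: str
    patient_age: int
    sex: str
    geographic_location: str
    family_history_of_disease: bool
    tumor_size_cm: Optional[float] = None
    tumor_stage: Optional[str] = None
```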
Increasing Probability of Correct Diagnosis
Pathologists and clinicians may also estimate the probability of a particular diagnosis by searching under the microscopic features of the database. This feature uses a classifier known as PathDxFinder, which asks users to compare a slide from their own laboratory against slides in the database on a set of microscopic criteria.
After answering those questions, the user presses the “predict diagnosis” button to receive the probability of cancer and the most likely diagnosis based on the answers provided in the questionnaire.
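The WTR has not published how PathDxFinder computes its probabilities. Purely to illustrate how yes/no answers about microscopic features can be turned into a ranked probability of diagnosis, here is a naive Bayes-style sketch; the feature names, likelihood values, and diagnoses are invented for the example.

```python
# Hypothetical questionnaire-based probability estimator (NOT PathDxFinder's actual logic).
# P(feature present | diagnosis) values below are invented for illustration.
LIKELIHOODS = {
    "papillary_thyroid_carcinoma": {"nuclear_grooves": 0.85, "psammoma_bodies": 0.50},
    "follicular_adenoma":          {"nuclear_grooves": 0.05, "psammoma_bodies": 0.02},
}
PRIORS = {"papillary_thyroid_carcinoma": 0.5, "follicular_adenoma": 0.5}

def predict_diagnosis(answers: dict[str, bool]) -> dict[str, float]:
    """Return normalized probabilities for each diagnosis given yes/no feature answers."""
    scores = {}
    for diagnosis, prior in PRIORS.items():
        p = prior
        for feature, present in answers.items():
            likelihood = LIKELIHOODS[diagnosis].get(feature, 0.5)
            p *= likelihood if present else (1.0 - likelihood)
        scores[diagnosis] = p
    total = sum(scores.values())
    return {diagnosis: p / total for diagnosis, p in scores.items()}

print(predict_diagnosis({"nuclear_grooves": True, "psammoma_bodies": False}))
```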
WTR Editorial Boards
The WTR organizes its collections by cancer site, such as lung or breast. A chairperson and editorial board are responsible for reviewing submitted slides before they are placed online. The editorial boards include 20 pathologists who are experts in diagnosing cancers in those categories, Nikitski explained.
Thousands of de-identified microscopic whole slide images (WSIs) representing various types of cancer are deposited by the editors and other contributors to the project. The editorial board then carefully analyzes and compiles the data before posting the images for public viewing.
The editorial boards are located in five world regions:
Africa and the Middle East
Asia and Oceania
Central and South America
North America and Europe
Northern Asia
Any physician or pathologist can contribute images to the database by “simply selecting the editor of their region on the website, writing their name, and asking if they can submit tumor cases,” Nikitski stated.
“We have established a platform that allows pathologists to contact editors who are in the same geographic region,” he added.
Helping Physicians Identify Cancer Types
In a YouTube video, Nikiforov states that the WTR is an “educational nonprofit organization rooted in [the] beliefs that every cancer patient deserves accurate and timely diagnosis as the first and essential step in better treatment and outcomes.”
“We believe this can be achieved only when modern diagnostic tools and technologies are freely available to every physician and pathologist. Only when we understand how microscopic features of cancer vary in different geographic, environmental and ethnic populations, and only by integrating histopathology with clinical immunohistochemical and molecular genetic information for every cancer type,” he stated.
Since patient privacy is important, the database contains only basic data about patients, and all patient information is protected.
Since its launch in March, more than 400 thyroid tumor slides have become available to view in the online database. At the time of the announcement, the WTR platform was planned to be implemented in three phases:
Thyroid cancer (released in March of this year).
Lung cancer and breast cancer (anticipated to be completed by the third quarter of 2026).
Remaining cancers, including brain, soft tissue and bone, colorectal, head and neck, hematolymphoid, female genital, liver, pancreatic, prostate and male genital, skin, urinary system, pediatric, other endocrine cancers, and rare cancers (anticipated to be completed by the end of 2029).
“We believe that this resource will help physicians and pathologists practicing in small or big or remote medical centers to learn how cancer looks under a microscope in their own communities,” Nikiforov said in the video. “We also see WTR as a platform that connects physicians and scientists from different parts of the world who can work together to better understand and treat cancer.”
Catalogs like the World Tumor Registry could create a pool of information that might be mined by analytical and artificial intelligence (AI) platforms to ferret out new ways to improve the diagnosis of certain types of cancer and even enable earlier diagnoses.
“It is an extremely useful resource,” Nikitski said.
Anatomic pathologists will certainly find it so. And clinical laboratory managers may find the information useful as well when interacting with histopathologists and oncologists.
Google designed the suite to ease radiologists’ workload and enable easy and secure sharing of critical medical imaging; the technology may eventually be adapted to pathologists’ workflows
Clinical laboratory and pathology group leaders know that Google is doing extensive research and development in the field of cancer diagnostics. For several years, the Silicon Valley giant has been focused on digital imaging and the use of artificial intelligence (AI) algorithms and machine learning to detect cancer.
Now, Google Cloud has announced it is launching a new medical imaging suite for radiologists that is aimed at making healthcare data for the diagnosis and care of cancer patients more accessible. The new suite “promises to make medical imaging data more interoperable and useful by leveraging artificial intelligence,” according to MedCity News.
In a press release, medical technology company Hologic and New Jersey healthcare provider Hackensack Meridian Health announced they were the first customers to use Google Cloud’s new suite of medical imaging products.
“Hackensack Meridian Health has begun using it to detect metastasis in prostate cancer patients earlier, and Hologic is using it to strengthen its diagnostic platform that screens women for cervical cancer,” MedCity News reported.
“Google pioneered the use of AI and computer vision in Google Photos, Google Image Search, and Google Lens, and now we’re making our imaging expertise, tools, and technologies available for healthcare and life sciences enterprises,” said Alissa Hsu Lynch (above), Global Lead of Google Cloud’s MedTech Strategy and Solutions, in a press release. “Our Medical Imaging Suite shows what’s possible when tech and healthcare companies come together.” Clinical laboratory companies may find Google’s Medical Imaging Suite worth investigating. (Photo copyright: Influencive.)
Easing the Burden on Radiologists
Clinical laboratory leaders and pathologists know that laboratory data drives most healthcare decision-making. And medical images make up 90% of all healthcare data, noted an article in Proceedings of the IEEE (Institute of Electrical and Electronics Engineers).
More importantly, medical images are growing in size and complexity. So, radiologists and medical researchers need a way to quickly interpret them and keep up with the increased workload, Google Cloud noted.
“The size and complexity of these images is huge, and, often, images stay sitting in data siloes across an organization,” Alissa Hsu Lynch, Global Lead of Google Cloud’s MedTech Strategy and Solutions, told MedCity News. “In order to make imaging data useful for AI, we have to address interoperability and standardization. This suite is designed to help healthcare organizations accelerate the development of AI so that they can enable faster, more accurate diagnosis and ease the burden for radiologists,” she added.
According to the press release, Google Cloud’s Medical Imaging Suite features include the following (a brief de-identification sketch appears after the list):
Imaging Storage: Easy and secure data exchange using the international DICOM (digital imaging and communications in medicine) standard for imaging. A fully managed, highly scalable, enterprise-grade development environment that includes automated DICOM de-identification. Seamless cloud data management via a cloud-native enterprise imaging PACS (picture archiving and communication system) in clinical use by radiologists.
Imaging Lab: AI-assisted annotation tools that help automate the highly manual and repetitive task of labeling medical images, and Google Cloud native integration with any DICOMweb viewer.
Imaging Datasets and Dashboards: Ability to view and search petabytes of imaging data to perform advanced analytics and create training datasets with zero operational overhead.
Imaging AI Pipelines: Accelerated development of AI pipelines to build scalable machine learning models, with 80% fewer lines of code required for custom modeling.
Imaging Deployment: Flexible options for cloud, on-prem (on-premises software), or edge deployment to allow organizations to meet diverse sovereignty, data security, and privacy requirements—while providing centralized management and policy enforcement with Google Distributed Cloud.
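Google has not published the internals of the suite’s automated de-identification. As a rough illustration of the kind of PHI scrubbing that step automates, here is a minimal sketch using the open-source pydicom library; the tag list and file names are assumptions, and a production pipeline would follow a full DICOM de-identification profile.

```python
# Minimal DICOM de-identification sketch using pydicom (illustrative only;
# this is not Google Cloud's de-identification service, and the tag list is
# far from exhaustive).
import pydicom

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)

    # Blank out a few common patient identifiers.
    for keyword in ("PatientName", "PatientID", "PatientBirthDate", "PatientAddress"):
        if keyword in ds:
            setattr(ds, keyword, "")

    # Private tags often carry vendor-specific identifying data.
    ds.remove_private_tags()
    ds.save_as(out_path)

deidentify("study_slice_001.dcm", "study_slice_001_deid.dcm")
```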
First Customers Deploy Suite
Hackensack Meridian Health hopes Google’s imaging suite will, eventually, enable the healthcare provider to predict factors affecting variance in prostate cancer outcomes.
“We are working toward building AI capabilities that will support image-based clinical diagnosis across a range of imaging and be an integral part of our clinical workflow,” said Sameer Sethi, Senior Vice President and Chief Data and Analytics Officer at Hackensack, in a news release.
The New Jersey healthcare network said in a statement that its work with Google Cloud includes use of AI and machine learning to enable notification of newborn congenital disorders and to predict sepsis risk in real-time.
Hologic, a medical technology company focused on women’s health, said its collaboration integrates Google Cloud AI with the company’s Genius Digital Diagnostics System.
“By complementing our expertise in diagnostics and AI with Google Cloud’s expertise in AI, we’re evolving our market-leading technologies to improve laboratory performance, healthcare provider decision making, and patient care,” said Michael Quick, Vice President of Research and Development and Innovation at Hologic, in the press release.
Hologic says its Genius Digital Diagnostics System combines AI with volumetric medical imaging to find pre-cancerous lesions and cancer cells. From a Pap test digital image, the system narrows “tens of thousands of cells down to an AI-generated gallery of the most diagnostically relevant,” according to the company website.
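Hologic has not described the internals of that triage step. As a generic illustration of the idea of ranking cell images by a model score and keeping only the most diagnostically relevant ones for review, a sketch might look like the following; the scoring function and gallery size are hypothetical.

```python
# Illustrative only: rank cell images by a classifier's abnormality score and
# keep a small "gallery" of the highest-scoring cells for human review.
from typing import Callable, Sequence

def build_review_gallery(
    cell_images: Sequence[bytes],
    score_fn: Callable[[bytes], float],  # hypothetical per-cell abnormality scorer
    gallery_size: int = 30,
) -> list[int]:
    """Return indices of the highest-scoring cells, most suspicious first."""
    ranked = sorted(range(len(cell_images)), key=lambda i: score_fn(cell_images[i]), reverse=True)
    return ranked[:gallery_size]
```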
Hologic plans to work with Google Cloud on storage and “to improve diagnostic accuracy for those cancer images,” Hsu Lynch told MedCity News.
Medical image storage and sharing technologies like Google Cloud’s Medical Imaging Suite provide an opportunity for radiologists, researchers, and others to share critical image studies with anatomic pathologists and physicians providing care to cancer patients.
One key observation is that the primary function of the service Google has begun to deploy is to aid radiology workflow and productivity and to improve the accuracy of cancer diagnoses by radiologists. Meanwhile, Google continues to employ pathologists within its medical imaging research and development teams.
Assuming that the first radiologists find the Google suite of tools effective in support of patient care, it may not be too long before Google moves to introduce an imaging suite of tools designed to aid the workflow of surgical pathologists as well.
Columbia University’s MediSCAPE enables surgeons to examine tissue structures in vivo, and a large-scale clinical trial is planned for later this year
Scientists at Columbia University in New York City have developed a high-speed 3D microscope for diagnosis of cancers and other diseases that they say could eventually replace traditional biopsy and histology “with real-time imaging within the living body.”
The technology is designed to enable in situ tissue analysis. Known as MediSCAPE, the microscope is “capable of capturing images of tissue structures that could guide surgeons to navigate tumors and their boundaries without needing to remove tissues and wait for pathology results,” according to a Columbia University news story.
“The way that biopsy samples are processed hasn’t changed in 100 years, they are cut out, fixed, embedded, sliced, stained with dyes, positioned on a glass slide, and viewed by a pathologist using a simple microscope. This is why it can take days to hear news back about your diagnosis after a biopsy,” said Elizabeth M.C. Hillman, PhD, Professor of Biomedical Engineering and Radiology at Columbia University, whose team developed the technology, in the Columbia news story.
“Our 3D microscope overcomes many of the limitations of prior approaches to enable visualization of cellular structures in tissues in the living body. It could give a doctor real-time feedback about what type of tissue they are looking at without the long wait,” she added in I News.
Hillman’s team previously used the technology—originally dubbed SCAPE for “Swept Confocally Aligned Planar Excitation” microscopy—to capture 3D images of neurological activity in living samples of worms, fish, and flies. In their recent study, the researchers tested the technology with human kidney tissue, a human volunteer’s tongue, and a mouse with pancreatic cancer.
How MediSCAPE Works
Unlike traditional 3D microscopes that use a laser to scan tiny spots of a tissue sample and then assemble those points into a 3D image, the MediSCAPE 3D microscope “illuminates the tissue with a sheet of light—a plane formed by a laser beam that is focused in a special way,” I News reported.
The MediSCAPE microscope thus captures 2D slices which are rapidly stacked into 3D images at a rate of more than 10 volumes per second, according to I News.
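Conceptually, forming volumes from a stream of light-sheet frames is just a matter of stacking 2D slices along a depth axis fast enough. The sketch below shows that bookkeeping with NumPy; the frame rate, frame size, and slices-per-volume figures are assumptions chosen only to land above the 10 volumes-per-second rate reported for MediSCAPE.

```python
# Conceptual sketch (not MediSCAPE's actual pipeline): stack streamed 2D
# light-sheet frames into 3D volumes and estimate the resulting volume rate.
import numpy as np

FRAMES_PER_VOLUME = 200        # assumed number of 2D slices per 3D volume
FRAME_SHAPE = (512, 512)       # assumed pixels per slice
FRAME_RATE_HZ = 2500           # assumed camera frame rate

def frames_to_volume(frames: list[np.ndarray]) -> np.ndarray:
    """Stack sequential 2D slices along a new depth axis into one 3D volume."""
    return np.stack(frames, axis=0)        # shape: (depth, height, width)

volume = frames_to_volume([np.zeros(FRAME_SHAPE) for _ in range(FRAMES_PER_VOLUME)])
print(volume.shape)                                         # (200, 512, 512)
print(FRAME_RATE_HZ / FRAMES_PER_VOLUME, "volumes/second")  # 12.5 at these assumptions
```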
“One of the first tissues we looked at was fresh mouse kidney, and we were stunned to see gorgeous structures that looked a lot like what you get with standard histology,” said optical systems engineer and the study’s lead author, Kripa Patel, PhD, in the Columbia news story. “Most importantly, we didn’t add any dyes to the mouse—everything we saw was natural fluorescence in the tissue that is usually too weak to see.
“Our microscope is so efficient that we could see these weak signals well,” she continued, “even though we were also imaging whole 3D volumes at speeds fast enough to rove around in real time, scanning different areas of the tissue as if we were holding a flashlight.”
A big advantage of the technology, Hillman noted, is the ability to scan living tissue in the body.
“Understanding whether tissues are staying healthy and getting good blood supply during surgical procedures is really important,” she said in the Columbia news story. “We also realized that if we don’t have to remove (and kill) tissues to look at them, we can find many more uses for MediSCAPE, even to answer simple questions such as ‘what tissue is this?’ or to navigate around precious nerves. Both of these applications are really important for robotic and laparoscopic surgeries, where surgeons are more limited in their ability to identify and interact with tissues directly.”
Clinical Trials and FDA Clearance
Early versions of the SCAPE microscopes were too large for practical use by surgeons, so Columbia post-doctoral research scientist Wenxuan Liang, PhD, co-author of the study, helped the team develop a smaller version that would fit into an operating room.
Later this year, the researchers plan to launch a large-scale clinical trial, I News reported. The Columbia scientists hope to get clearance from the US Food and Drug Administration (FDA) to develop a commercialized version of the microscope.
“They will initially seek permission to use it for tumor screening and guidance during operations—a lower and easier class of approval—but ultimately, they hope to be allowed to use it for diagnosis,” Liang wrote.
Charles Evans, PhD, research information manager at Cancer Research UK, told I News, “Using surgical biopsies to confirm a cancer diagnosis can be time-consuming and distressing for patients. And ensuring all the cancerous tissue is removed during surgery can be very challenging unaided.”
He added, “More work will be needed to apply this technique in a device that’s practical for clinicians and to demonstrate whether it can bring benefits for people with cancer, but we look forward to seeing the next steps.”
Will the Light Microscope be Replaced?
In recent years, research teams at various institutions have been developing technologies designed to enhance or even replace the traditional light microscope used daily by anatomic pathologists across the globe.
And digital scanning algorithms for creating whole-slide images (WSIs) that can be analyzed by pathologists on computer screens are gaining in popularity as well.
Such developments may spark a revolution in surgical pathology and could signal the beginning of the end of the light microscope era.
Surgical pathologists should expect to see a steady flow of technologically advanced systems for tissue analysis submitted to the FDA for pre-market review and clearance for use in clinical settings. The light microscope may not disappear overnight, but a growing number of companies are actively developing technologies they believe can diagnose tissue, digital images of pathology slides, or both with accuracy comparable to a pathologist.
Newly combined digital pathology, artificial intelligence (AI), and omics technologies are providing anatomic pathologists and medical laboratory scientists with powerful diagnostic tools
Add “spatial transcriptomics” to the growing list of “omics” that have the potential to deliver biomarkers which can be used for earlier and more accurate diagnoses of diseases and health conditions. As with other types of omics, spatial transcriptomics might be a new tool for surgical pathologists once further studies support its use in clinical care.
Among this spectrum of omics is spatial transcriptomics, or ST for short.
Spatial transcriptomics is a groundbreaking and powerful molecular profiling method used to measure all gene activity within a tissue sample. The technology is already leading to discoveries that are helping researchers gain valuable information about neurological diseases and breast cancer.
Marriage of Genetic Imaging and Sequencing
Spatial transcriptomics is a term used to describe a variety of methods designed to assign cell types that have been isolated and identified by messenger RNA (mRNA), to their locations in a histological section. The technology can determine subcellular localization of mRNA molecules and can quantify gene expression within anatomic pathology samples.
In “Spatial: The Next Omics Frontier,” Genetic Engineering and Biotechnology News (GEN) wrote, “Spatial transcriptomics gives a rich, spatial context to gene expression. By marrying imaging and sequencing, spatial transcriptomics can map where particular transcripts exist on the tissue, indicating where particular genes are expressed.”
In an interview with Technology Networks, George Emanuel, PhD, co-founder of life-science genomics company Vizgen, said, “Spatial transcriptomic profiling provides the genomic information of single cells as they are intricately spatially organized within their native tissue environment.
“With techniques such as single-cell sequencing, researchers can learn about cell type composition; however, these techniques isolate individual cells in droplets and do not preserve the tissue structure that is a fundamental component of every biological organism,” he added.
“Direct spatial profiling the cellular composition of the tissue allows you to better understand why certain cell types are observed there and how variations in cell state might be a consequence of the unique microenvironment within the tissue,” he continued. “In this way, spatial transcriptomics allows us to measure the complexity of biological systems along the axes that are most relevant to their function.”
According to 10x Genomics, “spatial transcriptomics utilizes spotted arrays of specialized mRNA-capturing probes on the surface of glass slides. Each spot contains capture probes with a spatial barcode unique to that spot.
“When tissue is attached to the slide, the capture probes bind RNA from the adjacent point in the tissue. A reverse transcription reaction, while the tissue is still in place, generates a cDNA [complementary DNA] library that incorporates the spatial barcodes and preserves spatial information.
“Each spot contains approximately 200 million capture probes and all of the probes in an individual spot share a barcode that is specific to that spot.”
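The mechanics of that barcode-to-location mapping can be illustrated with a few lines of code. Everything below, including the spot coordinates, barcodes, and gene names, is invented for the example and does not represent any vendor’s actual chemistry or file formats.

```python
# Illustrative sketch of spatial barcoding: demultiplexed reads carry a spot
# barcode plus a gene, and the barcode maps back to a known (x, y) position.
from collections import defaultdict

SPOT_COORDINATES = {"AAAC": (0, 0), "AAAG": (0, 1), "AACT": (1, 0)}   # invented spots

# Each read: (spatial barcode, gene) after sequencing and demultiplexing.
reads = [("AAAC", "GAPDH"), ("AAAC", "KRT19"), ("AACT", "GAPDH"), ("AAAG", "ESR1")]

# Build a gene-expression count matrix keyed by spot position on the tissue.
counts: dict[tuple[int, int], dict[str, int]] = defaultdict(lambda: defaultdict(int))
for barcode, gene in reads:
    counts[SPOT_COORDINATES[barcode]][gene] += 1

for position, genes in sorted(counts.items()):
    print(position, dict(genes))
```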
“The highly multiplexed transcriptomic readout reveals the complexity that arises from the very large number of genes in the genome, while high spatial resolution captures the exact locations where each transcript is being expressed,” Emanuel told Technology Networks.
Spatial Transcriptomics for Breast Cancer and Neurological Diagnostics
In a published paper on the use of ST in cancer research, the authors wrote, “we envision that in the coming years we will see simplification, further standardization, and reduced pricing for the ST protocol leading to extensive ST sequencing of samples of various cancer types.”
Spatial transcriptomics is also being used to research neurological conditions and neurodegenerative diseases. ST has proven to be an effective tool for identifying marker genes for these conditions, as well as for helping medical professionals study drug therapies for the brain.
“You can actually map out where the target is in the brain, for example, and not only the approximate location inside the organ, but also in what type of cells,” Malte Kühnemund, PhD, Director of Research and Development at 10x Genomics, told Labiotech.eu. “You actually now know what type of cells you are targeting. That’s completely new information for them and it might help them to understand side effects and so on.”
The field of spatial transcriptomics is evolving rapidly as it branches out into more areas of healthcare. New discoveries within ST methodologies are making it possible to combine the technique with other technologies, such as artificial intelligence (AI), which could lead to powerful new ways for oncologists and anatomic pathologists to diagnose disease.
“I think it’s going to be tricky for pathologists to look at that data,” Kühnemund said. “I think this will go hand in hand with the digital pathology revolution where computers are doing the analysis and they spit out an answer. That’s a lot more precise than what any doctor could possibly do.”
Spatial transcriptomics certainly is a new and innovative way to look at tissue biology. However, the technology is still in its early stages and more research is needed to validate its development and results.
Nevertheless, this is an opportunity for companies developing artificial intelligence tools for analyzing digital pathology images to investigate how their AI technologies might be used with spatial transcriptomics to give anatomic pathologists a new and useful diagnostic tool.
Hello primary diagnosis of digital pathology images via artificial intelligence! Goodbye light microscopes!
Digital pathology is poised to take a great leap forward. Within as few as 12 months, image analysis algorithms may gain regulatory clearance in the United States for use in primary diagnosis of whole-slide images (WSIs) for certain types of cancer. Such a development will be a true revolution in surgical pathology and would signal the beginning of the end of the light microscope era.
A harbinger of this new age of digital pathology and automated image analysis is a press release issued last week by Ibex Medical Analytics of Tel Aviv, Israel. The company announced that its Galen artificial intelligence (AI)-powered platform for use in the primary diagnosis of specific cancers will undergo an accelerated review by the Food and Drug Administration (FDA).
FDA’s ‘Breakthrough Device Designation’ for Pathology AI Platform
Ibex stated that “The FDA’s Breakthrough Device Designation is granted to technologies that have the potential to provide more effective treatment or diagnosis of life-threatening diseases, such as cancer. The designation enables close collaboration with, and expedited review by, the FDA, and provides formal acknowledgement of the Galen platform’s utility and potential benefit as well as the robustness of Ibex’s clinical program.”
“All surgical pathologists should recognize that, once the FDA begins to review and clear algorithms capable of using digital pathology images to make an accurate primary diagnosis of cancer, their daily work routines will be forever changed,” stated Robert L. Michel, Editor-in-Chief of Dark Daily and its sister publication The Dark Report. “Essentially, as FDA clearance is for use in clinical care, pathology image analysis algorithms powered by AI will put anatomic pathology on the road to total automation.
“Clinical laboratories have seen the same dynamic, with CBCs (complete blood counts) being a prime example. Through the 1970s, clinical laboratories employed substantial numbers of hematechnologists [hematechs],” he continued. “Hematechs used a light microscope to look at a smear of whole blood that was on a glass slide with a grid. The hematechs would manually count and record the number of red and white blood cells.
“That changed when in vitro diagnostics (IVD) manufacturers used the Coulter Principle and the Coulter Counter to automate counting the red and white blood cells in a sample, along with automatically calculating the differentials,” Michel explained. “Today, only clinical lab old-timers remember hematechs. Yet, the automation of CBCs eventually created more employment for medical technologists (MTs). That’s because the automated instruments needed to be operated by someone trained to understand the science and medicine involved in performing the assay.”
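For readers unfamiliar with it, the Coulter Principle counts and sizes cells by the brief impedance pulses they create while passing through a small aperture, with pulse height roughly proportional to cell volume. The toy sketch below shows that counting-and-binning logic; the pulse values and size thresholds are invented, and real analyzers add steps (such as lysing red cells before white cell counts) that are omitted here.

```python
# Toy illustration of Coulter-style counting: count impedance pulses and bin
# them by size. All numbers are invented; real hematology analyzers are far
# more sophisticated.
PULSE_VOLUMES_FL = [90, 95, 88, 250, 92, 310, 85, 89]   # pulse sizes in femtoliters

RBC_RANGE_FL = (60, 120)   # assumed volume window for red blood cells
WBC_MIN_FL = 200           # assumed minimum volume for white blood cells

rbc_events = sum(1 for v in PULSE_VOLUMES_FL if RBC_RANGE_FL[0] <= v <= RBC_RANGE_FL[1])
wbc_events = sum(1 for v in PULSE_VOLUMES_FL if v >= WBC_MIN_FL)

print(f"RBC events: {rbc_events}, WBC events: {wbc_events}")
```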
Primary Diagnosis of Cancer with an AI-Powered Algorithm
Surgical pathology is poised to go down a similar path. Use of a light microscope to conduct a manual review of glass slides will be supplanted by use of digital pathology images and the coming next generation of image analysis algorithms. Whether these algorithms are called machine learning, computational pathology, or artificial intelligence, the outcome is the same—eventually these algorithms will make an accurate primary diagnosis from a digital image, with comparable quality to a trained anatomic pathologist.
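None of the vendors’ algorithms are public, but the general pattern in the research literature is to score many small tiles from a whole-slide image and then aggregate those tile scores into a slide-level result. Here is a deliberately simplified sketch of that aggregation step; the threshold and aggregation rules are assumptions, not any product’s actual method.

```python
# Simplified sketch of rolling tile-level AI scores up to a slide-level call.
# Threshold and aggregation rules are assumptions for illustration only.
import numpy as np

def slide_level_call(tile_probabilities: np.ndarray, threshold: float = 0.5) -> dict:
    """Aggregate per-tile cancer probabilities into one slide-level summary."""
    return {
        "max_tile_probability": float(tile_probabilities.max()),
        "suspicious_tile_fraction": float((tile_probabilities > threshold).mean()),
        "flag_for_pathologist_review": bool(tile_probabilities.max() > threshold),
    }

# Example: probabilities a hypothetical tile classifier assigned to six tiles.
print(slide_level_call(np.array([0.02, 0.10, 0.94, 0.88, 0.05, 0.12])))
```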
How much of a threat is automated analysis of digital pathology images? Computer scientist/engineer Ajit Singh, PhD, a partner at Artiman Ventures and an authority on digital pathology, believes that artificial intelligence is at the stage where it can be used for primary diagnosis of two common types of cancer: one is prostate cancer, and the other is skin cancer.
“It is now possible to do a secondary read, and even a first read, in prostate cancer with an AI system alone. In cases where there may be uncertainty, a pathologist can review the images. Now, this is specifically for prostate cancer, and I think this is a tremendous positive development for diagnostic pathways,” he added.
Use of Digital Pathology with AI-Algorithms Changes Diagnostics
Pathologists who are wedded to their light microscopes will want to pay attention to the impending arrival of a fully digital pathology system, where glass slides are converted to whole-slide images and then digitized. From that point, the surgical pathologist becomes the coach and quarterback of an individual patient’s case. The pathologist guides the AI-powered image analysis algorithms. Based on the results, the pathologist then orders supplementary tests appropriate to developing a robust diagnosis and guiding therapeutic decisions for that patient’s cancer.
In his interview with The Dark Report, Singh explained that the first effective AI-powered algorithms in digital pathology will be developed for prostate cancer and skin cancer. Both types of cancer are much less complex than, say, breast cancer. Moreover, AI developers have decades of prostate cancer and melanoma cases in which the biopsies, diagnoses, and downstream patient outcomes create a rich database from which the algorithms can be trained and tuned.
To help pathology groups prepare, Dark Daily is hosting a webinar and roundtable discussion on AI-powered primary diagnosis of pathology images. The webinar is organized as a roundtable so participants can interact with the expert panelists. The Chair and Moderator is Ajit Singh, PhD, Adjunct Professor at the Stanford School of Medicine and Partner at Artiman Ventures.
The panelists represent academic pathology, community hospital pathology, and the commercial sector.
Because the arrival of automated analysis of digital pathology images will transform the daily routine of every surgical pathologist, it would be beneficial for all pathology groups to have one or more of their pathologists register and participate in this critical webinar.
The roundtable discussion will help them understand how quickly AI-powered image analysis is expected to be cleared by the FDA for use in such diseases as prostate cancer and melanomas. Both types of cancers generate high volumes of case referrals to the nation’s pathologists, so the potential for disruption to long-standing client relationships, and the possible loss of revenue for pathology groups that delay their adoption of digital pathology, can be significant.
On the flip side, community pathology groups that jump on the digital pathology bandwagon early and with the right preparation will be positioned to build stronger client relationships, increase subspecialty case referrals, and generate additional streams of revenue that boost partner compensation within their group.
Also, because so many pathologists are working remotely, Dark Daily has arranged special group rates for pathology practices that would like their surgical pathologists to participate in this important webinar and roundtable discussion on AI-powered primary diagnosis of pathology images. Inquire at info@darkreport.com or call 512-264-7103.