At-home genetic test kits face scrutiny for providing information that may give consumers an incomplete picture of their genetic health risks and ancestry
Genetic testing for disease risk and heritage is hugely popular. But though clinical laboratory and pathology professionals understand the difference between a doctor-ordered genetic health risk (GHR) test and a direct-to-consumer (DTC) genetic test, the typical genetic test customer may not. And misunderstanding the results of a DTC at-home genetic test can lead to confusion, loss of privacy, and potential harm, according to Consumer Reports.
To help educate consumers about the “potential pitfalls” of at-home DTC testing kits offered by companies such as Ancestry and 23andMe, Consumer Reports has published an article, titled, “Read This Before You Buy a Genetic Testing Kit.” The article covers “four common claims from the manufacturers of these products, whether they deliver, and what to know about their potential pitfalls.”
Are Genetic Ancestry Tests Accurate?
Ancestry and 23andMe are the DTC genetic test industry leaders, with databases of genetic information on 18 million and 10 million individuals, respectively. According to a Consumer Reports survey, as of October 2020 about one in five Americans had taken a DTC genetic test. Reported reasons for doing so included:
66% of respondents wanted to learn more about their ancestry.
20% wanted to locate relatives.
18% wanted to learn more about their health.
11% wanted to learn if they have or are a carrier for any medical conditions.
3% wanted to get a medical test they could not get through their doctor.
As Consumer Reports notes, doctor-ordered genetic health risk (GHR) testing typically aims to answer a specific question about a patient’s risk for a certain disease. DTC at-home genetic testing, on the other hand, examines a “whole range of variants that have been linked—sometimes quite loosely—to a number of traits, some not related to your health at all.”
“Think of it this way: When your doctor orders genetic testing, it’s akin to fishing for a particular fish, in a part of the ocean where it’s known to live,” Consumer Reports noted. “A DTC test is more like throwing a net into the ocean and seeing what comes back.”
In its article, Consumer Reports addressed four common DTC genetic test claims:
The Tests Can Find Far-Flung Relatives: While the tests can unearth people in their databases whom you might be related to, 9% of respondents in the Consumer Reports survey discovered unsettling information about a relative.
Testing Can Uncover Where Your Ancestors Are From: Genetic tests may show the percentage of your DNA that comes from Europe or Asia or Africa, but accuracy depends on how many DNA samples a company has from a particular region. As genetic test manufacturers’ reference databases widen, a customer’s genetic ancestry test results can “change over time.” Also, finding a particular variation in genetic code does not definitively place someone in a specific region, or ethnic or racial group.
Genetic Tests Can Reveal Your Risk for Certain Diseases: Testing companies such as 23andMe are authorized by the Food and Drug Administration (FDA) to offer physician-mediated tests, which are analyzed in a federally certified clinical laboratory. However, test results may provide a false sense of security because DTC tests look for only select variants known to cause disease.
The Tests Can Tell What Diet Is Best for You: Incorporating genetic information into diet advice has the potential to be transformative, but the science is not yet there to offer personalized nutritional advice.
Consumer Reports pointed to a 2020 study published in the MDPI journal Nutrients, titled, “Direct-to-Consumer Nutrigenetics Testing: An Overview,” which evaluated 45 DTC companies offering nutrigenetics testing and found a need for “specific guidelines” and “minimum quality standards” for the services offered. For example, the study authors noted that more than 900 genetic variants contribute to obesity risk. However, weight-loss advice from DTC test companies was based on a “limited set of genetic markers.”
In the Consumer Reports article, Mwenza Blell, PhD, a biosocial medical anthropologist and Rutherford Fellow and NUAcT Fellow at Newcastle University in the United Kingdom, said “genetic ancestry tests are closer to palm reading than science.”
Heather Cheng, MD, PhD, of the Seattle Cancer Care Alliance and an Associate Professor of Oncology at the University of Washington, fears consumers “miss important limitations on a test’s scope” or “misunderstand critical nuances in the results.”
Cheng says the ability to use flexible spending accounts (FSAs) or health savings accounts (HSAs) to cover the cost of 23andMe’s GHR assessments, as well as the FDA’s approval of 23andMe’s Personal Genome Service Pharmacogenetic Reports test on medication metabolism, may have added to the confusion.
“This may further mislead people into thinking these tests are clinically sound. Again, they are not,” Cheng wrote.
As an oncologist, Cheng is particularly concerned about consumer GHR testing for heritable cancer risk, which screens for only a handful of genetic variants.
“The results are inadequate for most people at high risk of cancers associated with inherited mutations in BRCA1 or BRCA2 genes, including families whose members have experienced ovarian cancer, male breast cancer, multiple early breast cancers, pancreatic cancer, or prostate cancer,” Cheng wrote. “Put simply, this recreational test has zero value for the majority of people who may need it for true medical purposes.”
DTC genetic health-risk assessments may one day lead to consumers collecting samples at home for tests that aid in the diagnosis of disease. In the meantime, clinical laboratory professionals can play a role in educating the public about the limitations of current DTC genetic test offerings.
Painless technology could one day replace some phlebotomy blood draws as the go-to specimen-collection method for clinical laboratory testing and health monitoring
Clinical laboratories have long sought a non-invasive way to do useful medical laboratory testing without the need for either a venipuncture or a needle stick. Now engineers at the McKelvey School of Engineering at Washington University in St. Louis in Missouri have developed a disposable microneedle patch that one day could be a painless alternative to some blood draws for diagnostics tests and health monitoring.
The technology uses an easy-to-administer, low-cost patch that can be applied to the skin like an adhesive bandage. The patch is virtually painless because the microneedles are too small to reach nerve receptors. Another unique aspect of this innovative approach to collecting a specimen for diagnostic testing is that the Washington University in St. Louis (WashU) research team designed the microneedle patch to include plasmonic-fluor. These are ultrabright gold nanolabels that light up target protein biomarkers and can make them up to 1,400 times brighter at low concentrations compared with traditional fluorescent labels.
The patch, states a WashU news release, “… can be applied to the skin, capture a biomarker of interest and, thanks to its unprecedented sensitivity, allow clinicians to detect its presence.”
The technology is low cost, easy for clinicians or patients themselves to use, and could eliminate the need for a trip to a patient service center where a phlebotomist would draw blood for clinical laboratory testing, the news release states.
“We used the microneedle patch in mice for minimally invasive evaluation of the efficiency of a cocaine vaccine, for longitudinal monitoring of the levels of inflammatory biomarkers, and for efficient sampling of the calvarial periosteum [a skull membrane]—a challenging site for biomarker detection—and the quantification of its levels of the matricellular protein periostin, which cannot be accurately inferred from blood or other systemic biofluids,” the researchers wrote. “Microneedle patches for the minimally invasive collection and analysis of biomarkers in interstitial fluid might facilitate point-of-care diagnostics and longitudinal monitoring.”
Mark Prausnitz, PhD, Regents’ Professor, J. Erskine Love Jr. Chair in Chemical and Biomolecular Engineering, and Director of the Center for Drug Design, Development, and Delivery at Georgia Tech, told WIRED, “Blood is a tiny fraction of the fluid in our body. Other fluids should have something useful—it’s just hard to get those fluids.”
“Previously, concentrations of a biomarker had to be on the order of a few micrograms per milliliter of fluid,” said Zheyu (Ryan) Wang, a PhD candidate in Srikanth Singamaneni’s lab at McKelvey School of Engineering and a lead author of the paper, in the WashU news release. By using plasmonic-fluor, researchers were able to detect biomarkers on the order of picograms per milliliter—one millionth of the concentration.
“That’s orders of magnitude more sensitive,” Wang said.
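The magnitude of that improvement is easy to verify with unit arithmetic. The snippet below is a back-of-the-envelope check, not code from the study; the detection limits are simply the round figures quoted in the article.

```python
# Sanity check of the quoted sensitivity gain: conventional fluorescent
# labels detect on the order of micrograms per milliliter, while
# plasmonic-fluor reaches picograms per milliliter.
conventional_limit = 1e-6   # 1 microgram/mL, expressed in grams/mL
plasmonic_limit = 1e-12     # 1 picogram/mL, expressed in grams/mL

# How many times lower a concentration plasmonic-fluor can detect.
factor = conventional_limit / plasmonic_limit

# A factor of one million: picogram-level detection corresponds to
# "one millionth of the concentration," as the article states.
print(f"{factor:,.0f}x more sensitive")
```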
Can Microneedles Be Used as a Diagnostic Tool?
As reported in WIRED, the polystyrene patch developed by Srikanth Singamaneni’s lab at McKelvey School of Engineering removes interstitial fluid from the skin and turns the needles into “biomarker traps” by coating them with antibodies known to bind to specific proteins, such as Interleukin 6 (IL-6). Once the microneedles are mixed with plasmonic-fluor, the patch will glow if the IL-6 biomarkers are present.
The development of such a highly sensitive biomarker-detection method means skin becomes a potential pathway for using microneedles to diagnose conditions such as myocardial infarction, or to measure COVID-19 antibodies in vaccinated persons.
“Now we can actually use this tool to understand what’s going on with interstitial fluid, and how we’re going to be able to use it to answer healthcare-related or medical problems,” Maral Mousavi, PhD, Assistant Professor of Biomedical Engineering, Viterbi School of Engineering at the University of Southern California, told WIRED. “I think it has the potential to be that kind of a game changer.”
Because the WashU study is a proof-of-concept in mice, it may be many years before this technology finds its way to clinical application. Many skin biomarkers will need to be verified for direct links to disease before microneedle patches will be of practical use to clinicians for diagnostics. However, microneedle patch technology has already proven viable for the collection of blood.
In 2017, Massachusetts-based Seventh Sense Biosystems (7SBio) received 510(k) clearance for a new microneedle blood collection device. Called TAP, the device is placed on the upper arm and blood collection starts with a press of a button. The process takes two to three minutes.
Initially, the FDA clearance permitted only healthcare workers to use the device “to collect capillary blood for hemoglobin A1c (HbA1c) testing, which is routinely used to monitor blood sugar levels in diabetic or pre-diabetic patients,” a Flagship Pioneering news release noted.
Then, in 2019, the FDA extended its authorization “to include blood collection by laypersons. Regulators are also allowing the device to be used ‘at-home’ for wellness testing,” a 7SBio news release stated. This opened the door for a microneedle device to be used for home care blood collection.
“No one likes getting blood drawn, but blood is the single-most important source of medical information in healthcare today, with about 90% of all diagnostic information coming from blood and its components,” Howard Weisman, former CEO of 7SBio and current CEO of PaxMedica, a clinical-stage biopharmaceutical company, said in the Flagship Pioneering news release. “TAP has the potential to transform blood collection from an inconvenient, stressful, and painful experience to one people can do themselves anywhere, making health monitoring much easier for both healthcare professionals and patients.”
As microneedle technology continues to evolve, clinical laboratories should expect patches to be used in a growing number of drug delivery systems and diagnostic tests. But further research will be needed to determine whether interstitial fluid can provide an alternate pathway for diagnosing disease.
The palm-sized device could one day be engineered to track down explosives and gas leaks or could even be used by medical laboratories to detect disease
Here’s a technology breakthrough with many implications for diagnostics and clinical laboratory testing. Researchers at the University of Washington (UW) are pushing the envelope on what can be achieved by combining technology with biology. They developed “Smellicopter,” a flying drone that uses a living moth antenna to hunt for odors.
According to their published study, the UW scientists believe an odor-guided drone could “reduce human hazard and drastically improve performance on tasks such as locating disaster survivors, hazardous gas leaks, incipient fires or explosives.”
“Nature really blows our human-made odor sensors out of the water,” lead author Melanie Anderson, a UW doctoral student in mechanical engineering, told UW News. “By using an actual moth antenna with Smellicopter, we’re able to get the best of both worlds: the sensitivity of a biological organism on a robotic platform where we can control its motion.”
The researchers believe their Smellicopter is the first odor-sensing flying biohybrid robot system to incorporate a live moth antenna that capitalizes on the insect’s excellent odor-detecting and odor-locating abilities.
In their paper, titled, “A Bio-Hybrid Odor-Guided Autonomous Palm-Sized Air Vehicle,” published in the IOPscience journal Bioinspiration and Biomimetics, the researchers wrote, “Biohybrid systems integrate living materials with synthetic devices, exploiting their respective advantages to solve challenging engineering problems. … Our robot is the first flying biohybrid system to successfully perform odor localization in a confined space, and it is able to do so while detecting and avoiding obstacles in its flight path. We show that insect antennae respond more quickly than metal oxide gas sensors, enabling odor localization at an improved speed over previous flying robots. By using the insect antennae, we anticipate a feasible path toward improved chemical specificity and sensitivity by leveraging recent advances in gene editing.”
How Does it Work?
In nature, a moth uses its antennae to sense chemicals in its environment and navigate toward sources of food or a potential mate.
“Cells in a moth antenna amplify chemical signals,” said study co-author Thomas Daniel, PhD, UW Professor of Biology, in UW News. “The moths do it really efficiently—one scent molecule can trigger lots of cellular responses, and that’s the trick. This process is super-efficient, specific, and fast.”
Because the moth antenna is hollow, researchers are able to add wires into the ends of the antenna. By connecting the antenna to an electrical circuit, they can measure the average signal from all of the cells in the antenna. When compared to a metal oxide gas sensor, the antenna-powered sensor responded more quickly to a floral scent. It also took less time to recover between tracking puffs of scent.
Anderson compared the antenna-drone circuitry to a human heart monitor.
“A lot like a heart monitor, which measures the electrical voltage that is produced by the heart when it beats, we measure the electrical signal produced by the antenna when it smells odor,” Anderson told WIRED. “And very similarly, the antenna will produce these spike-shaped pulses in response to patches of odor.”
Making a Drone Hunt Like a Moth
Anderson told WIRED her team programmed the drone to hunt for odors using the same technique moths employ to stay targeted on an odor, called crosswind casting.
“If the wind shifts, or you fly a little bit off-course, then you’ll lose the odor,” Anderson said. “And so, you cast crosswind to try and pick back up that trail. And in that way, the Smellicopter gets closer and closer to the odor source.”
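The casting behavior Anderson describes can be sketched as a tiny state machine. The code below is an illustrative toy, not the Smellicopter’s actual flight controller; the maneuver names and the simple left/right alternation are invented for clarity.

```python
# Toy sketch of the cast-and-surge ("crosswind casting") strategy:
# surge upwind while the odor is detected, and when the trail is lost,
# cast crosswind, alternating direction, until the plume is reacquired.
def next_move(odor_detected: bool, casting_left: bool) -> tuple[str, bool]:
    """Return the next maneuver and the updated casting direction."""
    if odor_detected:
        # Odor present: head upwind toward the source.
        return "surge_upwind", casting_left
    # Odor lost: sweep crosswind, flipping direction each step.
    direction = "cast_left" if casting_left else "cast_right"
    return direction, not casting_left

# Simulated sensor readings: trail found, lost twice, then found again.
readings = [True, False, False, True]
moves = []
casting_left = True
for odor in readings:
    move, casting_left = next_move(odor, casting_left)
    moves.append(move)
# moves: surge upwind, cast left, cast right, surge upwind again
```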
However, the researchers had to figure out how to keep the commercially available $195 Crazyflie drone facing upwind. The fix, co-author and co-advisor Sawyer Fuller, PhD, UW Assistant Professor of Mechanical Engineering, told UW News, was to add two plastic fins to create drag and keep the vehicle on course.
“From a robotics perspective, this is genius,” Fuller said. “The classic approach in robotics is to add more sensors, and maybe build a fancy algorithm or use machine learning to estimate wind direction. It turns out, all you need is to add a fin.”
Other Applications for Odor Detecting Robots
While any practical clinical application of this breakthrough is years away, the scientific team’s next step is to use gene editing to engineer moths with antennae sensitive to a specific desired chemical, such as those found in explosives.
“I think it is a powerful concept,” roboticist Antonio Loquercio, a PhD candidate in machine learning at the University of Zurich who researches drone navigation, told WIRED. “Nature provides us plenty of examples of living organisms whose life depends on this capacity. This could have as well a strong impact on autonomous machines—not only drones—that could use odors to find, for example, survivors in the aftermath of an earthquake or could identify gas leaks in a man-made environment.”
Could a palm-sized autonomous device one day be used to not only track down explosives and gas leaks but also to detect disease?
As clinical pathologists and medical laboratory scientists know, dogs have demonstrated keen ability to detect disease using their heightened sense of smell.
Therefore, it is not inconceivable that smell-seeking technology might one day be part of clinical laboratory testing for certain diseases.
This latest research is another example of how breakthroughs in unrelated fields of science offer the potential for creation of diagnostic tools that one day may be useful to medical laboratories.
The researchers also found that certain molecules, when added to cancer drugs, can prevent chromosome shattering from occurring, a discovery that may be useful to pathologists and oncologists
Anatomic pathologists who diagnose tissue and closely monitor advances in cancer diagnostics and therapy will be interested in a recent study into how a mutational process known as chromothripsis (chromosome shattering) can promote cancer cell growth in humans and increase resistance to cancer drug therapies.
The study, which was published in the journal Nature, titled, “Chromothripsis Drives the Evolution of Gene Amplification in Cancer,” provides insights into how cancer cells can adapt to different environments and also may suggest potential solutions to drug resistance among cancer patients.
Led by researchers from the University of California San Diego School of Medicine and the UC San Diego branch of the Ludwig Institute for Cancer Research, the discovery could open up a new field in cancer diagnostic testing, where the pathology laboratory analyzes a cancer patient’s tumor cells to determine where chromosomal damage exists. This knowledge could then inform efforts to repair damaged chromosomes or to identify which therapeutic drugs would be most effective in treating the patient, a key element of precision medicine.
Shattered Chromosomes
Chromosomes that undergo chromothripsis shatter or fragment into several pieces and then are stitched back together by DNA repair processes. However, not all of the fragments make it back into the repaired chromosome, and this can be a problem.
“During chromothripsis, a chromosome in a cell is shattered into many pieces, hundreds in some cases, followed by reassembly in a shuffled order,” study first author Ofer Shoshani, PhD, a postdoctoral fellow at UC San Diego, told Genetic Engineering and Biotechnology News (GEN News). “Some pieces get lost while others persist as extra-chromosomal DNA (ecDNA). Some of these ecDNA elements promote cancer cell growth and form minute-sized chromosomes called double minutes.”
Studies have shown that up to half of all cancer cells contain cancer-promoting ecDNA chromosome fragments.
Some Cancer Drugs Could be Fueling Drug Resistance
To perform their study, the UC San Diego/Ludwig scientists sequenced entire genomes of cancer cells that had developed drug resistance. Their research revealed that chromothripsis prompts and drives the formation of ecDNA and that the process can also be induced by some chemotherapeutic drugs. The researchers also discovered that the particular type of damage these drugs may cause can provide an opening for ecDNA to reintegrate back into chromosomes.
“We show that when we break a chromosome, these ecDNAs have a tendency to jump into the break and seal them, serving almost like a DNA glue,” Shoshani said in the news release. “Thus, some of the very drugs used to treat cancers might also be driving drug resistance by generating double-stranded DNA breaks.”
Preventing DNA Shattering and Reducing Drug Resistance
The scientists also discovered that ecDNA formation could be halted by pairing certain cancer drugs with molecules that prevent DNA shattering from occurring in the first place, thus reducing drug resistance.
“This means that an approach in which we combine DNA repair inhibitors with drugs such as methotrexate or vemurafenib could potentially prevent the initiation of drug resistance in cancer patients and improve clinical outcomes,” Shoshani said.
“Our identifications of repetitive DNA shattering as a driver of anticancer drug resistance and of DNA repair pathways necessary for reassembling the shattered chromosomal pieces has enabled rational design of combination drug therapies to prevent development of drug resistance in cancer patients, thereby improving their outcome,” Don Cleveland, PhD, Head of the Cleveland Laboratory of Cell Biology at the Ludwig Institute for Cancer Research and one of the authors of the paper, told GEN News.
This research from the University of California San Diego School of Medicine and the UC San Diego branch of the Ludwig Institute for Cancer Research is the latest example of how scientists have gained useful insights into how human genomes operate. More research and clinical studies are needed to solidify the advantages of this study, but the preliminary results are promising and could lead to new cancer diagnostics and therapies.
Researchers find declining antibody levels in SARS-CoV-2 patients are offset by T cells and B cells that remain behind to fight off reinfection
Questions remain regarding how long antibodies produced by a COVID-19 vaccine or natural infection will provide ongoing protection against SARS-CoV-2. However, a new study showing COVID-19 immunity may be “robust” and “long lasting” may signal important news for clinical laboratories and in vitro diagnostics companies developing serological tests for the coronavirus disease.
Researchers at the La Jolla Institute for Immunology (LJI) analyzed blood samples from 188 COVID-19 patients, 7% of whom had been hospitalized. They measured not only virus-specific antibodies in the blood stream, but also memory B cells, T helper cells, and cytotoxic (killer) T cells.
While antibodies eventually disappear from the blood stream, T cells and B cells appear to remain to fight future reinfection.
“As far as we know, this is the largest study ever for any acute infection that has measured all four of those components of immune memory,” study co-leader Shane Crotty, PhD, an LJI professor, said in a La Jolla Institute news release.
The LJI researchers found that virus-specific antibodies remained in the blood stream months after infection while spike-specific memory B cells—which could trigger an accelerated and robust antibody-mediated immune response in the event of reinfection—actually increased in the body after six months. In addition, COVID-19 survivors had an army of T cells ready to halt reinfection.
“Our data show immune memory in at least three immunological compartments was measurable in ~95% of subjects five to eight months post symptom onset, indicating that durable immunity against secondary COVID-19 disease is a possibility in most individuals,” the study concludes. The small percentage of the population found not to have long-lasting immunity following COVID-19 infection could be vaccinated in an effort to stop reinfection from occurring on the way to achieving herd immunity, the LJI researchers maintained.
Do COVID-19 Vaccines Create Equal Immunity Against Reinfection?
Whether COVID-19 vaccinations will provide the same immune response as an active infection has yet to be determined, but indications are protection may be equally strong.
“It is possible that immune memory will be similarly long lasting following vaccination, but we will have to wait until the data come in to be able to tell for sure,” LJI Research Professor Daniela Weiskopf, PhD, said in the LJI statement. “Several months ago, our studies showed that natural infection induced a strong response, and this study now shows that the response lasts. The vaccine studies are at the initial stages, and so far, have been associated with strong protection. We are hopeful that a similar pattern of responses lasting over time will also emerge for the vaccine-induced responses.”
The study’s authors cautioned that people previously diagnosed with COVID-19 should not assume they have protective immunity from reinfection, the Washington Post noted. In fact, according to the LJI news release, researchers saw a “100-fold range in the magnitude of immune memory.”
Previous Studies Found Little Natural Immunity Against SARS-CoV-2 Reinfection
The Scientist reported that several widely publicized previous studies raised concerns that immunity from natural infection was fleeting, perhaps dwindling in weeks or months. And a United Kingdom study published in Nature Microbiology found that COVID-19 generated “only a transient neutralizing antibody response that rapidly wanes” in patients who exhibited milder infection.
Daniel M. Davis, PhD, Professor of Immunology at the University of Manchester, says more research is needed before scientists can know for certain how long COVID-19 immunity lasts after natural infection.
“Overall, these results are interesting and provocative, but more research is needed, following large numbers of people over time. Only then, will we clearly know how many people produce antibodies when infected with coronavirus, and for how long,” Davis told Newsweek.
While additional peer-reviewed studies on the body’s immune response to COVID-19 will be needed, this latest study from the La Jolla Institute for Immunology may help guide clinical laboratories and in vitro diagnostic companies that are developing serological antibody tests for COVID-19 and lead to more definitive answers as to how long antibodies confer protective immunity.
By training a computer to analyze blood samples, and then automating the expert assessment process, the AI processed months’ worth of blood samples in a single day
New technologies and techniques for acquiring and transporting biological samples for clinical laboratory testing receive much attention. But what of the quality of the samples themselves? Blood products are expensive, as hospital medical laboratories that manage blood banks know all too well. Thus, any improvement to how labs store blood products and confidently determine their viability for transfusion is useful.
One such improvement is coming out of Canada. Researchers at the University of Alberta (U of A) in collaboration with scientists and academic institutions in five countries are looking into ways artificial intelligence (AI) and deep learning can be used to efficiently and quickly analyze red blood cells (RBCs). The results of the study may alter the way donated blood is evaluated and selected for transfusion to patients, according to an article in Folio, a U of A publication, titled, “AI Could Lead to Faster, Better Analysis of Donated Blood, Study Shows.”
Improving Blood Diagnostics through Precision Medicine and Deep Learning
“This project is an excellent example of how we are using our world-class expertise in precision health to contribute to the interdisciplinary work required to make fundamental changes in blood diagnostics,” said Jason Acker, PhD, a senior scientist at Canadian Blood Services’ Centre for Innovation, Professor of Laboratory Medicine and Pathology at the University of Alberta, and one of the lead authors of the study, in the Folio article.
The research took more than three years to complete and involved 19 experts from 12 academic institutions and blood collection facilities located in Canada, Germany, Switzerland, the United Kingdom, and the US.
To perform the study, the scientists first collected and manually categorized 52,000 red blood cell images. Those images were then used to train an algorithm that mimics the way a human mind works. The computer system was next tasked with analyzing the shape of RBCs for quality purposes.
Removing Human Bias from RBC Classification
“I was happy to collaborate with a group of people with diverse backgrounds and expertise,” said Tracey Turner, a senior research assistant in Acker’s laboratory and one of the authors of the study, in a Canadian Blood Services (CBS) article. “Annotating and reviewing over 52,000 images took a long time, however, it allowed me to see firsthand how much bias there is in manual classification of cell shape by humans and the benefit machine classification could bring.”
According to the CBS article, a red blood cell lasts about 115 days in the human body and the shape of the RBC reveals its age. Newer, healthier RBCs are shaped like discs with smooth edges. As they age, those edges become jagged and the cell eventually transforms into a sphere and loses the ability to perform its duty of transporting oxygen throughout the body.
Blood donations are processed, packed, and stored for later use. Once outside the body, the RBCs begin to change their shape and deteriorate. RBCs can only be stored for a maximum of 42 days before they lose the ability to function properly when transfused into a patient.
Scientists routinely examine the shape of RBCs to assess the quality of the cell units for transfusion to patients and, in some cases, diagnose and assess individuals with certain disorders and diseases. Typically, microscope examinations of red blood cells are performed by experts in medical laboratories to determine the quality of the stored blood. The RBCs are classified by shape and then assigned a morphology index score. This can be a complex, time-consuming, and laborious process.
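To make the scoring concrete, here is a toy version of a shape-based morphology index. The shape classes, weights, and counts are hypothetical placeholders for illustration only; actual laboratory scoring schemes differ.

```python
# Illustrative morphology-index calculation for a sample of stored RBCs.
# Shape classes and their quality weights are HYPOTHETICAL examples,
# not the scheme used by the researchers.
WEIGHTS = {
    "smooth_disc": 1.0,        # newest, healthiest cells
    "crenated_disc": 0.8,      # early shape change, jagged edges
    "crenated_spheroid": 0.4,  # advanced deterioration
    "sphere": 0.2,             # oldest cells, poor oxygen transport
}

def morphology_index(counts: dict[str, int]) -> float:
    """Weighted-average quality score, scaled to 0-100."""
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no cells counted")
    return 100 * sum(WEIGHTS[shape] * n for shape, n in counts.items()) / total

# Example: a mostly healthy unit of 100 counted cells.
sample = {"smooth_disc": 70, "crenated_disc": 20,
          "crenated_spheroid": 7, "sphere": 3}
score = morphology_index(sample)  # ≈ 89.4 on this toy scale
```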
“One of the amazing things about machine learning is that it allows us to see relationships we wouldn’t otherwise be able to see,” Acker said. “We categorize the cells into the buckets we’ve identified, but when we categorize, we take away information.”
Human analysis is subjective, and different professionals can arrive at different results after examining the same blood samples.
“Machines are naive of bias, and AI reveals some characteristics we wouldn’t have identified and is able to place red blood cells on a more nuanced spectrum of change in shape,” Acker explained.
The researchers discovered that the AI could accurately analyze and categorize the quality of the red blood cells. This ability to perform RBC morphology assessment could have critical implications for transfusion medicine.
“The computer actually did a better job than we could, and it was able to pick up subtle differences in a way that we can’t as humans,” Acker said.
“It’s not surprising that the red cells don’t just go from one shape to another. This computer showed that there’s actually a gradual progression of shape in samples from blood products, and it’s able to better classify these changes,” he added. “It radically changes the speed at which we can make these assessments of blood product quality.”
More Precision Matching Blood Donors to Recipients
According to the World Health Organization (WHO), approximately 118.5 million blood donations are collected globally each year. There is a considerable contrast in the level of access to blood products between high- and low-income nations, which makes accurate assessment of stored blood even more critical. About 40% of all blood donations are collected in high-income countries that are home to only about 16% of the world’s population.
More studies and clinical trials will be necessary to determine if U of A’s approach to using AI to assess the quality of RBCs can safely transfer to clinical use. But these early results promise much in future precision medicine treatments.
“What this research is leading us to is the fact that we have the ability to be much more precise in how we match blood donors and recipients based on specific characteristics of blood cells,” Acker stated. “Through this study we have developed machine learning tools that are going to help inform how this change in clinical practice evolves.”
The AI tools being developed at the U of A could ultimately benefit patients as well as blood collection centers, and at hospitals where clinical laboratories typically manage the blood banking services, by making the process of matching transfusion recipients to donors more precise and ultimately safer.