Findings could lead to deeper understanding of why we age, and to medical laboratory tests and treatments to slow or even reverse aging
Can humans control aging by keeping their genes long and balanced? Researchers at Northwestern University in Evanston, Illinois, believe it may be possible. They have unveiled a “previously unknown mechanism” behind aging that could lead to medical interventions to slow or even reverse aging, according to a Northwestern news release.
Should additional studies validate these early findings, this line of testing may become a new service clinical laboratories could offer to referring physicians and patients. It would expand the test menu with assays that help diagnose a patient’s aging state and identify the parts of the transcriptome undergoing the alterations that most reduce lifespan.
It may also provide insights into how treatments and therapies could be implemented by physicians to address aging.
“I find it very elegant that a single, relatively concise principle seems to account for nearly all of the changes in activity of genes that happen in animals as they age,” Thomas Stoeger, PhD, postdoctoral scholar in the Amaral Lab who led the study, told GEN. Clinical laboratories involved in omics research may soon have new anti-aging diagnostic tests to perform. (Photo copyright: Amaral Lab.)
Possible ‘New Instrument’ for Biological Testing
Researchers found clues to aging in the length of genes. A gene transcript’s length reveals “molecular-level changes” during aging: longer genes relate to longer lifespans, while shorter genes suggest shorter lives, GEN summarized.
The phenomenon the researchers uncovered—which they dubbed transcriptome imbalance—was “near universal” in the tissues they analyzed (blood, muscle, bone, and organs) from both humans and animals, Northwestern said.
According to the National Human Genome Research Institute fact sheet, a transcriptome is “a collection of all the gene readouts (aka, transcript) present in a cell,” shedding light on gene activity, or expression.
The Northwestern study suggests “systems-level” changes are responsible for aging—a different view than traditional biology’s approach to analyzing the effects of single genes.
“We have been primarily focusing on a small number of genes, thinking that a few genes would explain disease,” said Luis Amaral, PhD, Senior Author of the Study and Professor of Chemical and Biological Engineering at Northwestern, in the news release.
“So, maybe we were not focused on the right thing before. Now that we have this new understanding, it’s like having a new instrument. It’s like Galileo with a telescope, looking at space. Looking at gene activity through this new lens will enable us to see biological phenomena differently,” Amaral added.
In their Nature Aging paper, Amaral and his colleagues wrote, “We hypothesize that aging is associated with a phenomenon that affects the transcriptome in a subtle but global manner that goes unnoticed when focusing on the changes in expression of individual genes.
“We show that transcript length alone explains most transcriptional changes observed with aging in mice and humans,” they continued.
Among the study’s key findings:

- In the tissues studied, older animals’ long transcripts were not as “abundant” as short transcripts, creating “imbalance.”
- That imbalance likely prevented earlier discovery of a “specific set of genes” changing with age.
- As animals aged, shorter genes “appeared to become more active” than longer genes.
- In humans, the top 5% of genes with the shortest transcripts “included many linked to shorter life spans such as those involved in maintaining the length of telomeres.”
- Conversely, the top 5% of genes with the longest transcripts were associated with long lives.
- The antiaging drugs rapamycin (aka, sirolimus) and resveratrol were linked to an increase in long-gene transcripts.
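The core signal the researchers describe, a negative association between transcript length and age-related expression change, can be illustrated with a short calculation. The sketch below is purely illustrative: the data values are invented, and the study’s actual methodology is far more involved. It computes a Pearson correlation between log transcript length and an old-versus-young expression change, where a negative value indicates the length-associated imbalance.

```python
import math
from statistics import fmean

def length_imbalance(transcript_lengths, age_fold_changes):
    # Pearson correlation between log10 transcript length and the
    # old-vs-young expression change. A negative value means longer
    # transcripts are relatively less abundant in older samples.
    xs = [math.log10(length) for length in transcript_lengths]
    ys = list(age_fold_changes)
    mx, my = fmean(xs), fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented toy data: longer transcripts show negative age-related changes.
lengths = [500, 1_000, 5_000, 20_000, 100_000]  # nucleotides
changes = [0.20, 0.10, 0.00, -0.10, -0.20]      # old vs. young expression change
print(length_imbalance(lengths, changes))       # strongly negative
```

A genome-wide version of this calculation, run per tissue, is one way such an imbalance could be quantified; the published analysis relies on more sophisticated statistics.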
“The changes in the activity of genes are very, very small, and these small changes involve thousands of genes. We found this change was consistent across different tissues and in different animals. We found it almost everywhere,” Stoeger told GEN.
In their paper, the Northwestern scientists noted implications for the creation of healthcare interventions.
“We believe that understanding the direction of causality between other age-dependent cellular and transcriptomic changes and length-associated transcriptome imbalance could open novel research directions for antiaging interventions,” they wrote.
While more research is needed to validate its findings, the Northwestern study is compelling as it addresses a new area of transcriptome knowledge. This is another example of researchers cracking open human and animal genomes and gaining new insights into the processes supporting life.
For clinical laboratories and pathologists, diagnostic testing to reverse aging and guide the effectiveness of therapies may one day be possible—kind of like science’s take on the mythical Fountain of Youth.
The Massachusetts General Hospital (MGH) technology is similar to the concept of a liquid biopsy, which uses blood specimens to identify cancer by capturing tumor cells circulating in the blood.
According to the American Cancer Society, lung cancer is responsible for approximately 25% of cancer deaths in the US and is the leading cause of cancer deaths in both men and women. The ACS estimates there will be about 236,740 new cases of lung cancer diagnosed in the US this year, and about 130,180 deaths due to the disease.
Early-stage lung cancer is typically asymptomatic, which leads to later-stage diagnoses and lower survival rates, largely due to a lack of early disease detection tools. The current method used to detect early lung cancer lesions is low-dose spiral CT imaging, which is costly and can be risky due to the radiation hazards of repeated screenings, the news release noted.
MGH’s newly developed diagnostic tool detects lung cancer from alterations in blood metabolites and may lead to clinical laboratory tests that could dramatically improve survival rates for this deadly disease, the MGH scientists noted in a news release.
“Our study demonstrates the potential for developing a sensitive screening tool for the early detection of lung cancer,” said Leo Cheng, PhD (above), in the news release. Cheng is Associate Professor of Radiology at Harvard Medical School and Associate Biophysicist in Radiology at Massachusetts General Hospital. “The predictive model we constructed can identify which people may be harboring lung cancer. Individuals with suspicious findings would then be referred for further evaluation by imaging tests, such as low-dose CT, for a definitive diagnosis,” he added. Oncologists may soon have a clinical laboratory test for screening patients with early-stage lung cancer. (Photo copyright: OCSMRM.)
Detecting Lung Cancer in Blood Metabolomic Profiles
The MGH scientists created their lung-cancer predictive model based on magnetic resonance spectroscopy, which can detect the presence of lung cancer from alterations in blood metabolites.
The researchers screened tens of thousands of stored blood specimens and found 25 patients who had been diagnosed with non-small-cell lung carcinoma (NSCLC), and who had blood specimens collected both at the time of their diagnosis and at least six months prior to the diagnosis. They then matched these individuals with 25 healthy controls.
The scientists first trained their statistical model to recognize lung cancer by measuring metabolomic profiles in blood samples obtained from the patients when they were first diagnosed. They then compared those samples to samples from the healthy controls, and validated the model against the samples that had been obtained from the same patients prior to their lung cancer diagnoses.

For those pre-diagnosis samples, the predictive model yielded values that fell between those of the healthy controls and those of the patients at the time of diagnosis.
“This was very encouraging, because screening for early disease should detect changes in blood metabolomic profiles that are intermediate between healthy and disease states,” Cheng noted.
The MGH scientists then tested their model with a different group of 54 patients who had been diagnosed with NSCLC using blood samples collected before their diagnosis. The second test confirmed the accuracy of their model.
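The train-then-validate workflow described above can be illustrated with a deliberately simplified model. In the sketch below, all names and values are invented, and MGH’s actual model is built on magnetic resonance spectroscopy data with far more sophisticated statistics. It scores a sample by its distance to a “healthy” centroid versus a “cancer” centroid, so a pre-diagnosis sample would be expected to land between the two groups.

```python
import math

def centroid(profiles):
    # Mean metabolite profile across a group of samples.
    return [sum(col) / len(col) for col in zip(*profiles)]

def cancer_score(sample, healthy_centroid, cancer_centroid):
    # 0.0 = at the healthy centroid, 1.0 = at the cancer centroid.
    # Intermediate values suggest a profile drifting toward disease.
    d_healthy = math.dist(sample, healthy_centroid)
    d_cancer = math.dist(sample, cancer_centroid)
    return d_healthy / (d_healthy + d_cancer)

# Invented two-metabolite profiles, for illustration only.
healthy = [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1]]
cancer = [[2.0, 1.9], [2.1, 2.0], [1.9, 2.1]]
hc, cc = centroid(healthy), centroid(cancer)

pre_diagnosis = [1.5, 1.5]  # hypothetical sample drawn months before diagnosis
print(cancer_score(pre_diagnosis, hc, cc))  # 0.5: midway between the groups
```

Training on diagnosis-time samples and then scoring earlier draws from the same patients, as the MGH team did, tests whether the model detects the intermediate state Cheng describes.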
Predicting Five-Year Survival Rates for Lung Cancer Patients
Values the MGH predictive model derived from blood samples obtained prior to a lung cancer diagnosis could also enable oncologists to predict five-year survival rates for patients. This discovery could prove useful in determining clinical strategies and personalized treatment decisions.
The researchers plan to analyze metabolomic profiles against the clinical characteristics of lung cancer to understand the entire metabolic spectrum of the disease. They hope to create similar models for other illnesses and have already built a model that can distinguish aggressive prostate cancer by measuring the metabolomic profiles of more than 400 patients with that disease.
In addition, they are working on a similar model to screen for Alzheimer’s disease using blood samples and cerebrospinal fluid.
More research and clinical studies are needed to validate the use of blood metabolomics models as early screening tools in clinical practice. However, this technology might provide pathologists and clinical laboratories with diagnostic tests for screening early-stage lung cancer that could save thousands of lives each year.
Genomic sequencing continues to benefit patients through precision medicine, clinical laboratory testing, and pharmacogenomic therapies
EDITOR’S UPDATE—Jan. 26, 2022: Since publication of this news briefing, officials from Genomics England contacted us to explain the following:
The “five million genome sequences” was an aspirational goal mentioned by then Secretary of State for Health and Social Care Matt Hancock, MP, in an October 2, 2018, press release issued by Genomics England.
As of this date, a spokesman for Genomics England confirmed to Dark Daily that, with the initial goal of 100,000 genomes now attained, the immediate goal is to sequence 500,000 genomes.
This goal was confirmed in a tweet posted by Chris Wigley, CEO at Genomics England.
In accordance with this updated input, we have revised the original headline and information in this news briefing that follows.
What better proof of progress in whole human genome screening than the announcement that the United Kingdom’s 100,000 Genomes Project has not only achieved that milestone, but will now increase the goal to 500,000 whole human genomes? This should be welcome news to clinical laboratory managers, as it means their labs will be positioned as the first-line provider of genetic data in support of clinical care.
Many clinical pathologists here in the United States are aware of the 100,000 Genomes Project, established by the National Health Service (NHS) in England in 2012. Genomics England’s new goal to sequence 500,000 whole human genomes is to pioneer a “lasting legacy for patients by introducing genomic sequencing into the wider healthcare system,” according to Technology Networks.
The importance of personalized medicine, and of the power of precise, accurate diagnoses, cannot be overstated. This announcement by Genomics England will be of interest to diagnosticians worldwide, especially doctors who diagnose and treat patients with chronic and life-threatening diseases.
Building a Vast Genomics Infrastructure
Genetic sequencing launched the era of precision medicine in healthcare. Through genomics, drug therapies and personalized treatments were developed that improved outcomes for many patients, especially those suffering with cancer and other chronic diseases. And the role of genomics in healthcare continues to expand, as Dark Daily has covered in numerous ebriefings.
Genomics England, which is wholly owned by the Department of Health and Social Care in the United Kingdom, was formed in 2012 with the goal of sequencing 100,000 whole genomes of patients enrolled in the UK National Health Service. That goal was met in 2018, and now the NHS aspires to sequence 500,000 genomes.
“The last 10 years have been really exciting, as we have seen genetic data transition from being something that is useful in a small number of contexts with highly targeted tests, towards being a central part of mainstream healthcare settings,” Richard Scott, MD, PhD (above), Chief Medical Officer at Genomics England, told Technology Networks. Much of the progress has found its way into clinical laboratory testing and precision medicine diagnostics. (Photo copyright: Genomics England.)
Genomics England’s initial goals included:
To create an ethical program based on consent,
To set up a genomic medicine service within the NHS to benefit patients,
To make new discoveries and gain insights into the use of genomics, and
To begin the development of a UK genomics industry.
To gain the greatest benefit from whole genome sequencing (WGS), a substantial amount of data infrastructure must exist. “The amount of data generated by WGS is quite large and you really need a system that can process the data well to achieve that vision,” said Richard Scott, MD, PhD, Chief Medical Officer at Genomics England.
In early 2020, Weka, developer of the WekaFS, a fully parallel and distributed file system, announced that it would be working with Genomics England on managing the enormous amount of genomic data. When Genomics England reached 100,000 sequenced genomes, it had already gathered 21 petabytes of data. The organization expects to have 140 petabytes by 2023, notes a Weka case study.
Putting Genomics England’s WGS Project into Action
WGS has significantly impacted the diagnosis of rare diseases. For example, Genomics England has contributed to projects that look at tuberculosis genomes to understand why the disease is sometimes resistant to certain medications. Genomic sequencing also played an enormous role in fighting the COVID-19 pandemic.
Scott notes that COVID-19 provides an example of how sequencing can be used to deliver care. “We can see genomic influences on the risk of needing critical care in COVID-19 patients and in how their immune system is behaving. Looking at this data alongside other omics information, such as the expression of different protein levels, helps us to understand the disease process better,” he said.
What’s Next for Genomics Sequencing?
As the research continues and scientists begin to better understand the information revealed by sequencing, other areas of scientific study like proteomics and metabolomics are becoming more important.
“There is real potential for using multiple strands of data alongside each other, both for discovery—helping us to understand new things about diseases and how [they] affect the body—but also in terms of live healthcare,” Scott said.
Along with expanding the target of Genomics England to 500,000 genomes sequenced, the UK has published a National Genomic Strategy named Genome UK. This plan describes how the research into genomics will be used to benefit patients. “Our vision is to create the most advanced genomic healthcare ecosystem in the world, where government, the NHS, research and technology communities work together to embed the latest advances in patient care,” according to the Genome UK website.
Clinical laboratories professionals with an understanding of diagnostics will recognize WGS’ impact on the healthcare industry. By following genomic sequencing initiatives, such as those coming from Genomics England, pathologists can keep their labs ready to take advantage of new discoveries and insights that will improve outcomes for patients.
Newly combined digital pathology, artificial intelligence (AI), and omics technologies are providing anatomic pathologists and medical laboratory scientists with powerful diagnostic tools
Add “spatial transcriptomics” to the growing list of “omics” that have the potential to deliver biomarkers which can be used for earlier and more accurate diagnoses of diseases and health conditions. As with other types of omics, spatial transcriptomics might be a new tool for surgical pathologists once further studies support its use in clinical care.
Among this spectrum of omics is spatial transcriptomics, or ST for short. Spatial transcriptomics is a groundbreaking and powerful molecular profiling method used to measure all gene activity within a tissue sample. The technology is already leading to discoveries that are helping researchers gain valuable information about neurological diseases and breast cancer.
Marriage of Genetic Imaging and Sequencing
Spatial transcriptomics is a term used to describe a variety of methods designed to assign cell types that have been isolated and identified by messenger RNA (mRNA) to their locations in a histological section. The technology can determine subcellular localization of mRNA molecules and can quantify gene expression within anatomic pathology samples.
In “Spatial: The Next Omics Frontier,” Genetic Engineering and Biotechnology News (GEN) wrote, “Spatial transcriptomics gives a rich, spatial context to gene expression. By marrying imaging and sequencing, spatial transcriptomics can map where particular transcripts exist on the tissue, indicating where particular genes are expressed.”
In an interview with Technology Networks, George Emanuel, PhD, co-founder of life-science genomics company Vizgen, said, “Spatial transcriptomic profiling provides the genomic information of single cells as they are intricately spatially organized within their native tissue environment.
“With techniques such as single-cell sequencing, researchers can learn about cell type composition; however, these techniques isolate individual cells in droplets and do not preserve the tissue structure that is a fundamental component of every biological organism,” he added.
“Direct spatial profiling the cellular composition of the tissue allows you to better understand why certain cell types are observed there and how variations in cell state might be a consequence of the unique microenvironment within the tissue,” he continued. “In this way, spatial transcriptomics allows us to measure the complexity of biological systems along the axes that are most relevant to their function.”
“Although spatial genomics is a nascent field, we are already seeing broad interest among the community and excitement across a range of questions, all the way from plant biology to improving our understanding of the complex interactions of the tumor microenvironment,” George Emanuel, PhD (above), told Technology Networks. Oncologists, anatomic pathologists, and medical laboratory scientists may soon see diagnostics that take advantage of spatial genomics technologies. (Photo copyright: Vizgen.)
According to 10x Genomics, “spatial transcriptomics utilizes spotted arrays of specialized mRNA-capturing probes on the surface of glass slides. Each spot contains capture probes with a spatial barcode unique to that spot.
“When tissue is attached to the slide, the capture probes bind RNA from the adjacent point in the tissue. A reverse transcription reaction, while the tissue is still in place, generates a cDNA [complementary DNA] library that incorporates the spatial barcodes and preserves spatial information.
“Each spot contains approximately 200 million capture probes and all of the probes in an individual spot share a barcode that is specific to that spot.”
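The barcoding scheme 10x Genomics describes amounts to a lookup: each read carries a spot barcode that maps back to a position on the slide. The sketch below is a toy illustration; the barcodes, coordinates, and gene names are all invented, and real pipelines must also handle sequencing errors, unique molecular identifiers, and millions of reads.

```python
# Invented whitelist of spot barcodes -> (x, y) slide coordinates.
SPOT_COORDS = {
    "AACGT": (0, 0),
    "CCTAG": (0, 1),
    "GGATC": (1, 0),
}

def assign_reads(reads, barcode_len=5):
    # Group transcript reads by the spatial barcode they begin with,
    # producing per-spot expression counts: (x, y) -> {gene: count}.
    counts = {}
    for sequence, gene in reads:
        barcode = sequence[:barcode_len]
        xy = SPOT_COORDS.get(barcode)
        if xy is None:  # barcode not on this slide's whitelist
            continue
        spot = counts.setdefault(xy, {})
        spot[gene] = spot.get(gene, 0) + 1
    return counts

reads = [("AACGTTTTTT", "TP53"), ("AACGTAAAAA", "TP53"), ("GGATCCCCCC", "GAPDH")]
print(assign_reads(reads))  # {(0, 0): {'TP53': 2}, (1, 0): {'GAPDH': 1}}
```

Because the barcode preserves each transcript’s point of origin, the resulting counts can be plotted directly onto the tissue image, which is what gives the method its spatial resolution.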
“The highly multiplexed transcriptomic readout reveals the complexity that arises from the very large number of genes in the genome, while high spatial resolution captures the exact locations where each transcript is being expressed,” Emanuel told Technology Networks.
Spatial Transcriptomics for Breast Cancer and Neurological Diagnostics
In a paper on the use of ST in cancer research, the authors wrote, “we envision that in the coming years we will see simplification, further standardization, and reduced pricing for the ST protocol leading to extensive ST sequencing of samples of various cancer types.”
Spatial transcriptomics is also being used to research neurological conditions and neurodegenerative diseases. ST has proven to be an effective tool for hunting marker genes for these conditions, as well as for helping medical professionals study drug therapies for the brain.
“You can actually map out where the target is in the brain, for example, and not only the approximate location inside the organ, but also in what type of cells,” Malte Kühnemund, PhD, Director of Research and Development at 10x Genomics, told Labiotech.eu. “You actually now know what type of cells you are targeting. That’s completely new information for them and it might help them to understand side effects and so on.”
The field of spatial transcriptomics is rapidly evolving as it branches out into more areas of healthcare. New discoveries within ST methodologies are making it possible to combine the technique with other technologies, such as artificial intelligence (AI), which could lead to powerful new ways for oncologists and anatomic pathologists to diagnose disease.
“I think it’s going to be tricky for pathologists to look at that data,” Kühnemund said. “I think this will go hand in hand with the digital pathology revolution where computers are doing the analysis and they spit out an answer. That’s a lot more precise than what any doctor could possibly do.”
Spatial transcriptomics certainly is a new and innovative way to look at tissue biology. However, the technology is still in its early stages and more research is needed to validate its development and results.
Nevertheless, this is an opportunity for companies developing artificial intelligence tools for analyzing digital pathology images to investigate how their AI technologies might be used with spatial transcriptomics to give anatomic pathologists a new and useful diagnostic tool.
Experts list the top challenges facing widespread adoption of proteomics in the medical laboratory industry
Year-by-year, clinical laboratories find new ways to use mass spectrometry to analyze clinical specimens, producing results that may be more precise than test results produced by other methodologies. This is particularly true in the field of proteomics.
However, though mass spectrometry is highly accurate and fast, taking only minutes to convert a specimen into a result, it is not fully automated and requires skilled technologists to operate the instruments.

Thus, although the science of proteomics is advancing quickly, the average pathology laboratory isn’t likely to be using mass spectrometry tools any time soon. Nevertheless, medical laboratory scientists are keenly interested in adapting mass spectrometry to medical lab test technology for a growing number of assays.
Molly Campbell, Science Writer and Editor in Genomics, Proteomics, Metabolomics, and Biopharma at Technology Networks, asked proteomics experts “what, in their opinion, are the greatest challenges currently existing in proteomics, and how can we look to overcome them?” Here’s a synopsis of their answers:
Lack of High Throughput Impacts Commercialization
Proteomics isn’t as efficient as it needs to be to be adopted at the commercial level. It’s not as efficient as its cousin genomics. For it to become sufficiently efficient, manufacturers must be involved.
John Yates III, PhD, Professor, Department of Molecular Medicine at Scripps Research California campus, told Technology Networks, “One of the complaints from funding agencies is that you can sequence literally thousands of genomes very quickly, but you can’t do the same in proteomics. There’s a push to try to increase the throughput of proteomics so that we are more compatible with genomics.”
For that to happen, Yates says manufacturers need to continue advancing the technology. Much of the research is happening at universities and in the academic realm. But with commercialization comes standardization and quality control.
“It’s always exciting when you go to ASMS [the conference for the American Society for Mass Spectrometry] to see what instruments or technologies are going to be introduced by manufacturers,” Yates said.
There are signs that commercialization isn’t far off. SomaLogic, a privately-owned American protein biomarker discovery and clinical diagnostics company located in Boulder, Colo., has reached the commercialization stage for a proteomics assay platform called SomaScan. “We’ll be able to supplant, in some cases, expensive diagnostic modalities simply from a blood test,” Roy Smythe, MD, CEO of SomaLogic, told Techonomy.
The graphic above illustrates the progression mass spectrometry took during its development, starting with small proteins (left) to supramolecular complexes of intact virus particles (center) and bacteriophages (right). Because of these developments, today’s medical laboratories have more assays that utilize mass spectrometry. (Photo copyright: Technology Networks/Heck laboratory, Utrecht University, the Netherlands.)
Achieving the Necessary Technical Skillset
One of the main reasons mass spectrometry is not more widely used is that it requires technical skill that not many professionals possess.

“For a long time, MS-based proteomic analyses were technically demanding at various levels, including sample processing, separation science, MS and the analysis of the spectra with respect to sequence, abundance and modification-states of peptides and proteins and false discovery rate (FDR) considerations,” Ruedi Aebersold, PhD, Professor of Systems Biology at the Institute of Molecular Systems Biology (IMSB) at ETH Zurich, told Technology Networks.
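The false discovery rate (FDR) considerations Aebersold mentions are commonly handled with a target-decoy search, in which spectra are also matched against a reversed “decoy” protein database and the decoy hit rate estimates the error rate. The sketch below is a minimal illustration with invented scores, not a substitute for a real proteomics pipeline.

```python
def target_decoy_fdr(psms, score_threshold):
    # Estimate FDR among peptide-spectrum matches (PSMs) above a score
    # threshold: FDR ~= decoy hits / target hits, since decoy matches
    # can only be false positives.
    targets = sum(1 for score, is_decoy in psms if score >= score_threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms if score >= score_threshold and is_decoy)
    return decoys / targets if targets else 0.0

# Invented (score, is_decoy) pairs for illustration.
psms = [(90, False), (85, False), (80, True), (75, False), (70, True)]
print(target_decoy_fdr(psms, 74))  # 1 decoy / 3 targets
```

In practice, analysts sweep the threshold until the estimated FDR falls below a chosen level (often 1%), and report only the matches above it.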
Aebersold goes on to say that he thinks this specific challenge is nearing resolution. He says that, by removing the problem created by the need for technical skill, those who study proteomics will be able to “more strongly focus on creating interesting new biological or clinical research questions and experimental design.”
Yates agrees. In a paper titled, “Recent Technical Advances in Proteomics,” published in F1000 Research, a peer-reviewed open research publishing platform for scientists, scholars, and clinicians, he wrote, “Mass spectrometry is one of the key technologies of proteomics, and over the last decade important technical advances in mass spectrometry have driven an increased capability of proteomic discovery. In addition, new methods to capture important biological information have been developed to take advantage of improving proteomic tools.”
No High-Profile Projects to Stimulate Interest
Genomics had the Human Genome Project (HGP), which sparked public interest and attracted significant funding. One of the big challenges facing proteomics is that there are no similarly big, imagination-stimulating projects. The work is important and will result in advances that will be well-received; however, the field itself is complex and difficult to explain.
Emanuel Petricoin, PhD, is a professor and co-director of the Center for Applied Proteomics and Molecular Medicine at George Mason University. He told Technology Networks, “the field itself hasn’t yet identified or grabbed onto a specific ‘moon-shot’ project. For example, there will be no equivalent to the human genome project, the proteomics field just doesn’t have that.”
He added, “The equipment needs to be in the background and what you are doing with it needs to be in the foreground, as is what happened in the genomics space. If it’s just about the machinery, then proteomics will always be a ‘poor step-child’ to genomics.”
Democratizing Proteomics
Alexander Makarov, PhD, is Director of Research in Life Sciences Mass Spectrometry (MS) at Thermo Fisher Scientific. He told Technology Networks that as mass spectrometry grew into the industry we have today, “each new development required larger and larger research and development teams to match the increasing complexity of instruments and the skyrocketing importance of software at all levels, from firmware to application. All this extends the cycle time of each innovation and also forces [researchers] to concentrate on solutions that address the most pressing needs of the scientific community.”
Makarov describes this change as “the increasing democratization of MS,” and says that it “brings with it new requirements for instruments, such as far greater robustness and ease-of-use, which need to be balanced against some aspects of performance.”
One example of the increasing democratization of MS may be the several public proteomic datasets available to scientists. In European Pharmaceutical Review, Juan Antonio Vizcaíno, PhD, Proteomics Team Leader at the European Bioinformatics Institute (EMBL-EBI), wrote, “These datasets are increasingly reused for multiple applications, which contribute to improving our understanding of cell biology through proteomics data.”
Sparse Data and Difficulty Measuring It
Evangelia Petsalaki, PhD, Group Leader at EMBL-EBI, told Technology Networks there are two related challenges in handling proteomic data. First, the data is “very sparse,” and second, “[researchers] have trouble measuring low abundance proteins.”
Petsalaki notes, “every time we take a measurement, we sample different parts of the proteome or phosphoproteome and we are usually missing low abundance players that are often the most important ones, such as transcription factors.” She added that in her group they take steps to mitigate those problems.
“However, with the advances in MS technologies developed by many companies and groups around the world … and other emerging technologies that promise to allow ‘sequencing’ proteomes, analogous to genomes … I expect that these will not be issues for very long.”
So, what does all this mean for clinical laboratories? At the current pace of development, it’s likely that assays based on proteomics will become more common in the near future. And, if throughput and commercialization ever match those of genomics, mass spectrometry and other proteomics tools could become a standard technology for pathology laboratories.