Does Aspirin Lower Colorectal Cancer Risk in Older Adults? It Depends on When They Start. (Medicine)

Key Takeaways

  • Current guidelines support routine use of aspirin for prevention of cardiovascular events and colorectal cancer in many adults aged 50 to 59.
  • There is controversy over whether aspirin is beneficial for older adults, especially those over age 70.
  • This study looked at the impact of regular aspirin use in adults aged 70 and older.

As people get older, if they are not already taking aspirin, a discussion is warranted about whether to start aspirin after weighing the benefits against the risks.

— Andrew T. Chan MD, MPH
Clinical and Translational Epidemiology Unit, Massachusetts General Hospital

Regular aspirin use has clear benefits in reducing colorectal cancer incidence among middle-aged adults, but it also carries risks, such as gastrointestinal bleeding. So when should adults start taking regular aspirin, and for how long?

There is substantial evidence that a daily aspirin can reduce the risk of colorectal cancer in adults up to age 70. But until now there was little evidence about whether older adults should start taking aspirin.

A team of scientists set out to study this question. They were led by Andrew T. Chan, MD, MPH, a gastroenterologist and chief of the Clinical and Translational Epidemiology Unit at Massachusetts General Hospital (MGH). Their report appears in JAMA Oncology.

The researchers carried out a pooled analysis of two large U.S. cohort studies: The Nurses’ Health Study (January 1980 – June 2014) and the Health Professionals Follow-up Study (January 1986 – January 2014). These two studies contributed data on more than 94,500 participants’ use of aspirin over about 35 years, offering a unique opportunity to understand the effect of aspirin use across the lifespan on cancer risk.

The researchers found that regular aspirin use was linked to lower colorectal cancer risk among people aged 70 or older. However, this advantage was significant only among people who started taking aspirin before the age of 70. People who started regular aspirin use at the age of 70 or older did not seem to reap any benefit.

“There is considerable evidence that aspirin can prevent colorectal cancer in adults between 50 and 70 years old,” says Chan. “But it has not been clear whether the effect is similar in older adults.”

Aspirin is considered the most well-established agent that protects against colorectal cancer (CRC). It is currently recommended by the U.S. Preventive Services Task Force for people aged 50-59 years with specific cardiovascular risk profiles because of its protective effect against heart disease.

However, the recent Aspirin in Reducing Events in the Elderly (ASPREE) trial reported that participants who took a daily low dose of aspirin (100 mg) after age 70 for about five years actually had an unexpected 30% higher risk of death from cancer. The vast majority of the ASPREE participants (89%) had never taken aspirin regularly before joining the study. Chan’s team also recently reported that ASPREE participants on aspirin did not experience an increase or decrease in risk of developing a cancer despite having an increase in risk of death from cancer.

That led to the questions: Does regular aspirin benefit or harm people older than 70, and does it matter when aspirin is started?

The current study confirms that initiating aspirin at an older age was not associated with a lower risk of colorectal cancer. Importantly, however, there is a potential benefit of continuing aspirin if it is started at an earlier age. These results, the researchers say, “strongly suggest that there is a potential biological difference in the effect of aspirin at older ages which requires further research.”

Adds Chan: “As people get older, if they are not already taking aspirin, a discussion is warranted about whether to start aspirin after weighing the benefits against the risks.”

Reference: Chuan-Guo Guo, Wenjie Ma, David A. Drew, et al., “Aspirin Use and Risk of Colorectal Cancer Among Older Adults”, JAMA Oncol. Published online January 21, 2021. doi:10.1001/jamaoncol.2020.7338

Provided by Massachusetts General Hospital

About the Massachusetts General Hospital

Massachusetts General Hospital, founded in 1811, is the original and largest teaching hospital of Harvard Medical School. The Mass General Research Institute conducts the largest hospital-based research program in the nation, with annual research operations of more than $1 billion, and comprises more than 9,500 researchers working across more than 30 institutes, centers and departments. In August 2020, Mass General was named #6 on the U.S. News & World Report list of “America’s Best Hospitals.”

The CNIO Participates in a Study that Defines the Most Important Genes that Increase the Risk of Breast Cancer (Medicine)


The study will help to improve prevention programmes since it “defines the most useful genes” for breast cancer risk prediction tests, the authors write.

– This is the most ambitious international research ever conducted on the inheritance of breast cancer, based on the analysis of 113,000 samples.

The study will be published in the ‘New England Journal of Medicine’ (‘NEJM’) and is authored by 250 researchers from dozens of institutions in more than 25 countries.

Breast cancer is one of the most common cancers today. One in eight women will develop it in their lifetime.


Genetic inheritance affects the likelihood of developing breast cancer. Some genes are already known to increase cancer risk; other genes are suspected to be involved, but not to what extent. It is crucial to clarify this issue to improve prevention since it opens the way to more personalised follow-up and screening programs. A large international consortium, which includes the Spanish National Cancer Research Centre (CNIO), has studied 34 putative susceptibility genes on samples from 113,000 female breast cancer cases and controls, and its results confirm the importance of nine of them.

From left to right: Belén Herráez, Anna González-Neira, Rosario Alonso, Nuria Álvarez, Ana Osorio, Rocío Núñez, Guillermo Pita, and Javier Benítez. /A. Garrido, CNIO

The study “defines the genes most clinically useful for inclusion on panels for breast cancer risk prediction (…), to guide genetic counselling,” the authors write in the New England Journal of Medicine (NEJM).

This is the most ambitious study carried out to date to shed light on the role of heredity in breast cancer, one of the most common cancers today – one out of every eight women will have it at some point in their lives.

250 researchers from dozens of institutions in more than 25 countries participated in the genetic analysis. One-third of the samples were analysed at the CNIO, with the participation of Javier Benítez, Anna González-Neira and Ana Osorio. Groups from seven other Spanish institutions also participated in the study.

Osorio, from the CNIO Human Genetics Group, stresses the importance of the information provided by this study when monitoring breast cancer patients and their families. “Genetic tests are already done on people with a family history of the disease, but in those tests we can only analyse genes of which we are certain that they affect the risk. Now we have more information, and we can improve the genetic counselling of patients and their families,” the CNIO researcher says.

Low- and moderate-risk genes

It has been known for some time that the risk of developing breast cancer depends partly on genetic inheritance, but determining exactly which genes increase this risk, and how much, remains a major challenge.

The genes that confer high risk when mutated, BRCA1 and BRCA2, were already identified in the mid-1990s. Having mutations in these genes confers a cumulative risk of around 70% of developing breast cancer by the age of 80, and a risk of 40% and 20% of developing ovarian cancer for carriers of mutations in BRCA1 and BRCA2, respectively. It was this strong effect that made it possible to identify these genes in families with a high incidence of cancer.

Today, genetic diagnosis alerts many patients’ family members, who can then act at very early stages of cancer or even prevent its appearance. But the BRCA genes explain only a small part of all cases. In the vast majority of cases, genes are involved that confer a lower risk and that may interact with each other or with other genetic and environmental factors, which can modify the risk.

In recent years, a number of studies have identified dozens of candidate genes that increase the risk of breast cancer to some degree, but these studies involved relatively few patients and their results were not conclusive. The aim of the study now published was to improve this knowledge by determining precisely which genes are involved, and to what extent they affect the risk of developing each tumour subtype.

“Genetic testing for breast cancer susceptibility is widely used, but for many genes the evidence for association with breast cancer is weak,” the authors write in NEJM, adding that the risk estimates are imprecise and that cancer subtype-specific risk estimates are lacking altogether.

“The most clinically useful genes”

The study is the result of the European project BRIDGES (Breast Cancer Risk after Diagnostic Gene Sequencing), in which the CNIO participates. The study was based on the analysis of 34 known or suspected breast cancer susceptibility genes in DNA samples from 60,400 women who had developed breast cancer and 53,400 healthy women.

The results pinpoint nine genes for which there is solid evidence of involvement in the disease: ATM, BRCA1, BRCA2, CHEK2, PALB2, BARD1, RAD51C, RAD51D and TP53. For some of these genes this was already known, but for others, such as RAD51C, RAD51D and BARD1, the involvement was not so well established. For all nine genes, more precise risk estimates can now be calculated, tailored to each tumour subtype.

On the other hand, the study shows that about fifteen of the genes that have been used so far in some tests “are not indicative of an increased risk” for breast cancer and should therefore not be taken into account, at least at this time, in risk estimates.

However, the authors remind us that the probability of developing breast cancer is not determined by genes alone. Other risk factors such as age, hormonal history and environmental factors also play a role, and these in turn are influenced by the genetic background.

“In fact,” points out González-Neira, Head of the Human Genotyping – CEGEN Unit at the CNIO, “work is already underway on mathematical models that integrate current knowledge about all these factors and their interactions, making it possible to provide a good estimate of the adjusted risk for each woman and thus to individualise clinical management”.

The final objective is to improve prevention with much more personalised screening programmes than those currently available. Implementation of such precision medicine protocols in breast cancer will improve early diagnosis and reduce morbidity and mortality rates for this disease.

This project is funded by the European Union’s Horizon 2020 Research and Innovation Programme (BRIDGES), the Wellcome Trust, and Cancer Research UK. In the CNIO group, by the Spanish Ministry of Science and Innovation and the National Institute of Health Carlos III.

Reference: Breast cancer risk genes: association analysis in more than 113,000 women. Leila Dorling et al (NEJM, 2020). DOI: 10.1056/NEJMoa1913948

Provided by CNIO

Size of Connections between Nerve Cells Determines Signaling Strength (Neuroscience)

Nerve cells communicate with one another via synapses. Neuroscientists at the University of Zurich and ETH Zurich have now found that these connections seem to be much more powerful than previously thought. The larger the synapse, the stronger the signal it transmits. These findings will enable a better understanding of how the brain functions and how neurological disorders arise.

The size of synapses directly determines the strength of their signal transmission – illustrated as three nerve cell connections of different size and brightness. (Image: Kristian Herrera and authors)

The neocortex is the part of the brain that humans use to process sensory impressions, store memories, give instructions to the muscles, and plan for the future. These computational processes are possible because each nerve cell is a highly complex miniature computer that communicates with around 10,000 other neurons. This communication happens via special connections called synapses.

The bigger the synapse, the stronger its signal

Researchers in Kevan Martin’s laboratory at the Institute of Neuroinformatics at the University of Zurich (UZH) and ETH Zurich have now shown for the first time that the size of synapses determines the strength of their information transmission. “Larger synapses lead to stronger electrical impulses. Finding this relationship closes a key knowledge gap in neuroscience,” explains Martin. “The finding is also critical for advancing our understanding of how information flows through our brain’s circuits, and therefore how the brain operates.”

Reconstructing the connections between nerve cells of the neocortex

First, the neuroscientists set about measuring the strength of the synaptic currents between two connected nerve cells. To do this, they prepared thin sections of a mouse brain and, under a microscope, inserted glass microelectrodes into two neighboring nerve cells of the neocortex. This enabled the researchers to artificially activate one of the nerve cells and at the same time measure the strength of the resulting synaptic impulse in the other cell. They also injected a dye into the two neurons to reconstruct their branched-out cellular processes in three dimensions under a light microscope.

Synapse size correlates with signaling strength

Since synapses are so tiny, the scientists used the high resolution of an electron microscope to be able to reliably identify and precisely measure the neuronal contact points. First, in their light microscope reconstructions, they marked all points of contact between the cell processes of the activated neuron that forwarded the signal and the cell processes of the neuron that received the synaptic impulse. Then, they identified all synapses between the two nerve cells under the electron microscope. They correlated the size of these synapses with the synaptic impulses they had measured previously. “We discovered that the strength of the synaptic impulse correlates directly with the size and form of the synapse,” says lead author Gregor Schuhknecht, formerly a PhD student in Kevan Martin’s team.
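The relationship described above is, at its core, a correlation between two paired measurements per connection: synapse size and the amplitude of the evoked response. As a rough illustration (the numbers, variable names and the plain Pearson statistic below are assumptions for the sketch, not the study’s actual measurements or analysis code), such a correlation could be computed like this:

```python
# Illustrative sketch: correlating synapse size with synaptic response
# amplitude. Data values are invented for the example.
import statistics


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical paired measurements: synapse contact area (in square microns)
# and the amplitude of the measured synaptic impulse (mV) per connection.
sizes = [0.05, 0.08, 0.12, 0.15, 0.20, 0.25]
amplitudes = [0.3, 0.5, 0.7, 0.9, 1.2, 1.4]

r = pearson(sizes, amplitudes)
print(f"r = {r:.3f}")  # a value near 1 indicates a strong linear relationship
```

A coefficient close to 1, as in this toy data, is what “correlates directly” means in practice: knowing the size lets you predict the impulse strength.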

Gaining a deeper understanding of the brain’s wiring diagrams

This correlation can now be used to estimate the strength of information transmission on the basis of the measured size of the synapse. “This could allow scientists to use electron microscopy to precisely map the wiring diagrams of the neocortex and then simulate and interpret the flow of information in these wiring diagrams in the computer,” explains Schuhknecht. Such studies will enable a better understanding of how the brain functions under normal circumstances and how “wiring defects” can lead to neurodevelopmental disorders.

More computing power and storage capacity than thought

The team was also able to resolve another longstanding puzzle in neuroscience. Until now, the conventional doctrine had been that only a single neurotransmitter-filled packet (a so-called vesicle) is released at a synapse upon activation. The researchers were able to use a novel mathematical analysis to prove that each synapse in fact has several sites that can release packets of neurotransmitter simultaneously. “This means that synapses are much more complex and can regulate their signal strength more dynamically than previously thought. The computational power and storage capacity of the entire neocortex therefore seems to be much greater than was previously believed”, says Kevan Martin.

Reference: Simone Holler, German Köstinger, Kevan A. C. Martin, Gregor F. P. Schuhknecht, Ken J. Stratford, “Structure and function of a neocortical synapse”, Nature, 13 January 2021. DOI: 10.1038/s41586-020-03134-2

Provided by University of Zurich

Strange Colon Discovery Explains Racial Disparities in Colorectal Cancer (Medicine)

One side of the colon ages faster than the other, scientists reveal.

The colons of African-Americans and people of European descent age differently, new research reveals, helping explain racial disparities in colorectal cancer – the cancer that killed beloved “Black Panther” star Chadwick Boseman at only 43.

Li Li. © UVA Health

Scientists led by UVA Health’s Li Li, MD, PhD; Graham Casey, PhD; and Matt Devall, PhD, of the Center for Public Health Genomics, found that one side of the colon ages biologically faster than the other in both African-Americans and people of European descent. In African-Americans, however, the right side ages significantly faster, explaining why African-Americans are more likely to develop cancerous lesions on the right side and why they are more likely to suffer colorectal cancer at a younger age, the researchers say.

“Our discovery provides novel insight of the mechanistic underpinning for the observed racial disparities in age-of-onset and anatomical distribution of colon neoplasia,” said Li, the leader of the Cancer Control and Population Health program at UVA Cancer Center. “Side-specific biological aging of the colon might emerge as a novel biomarker to guide the development of personalized prevention and intervention strategies.”


African-Americans are disproportionately affected by colorectal cancer. The American Cancer Society reports that African-Americans are 20% more likely to develop colorectal cancer and 40% more likely to die from it. Overall colorectal cancer rates have declined in America in recent years, but African-Americans have not seen the same decreases as people of European descent. And even as the overall rates have dropped, the rate among younger people has gone up.

While doctors have long appreciated these disparities, they haven’t really understood the causes. The new study helps answer those questions. It’s the first to show that the right and left side of the colon actually age differently.

The researchers made this determination by looking at the DNA in colon tissue, and the “epigenetic” changes that come with age. These epigenetic changes are not alterations to the genes but changes that affect how the genes work and how well they can do their jobs.

The scientists found that the right side of the colon in most African-Americans had suffered a unique pattern of “hypermethylation,” affecting gene expression. It was, in essence, like the right side was old beyond its years. This, the researchers believe, could contribute to African-Americans’ increased cancer risk and could explain why they are more likely to develop cancerous lesions on the right side.

The research could also explain why younger people of European descent are more likely to develop lesions on the left side – the side that tends to age faster in that group.

“These findings highlight the importance of colon sidedness to biology of colorectal cancer,” Casey said. “The fact that the colon biology of people of African and European ancestry differ further highlights the critical importance of more research involving participation of people of African descent.”

Li and his team say further investigation of what they have found could lead to better ways to treat and prevent colorectal cancers.

“We are working to validate our discovery in independent patient cohorts,” Li said. “Our discovery is a step forward in our effort to prevent colorectal cancer and reduce racial disparities in this deadly disease.”


The researchers have published their findings in the Journal of the National Cancer Institute. The research team consisted of Matthew Devall, Xiangqing Sun, Fangcheng Yuan, Gregory S. Cooper, Joseph Willis, Daniel J. Weisenberger, Graham Casey and Li Li.

The work was supported by the National Cancer Institute Cancer Disparities SPORE Planning Grant (P20 CA233216), Case Comprehensive Cancer Center GI SPORE (P50 CA150964), National Cancer Institute (CA143237) and a pilot grant from the UVA Cancer Center (P30CA044579).

Reference: Matthew Devall, Xiangqing Sun, Fangcheng Yuan, Gregory S. Cooper, Joseph Willis, Daniel J. Weisenberger, Graham Casey, Li Li, “Racial Disparities in Epigenetic Aging of the Right vs Left Colon”, JNCI: Journal of the National Cancer Institute, djaa206.

Provided by University of Virginia Health System

Study Suggests that Gut Fungi are not Associated with Parkinson’s Disease (Psychiatry)

Gut fungi appear to play no role, even though the bacterial microbiome is strongly connected to PD and gut dysfunction is nearly universal in this disease.

The bacterial gut microbiome is strongly associated with Parkinson’s disease (PD), but no studies had previously investigated the role of fungi in the gut. In this novel study, published in the Journal of Parkinson’s Disease, a team of investigators at the University of British Columbia examined whether the fungal constituents of the gut microbiome are associated with PD. Their research indicated that gut fungi are not a contributing factor, thereby refuting the need for any potential anti-fungal treatments of the gut in PD patients.

Stacked bar plot of the top nine most abundant fungal genera in controls and PD patients. Credit: Journal of Parkinson’s Disease. © Parkinson Canada/Parkinson Society British Columbia

“Several studies conducted since 2014 have characterized changes in the gut microbiome,” explained lead investigator Silke Appel-Cresswell, MD, Pacific Parkinson’s Research Centre and Djavad Mowafaghian Centre for Brain Health and Division of Neurology, Faculty of Medicine, University of British Columbia. “Most existing studies, however, employ bacterial-specific sequencing. To date, a potential role for the fungal constituents of the gut microbiome, also known as the “mycobiome,” has remained unexplored.”

In order to investigate whether the fungal constituents of the gut microbiome are associated with PD, researchers enrolled 95 PD patients and 57 controls from the Pacific Parkinson’s Research Centre (PPRC) at the University of British Columbia. Participants provided a single fecal sample and completed a two-hour study visit during which their PD symptoms were assessed.

Analysis determined that the fungal microbiome in PD did not essentially differ from that of matched controls, and there were no strong associations between gut fungi and PD symptoms.

Fungi were very sparse among participants’ fecal microbiomes. After filtering, 106 of the 152 participants (64/95 PD and 42/57 control) remained for downstream compositional analysis; the remainder had virtually no detectable fungal genomic content. Most of the genera identified were environmental or dietary in origin.

Saccharomyces was by far the most dominant fungal genus detected. Although these investigations did not reveal any significant role for gut fungi in PD, interestingly, a lower overall fungal abundance (relative to bacteria) was observed in the PD gut, which might reflect a less hospitable gut environment in PD.

This paper answers the call by the PD research community and funding organizations to publish negative results, which are crucial to avoid investing precious research funding in likely futile endeavors and to provide a more balanced reflection of the data in the field.

“The data are an important piece in the puzzle of understanding the overall role of the gut microbiome in PD,” continued Dr. Appel-Cresswell. “PD patients can rest assured that gut fungal overgrowth, or dysbiosis, is likely not a contributing factor to any of their PD symptoms, both motor and non-motor.”

“The gut microbiome in PD continues to be an exciting field of research where we are just at the beginning of unraveling potential mechanisms. It will be important to publish negative results as well as positive findings along with detailed methods to have a realistic reflection of the data in the literature to accelerate discovery,” she concluded.

PD is a slowly progressive disorder that affects movement, muscle control, and balance. It is the second most common age-related neurodegenerative disorder affecting about 3% of the population by the age of 65 and up to 5% of individuals over 85 years of age. In recent years, more attention has been given to the gut as a key player in the initiation and progression of PD.

Reference: Mihai S. Cirstea, Kristen Sundvick, Ella Golz, et al., “The Gut Mycobiome in Parkinson’s Disease”, Journal of Parkinson’s Disease, 2020, 1–6.

Provided by IOS Press

New Study on the Role of Monocytes in Sarcoidosis (Medicine)

The cause of the inflammatory lung disease sarcoidosis is unknown. In a new study, researchers at Karolinska Institutet have investigated whether a type of immune cell called a monocyte could be a key player in sarcoidosis pathogenesis and explain why some patients develop more severe and chronic disease than others. The study, which is published in The European Respiratory Journal, opens new possibilities for future diagnostic and therapeutic methods.

Computer generated 3D illustration of monocyte. Illustration: Getty Images.

Sarcoidosis is an inflammatory disease that in 90 percent of cases affects the lungs, but can also attack the heart, skin and lymph system. The cause of the disease is not yet established, and there is currently no cure.

Common symptoms of acute sarcoidosis are high fever, purple patches on the lower legs, swollen ankles and muscular/arthritic pain.

Most common in Sweden

While some 30 percent of patients recover after a couple of years, others can suffer extensive lung damage that in exceptional cases requires a lung transplant.

Approximately 16,000 people live with the disease in Sweden, and 1,200 are newly diagnosed every year, making Sweden the most affected country in the world. Most patients are between 30 and 60 years of age.

In a new study, researchers at Karolinska Institutet, Karolinska University Hospital and Umeå University have shown how certain white blood cells called monocytes could be a vital marker in understanding the inflammatory process of sarcoidosis.

Although monocytes are part of the immune system, they can, under certain circumstances, aggravate the inflammatory process in body tissue. Knowledge of the part played by monocytes in sarcoidosis has so far been limited. 

Elevated monocyte levels

The researchers found that elevated monocyte levels in people diagnosed with the disease make it possible to assess the risk of a more progressive disease course.

With the results come new possibilities for identifying biomarkers for sarcoidosis and for more personalised care.

Anna Smed Sörensen. Photo: Karolinska Institute

“If we identify at an earlier stage who risks being seriously affected by the disease, it can hopefully lead to prophylactic and more efficacious treatment,” says the paper’s last author Anna Smed Sörensen, docent at the Department of Medicine, Solna, Karolinska Institutet. “Eventually, our research could mean fewer cases of chronic sarcoidosis.”

The results are based on 108 individuals with sarcoidosis and 30 healthy controls who were followed over a two-year period.

The study was financed by the Swedish Heart-Lung Foundation, the Swedish Research Council, the Knut and Alice Wallenberg Foundation, Karolinska Institutet and King Gustaf V and Queen Victoria’s Foundation of Freemasons. There are no declared conflicts of interest.


Reference: “Monocytes in sarcoidosis are potent TNF producers and predict disease outcome”, Rico Lepzien, Sang Liu, Paulo Czarnewski, Mu Nie, Björn Österberg, Faezzah Baharom, Jamshid Pourazar, Gregory Rankin, Anders Eklund, Matteo Bottai, Susanna Kullberg, Anders Blomberg, Johan Grunewald and Anna Smed Sörensen, European Respiratory Journal, online 14 January 2021, doi: 10.1183/13993003.03468-2020.

Provided by Karolinska Institutet

Neuronal Recycling: This is How Our Brain Allows Us to Read (Neuroscience)

There is no area in the brain that evolved specifically to subserve reading. However, behind this incredibly sophisticated task, there is a more general and evolutionarily remote mechanism. This is supported by new research conducted at SISSA and published in Current Biology.


Letters, syllables, words and sentences – spatially arranged sets of symbols that acquire meaning when we read them. But is there an area and cognitive mechanism in our brain specifically devoted to reading? Probably not; written language is too recent an invention for the brain to have developed structures specifically dedicated to it.

According to this novel paper published in Current Biology, underlying reading there is an evolutionarily ancient function that is more generally used to process many other visual stimuli. To prove it, SISSA researchers subjected volunteers to a series of experiments in which they were shown different symbols and images. Some were very similar to words; others were very much unlike reading material, such as nonsensical three-dimensional tripods or entirely abstract visual gratings. The results showed no difference in the way participants learned to recognise novel stimuli across these three domains.

According to the scholars, these data suggest that we process letters and words in the same way we process any other visual stimulus when navigating the world through our visual experiences: we recognise the basic features of a stimulus – shape, size, structure and, yes, even letters and words – and we capture their statistics: how many times they occur, how often they appear together, how well one predicts the presence of the other. Thanks to this system, based on the statistical frequency of specific symbols (or combinations thereof), we can recognise orthography, understand it and thus immerse ourselves in the pleasure of reading.

Reading is a cultural invention, not an evolutionary acquisition

“Written language was invented about 5,000 years ago; there was not enough time, in evolutionary terms, to develop an ad hoc system”, explain Yamil Vidal and Davide Crepaldi, lead author and coordinator of the research, respectively. The research was also carried out by Eva Viviani, a PhD graduate from SISSA and now a post-doc at the University of Oxford, and by Davide Zoccolan, coordinator of the Visual Neuroscience Lab, also at SISSA.

“And yet, a part of our cortex would appear to be specialised for reading in adults: when we have a text in front of us, a specific part of the cortex, the left fusiform gyrus, is activated to carry out this specific task. This same area is implicated in the visual recognition of objects, and of faces in particular.” On the other hand, explain the scientists, “there are animals, such as baboons, that can learn to visually recognise words, which suggests that behind this process there is a processing system that is not specific for language, and that gets ‘recycled’ for reading as we humans become literate”.

Pseudocharacters, 3D objects and abstract shapes to prove the theory

How to shed light on this question? “We started from an assumption: if this theory is true, some effects that occur when we are confronted with orthographic signs should also be found when we are presented with non-orthographic stimuli. And this is exactly what this study shows.” In the research, volunteers took four different tests. In the first two, they were shown short “words” composed of a few pseudocharacters, similar to numbers or letters but with no real meaning. The scholars explain that this was done to prevent the participants, all adults, from being influenced by their prior knowledge. “We found that the participants learned to recognise groups of letters – words, in this invented language – on the basis of the frequency of co-occurrence between their parts: words that were made up of more frequent pairs of pseudocharacters were identified more easily.” In the third experiment, participants were shown 3D objects characterised by triplets of terminal shapes, much as the invented words were characterised by triplets of letters. In the fourth experiment, the images were even more abstract and dissimilar from letters. In all the experiments the response was the same, giving full support to the theory.
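The pair-frequency statistic at the heart of this result can be sketched in a few lines. In this toy example (the mini-lexicon, function names and scoring rule are invented for illustration; they are not the study’s materials or analysis), we count how often adjacent pairs of pseudocharacters co-occur across a set of “words”, then score each word by the frequency of its pairs:

```python
# Toy sketch of pair-frequency statistics over an invented lexicon.
from collections import Counter


def pair_counts(lexicon):
    """Count adjacent character pairs (bigrams) across all words."""
    counts = Counter()
    for word in lexicon:
        for a, b in zip(word, word[1:]):
            counts[a + b] += 1
    return counts


def familiarity(word, counts):
    """Score a word by the summed frequency of its character pairs."""
    return sum(counts[a + b] for a, b in zip(word, word[1:]))


# Invented pseudocharacter lexicon: each letter stands in for an
# unfamiliar glyph. The pair 'ab' recurs across three words.
lexicon = ["abc", "abd", "abe", "xyz"]
counts = pair_counts(lexicon)

print(familiarity("abc", counts))  # prints 4: frequent pair 'ab' plus 'bc'
print(familiarity("xyc", counts))  # prints 1: only the rare pair 'xy' counts
```

On this view, “words” built from frequent pairs accumulate higher familiarity scores, which mirrors the finding that such words were identified more easily.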

From human beings to artificial intelligence: unsupervised learning

“What emerged from this investigation”, explain the authors, “not only supports our hypothesis but also tells us something more about the way we learn. It suggests that a fundamental part of it is the appreciation of statistical regularities in the visual stimuli that surround us”. We observe what is around us and, without any awareness, we decompose it into elements and track their statistics; in so doing, we give everything an identity. In jargon, this is called “unsupervised learning”. The more often these elements combine in a precise organisation, the better we become at giving that structure a meaning, be it a group of letters, an animal, a plant or an object. And this, say the scientists, occurs not only in children, but also in adults. “There is, in short, an adaptive development to stimuli which regularly occur. And this is important not only to understand how our brain functions, but also to enhance artificial intelligence systems that base their ‘learning’ on these same statistical principles”.
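The co-occurrence principle described above, in which frequently paired elements become easier to recognise, can be sketched in a few lines of code. The pseudocharacter corpus and scoring function below are invented for illustration only; they do not reproduce the study's actual stimuli or analysis.

```python
from collections import Counter
from itertools import chain

# Invented mini-corpus of "words" built from pseudocharacters (illustrative only)
corpus = ["abx", "aby", "abz", "cdx", "cdy", "efz"]

def bigrams(word):
    """Split a word into its adjacent symbol pairs."""
    return [word[i:i + 2] for i in range(len(word) - 1)]

# Unsupervised step: count how often each pair of symbols co-occurs in the corpus
pair_counts = Counter(chain.from_iterable(bigrams(w) for w in corpus))

def wordlikeness(candidate):
    """Score a string by the average corpus frequency of its symbol pairs."""
    pairs = bigrams(candidate)
    return sum(pair_counts[p] for p in pairs) / len(pairs)

print(wordlikeness("abx"))  # higher score: built from frequent pairs
print(wordlikeness("aex"))  # lower score: contains an unattested pair
```

A string built from frequent pairs scores higher than one containing pairs never seen in the corpus, mirroring the finding that words made of frequent pseudocharacter pairs were identified more easily.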

Reference: Yamil Vidal, Eva Viviani, Davide Zoccolan, Davide Crepaldi, “A general-purpose mechanism of visual feature association in visual word identification and beyond”, Current Biology, 2021.

Provided by SISSA

Breakthrough in Understanding ‘Tummy Bug’ Bacteria (Biology)

Scientists have discovered how bacteria commonly responsible for seafood-related stomach upsets can go dormant and then “wake up”.

TCBS agar, a selective medium used to help identify and grow Vibrio parahaemolyticus. Credit: Sariqa Wagley

Vibrio parahaemolyticus is a marine bacterium that can cause gastroenteritis in humans when eaten in raw or undercooked shellfish such as oysters and mussels.

Some of these bacteria are able to turn dormant in poor growth conditions such as cold temperatures – and can remain in that state of hibernation for long periods before resuscitating.

University of Exeter scientists have identified a population of these dormant cells that are better at waking up, and have discovered an enzyme involved in that waking up process.

“Most of these bacteria die when they encounter poor growth conditions, but we identified sub-populations of bacteria that are able to stay dormant for long periods of time,” said lead author Dr Sariqa Wagley, of the University of Exeter.

“We found that this population has a better ability to revive when conditions improve.

“Our tests show that when these dormant bacteria are revived they are just as virulent and able to cause disease.”

The findings could have implications for seafood safety, as dormant cells are not detectable using routine microbiological screening tests and the true bacterial load (amount of bacteria) could be underestimated.

“When they go dormant, these bacteria change shape, reduce respiration activities and they don’t grow like healthy bacteria on agar plates used in standard laboratory tests, so they are much harder to detect,” Dr Wagley explained.

“Using a range of tools, we were able to find dormant bacteria in seafood samples and laboratory cultures, and examine their genetic content for clues about how they might survive for long periods.

“It is important to note that thorough cooking kills bacteria in seafood.

“Our results may also help us predict the conditions that dormant bacteria need in order to revive.”

Working with the seafood industry, the Exeter team identified a lactate dehydrogenase enzyme that breaks down lactic acid into pyruvate, a key component of several metabolic pathways (chemical reactions in a cell).
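For reference, the reaction catalysed by lactate dehydrogenase is standard biochemistry (not spelled out in the article): the enzyme interconverts lactate and pyruvate using the NAD⁺/NADH cofactor pair:

```latex
\text{lactate} + \text{NAD}^{+} \;\rightleftharpoons\; \text{pyruvate} + \text{NADH} + \text{H}^{+}
```

The reaction being reversible is consistent with the finding that the enzyme matters for both entering dormancy and resuscitating from it.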

The findings suggest that lactate dehydrogenase is essential both for maintaining bacterial dormancy and resuscitation back to an active form.

Vibrio parahaemolyticus usually grows in warm and tropical marine environments, although Dr Wagley said that due to rising sea temperatures in recent years it is now prevalent in UK waters during the summer months.

During the winter, it is not detected in the marine environment around the UK and it is thought to die due to the cold winter temperatures.

This study could explain how Vibrio parahaemolyticus is able to re-emerge in the environment during the summer.

The study was partly funded by the Biotechnology and Biological Sciences Research Council (BBSRC), with additional funding and support from Lyons Seafoods.

The paper, published in the journal PLOS Pathogens, is entitled: “Bacterial dormancy: a subpopulation of viable but non-culturable cells demonstrates better fitness for revival.”

Reference: Wagley S, Morcrette H, Kovacs-Simon A, Yang ZR, Power A, Tennant RK, et al. (2021) Bacterial dormancy: A subpopulation of viable but non-culturable cells demonstrates better fitness for revival. PLoS Pathog 17(1): e1009194. doi:10.1371/journal.ppat.1009194

Provided by University of Exeter

Study Defines Small-cell Lung Cancer Subtypes and Distinct Therapeutic Vulnerabilities For Each Type (Medicine)

Researchers from The University of Texas MD Anderson Cancer Center have developed the first comprehensive framework to classify small-cell lung cancer (SCLC) into four unique subtypes, based on gene expression, and have identified potential therapeutic targets for each type in a study published today in Cancer Cell.

SCLC is known for rapid, aggressive growth and resistance to treatment, which leads to poor outcomes. While recent advances in immunotherapy and targeted therapy have improved survival for non-small cell lung cancer (NSCLC), progress for SCLC has been limited.   

“For decades, small-cell lung cancer has been treated as a single disease because the tumors all look similar under the microscope, even though they behave very differently,” said Lauren Averett Byers, M.D., associate professor of Thoracic/Head & Neck Medical Oncology and senior author of the study. “Our study provides a transformative new system to define four major groups of small-cell lung cancer and, for the first time, an avenue for personalized treatment of the second most common type of lung cancer.”

Four major subtypes of SCLC

Although previous research identified three possible subtypes of SCLC based on transcription factors, which indicate whether particular genes are turned “on” or “off,” a large number of SCLC tumors didn’t fit into any of the three groups. Rather than trying to apply a hypothesis to the remaining tumors, Byers’ team took an unbiased bioinformatics approach, letting the data from a large set of SCLC tumor samples speak for itself. This led to a 1,300-gene “signature” that confirmed the three previously observed groups (A, N and P), plus a previously unrecognized fourth group (I) with a unique immune landscape.

Lauren Averett Byers, M.D. © MD Anderson Cancer Center

The first three groups are defined by activation of the ASCL1 (SCLC-A), NEUROD1 (SCLC-N), and POU2F3 (SCLC-P) genes. The fourth type, SCLC-I, is characterized by an inflamed gene signature with a high expression of multiple immune genes, including significantly greater levels of genes indicating the presence of CD8-positive cytotoxic T cells.

“Our paper shows that the inflamed group has a distinct biology and environment and tends to be more responsive to immunotherapy,” Byers said. “Identifying the inflamed group is very important because so far there have not been any validated biomarkers for small-cell lung cancer that predict which patients get the most benefit from immunotherapy.”

Based on recent clinical trials, immunotherapy has become part of the standard of care for SCLC. However, all clinical trials for SCLC, including those using immune checkpoint inhibitors, have had limited success. This study could help explain why, as the results suggest different classes of drugs may be more effective in specific subtypes. For example, in the samples from this study, SCLC-I was most sensitive to immune checkpoint blockade, SCLC-A to BCL2 inhibitors, SCLC-N to Aurora kinase inhibitors and SCLC-P to PARP inhibitors.

“Immunotherapy plus chemotherapy is currently the backbone of treatment for all advanced small-cell lung cancer patients, but not all patients experience the same benefit,” said Carl Gay, M.D., Ph.D., assistant professor of Thoracic/Head & Neck Medical Oncology and lead author of the study. “Our results provide an opportunity to think about immunotherapy approaches that are specific to the inflamed group, which has a very different microenvironment, separately from combination approaches that might activate the immune response in the other three groups.”

Study methods and analysis

The research team first identified the four groups by applying non-negative matrix factorization to previously published data from 81 SCLC patients with surgically resected tumors. Most patients in this data set had early-stage disease, which is not typical. Because SCLC is so aggressive, it’s most often diagnosed at an advanced stage. To validate the four subtypes in late-stage disease, Byers’ team also analyzed data from 276 SCLC patients enrolled in the Phase III IMpower133 clinical trial, which established the current standard of care for advanced SCLC and represents the largest available SCLC data set to date.
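Non-negative matrix factorization, the technique named above, decomposes a non-negative samples-by-genes expression matrix X into two smaller non-negative factors, W (per-sample loadings) and H (per-component gene weights), so that X ≈ WH; each component can then be read as a candidate subtype. The sketch below uses scikit-learn on random synthetic data sized like the discovery cohort (81 tumors, 1,300 signature genes); it illustrates the idea only and is not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for an expression matrix: 81 tumors x 1,300 genes.
# The real study used published SCLC expression data; these values are random.
rng = np.random.default_rng(0)
X = rng.random((81, 1300))

# Factor X ~ W @ H with k=4 components, mirroring the four subtypes
model = NMF(n_components=4, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)   # shape (81, 4): each tumor's loading on each component
H = model.components_        # shape (4, 1300): each component's gene weights

# Assign each tumor to the component on which it loads most strongly
subtype = W.argmax(axis=1)
print(W.shape, H.shape, np.bincount(subtype, minlength=4))
```

Because both factors are constrained to be non-negative, the components tend to be interpretable as additive gene programs, which is why NMF is a common choice for this kind of unbiased subtype discovery.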

Carl Gay, M.D., Ph.D. © MD Anderson Cancer Center

“Looking at the bigger data set of what a more typical patient looks like, the four major groups came out very clearly again, including the novel inflamed group we identified,” Byers said. “We also showed that you don’t have to use the full 1,300 gene panel. We have developed immunohistochemistry tests that we’re working toward adapting for the clinic to more quickly and easily classify SCLC tumors.”

One of the known challenges of SCLC is that it often develops resistance to treatment, even after an initial response. To determine if “subtype switching” causes resistance, the authors used single-cell RNA sequencing to evaluate tumor evolution in a series of patient-derived SCLC models. The study suggests that SCLC-A tends to switch to SCLC-I after being treated with chemotherapy, which could contribute to treatment resistance.

A path toward personalized treatment for SCLC

Using the SCLC subtype framework in future clinical trials will be necessary to verify the study findings, particularly regarding the therapeutic vulnerabilities for each group.

“Now we can develop more effective strategies for each group in clinical trials, taking into account that they each have different biology and optimal drug targets,” Byers said. “As a field, small-cell lung cancer is about 15 years behind non-small cell lung cancer’s renaissance of biomarkers and personalized therapies. This represents a huge step in understanding which drugs work best for which patients and gives us a path forward for personalized approaches for small-cell lung cancer.”

A full list of collaborating researchers and their disclosures is included in the paper. This research was supported by the National Institutes of Health/National Cancer Institute (CCSG P30-CA016672, T32 CA009666, R50-CA243698, R01-CA207295, U01-CA213273), The University of Texas Southwestern and MD Anderson Cancer Center Lung SPORE (5 P50 CA070907), Department of Defense (LC170171), Cancer Prevention & Research Institute of Texas (RP170067), The University of Texas MD Anderson Lung Cancer Moon Shots Program, Abell-Hangar Foundation, Andrew Sabin Family Fellowship, ASCO Young Investigator Award, The Hope Foundation, Khalifa Bin Zayed Al Nahyan Foundation and Rexanna’s Foundation for Fighting Lung Cancer.

Reference: Carl Gay, Ellison Stewart et al., “Patterns of transcription factor programs and immune pathway activation define four major subtypes of SCLC with distinct therapeutic vulnerabilities”, Cancer Cell, 2021.

Provided by University of Texas MD Anderson Cancer Center