Star Formation Rates Have Been Declining Ever Since The Epoch Of Peak Galaxy Formation (Astronomy)

Measurements of faint radio emission from distant galaxies have revealed the nature of the gases that drove the epoch of peak galaxy formation — and also suggest why star-formation rates have since declined.

Over the past few decades, starting from studies carried out by the Hubble Space Telescope, very deep observations of select fields in the sky have revolutionized our understanding of galaxy formation. These observations have provided quantitative measures of the stars and star formation in galaxies from the present day right back to the first galaxies in the Universe, just a few hundred million years after the Big Bang. The results show that the cosmic star-formation-rate density — the rate of star formation per unit volume of the Universe — peaked between 2.5 and 4.5 gigayears after the Big Bang (1 Gyr is 10^9 years). Roughly half of the stars in the Universe formed during this peak epoch of galaxy assembly. The star-formation-rate density has decreased tenfold over the 10 Gyr that have passed since then.

The determination of the star-formation history of the Universe is one of the great successes of modern observational astronomy — but stars reveal only half of the story of galaxy formation. The other half is what happens to the gas that fuels star formation. Hot gas is thought to settle from the intergalactic medium (the material found in the space between galaxies) into regions of densely concentrated dark matter, as a result of the dark matter’s gravitational pull. The gas is then thought to cool to form diffuse clouds of neutral hydrogen atoms, which further cool and condense into dense clouds of hydrogen molecules (H2), from which stars form. These concentrations of stars and gas are what we call galaxies. Unfortunately, details of the neutral atomic hydrogen that contributes to galaxy formation remain sketchy, beyond what has been observed in galaxies in our local neighborhood of the Universe.

Chowdhury et al. now present a direct measurement of the emission from neutral atomic hydrogen in galaxies at a period close to the peak epoch of galaxy assembly. The authors used the Giant Metrewave Radio Telescope near Pune, India, to observe a characteristic feature of the emission spectrum of neutral hydrogen, called the 21-centimetre hyperfine structure line (or the H I 21-cm emission, for short). This feature is often used as a tracer of the neutral-hydrogen content of galaxies (Fig. 1), but is very weak. Detecting the H I 21-cm emission in the spectra of individual galaxies at large distances, such as those involved in Chowdhury and colleagues’ study, is problematic, even with the biggest radio telescopes in the world.

Figure 1 | The M81 triplet of galaxies. The stars in these modern galaxies are shown in red-white; these are true-colour images, obtained as a composite of multicolour optical images from the Sloan Digital Sky Survey in New Mexico. Gas clouds of neutral atomic hydrogen are shown in blue-white, and were imaged by the Very Large Array radio observatory in New Mexico by measuring the 21-centimetre hyperfine structure emission (a characteristic line in the emission spectrum of neutral hydrogen known as the H I 21-cm emission, for short). The ratio of the total mass of neutral hydrogen to the stellar mass in this system is less than 10%. Chowdhury et al.1 report measurements of H I 21-cm emission from galaxies during the peak epoch of cosmic star formation, about 8.5 gigayears ago (1 Gyr is 10^9 years), and find that this ratio was about 2.5 times higher, on average, than that in present-day galaxies, such as M81. Credit: Erwin de Blok

To overcome the sensitivity problem, the authors used a method known as a stacking analysis. They selected 7,653 galaxies whose distances from Earth are known from measurements of their redshifts made using optical telescopes. Redshift is a measure of the change in wavelength of a known line in the spectrum of an astronomical object, and occurs as a result of the expansion of the Universe. Redshift increases with distance from Earth and provides a measure not only of that distance, but also of the look-back time — the time elapsed between the emission of light from the source and its detection on Earth.
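
For readers who want to connect redshift to look-back time themselves, the standard cosmology utilities in astropy make the conversion straightforward. The sketch below uses the Planck 2018 parameters bundled with a recent astropy release; the redshift values are purely illustrative and are not taken from the paper.

```python
# A minimal sketch (not from the paper): converting a galaxy's redshift into
# a look-back time and a cosmic age, using astropy's built-in Planck 2018 cosmology.
from astropy.cosmology import Planck18

for z in (0.74, 1.0, 1.45):                      # illustrative redshifts only
    t_lookback = Planck18.lookback_time(z)       # time elapsed since the light was emitted
    t_cosmic = Planck18.age(z)                   # age of the Universe at emission
    print(f"z = {z:4.2f}: look-back time = {t_lookback:.1f}, "
          f"Universe was {t_cosmic:.1f} old")
```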

The light from the galaxies selected by Chowdhury and co-workers was emitted between 4.4 Gyr and 7.1 Gyr after the Big Bang, during the tail end of the peak epoch of galaxy assembly. The authors stacked the individual radio spectra from all the galaxies, lining up the sources in three dimensions (two dimensions corresponded to sky position, the third to redshift), to obtain the mean spectrum of neutral hydrogen for this set of galaxies. In so doing, they achieved a sensitivity that is roughly 90 times better than could be obtained for an individual galaxy.
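
The statistical gain comes from averaging down uncorrelated noise: combining N spectra improves the sensitivity by roughly √N, and √7,653 ≈ 87, consistent with the ~90-fold improvement quoted above. The toy example below is a sketch of the idea with synthetic spectra, not the authors' pipeline; the line strength, noise level and channel grid are invented for illustration.

```python
# Toy sketch of spectral stacking (not the authors' pipeline): a weak emission
# line, invisible in any single noisy spectrum, emerges in the mean of many
# redshift-aligned spectra, with the noise dropping roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
n_gal, n_chan = 7653, 101                        # number of galaxies, spectral channels
velocity = np.linspace(-500.0, 500.0, n_chan)    # km/s, line centred at 0 after alignment
true_line = 0.05 * np.exp(-(velocity / 150.0) ** 2)   # weak Gaussian line (arbitrary units)

spectra = true_line + rng.normal(0.0, 1.0, size=(n_gal, n_chan))   # noise >> signal
stack = spectra.mean(axis=0)                     # the stacked (average) spectrum

noise_single = spectra[0, np.abs(velocity) > 300].std()   # noise away from the line
noise_stack = stack[np.abs(velocity) > 300].std()
print(f"noise in one spectrum : {noise_single:.3f}")
print(f"noise in the stack    : {noise_stack:.4f} "
      f"(improvement ≈ {noise_single / noise_stack:.0f}x ≈ sqrt(7653) ≈ 87)")
```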

Chowdhury and colleagues were thus able to determine the average mass of neutral hydrogen in galaxies towards the end of the peak epoch of star formation, about 8 billion years ago. They find that galaxies at that time contained about 2.5 times more of this gas relative to their stellar masses than do galaxies today. Given that atomic hydrogen is a key ingredient in the recipe for star formation, the discovery of an excess of this gas in distant galaxies helps explain the high star-formation rate at those early times.

Moreover, the authors find that the neutral hydrogen would have been consumed by star formation in a relatively short period of time (1–2 Gyr) — continuous gas accretion from the intergalactic medium would have been required to maintain the high rate of star formation. In other words, the slowdown of star formation observed after the peak epoch probably occurred, in part, because the supply of neutral hydrogen from the intergalactic medium was insufficient to fuel a high formation rate.
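
The depletion argument is simple division: the time for which a gas reservoir can sustain a given star-formation rate is roughly t_dep = M_HI / SFR. The numbers below are illustrative round values, not the paper's measurements, chosen only to show how a timescale of order 1–2 Gyr arises.

```python
# Back-of-the-envelope sketch: gas depletion time = gas mass / star-formation rate.
# The input values are illustrative placeholders, not Chowdhury et al.'s measurements.
M_HI = 1.0e10     # neutral-hydrogen mass in solar masses (assumed)
SFR = 8.0         # star-formation rate in solar masses per year (assumed)

t_dep_gyr = (M_HI / SFR) / 1.0e9
print(f"depletion time ≈ {t_dep_gyr:.1f} Gyr")   # ≈ 1.2 Gyr for these inputs
```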

The gas content of galaxies in the distant Universe was not completely unknown before Chowdhury and co-workers’ study. Previous investigations of distant galaxies using the latest generation of radio telescopes provided the first observations of how the amount of H2 in galaxies has evolved through cosmic time. Likewise, studies of a line in the ultraviolet emission spectrum of atomic hydrogen (the Lyman-α line) have been used to determine the neutral-hydrogen content of galaxies at even greater distances than those in Chowdhury and colleagues’ work. However, the Lyman-α line emitted from galaxies during the epochs studied by Chowdhury et al. cannot be observed from the ground, because redshifting moves it to a part of the electromagnetic spectrum to which Earth’s atmosphere is opaque. Using the H I 21-cm line, the authors have therefore finally filled a gap in our knowledge of galaxy formation close to the crucial peak epoch.

The authors’ stacking analysis has some limitations, because it provides no information about the gas ‘demographics’. For example, the results cannot tell us whether the neutral hydrogen was found mostly in massive galaxies, or was distributed equally among high- and low-mass galaxies. Nor can it tell us whether the gas extends much beyond the stars in each galaxy, or whether the gas rotates in the gravitational field of each galaxy, rather than streaming into the galaxy centres.

A radio telescope called the Square Kilometre Array is currently being designed, and will be the world’s largest. Its defining goal is to detect the H I 21-cm emission from individual galaxies at large cosmological distances. Only instruments with this capability will be able to address the detailed questions about gas demographics and morphology on a case-by-case basis. Chowdhury and colleagues’ results suggest that studies of the H I 21-cm emission hold great promise.

The authors’ detection — even as a statistical mean — of H I 21-cm emission from galaxies during a crucial period of star formation is a watershed moment in our understanding of how baryonic matter is taken up and used by galaxies. It also indicates a clear pathway of research that will guide future studies with the Square Kilometre Array.

References: Chowdhury, A., Kanekar, N., Chengalur, J. N., Sethi, S. & Dwarakanath, K. S. Nature 586, 369–372 (2020). doi: https://doi.org/10.1038/d41586-020-02791-7

Provided by Nature

What Is The Sum Of The Masses Of The Milky Way and Andromeda? (Astronomy)

Using the MultiDark Planck (MDPL) simulation, combined with data from the Hubble Space Telescope and Gaia, Pablo Lemos and colleagues estimated the combined mass of the Milky Way and Andromeda. A similar data set was previously used to obtain a point estimate of Mmw+M31 using Artificial Neural Networks (ANN) in conjunction with the timing argument (TA). In contrast, their work uses Density Estimation Likelihood-Free Inference (DELFI), implemented in the pyDELFI package, combined with more recent data.

©NASA; ESA; Z. LEVAY AND R. VAN DER MAREL, STSCI; T. HALLAS; AND A. MELLINGER

Likelihood-free inference (LFI) has emerged as a very promising technique for inferring parameters from data, particularly in cosmology. It provides parameter posterior probability estimation without requiring the calculation of an analytic likelihood (i.e. the probability of the data being observed given the parameters). LFI uses forward simulations in place of an analytic likelihood function. Writing a likelihood for cosmological observables can be extremely complex, often requiring the solution of Boltzmann equations, as well as approximations for highly nonlinear processes such as structure formation and baryonic feedback. While simulations have their own limitations and are computationally expensive, the quality and efficiency of cosmological simulations are constantly increasing, and they are likely to soon far surpass the accuracy or robustness of any likelihood function.

This is a rapidly growing topic in cosmology, due to the emergence of novel methods for likelihood-free inference, with applications to data sets such as the Joint Light-curve Analysis (JLA) and Pantheon supernova datasets, and the Dark Energy Survey Science Verification data, amongst others. There are, therefore, many applications for which LFI could improve the robustness of parameter inference using cosmological data. In the present work, Pablo Lemos and colleagues performed an LFI-based parameter estimation of the sum of the masses of the Milky Way and M31. The likelihood function for this problem requires significant simplifications, but forward simulations can be obtained easily.
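
The core idea of likelihood-free inference can be illustrated with the simplest possible scheme, rejection Approximate Bayesian Computation: draw parameters from the prior, forward-simulate mock data, and keep the parameters whose simulations land close to the observation. The snippet below is a deliberately minimal sketch of that concept, not the DELFI/pyDELFI machinery the authors actually use, and the toy model (estimating an unknown mean from noisy data) is invented for illustration.

```python
# Minimal rejection-ABC sketch of likelihood-free inference (toy problem, not pyDELFI).
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=50):
    """Forward simulator: n noisy observations of an unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

theta_true = 2.5
observed = simulate(theta_true)
obs_summary = observed.mean()                 # summary statistic of the data

# Rejection ABC: sample from the prior, simulate, and keep parameters whose
# simulated summary lies within a tolerance of the observed summary.
prior_draws = rng.uniform(0.0, 5.0, size=20_000)
accepted = [th for th in prior_draws
            if abs(simulate(th).mean() - obs_summary) < 0.1]

posterior = np.array(accepted)
print(f"posterior mean ≈ {posterior.mean():.2f} ± {posterior.std():.2f} "
      f"(true value {theta_true})")
```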

A comparison of the estimates of the separate masses of M31, the MW, and their sum, the latter from this work. The plot shows the small discrepancy between separate estimates of the individual masses of the MW and M31 and this work. ©Lemos et al.

The Milky Way & Andromeda are the main components of the Local Group, which includes tens of smaller galaxies. Researchers define Milky way and Andromeda as the sum of the Mmw and M31 masses. Estimating mass of milky way and andromeda remains an elusive and complex problem in astrophysics. As the mass of each of the Milky Way and M31 is known only to within a factor of 2, it is important to constrain the sum of their masses. The traditional approach is to use the so-called timing argument (TA). The timing argument estimates Mmw+M31 using Newtonian Dynamics integrated from the Big Bang. This integration is an extremely simplified version of a very complex problem.
Therefore, alternative methods that do not rely on the same approximations become extremely useful.
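
For context, the classical timing argument itself can be reproduced in a few lines: treat the MW–M31 pair as a radial Kepler orbit that started from zero separation at the Big Bang, and solve for the total mass that matches today's separation, approach velocity and the age of the Universe. The sketch below uses commonly quoted illustrative inputs (separation ≈ 770 kpc, radial velocity ≈ −110 km/s, age ≈ 13.8 Gyr); it is not the authors' DELFI analysis and ignores the observational errors their method is designed to handle.

```python
# Sketch of the classical timing argument (radial Kepler orbit from the Big Bang).
# Inputs are commonly quoted illustrative values, not Lemos et al.'s data products.
import numpy as np
from scipy.optimize import brentq

KPC = 3.0857e19          # m
GYR = 3.1557e16          # s
MSUN = 1.989e30          # kg
G = 6.674e-11            # m^3 kg^-1 s^-2

r = 770.0 * KPC          # present MW-M31 separation (assumed)
v = -110.0e3             # radial velocity in m/s, negative = approaching (assumed)
t = 13.8 * GYR           # age of the Universe

# Radial orbit:  r = a(1 - cos x),  t = sqrt(a^3/GM) (x - sin x),
# v = sqrt(GM/a) sin x / (1 - cos x)  =>  v t / r = sin x (x - sin x) / (1 - cos x)^2
def f(x):
    return np.sin(x) * (x - np.sin(x)) / (1.0 - np.cos(x)) ** 2 - v * t / r

x = brentq(f, np.pi + 1e-6, 2 * np.pi - 1e-6)    # approaching branch of the orbit
a = r / (1.0 - np.cos(x))                        # semi-major axis of the radial orbit
M = a**3 * (x - np.sin(x)) ** 2 / (G * t**2)     # total mass from Kepler's third law

print(f"timing-argument total mass ≈ {M / MSUN:.1e} solar masses")  # ≈ 4e12 for these inputs
```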

In their work, they have used Density Estimation Likelihood-Free Inference (DELFI) with forward-modelling to estimate the posterior distribution for the sum of the masses of the Milky Way and M31, using observations of the distance and velocity to M31.

Comparison of the Lemos et al. work with previous estimates of Mmw+M31, shown as best fits and 68% confidence intervals. The result of this work is shown at the bottom; it is the first to account fully for the observational errors, and to not rely on the approximation of the TA.

They obtained Mmw+M31 = 4.6 × 10¹² M☉. Their result is the first to fully account for the distribution of the observational errors in a robust (and Bayesian) manner. Previous studies used Gaussian approximations for the observational errors, or neglected them completely; their result is therefore the most accurate estimate of Mmw+M31 to date.

References: Pablo Lemos, Niall Jeffrey, Lorne Whiteway, Ofer Lahav, Noam I. Libeskind, Yehuda Hoffman, “The sum of the masses of the Milky Way and M31: a likelihood-free inference approach”, arXiv, pp. 1-14, Oct 2020. arXiv:2010.08537 link: https://arxiv.org/abs/2010.08537


Oncotarget: Evaluation Of Cellular Alteration & Inflammatory Profile Of Cells (Medicine)

Oncotarget recently published “Evaluation of cellular alterations and inflammatory profile of mesothelial cells and/or neoplastic cells exposed to talc used for pleurodesis”, which reported that in this study, PMC cultures and human lung and breast adenocarcinoma cells were divided into 5 groups: 100% PMC, 100% NC, 25% PMC + 75% NC, 50% of each type, and 75% PMC + 25% NC. High IL-6, IL-1β and TNFRI levels were found in PMC and NC exposed to talc. In pure cultures, TNFRI was higher in A549, followed by PMC and MCF7. LDH was higher in A549 than in PMC. The proportion of apoptotic cells after talc exposure was higher in pure cultures of NC than in PMC. Mixed cultures of PMC and A549 showed lower levels of apoptosis in cultures with more NC.

Percentage of apoptosis in PMC, A549 and/or MCF7 after 24 hours of exposure to talc. PMC = pleural mesothelial cells; NC = neoplastic cells. *p < 0.05 when comparing 100% A549 and 100% MCF7 with 100% PMC; #p < 0.05 when comparing 100% A549 with mixed A549, mixed MCF7 and 100% PMC. ©Correspondence to – Milena Marques Pagliarelli Acencio – milena.acencio@incor.usp.br

Dr. Milena Marques Pagliarelli Acencio from the University of São Paulo said, “Metastatic neoplasms are the most common type of pleural neoplastic disease and the principal primary sites are lung, breast, stomach and ovary.”

The Oncotarget authors described that in an experimental model of pleurodesis acute inflammatory reaction to talc was observed with an increase in pleural fluid concentrations of IL-8, VEGF and TGF-β detected after intrapleural injection of talc and noted that the mesothelial cell layer was preserved.

Thus, mesothelial cells appear to participate in the response to talc and contribute to the acute inflammatory response.

Some authors discuss the importance of cell death caused mainly by apoptosis in mesothelial cells and/or neoplastic cells leading to the success or failure of pleurodesis, or even acting to decrease the tumor.

They explain that in preliminary experimental studies it has also been suggested that talc can induce apoptosis in tumor cells and inhibit angiogenesis, thus contributing to a better control of malignant pleural effusion.

The ultimate hypothesis of the Oncotarget study is to determine the role of mesothelial and/or neoplastic cells in the initiation and regulation of the acute inflammatory response following the instillation of talc in the pleural space, evaluating cellular aspects such as apoptosis and inflammatory mediators.

“The ultimate hypothesis of the Oncotarget study is to determine the role of mesothelial and/or neoplastic cells in the initiation and regulation of the acute inflammatory response.”
The Acencio Research Team concluded in their Oncotarget Research Paper that these results permit them to infer that the normal mesothelium in contact with the talc particles is the main stimulus in the genesis of the inflammatory process.

From the mesothelial activation the production of molecular mediators occurs, and that probably contributes to the dynamics of the local inflammatory process and subsequent production of pleural fibrosis; these mechanisms are necessary to induce effective pleurodesis.

These data also allow them to observe that talc has an action in the neoplastic cells inducing higher rates of apoptosis than observed in normal mesothelial cells; this may even contribute in a modest way to tumoral decrease.

Also that different types of tumor cells may respond differently to exposure to talc.

References: Acencio M. Marques Pagliarelli, Silva B. Rocha, Teixeira L. Ribeiro, Alvarenga V. Adélia, Silva C. Sérgio Rocha, da Silva A. Graças Pereira, Capelozzi V. Luiza, Marchi E. Evaluation of cellular alterations and inflammatory profile of mesothelial cells and/or neoplastic cells exposed to talc used for pleurodesis. Oncotarget. 2020; 11: 3730-3736. Retrieved from https://www.oncotarget.com/article/27750/text/

Provided by Impact Journals LLC

Primates Aren’t Quite Frogs (Neuroscience)

Spinal modules in macaques can independently control forelimb force direction and magnitude.

Researchers in Japan demonstrated for the first time the ‘spinal motor module hypothesis’ in the primate arm, opening a new pathway for recovery after disease or injury.

An experiment nearly 40 years ago in frogs showed that their leg muscles were controlled by the simultaneous recruitment of two modules of neurons. It’s a bit more complex in macaques (The National Center of Neurology and Psychiatry).

The human hand has 27 muscles and 18 joints, which our nervous system is able to coordinate for complex movements. However, the number of combinations — or degrees of freedom — is so large that attempting to artificially replicate this control and adjustment of muscle activity in real time taxes even a modern supercomputer. While the method used by the central nervous system to reduce this complexity is still being intensely studied, the “motor module” hypothesis is one possibility.

Under the motor module hypothesis, the brain recruits interneuronal modules in the spinal cord, rather than individual muscles, to create movement; different modules can be combined to create specific movements. Nearly 40 years ago, research in frogs showed that simultaneously recruiting two modules of neurons controlling leg muscles created a pattern of motor activity that was a “linear summation” of the two component patterns.

An international team of researchers, led by Kazuhiko Seki at the National Center of Neurology and Psychiatry’s Department of Neurophysiology, in collaboration with David Kowalski of Drexel University and Tomohiko Takei of Kyoto University’s Hakubi Center for Advanced Research, attempted to determine if this motor control method is also present in the primate spinal cord. If validated, it would provide new insight into the importance of spinal interneurons in motor activity and lead to new ideas in movement disorder treatments and perhaps even a method to “reanimate” a limb post-spinal injury.

The team implanted a small array of electrodes into the cervical spinal cord in three macaques. Under anesthesia, different groups of interneurons were recruited individually using a technique called intraspinal microstimulation, or ISMS. The team found that, as in the frog leg, the force direction of the arm at the wrist during dual-site stimulation was equal to the linear summation of the individually recruited outputs. However, unlike the frog leg, the force magnitude output could be many times higher than that expected from a simple linear summation of the individual outputs. When the team examined the muscle activity, they found that this supralinear summation was present in a majority of the muscles, particularly those acting at the elbow, wrist, and fingers.
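
A simple vector picture makes the distinction concrete: if two modules each produce a force vector at the wrist, strict linear summation predicts that stimulating both gives exactly their vector sum, whereas the macaque data showed the expected direction but a larger magnitude. The snippet below is an invented numerical illustration of that distinction, not the study's analysis or data; the force vectors and gain factor are made up.

```python
# Invented illustration: force direction follows the linear sum of two modules,
# but the magnitude of the dual-site response can exceed it (supralinear gain).
import numpy as np

F_a = np.array([1.0, 0.2])        # wrist force from stimulating module a alone (made up)
F_b = np.array([0.3, 0.9])        # wrist force from stimulating module b alone (made up)

linear_sum = F_a + F_b            # prediction under strict linear summation
gain = 1.8                        # hypothetical supralinear gain on magnitude
dual_site = gain * linear_sum     # same direction, larger magnitude

def direction(v):
    return v / np.linalg.norm(v)

print("direction match :", np.allclose(direction(dual_site), direction(linear_sum)))
print(f"magnitude ratio : {np.linalg.norm(dual_site) / np.linalg.norm(linear_sum):.1f}x")
```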

“This is a very interesting finding for two reasons,” explains Seki. “First, it demonstrates a particular trait of the primate spinal cord related to the increased variety of finger movements. Second, we now have direct evidence primates can use motor modules in the spinal cord to control arm movement direction and force magnitude both efficiently and independently.”

In effect, paired stimulation in the primate spinal cord not only directly activates two groups of interneurons, INa and INb, which recruit their target muscle synergies, Syn-a and Syn-b, to set the arm trajectory, but can also activate a third group of interneurons, INc, which adapts the motor activity at the spinal level to change the force of the movement. This would let the brain plan the path the arm should take while the spinal cord adapts the muscle activity to make sure that path happens.

One example of this “plan and adapt” approach to motor control is the deceptively simple act of drinking from a can of soda. The brain can predetermine the best way to lift the can to your mouth for a sip, but the actual amount of soda in the can — and therefore the can’s weight — is perhaps unknown. Once your brain has determined the trajectory the can should take — in this case INa and INb — the amount of force needed to complete that action can be modulated separately in INc, rather than redetermining which sets of muscles will be needed.

This study provides the first experimental evidence that primate arm movements can be efficiently controlled by motor modules present in the spinal cord. Based on these results, the analysis and interpretation of human limb movements in terms of the motor module hypothesis is expected to advance further in the future.

In the field of robotics, this control theory may lead to more efficient methods to create complex limb movements, while in the field of clinical medicine, it is expected that new diagnostic and therapeutic methods will be created by analyzing movement disorders caused by neurodegenerative diseases and strokes.

References: Amit Yaron, David Kowalski, Hiroaki Yaguchi, Tomohiko Takei, and Kazuhiko Seki, “Forelimb force direction and magnitude independently controlled by spinal modules in the macaque”, Proceedings of the National Academy of Sciences of the United States of America, 2020. DOI: https://doi.org/10.1073/pnas.1919253117

Provided by Kyoto University

Study Discovers Potential Target For Treating Aggressive Cancer Cells (Medicine / Oncology)

New research by a team at Brown University finds that special filaments called vimentin may be key to the spread of some aggressive, chemo-resistant cancer cells.

As researchers and medical professionals work to develop new treatments for cancer, they face a variety of challenges. One is intratumor heterogeneity — the presence of multiple kinds of cancer cells within the same tumor. Often, these “mosaic” tumors include cells, such as polyploidal giant cancer cells, that have evolved to become aggressive and resistant to chemotherapy and radiation.

©gettyimages

In the past, polyploidal giant cancer cells (PGCCs) have been largely ignored because studies had found that they do not undergo mitosis, which is the mechanism that is typically required for cell division. However, recent studies have found that PGCCs undergo amitotic budding — cell division that does not occur through mitosis — and that their cell structure enables them to spread rapidly.

A new study, published this month by a team of Brown University scientists in Proceedings of the National Academy of Sciences, sheds more light on PGCCs and identifies a potential target for treating these aggressive cancer cells.

Specifically, PGCCs rely on cell filaments called vimentin in order to migrate. Vimentin is found in cells throughout the body, but PGCCs were found to have a greater amount of vimentin compared to non-PGCC control cells, and their vimentin was much more evenly distributed throughout the cell.

“These cells appear to play an active role in invasion and metastasis, so targeting their migratory persistence could limit their effects on cancer progression,” said study author Michelle Dawson, an assistant professor of molecular pharmacology, physiology and biotechnology at Brown University.

As cells replicate within a tumor, they become increasingly crowded, and neighboring cells press tightly against them. Eventually, the cells become jammed together in a solid-like mass. Vimentin provides PGCCs with a more flexible, elastic structure, which helps protect them from damage in this situation and allows them to squeeze past their neighboring cells to escape to new, less crowded areas.

Thus, when the researchers disrupted vimentin, they dramatically reduced the cells’ ability to move. In addition, vimentin appears to play an important role in rearranging the nucleus of a dividing cell, so vimentin disruption could also help prevent PGCCs from forming daughter cells.

As a next step, Dawson and her colleagues hope to find a biomarker for PGCCs so that they can study these cells in human tumors.

“This study shows vimentin is overexpressed in PGCCs and is likely responsible for several of their abnormal behaviors,” Dawson said. “Vimentin is a ubiquitous protein, so targeting vimentin directly may not be an answer, but drugs that target vimentin interactions may be effective in limiting the effects of these cells.”

In addition to Dawson, other Brown University authors on the study were Botai Xuan, Deepraj Ghosh, Joy Jiang and Rachelle Shao. The study was funded by the National Science Foundation (1825174) and the National Institutes of Health (P30 GM110750).

References: Botai Xuan, Deepraj Ghosh, Joy Jiang, Rachelle Shao, Michelle R. Dawson, “Vimentin filaments drive migratory persistence in polyploidal cancer cells”, Proceedings of the National Academy of Sciences Oct 2020, 202011912; DOI: 10.1073/pnas.2011912117

Provided by Brown University

Optical Imaging Techniques Could Offer Non-invasive Method To Measure Swelling Within The Brain (Neuroscience)

Imaging techniques ordinarily used by eye doctors to monitor the optic nerve could offer a non-invasive method of measuring and managing potentially dangerous swelling in the skull, a new UK study led by researchers at the University of Birmingham has found.

Optical Coherence Tomography (OCT) works by using light waves to take cross-section pictures of the back of the eye, allowing doctors to not only see each individual layer, but measure each layer’s thickness. ©University of Birmingham

Intracranial pressure (ICP) – pressure within the skull, often caused by a recent brain injury – is a potentially fatal condition that can damage brain tissue. Currently, the most common way to measure the level of pressure is using a lumbar puncture to remove and analyse a sample of spinal fluid; however, this can result in complications and can cause patient distress.

However, this latest study looking at potential non-invasive alternatives may have found the answer in the form of a technique ordinarily used by ophthalmologists. Optical Coherence Tomography (OCT) works by using light waves to take cross-section pictures of the back of the eye, allowing doctors to not only see each individual layer, but measure each layer’s thickness.

The longitudinal cohort study analysed data collected from 3 randomized clinical trials conducted between April 2014 and August 2019. 104 female participants aged between 18 and 45, all with idiopathic intracranial hypertension, were recruited from 5 NHS trusts across the UK. Participants were split into two cohorts: in the first, participants received a small telemetric implant placed on the cranial bone to measure intracranial pressure directly, while in the second, lumbar punctures were used to measure ICP over a 24-month period. Both cohorts also received OCT imaging to measure the thickness of the optic nerve and macula.

Results from cohort 1 showed a direct correlation between mean levels of pressure within the skull and OCT measures, with optic nerve head central thickness the most closely associated with ICP. Cohort 2 also demonstrated a correlation between thickness of the optic nerve and ICP. These correlations were seen at all follow-up points of the study; for example, at 12 months, a decrease in central thickness of 50 µm was associated with a decrease in pressure of 5 cm H2O. The results suggest the potential for OCT to be used as a tool to inform clinicians of changes in ICP, as well as a method of monitoring and predicting levels of pressure in idiopathic intracranial hypertension.
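
Read at face value, the 12-month association corresponds to a linear scaling of about 5 cm H2O per 50 µm of optic nerve head central thickness, i.e. roughly 0.1 cm H2O per µm. The snippet below is only an illustrative restatement of that proportionality, not a validated clinical conversion, and the function name is invented for this sketch.

```python
# Illustrative only: linear scaling implied by the reported 12-month association
# (a 50 micrometre decrease in central thickness ~ a 5 cm H2O decrease in ICP).
CM_H2O_PER_MICRON = 5.0 / 50.0      # ≈ 0.1 cm H2O per µm of central thickness change

def estimated_icp_change(delta_thickness_um: float) -> float:
    """Rough ICP change (cm H2O) for a given change in optic nerve head
    central thickness (µm), using the study's reported association."""
    return CM_H2O_PER_MICRON * delta_thickness_um

print(estimated_icp_change(-50.0))   # -5.0 cm H2O, matching the quoted example
```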

Senior author Professor Alex Sinclair from the University of Birmingham’s Institute of Metabolism and Systems Research said: “Non-invasive measures of ICP have been sought for many years. Here we demonstrate the utility of using optic nerve head thickness to reflect changes in ICP. This will have important implications for disease monitoring and guiding treatment decisions. We are seeing a change in practice away from performing regular lumbar punctures in IIH to using OCT scanning to predict ICP and guide management”

Susan Mollan, Director of Ophthalmic Research, University Hospitals Birmingham NHS Foundation Trust commented: “The use of OCT scanning for papilloedema is growing but this research has identified that we can use measures of the optic nerve head to inform changes in ICP. Optic nerve head measures are easy and quick to perform and thus can be readily adapted into the clinical environment.”

References: Vivek Vijay, Susan P. Mollan, James L. Mitchell et al., “Using Optical Coherence Tomography as a Surrogate of Measurements of Intracranial Pressure in Idiopathic Intracranial Hypertension”, JAMA Ophthalmol. Published online October 22, 2020. doi:10.1001/jamaophthalmol.2020.4242 link: https://jamanetwork.com/journals/jamaophthalmology/article-abstract/2772034?utm_campaign=articlePDF&utm_medium=articlePDFlink&utm_source=articlePDF&utm_content=jamaophthalmol.2020.4242

Provided by University Of Birmingham

How to Recognize and Respond to a Fake Apology? (Psychology)

Suppose someone apologizes to you for harm they’ve caused, and it doesn’t quite “land.” Maybe it doesn’t sound entirely sincere—or you get a vague sense that the person delivering it just wants to wrap it up, but you’re not yet ready to move on. Or maybe they offer any of these notoriously bad ways to make amends:

©gettyimages

• A statement that contains a “but” (“I’m sorry, but…”) invalidates the apology.
• Similarly, “if” (“I’m sorry if…”) suggests that your hurt may not have happened.
• Vague wording (“for what happened”) fails to take personal responsibility.
• Passive voice (“the mistake that you were affected by”) is squirming out of responsibility, too.
• Too many words, explanations, and justifications crowd the picture.

As I write this, I struggle with the term “fake apologies,” because of course, no one can know for sure what’s in the heart of another person. But if you’re the recipient, you somehow have to figure out whether or not to accept an apology, which is hard to do if you feel uneasy and mistrustful and just can’t tell if it’s genuine.

For starters, a few words of regret usually won’t carry enough weight to build (or rebuild) trust. The words “I’m sorry” are not a magic incantation that instantly inspires faith in someone. If you’re not interested in repairing the relationship in question, you don’t have to worry about whether the apology attempt is sincere. Just move on.

But, if there is some trust between you, you probably don’t want to give up too easily. If you value the relationship, you have to determine whether or not this apology is an attempt to manipulate you and misrepresent feelings of regret. The question here concerns the person’s motives. (We’ll get to other kinds of inadequate apologies below.)

The potential apology could be less than sincere in any number of ways:

• He says the right words, but they’re pro forma (acting as required, but absent any real feeling for hurt he caused).
• She simply wants the problem she created to disappear (but doesn’t care about healing your hurt).
• He wants to avoid negative consequences of hurtful actions or inaction (rather than wanting to take responsibility for them).
• They don’t believe they’re responsible but want interpersonal “credit” for making amends (putting you in the position of being the one causing a problem, e.g., “I said I was sorry—why are you holding a grudge?”).
• She believes she’s done something harmful, and is preoccupied with her own guilt and only wants to alleviate that (rather than healing your hurt or repairing the relationship).

As they stand, these approaches are all pretty much doomed to fail. Unless they’re vastly improved, you won’t be healed and the relationship won’t be repaired.

You always may refuse to accept any inadequate apology. That’s your prerogative.

But, if you care about the person and you want to hold onto the relationship, you probably want to be sure about the person’s sincerity. What if the apology attempt is what I might call inept but well-meaning? Many would-be apologizers fall on their faces, not because of insincerity but because they simply don’t know how.

©gettyimages

How can you determine the difference?

According to Molly Howes, a clinical psychologist, in order to find out if he means his unconvincing “I’m sorry,” give him a second chance to do it right and see what happens. Naturally, the key here is that you have to know what would be an effective apology, so you know what to ask for.

A Good Apology

Saying “I’m sorry” is rarely the first part of a good apology. Before saying anything, the other person has to understand your hurt. Usually, that means listening. So, ask her to back up and let you tell her about your experience of hurt, about how her behavior has affected you.

In this Step One, nothing about the apologizer is relevant: not her good intentions, good character, history of kindness, etc. If she’s not interested or unwilling to listen to you, you have discovered the shallowness of her regret. Her apology will remain partial and ineffective. If she can engage in a genuine attempt to understand, you are on your way to a real repair.

But that’s only the first step! There are four things that have to happen for the apology to be real and effective. Each one is necessary and none is sufficient by itself. If you and your would-be apologizer go through this process together, your relationship will not only recover from this hurt; it will be stronger.

©gettyimages

The second step, to make a sincere statement of responsibility and empathy, is much easier if Step One has taken place—and much more convincing. Nonetheless, there are still several telltale ways for Step Two to go wrong, some of which appear in the beginning of this column. In my experience, most people need practice at these skills. If your apologizer has gotten this far with you, you can probably sense good-willed effort; nonetheless, your relationship will benefit from your holding high standards for this step.

The third step requires the person to make restitution, that is, to make up for the wrong or hurt. In relationships, these reparations can take the form of a “do-over,” a chance to get right what the person got wrong the first time. Often a sense of what needs to be done is reached via collaboration with you. Making it right requires a person to put her words or intentions into action. Reluctance to try again or to extend herself in this way is another sign that your apologizer isn’t really interested in making a thorough apology.

But Step Four, making sure it doesn’t happen again, is the pudding in which the proof lies. To be a trustworthy apologizer, the person has to change their ways or the conditions that led to the initial problem. Good intentions—or avowals to that effect—are easy, but rarely enough. It will take time for you to see if a true change has taken place, but a convincing plan helps you stay motivated to see it through.

Making your way through this process is energy-intensive for you both and its outcome only fully reveals itself over time. But if your apologizer follows these four steps, they will convince you of their sincerity. It’s the only way to know for sure.

This article is republished here from Psychology Today under a Creative Commons license.

How Does Background Air Pressure Influence The Inner Edge of the Habitable Zone For Tidally Locked Planets in a 3D View? (Planetary Science)

Various factors can influence the width of the habitable zone around stars, including stellar spectrum, planetary rotation, radius and gravity, orbital obliquity and eccentricity, air mass and composition, surface land and sea configurations, etc. In a recent study, Yixiao Zhang and Jun Yang investigated the effect of varying pN2, i.e. the background N2 surface pressure.

©gettyimages.

N2 is a common atmospheric constituent of rocky planets in the solar system. The value of pN2 is 0.78 bar on modern Earth, may have been less than 0.5 bar on early Earth, is 1.4 bar on Titan (≈10 times Earth's value per unit area, given that Titan's gravity is only 1.35 m s⁻²), and is 3.3 bar on Venus. Planets beyond the solar system are expected to also have a wide range of pN2, which is determined by many processes such as accretion from the protoplanetary disk, impacts, lightning, volcanism, atmospheric escape, photochemistry, and ocean chemistry.

Although N2 is not a greenhouse gas, it can influence planetary climate through several processes, including pressure broadening, Rayleigh scattering, heat capacity, lapse rate (i.e., the vertical profile of air temperature), and energy transport. The relative importance of these effects depends on the level of pN2 and the climate state. For temperate climates of early Earth and early Mars for which pN2 is not very high, the warming effect of pressure broadening dominates.

For temperate or cold climates with high-level pN2 but relatively low greenhouse gases (such as H2O and CO2), the cooling effect of Rayleigh scattering dominates. The moist adiabatic lapse rate (-∂T/∂z) increases with pN2, which influences the vapor concentration aloft, the strength of the greenhouse effect, and the shortwave heating rate. Atmospheric heat capacity (Cp·dm, where Cp is the specific heat capacity and dm is the air mass per unit area between two adjacent vertical levels) increases with pN2, which can also strongly affect the shortwave heating rate, longwave cooling rate, and condensation heating rate. The shortwave heating rate (= FSW/(Cp·dm), where FSW is the net shortwave flux for each level) decreases significantly with pN2, due to the decrease of water vapor aloft and the increase of heat capacity. The magnitude of pN2 can also influence horizontal and vertical energy transports.
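
The bookkeeping in that paragraph can be written out explicitly: the shortwave heating of an atmospheric layer is the net shortwave flux it absorbs divided by its heat capacity Cp·dm, so raising the background pressure (more air mass per layer) dilutes the heating. The sketch below uses made-up flux numbers and an Earth-like gravity simply to show the arithmetic; it is not output from the authors' ExoCAM simulations.

```python
# Sketch of the layer shortwave heating rate Q = dF_SW / (Cp * dm), with made-up fluxes.
import numpy as np

cp = 1004.0                                    # J kg^-1 K^-1, dry-air value as a stand-in
g = 9.81                                       # m s^-2, surface gravity (assumed Earth-like)

# Net downward shortwave flux at layer interfaces, top to bottom (W m^-2, invented numbers)
F_sw = np.array([340.0, 330.0, 310.0, 285.0, 270.0])

for p_surf in (1.0e5, 4.0e5):                  # ~1 bar vs ~4 bar of background N2
    p = np.linspace(0.0, p_surf, F_sw.size)    # interface pressures (Pa)
    dm = np.diff(p) / g                        # air mass per unit area in each layer
    absorbed = -np.diff(F_sw)                  # shortwave flux converging into each layer
    Q = absorbed / (cp * dm)                   # heating rate in K s^-1
    print(f"pN2 = {p_surf / 1e5:.0f} bar: heating rates (K/day) =",
          np.round(Q * 86400.0, 2))            # 4x the air mass -> 1/4 the heating rate
```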

In the current research work, Yixiao Zhang and Jun Yang examined the net effect of pN2 on the inner edge of the habitable zone with a model including all these processes as well as clouds and atmospheric sub-saturation.

But what is the inner edge of the habitable zone? Well, the inner edge is defined as the location where the absorbed shortwave radiation (ASR) of the planet exceeds the upper limit of the outgoing longwave radiation at the top of the atmosphere (OLRm), at which point the entire ocean would evaporate.
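
That definition can be turned into a one-line check: the planet crosses the inner edge when its absorbed stellar radiation exceeds the maximum outgoing longwave radiation the atmosphere can emit. The sketch below assumes a simple global-mean energy balance with an illustrative Bond albedo and OLR ceiling; the numbers are placeholders, not values from Zhang and Yang's simulations.

```python
# Hedged global-mean sketch of the runaway-greenhouse (inner-edge) criterion:
# runaway when absorbed shortwave radiation exceeds the maximum OLR the
# atmosphere can emit. The albedo and OLR ceiling below are assumed placeholders.
def crosses_inner_edge(stellar_flux, bond_albedo=0.3, olr_max=280.0):
    """True if global-mean absorbed stellar radiation exceeds the OLR ceiling (W m^-2)."""
    asr = (1.0 - bond_albedo) * stellar_flux / 4.0   # global-mean absorbed flux
    return asr > olr_max

for flux in (1400.0, 1700.0, 2000.0):                # W m^-2, illustrative stellar fluxes
    print(flux, "->", "runaway" if crosses_inner_edge(flux) else "stable")
```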

Rather than focusing on the onset of a moist greenhouse state (i.e., high water vapor concentration above the tropopause and significant water loss to space), Zhang and Yang focused on the runaway greenhouse.

Previous studies using 1D radiative-convective models showed that varying pN2 has an insignificant effect on the runaway greenhouse limit. This is because, in the runaway greenhouse state, the atmosphere is dominated by water vapor and the presence of N2 is not so important.

Later, some studies with updated absorption coefficients for CO2 and H2O found that varying pN2 has an effect of within ≈10% on the runaway greenhouse limit, due to the combined effects of pressure broadening, lapse rate, and Rayleigh scattering.

Previous studies also showed that the effect of pN2 for M and K dwarfs is much weaker than that for F and A stars, due to the lower Rayleigh scattering and higher near-infrared absorption of water vapor under redder spectra; in 1D climate calculations, the effect of pN2 on the inner edge could be as large as 65%.

But friends, two weaknesses of the 1D models are that clouds are not simulated and relative humidity is fixed, because clouds and humidity are primarily determined by 3D atmospheric circulation and convection. In the current research, Zhang and Yang aimed to improve the understanding of this problem through 3D simulations with the atmospheric general circulation model (AGCM) ExoCAM.

Effects of pN2 on the climate of a tidally locked aqua-planet. (a) air temperature, (b) relative humidity (RH), (c) water vapor concentration, (d) shortwave heating rate (QRS), (e) vertical velocity (W, solid line is zero velocity), (f) radial velocity (Vr), (g) cloud water content, and (h) cloud fraction in tidally-locked (TL) coordinates, for pN2 of 0.25, 0.5, 1.0, 2.0, 4.0, and 10.0 bar from left to right columns. The substellar point (SP) and anti-stellar point (AP) are at 0° and 180°, respectively. The contour lines in (f) are mass streamfunction with intervals of 10¹¹ kg s⁻¹ (solid lines: clockwise; dashed lines: anti-clockwise). The vertical dashed lines in (g-h) mask the region where the cloud fraction is relatively low. The numbers in the right corner of each panel are global-mean surface temperature in (a), total relative humidity in (b) (defined as the percentage of water vapor by mass contained in the whole atmosphere compared with the vapor mass that the atmosphere could theoretically hold if saturated everywhere, following Wolf & Toon (2015)), vertically integrated water vapor amount in (c), total atmospheric heat capacity in (d) (defined as Cp·m, where Cp is the specific heat capacity and m is the vertically integrated air mass per unit area), maximum vertical velocity below σ = 0.1 in (e), vertically integrated cloud water path in (g), and total cloud water fraction in (h) (assuming maximum–random overlap). The stellar flux is 1700 W m⁻², the star temperature is 3700 K, and the rotation period is 60 Earth days in all these experiments. ©Yang and Zhang.

They focused on tidally locked planets around M dwarfs due to their relatively large planet-to-star ratio and frequent transits. Previous 3D studies have examined the inner edge for tidally locked planets, but those studies always assumed a pN2 of ≈1.0 bar. Zhang and Yang showed that the effect of varying pN2 on the runaway greenhouse limit is within ≈13%, similar to that found in 1D radiative-convective models, but the underlying mechanisms are different and more complex than those found in 1D models. More importantly, they found that the dependence of the inner edge on pN2 is non-monotonic, especially for a slow rotation orbit. This is due to the competing effects of five different processes: clouds, pressure broadening, heat capacity, lapse rate, and relative humidity. These competing processes increase the complexity of predicting the location of the inner edge of the habitable zone.

For a slow rotation orbit of 60 Earth days, the critical stellar flux for the runaway greenhouse onset is 1700–1750, 1900–1950, and 1750–1800 W m⁻² under 0.25, 1.0, and 4.0 bar of pN2, respectively, suggesting that the magnitude of the effect of pN2 is within ≈13%. For a rapid rotation orbit, the effect of varying pN2 on the inner edge is smaller, within a range of ≈7%. Moreover, they showed that the Rayleigh scattering effect of varying pN2 is unimportant for the inner edge, due to the masking effect of cloud scattering and to the strong shortwave absorption by water vapor under hot climates.

They also found that lapse rate and water vapor aloft decrease with increasing pN2 and atmospheric heat capacity increases with pN2, so shortwave heating rate by water vapor decreases with pN2 under a given surface temperature. These act to delay the onsets of temperature inversion and runaway greenhouse.

Meanwhile, the effects of pressure broadening and N2–N2 absorption increase with pN2; this warms the surface and increases the water vapor concentration. Water vapor feedback further amplifies the warming. These effects promote the onsets of temperature inversion and runaway greenhouse.

They concluded that future work is required to investigate planets in other spin-orbit resonance states, like Mercury, and rapidly rotating planets, like Earth. For these planets, atmospheric circulation and cloud distribution are different from those of 1:1 tidally locked planets; this can strongly influence the trend of the effect of pN2 on the inner edge, following the analyses above. They also note that the atmospheric masses or N2 pressures of exoplanets could perhaps be inferred from observations of phase curves, emission and transmission spectra, or Raman scattering.

References: Yixiao Zhang, Jun Yang, “How does Background Air Pressure Influence the Inner Edge of the Habitable Zone for Tidally Locked Planets in a 3D View?”, The Astrophysical Journal Letters, Volume 901, Number 2, 2020. https://iopscience.iop.org/article/10.3847/2041-8213/abb87f


Cause Of Alzheimer’s Disease Traced To Mutation In Common Enzyme (Medicine)

Mutation to MARK4 makes proteins stickier and more likely to clump in brain.

Researchers from Tokyo Metropolitan University have discovered a new mechanism by which clumps of tau protein are created in the brain, killing brain cells and causing Alzheimer’s disease. A specific mutation to an enzyme called MARK4 changed the properties of tau, usually an important part of the skeletal structure of cells, making it more likely to aggregate, and more insoluble. Getting to grips with mechanisms like this may lead to breakthrough treatments.

The mutant MARK4 creates a form of tau which accumulates easily in brain cells, causing neurons to die. ©Tokyo Metropolitan University.

Alzheimer’s disease is a life-changing, debilitating condition, affecting tens of millions of people worldwide. According to the World Health Organization, it is the most common cause of senile dementia, with numbers worldwide expected to double every 20 years if left unchecked.

Alzheimer’s is said to be caused by the build-up of tangled clumps of a protein called “tau” in brain cells. These sticky aggregates cause neurons to die, leading to impairment in memory and motor functions. It is not yet clear how and why tau builds up in the brain cells of Alzheimer’s patients. Understanding the cause and mechanism behind this unwanted clumping would open up the way to new treatments and ways to prevent the disease.

A team led by Associate Professor Kanae Ando of Tokyo Metropolitan University has been exploring the role played by the MARK4 (Microtubule Affinity Regulating Kinase 4) enzyme in Alzheimer’s disease. When everything is working normally, the tau protein is an important part of the structure of cells, or the cytoskeleton. To keep the arms of the cytoskeleton or microtubules constantly building and disassembling, MARK4 actually helps tau detach from the arms of this structure.

Problems start when a mutation occurs in the gene that provides the blueprint for making MARK4. Previous work had already associated this with an increased risk of Alzheimer’s, but it was not known why this was the case. The team artificially introduced mutations into transgenic drosophila fruit flies that also produce human tau, and studied how the proteins changed in vivo. They discovered that this mutant form of MARK4 makes changes to the tau protein, creating a pathological form of tau. Not only did this “bad” tau have an excess of certain chemical groups that caused it to misfold, it also aggregated much more easily and was no longer soluble in detergents. This made it easier for tau to form the tangled clumps that cause neurons to degenerate.

MARK4 has also been found to cause a wide range of other diseases which involve the aggregation and buildup of other proteins. That’s why the team’s insights into tau protein buildup may lead to new treatments and preventative measures for an even wider variety of neurodegenerative conditions.

References: http://dx.doi.org/10.1074/jbc.RA120.014420

Provided by Tokyo Metropolitan University