Pressured Teens Will Act Out In The Classroom (Psychology)

No matter how hard a teacher tries, there will always be those kids who act out in class. You know the ones: they don’t pay attention, shout disruptive things, and take part in other miscellaneous buffoonery. Calmly reasoning with them only goes so far—and the same goes for your nerves. It can be tempting to put your foot down, but according to a 2016 study, you shouldn’t. Kids behave best when they feel a sense of control.

The study, which was published in the journal Learning and Instruction, specifically looked at students’ reactions to psychological control by teachers. What do they mean by psychological control? According to the study press release, it refers to threats or controlling language, such as commanding “do this because I say so,” without so much as an explanation. There’s no surer way to get a frustrated and disengaged teen.

Stephen Earl from the University of Kent, the lead author of the study, found that pupils under 14 years of age reported feeling disengaged when they felt forced to do activities or were made to feel incapable of success. That led them to have less energy, daydream more, and act out by talking and making noise in class. The researchers found this was true even when the teachers were well-meaning—after all, teachers only urge kids to do what they say because they want them to get more out of their education.

Although this may sound obvious on its face—nobody likes being forced to do things—it has some important implications for the classroom. It could be that "class clowns" and other misbehaving students are fooling around not because they're bad kids, but because they're feeling powerless. Instead of meeting disruption with stern hostility, it's best to really understand what's going on with your students. They'll likely zone out less, learn more, and cut back on the clownery.


People With Fregoli Delusion Think Everyone They Meet Is The Same Person In Disguise (Psychology)

When you have a crush on someone, it feels like you see them wherever you go. But, what if you literally did see them…everywhere? If you had Fregoli delusion, you might think that everyone you met is really just your crush in disguise. Middle school could’ve been that much worse.

The Fregoli delusion was first documented in 1927 when a patient believed that an actress named Robine was disguising herself as different people throughout the patient’s life. The syndrome was named after renowned 19th- and 20th-century Italian actor Leopoldo Fregoli, who was known for being able to impersonate every character in a scene.

A 2014 review of the literature found that the psychological elements that define Fregoli delusion itself are somewhat murky. But like many psychiatric disorders, the fact that it happens at all can tell us important things about the brain. Psychologist Stéphane Thibierge from the Université Paris-Diderot writes that Fregoli delusion shows that the processes of identification and recognition are two different things: patients with the delusion can identify that people have different appearances, but they’re unable to recognize that the people also have different identities.

The Fregoli delusion is included in a category of conditions known as "delusional misidentification syndromes," and it's closely related to the better-known Capgras syndrome, the irrational belief that loved ones or places have been replaced by impostors. How common is this type of delusion? Pretty rare. The Telegraph reports only about 40 cases worldwide, and estimates put such delusions at around 0.2 percent of psychiatric patients and 0.5 percent of people with dementia.


High Heels Were Originally Meant For Men (Culture / History)

It’s hard to think of a more iconically feminine accessory than the high-heeled shoe. In fact, high heels have been associated with women since all the way back in the 1600s. Well, here’s a shocker: For the seven centuries that preceded that, they were exclusively worn by men.

Before high-heeled shoes were the must-have item of the fashion-conscious upper crust, they arrived from an unexpected source: a group of horse-riding Persian diplomats. In 1599, this delegation came to Europe in search of allies in the war against the Ottoman Empire. They started in Moscow and ended in Lisbon, and wherever they went, people took notice. Specifically, they took notice of the heightened heels, a technological innovation that kept riders secure in the stirrups. By the reign of Louis XIV (1643–1715), France was fascinated with all things Persian.

To this day, Louis XIV’s name is shorthand for the absolute decadence of the absolute monarchy. It turns out that Europe’s longest-reigning royal ruler was also on the smaller side. At five feet, four inches (163 centimeters), he was under the average height of the day, so he pushed the envelope of his footwear to tower in heels as high as four inches (10 centimeters). As time went on, he felt the need to distinguish his footwear even more. A 1701 portrait shows him in full regalia, including a pair of high-heeled shoes with red-painted heels that he decreed could only be worn by those he saw fit. In other words, you could glance at a person’s feet and know instantly if they were in the king’s inner circle. As Cardi B might say, those are bloody shoes.

So what happened? Well, things started changing shortly after the shoes grew popular in Europe. About the same time that Persia-mania was catching fire on the continent, European women were starting to assert their equality by adopting traditionally male styles of dress, including heels. According to Bata Shoe Museum curator and "Heights of Fashion" author Elizabeth Semmelhack, "You had women cutting their hair, adding epaulettes to their outfits. They would smoke pipes, they would wear hats that were very masculine. And this is why women adopted the heel — it was in an effort to masculinise their outfits."

Eventually, this was codified into male heels (thick) and female heels (skinny), and the trend gradually began rolling down to the non-elite. But sometime around the turn of the 19th century, men quit wearing heels altogether.

That wasn’t the only thing they gave up. Fashion scholars call it “The Great Male Renunciation,” and it marks a moment when men’s clothing suddenly transformed from something alive with color and individual fashion to something that was comparatively drab and uniform. The various Louis’ elaborate silks gave way to simpler silhouettes and dull-colored suits, and the heel for men never really made a comeback. Except among those who use it for its original purpose — even today, cowboy boots have a short heel for holding on to a stirrup. Hey, if the ruff can make a comeback, the men’s heel certainly can.


Illusory Pattern Perception Is Why Conspiracy Theorists Believe Such Wild Things (Psychology)

Have you ever noticed that there’s an eye in a pyramid on the back of every dollar bill? And that there are pyramids in Egypt? And that the Sphinx is in Egypt too, and “sphinx” rhymes with “stinks”? It all adds up — the United States is telling each and every one of us that we stink. Either that, or we’re somehow perceiving patterns that aren’t really there.

According to a study in the European Journal of Social Psychology, “illusory pattern perception” is a major cause of belief in conspiracy theories. You can probably guess from the name what that kind of perception is. Basically, every time you make a connection between two unrelated events or experiences, you’ve got an IPP problem. For example, if you bring a PB&J to work three days in a row and your workplace nemesis calls in sick all three days, you might be inclined to stick with the sandwiches — but your good fortune is almost certainly a coincidence. And there are other types of illusory pattern perceptions that are a lot less far-fetched. Take constellations, for example: there’s not really any connection between the stars in the Big Dipper, but for some reason, some Greek dude looked up one night and said, “Oh yeah, that’s definitely a sky-bear.”
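None of this comes from the study itself, but a quick simulation shows how easily genuinely random data throws up streaks that feel meaningful. Here is a minimal sketch; the sequence length and the streak threshold are arbitrary choices made purely for illustration:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes in a sequence of coin flips."""
    best = current = 1
    for previous, flip in zip(flips, flips[1:]):
        current = current + 1 if flip == previous else 1
        best = max(best, current)
    return best

random.seed(1)
trials = 10_000
streaky = sum(longest_run([random.choice("HT") for _ in range(20)]) >= 5 for _ in range(trials))
print(f"{100 * streaky / trials:.0f}% of random 20-flip sequences contain a streak of 5 or more")
```

A long streak of heads feels like it means something, but it falls out of plain randomness often enough; the trouble starts when the brain insists on an explanation anyway.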

So it might not seem too surprising that a tendency to notice patterns where there are no patterns to notice can be linked to believing in conspiracy theories, but scientists had never actually looked into that connection before. In the experiment, the researchers gathered 264 adults from the United States and started by quizzing them on a variety of conspiracy theories. They found out how each of the subjects felt about the moon-landing hoax, whether or not the Ebola virus is manmade, and the dangerous side-effects of the Red Bull ingredient “testiculus taurus” (they made that last one up just for the study). Next, they gave each participant a series of pattern-recognition tests and they found an obvious correlation — if you can believe a shadowy cabal of scientists, at least.

The first test was about as random as you can possibly get: a coin toss. The people who believed in more conspiracy theories were more likely to perceive patterns in the way the coin landed. Then it was time for an even more complicated pattern-recognition test.

Each person was shown paintings by Victor Vasarely (who was very orderly) and Jackson Pollock (who was very not), and asked if they could find patterns in the abstract works. While most everybody could find patterns in the geometric shapes of Vasarely's pieces, only those who checked off a lot of conspiracy theories saw patterns in Pollock's painted splatters.

The ability to recognize patterns is central to the way humans experience the world, and it seems that people who believe in a lot of conspiracy theories just have that ability cranked up to 11.


Identical Twins Can’t Tell Themselves Apart, Either (Psychology)

Among the most annoying things an identical twin hears on the regular, "Can you tell each other apart?" is right up there with "Can you read each other's minds?" and "Who's the evil one?" But according to several studies, it's not as silly a question as you might think. Identical twins actually do have trouble telling who's who in photographs — and it can be even harder depending on their relationship style.

Despite the popularity of the question, there hasn’t been a ton of research into what happens when twins see each other’s faces. There have, however, been plenty of studies into what happens when a non-twin sees his or her own face in comparison with other faces. Basically, it jumps out at you.

People recognize their own faces in a lineup much faster than even those of celebrities or close friends, a phenomenon scientists call “self-face advantage.” That makes sense since you see your own face more than any other. Plus, there’s an evolutionary advantage: can you imagine what would happen if you regularly mistook your reflection for a threatening stranger?

But an identical twin has a different situation. He sees his own face more than any other, sure, but his twin's face probably comes in a close second. And studies have shown that while people recognizing other faces usually take in the whole face as a unit, people recognizing their own face use individual features — and identical twins share a whole lot of features. And while babies can generally recognize themselves in the mirror by age two, a 1987 study showed that identical twin babies only differentiated between themselves and their twins after looking at them for a long time. So for an Ig Nobel Prize-winning study published in PLOS One in 2015, Italian researchers set out to determine whether identical twins had that "self-face advantage" when it came to each other's faces.

The researchers recruited 30 volunteers in total: 10 pairs of identical twins, and 10 non-twins to act as controls, all with an equal number of men and women. The volunteers were shown a series of cropped black-and-white pictures of themselves, their twin, and a close friend or family member (sometimes right side up, sometimes upside down) for one second apiece and asked to identify who they were each time. The controls saw themselves and a pair of twins. They did this in four sessions, seeing 336 pictures in all.

Interestingly, twins took longer to recognize themselves than they did to recognize their friends, even though non-twin volunteers did the opposite. It also took the twins just as long to recognize their own twin as it did to recognize themselves. That suggests that twins don't have a "self-face advantage" — the time it takes them to work out whether they're seeing themselves or their twin erases any advantage there might have been.

The volunteers also took personality tests, so the researchers were able to delve even deeper into the data. They found that twins who scored high on markers of avoidant or anxious relationship styles (that is, who either had an unusually low attachment to relationships or were overly dependent on them) had a harder time identifying their own face. That suggests the opposite holds as well: twins with a secure relationship style should find self-recognition easier. Well-adjusted twins, take heart!


Heavy Metals Make Soil Enzymes 3 Times Weaker (Chemistry)

According to Aponte and colleagues, heavy metals suppress enzyme activity in soil by three to 3.5 times and have an especially pronounced effect on the enzymes that support carbon and sulfur cycling. The data obtained by the team could lead to more efficient use and fertilization of agricultural land.

Heavy Metals Make Soil Enzymes 3 Times Weaker, Says a Soil Scientist from RUDN University. Credit: RUDN University

Soil enzymes promote chemical reactions in soils, regulate the cellular metabolism of soil organisms, and participate in the decomposition of organic matter and the formation of humus. The quality and fertility of soil depend to a great extent on the activity of soil enzymes. Heavy metals, such as lead, zinc, cadmium, copper, and arsenic, reduce the catalytic abilities of enzymes, thus interfering with the cycling of chemical elements.

The researchers carried out a meta-analysis based on 671 observations and found that the activities of seven enzymes decrease in response to soil contamination with Pb, Zn, Cd, Cu, and As. Heavy metal (HM) contamination linearly reduced the activities of all enzymes, in the following order: arylsulfatase > dehydrogenase > β-glucosidase > urease > acid phosphatase > alkaline phosphatase > catalase.
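For readers curious how a meta-analysis like this aggregates hundreds of observations, here is a minimal sketch of the general idea. The effect-size metric shown (the log response ratio) is a common choice in soil meta-analyses but is an assumption here rather than a detail taken from the paper, and the numbers are invented purely for illustration; the real study pooled 671 published observations.

```python
import math
from collections import defaultdict

# Hypothetical observations: (enzyme, activity in contaminated soil, activity in control soil).
observations = [
    ("arylsulfatase", 2.8, 10.0),
    ("arylsulfatase", 3.1, 9.0),
    ("urease", 7.5, 10.0),
    ("urease", 6.0, 8.0),
    ("catalase", 9.0, 10.0),
]

effects = defaultdict(list)
for enzyme, contaminated, control in observations:
    # Log response ratio: negative values mean activity dropped under contamination.
    effects[enzyme].append(math.log(contaminated / control))

# Rank enzymes from most to least suppressed by their mean effect size.
ranking = sorted(effects, key=lambda e: sum(effects[e]) / len(effects[e]))
print(ranking)
```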

Arylsulfatase is an enzyme that promotes reactions between water and sulfur-bearing compounds, which is why it is associated with the biogeochemical cycle of sulfur. Similarly, other enzymes play their roles in the cycles of carbon, nitrogen, or phosphorus.

The activities of the two endoenzymes, arylsulfatase (which partly acts as an exoenzyme) and dehydrogenase, were reduced by 72% and 64%, respectively. These reductions were roughly twice as large as those of the exoenzymes: β-glucosidase, urease, acid phosphatase, alkaline phosphatase, and catalase (which is partly an endoenzyme). Unlike those, urease, an enzyme that plays a role in nitrogen cycling, is less sensitive to heavy metal concentration: its activity drops by about 10% at low levels of contamination and by up to 70% when contamination is extremely high. Notably, the activity of acid phosphatases increases in the presence of small amounts of cadmium and copper. This pattern reflects the much stronger impact of HMs on living microorganisms and their endoenzymes than on extracellular enzymes stabilized on clay minerals and organic matter.

The researchers also found that increasing clay content weakened the negative effects of HM contamination on enzyme activities (EAs). All negative effects of HMs on EAs decreased with soil depth, because HMs remain mainly in the topsoil.

According to the researchers, EAs involved in the cycling of C and S were more affected by HMs than the enzymes associated with the cycling of N and P. Consequently, HM contamination may alter the stoichiometry of C, N, P, and S released by the enzymatic decomposition of organic compounds, which in turn affects microbial community structure and activity.


References: Humberto Aponte, Paula Meli, Benjamin Butler, Jorge Paolini, Francisco Matus, Carolina Merino, Pablo Cornejo, Yakov Kuzyakov, "Meta-analysis of heavy metal effects on soil enzyme activities", Science of the Total Environment, Volume 737, 1 October 2020, 139744, https://doi.org/10.1016/j.scitotenv.2020.139744

Introducing METISSE, A Faster Stellar Evolution Code For Calculating The Properties Of Stars (Astronomy)

In the era of advanced electromagnetic and gravitational wave detectors, it has become increasingly important to effectively combine and study the impact of stellar evolution on binaries and dynamical systems of stars. Systematic studies dedicated to exploring uncertain parameters in stellar evolution are required to account for the recent observations of the stellar populations.

This artist's impression shows stars of different masses, from the smallest "red dwarfs," weighing in at about 0.1 solar masses, to massive "blue" stars weighing around 10 to 100 solar masses. While red dwarfs are the most abundant stars in the Universe, it's the massive blue stars that contribute the most to the evolution of star clusters and galaxies. Credit: ESO/M. Kornmesser

The best tools for studying massive stars are detailed stellar evolution codes: computer programs that calculate both the interior structure and the evolution of these stars. Unfortunately, detailed codes are computationally expensive and time-consuming—it can take several hours to compute the evolution of just a single star. For this reason, it's impractical to use these codes for modeling stars in complex systems such as globular star clusters, which can contain millions of interacting stars.

To address this problem, a team of scientists presented a more adaptable alternative to the commonly used Single Star Evolution (SSE) fitting formulae, called the METhod of Interpolation for Single Star Evolution (METISSE). Interpolation is a method for estimating a quantity based on nearby values, such as estimating the size of a star from the sizes of stars with similar masses. Via interpolation, METISSE quickly calculates the properties of a star at any instant by using selected stellar models computed with detailed stellar evolution codes.
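To make the idea concrete, here is a minimal sketch of interpolation over a grid of precomputed models. This is only an illustration of the general principle, not METISSE's actual implementation, and the masses and radii below are invented:

```python
import numpy as np

# Hypothetical grid of detailed models: initial mass (solar masses) -> radius (solar radii).
# METISSE interpolates full evolutionary tracks; here we interpolate a single quantity.
grid_masses = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
grid_radii = np.array([1.0, 1.6, 2.6, 3.9, 6.0])  # made-up values

def radius_from_mass(mass):
    """Estimate a star's radius by piecewise-linear interpolation between grid models."""
    return np.interp(mass, grid_masses, grid_radii)

print(radius_from_mass(7.0))  # a 7-solar-mass star, which is not in the grid
```

Looking up and interpolating precomputed numbers like this is far cheaper than solving the stellar structure equations from scratch for every star, which is where the speed advantage over detailed codes comes from.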

The team used METISSE with detailed stellar tracks computed by Modules for Experiments in Stellar Astrophysics (MESA), the Bonn Evolutionary Code (BEC), and the Cambridge STARS code. What did they find? Compared to SSE, METISSE is on average three times faster at calculating the properties of a star.

Most importantly, METISSE can use different sets of stellar models to predict the properties of stars, which matters especially for massive stars. Massive stars are rare, and their complex and short lives make it difficult to calculate their properties. Consequently, detailed stellar evolution codes often have to make assumptions while computing the evolution of these stars. The differences in the assumptions used by different stellar evolution codes can significantly impact their predictions about the lives and properties of massive stars.

This illustration demonstrates how a massive star (at least 8 times as massive as our sun) fuses heavier and heavier elements until exploding as a supernova and spreading those elements throughout space. Credits: NASA, ESA, and L. Hustak (STScI)

They interpolated stars that were between nine and 100 times the mass of the sun and compared the predictions for these stars' final fates. For the massive stars in their set, they found that the masses of the stellar remnants (neutron stars or black holes) can vary by up to 20 times the mass of our sun between models, while the maximum radial expansion achieved by a star can differ by an order of magnitude between different sets of stellar models.


References: Poojan Agrawal, Jarrod Hurley, Simon Stevenson, Dorottya Szécsi, Chris Flynn, "The fates of massive stars: exploring uncertainties in stellar evolution with METISSE", Monthly Notices of the Royal Astronomical Society, Volume 497, Issue 4, October 2020, Pages 4549–4564, https://doi.org/10.1093/mnras/staa2264

Sharks May Have Evolved from Bony Ancestors (Paleontology)

Using X-ray computed microtomography, paleontologists reported the discovery of extensive endochondral bone in Minjinia turgenensis, a new genus and species of 'placoderm'-like fish from the Early Devonian (Pragian) of western Mongolia. The fossil consists of a partial skull roof and braincase, with anatomical details providing strong evidence of placement in the gnathostome stem group. However, its endochondral space is filled with an extensive network of fine trabeculae resembling the endochondral bone of osteichthyans (the hard bone that makes up our skeleton after birth). The discovery suggests that the lighter skeletons of sharks may have evolved from bony ancestors, rather than the other way around.

Virtual 3D model of the braincase of Minjinia turgenensis generated from CT scan. Inset shows raw scan data showing the spongy endochondral bone inside. Image credit: Brazeau et al, doi: 10.1038/s41559-020-01290-2.

Sharks have skeletons made of cartilage, which is around half the density of bone.

Cartilaginous skeletons are thought to have evolved before bony ones, but it was believed that sharks split from other animals on the evolutionary tree before bone appeared, keeping their cartilaginous skeletons while other fish, and eventually us, went on to evolve bone.

Minjinia turgenensis belongs to a broad group of fish called placoderms, out of which sharks and all other jawed vertebrates — animals with backbones and mobile jaws — evolved.

Previously, no placoderm had been found with endochondral bone, but the skull fragments of this ancient fish species were wall-to-wall endochondral bone.

This could suggest the ancestors of sharks first evolved bone and then lost it again, rather than keeping their initial cartilaginous state for more than 400 million years.


References: Brazeau, M.D., Giles, S., Dearden, R.P. et al., "Endochondral bone in an Early Devonian 'placoderm' from Mongolia", Nature Ecology & Evolution (2020), https://doi.org/10.1038/s41559-020-01290-2

Comet 67P/Churyumov-Gerasimenko’s Interior is Less Dense than Its Surface (Astronomy)

A new analysis of data gathered by the CONSERT (Comet Nucleus Sounding Experiment by Radio wave Transmission) instrument, a radar onboard ESA’s Rosetta spacecraft and its Philae lander, confirms that solar radiation has significantly modified the surface of comet 67P/Churyumov-Gerasimenko as it travels through space between the orbits of Jupiter and Earth.

CONSERT, a bistatic radar onboard the Rosetta spacecraft and its Philae lander, was designed to probe the nucleus of comet 67P/Churyumov–Gerasimenko with radio waves at a frequency of 90 MHz. In September 2016, the exact position of Philae was retrieved, within the region previously identified by CONSERT. This allowed the team to revisit the measurements and improve their analysis of the properties of the interior, the results of which they present in their study.

ESA’s Philae lander at work on comet 67P/Churyumov-Gerasimenko. Image credit: ESA / AOES Medialab.

The time it took the signal to travel between CONSERT's two antennas offered insights into the properties of the comet's nucleus, such as its porosity and composition.

This graphic shows the signal connecting the CONSERT instrument on Philae, on the surface of comet 67P/Churyumov-Gerasimenko, to the one on the Rosetta orbiter. The fan-like appearance is a result of the motion of Rosetta along its orbit, with the colors marking the separate signal paths as the orbit evolves. The lower image shows the signals in more detail, propagating inside the comet from Philae to the points where they leave the comet toward the orbiter. The curving is a result of the projection of the signal paths onto the bumpy surface of the comet. The bluer colors indicate shallower paths (just a few centimeters), while the redder tones show where the signals penetrated to depths of more than 100 m (328 feet). Image credit: Kofman et al, doi: 10.1093/mnras/staa2001.

Dr. Kofman and colleagues discovered that the radio waves propagated at different velocities in different regions, indicating varying densities within the comet.

The relative permittivity of the materials is found to range from about 1.7 to 1.95 in the shallow subsurface (<25 m) and about 1.2 to 1.32 in the interior.

These differences indicate different average densities between the shallow subsurface and the interior of the comet.
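As a rough illustration of why the radar measurements translate into density estimates: in a low-loss material, radio waves travel at roughly the speed of light divided by the square root of the relative permittivity, so the lower-permittivity interior lets CONSERT's 90 MHz signal move faster than the shallow subsurface does. Here is a minimal sketch using the permittivity values quoted above; the further step from permittivity to porosity or density is omitted, since it depends on assumptions about the comet's composition:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_speed(relative_permittivity):
    """Approximate phase velocity of a radio wave in a low-loss, non-magnetic material."""
    return C / relative_permittivity ** 0.5

for label, eps in [("shallow subsurface", 1.95), ("interior", 1.32)]:
    print(f"{label}: eps_r = {eps:.2f}, v = {propagation_speed(eps) / 1e8:.2f}e8 m/s")
```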

These differences can be explained by various physical phenomena, such as different porosities, the possible compaction of surface materials, or perhaps even different proportions of the same materials.

This strongly suggests that the less dense interior has kept its pristine nature.


References: Wlodek Kofman et al. 2020. The interior of comet 67P/C-G: revisiting CONSERT results with the exact position of the Philae lander. MNRAS 497 (3): 2616-2622; doi: 10.1093/mnras/staa2001