By modelling the kinetochore from scratch, Max Planck Institute’s researchers get a step closer to creating artificial chromosomes
It is a cellular process that has been going on for one billion years, yet we are not able to replicate it, nor to fully understand it. Mitosis, the mechanism of cell division that is so important for life, involves more than 100 proteins at its core. Now, the group of Prof. Dr. Andrea Musacchio from the Max Planck Institute of Molecular Physiology in Dortmund has been able to fully reconstitute the engine of the mitosis machinery, called the kinetochore. Being able to model a functioning kinetochore is the first step towards the making of artificial chromosomes, which may one day be used to restore missing functions in cells. The results appear this week in the journal Science Advances.
A Wonder of Nature
As a human cell begins division, its 23 chromosomes duplicate into identical copies that remain joined at a region called the centromere. Here lies the kinetochore, a complicated assembly of proteins that binds to thread-like structures, the microtubules. As mitosis progresses, the kinetochore gives the green light to the microtubules to pull the DNA copies apart, towards the newly forming cells. “The kinetochore is a beautiful, flawless machine: You almost never lose a chromosome in a normal cell!”, says Musacchio. “We already know the proteins that constitute it, yet important questions about how the kinetochore works are still open: How does it rebuild itself during chromosome replication? How does it bind to the microtubules? And how does it control them?”
A Life's Endeavour
Musacchio's quest for answers started more than 20 years ago and has been guided by a simple motto: “Before we understand how things go wrong, we had better understand why and how things work”. He therefore embarked on the mission of rebuilding the kinetochore in vitro. In 2016, he was able to synthesize a partial kinetochore made of 21 proteins.
In the new publication, Musacchio, postdoc Kai Walstein, and their colleagues at MPI Dortmund have been able to fully reconstruct the system: All subunits, from the ones that bind the centromere to the ones that bind the microtubules, are now present in the right numbers and stoichiometry. The scientists proved that the new system functions properly by successfully substituting parts of the original kinetochore in the cell with artificial ones. “This is a real milestone in the reconstruction of an object that has existed, unaltered, in all eukaryotic cells for more than one billion years!”, says Musacchio. This breakthrough paves the way towards the making of synthetic chromosomes carrying functions that can be replicated in organisms. “The potential for biotech applications could be huge”, he says.
In the Protein Factory
MPI scientists had to overcome a major hurdle to rebuild the kinetochore, namely to fully reconstruct the highly flexible Centromeric Protein C (CENP-C). This is an essential protein that bridges the centromeric region to the outer proteins of the kinetochore. The researchers rebuilt CENP-C by “gluing” its two ends together.
A highly organised laboratory, similar to a factory, is fundamental for the reconstitution of complex protein assemblies. For each protein of the kinetochore, MPI scientists built a production pipeline to isolate the genes, express them in insect cells, and collect the products. “When we put them together in vitro, these proteins click in to form the kinetochore, just like LEGO pieces following the instructions”, he says. Unlike the famous toys, though, each kinetochore protein has a different interface and a different interaction with its neighbouring proteins.
The group will now step up to the next level of complexity: Investigating how the kinetochore functions and interacts in the presence of microtubules and supplied energy (in the form of ATP). The project has been recently granted an ERC Synergy Grant and will be carried out by an international team comprising Musacchio’s group and researchers from Cambridge, UK, and Barcelona, Spain.
Parasitoid wasps (Hymenoptera) are one of the most species-rich animal taxa on Earth, but their tropical diversity is still poorly known. Now, scientists have discovered Dolichomitus meii and several Polysphincta parasitoid wasp species previously unknown to science in South America. The new species found in the rainforests impress with their colours and exciting habits. Researchers at the University of Turku have already described 53 new animal species this year.
Researchers at the Biodiversity Unit of the University of Turku, Finland, study insect biodiversity particularly in Amazonia and Africa. In their studies, they have discovered hundreds of species previously unknown to science. Many of them are exciting in their size, appearance, or living habits.
– The species we have discovered show what magnificent surprises the Earth’s rainforests can contain. The newly discovered Dolichomitus meii wasp is particularly interesting for its large size and unique colouring. At a quick glance, its body looks black, but it glitters electric blue in the light. Moreover, its wings are golden yellow. Therefore, you could say it’s like a flying jewel, says Postdoctoral Researcher Diego Pádua from the Instituto Nacional de Pesquisas da Amazônia (INPA) in Brazil, who has also worked at the Biodiversity Unit of the University of Turku.
Dolichomitus parasitoid wasps are parasitic on insect larvae living deep in tree trunks. They lay a single egg on the insect larva and the wasp hatchling eats the host larva as it develops.
– The ovipositor of the Dolichomitus meii wasp is immensely long. The wasp sticks it into holes in the wood and tries to find host larvae inside. The species’ striking colouring protects it from birds that prey on insects: they do not snatch the wasp sitting on the tree trunk because they expect it to taste bad or to be dangerous, says Professor of Biodiversity Research Ilari E. Sääksjärvi from the University of Turku.
Polysphincta Parasitoid Wasps Manipulate the Behaviour of the Host Spider
At the same time as the publication on the Dolichomitus meii species, the researchers published another research article on South American wasp species. The article describes altogether seven new wasp species belonging to the Polysphincta genus.
The Polysphincta parasitoid wasps are parasitic on spiders. The female attacks a spider in its web and temporarily paralyses it with a venomous sting. After this, the wasp lays a single egg on the spider, and a larva hatches from the egg. The larva gradually consumes the spider and eventually pupates.
– The wasps that are parasitic on spiders are extremely interesting as many of them can manipulate the behaviour of the host spider. They can change the way a spider spins its web, so that before its death, the spider does not spin a normal web to catch prey. Instead, it spins a safe nest for the parasitoid wasp pupa, describes Professor Sääksjärvi.
Researchers at University of Turku Have Already Discovered 53 New Species This Year
The new species are often discovered through extensive international collaboration. This was also the case with the newly published studies.
– For example, the discovery of the Dolichomitus meii species was an effort of six researchers. Moreover, these researchers all come from different countries, says Professor Sääksjärvi.
The work to map out biodiversity previously unknown to science continues at the University of Turku and there are interesting species discoveries ahead.
– I just counted that, in 2021, the researchers of the Biodiversity Unit at the University of Turku have described already 53 new species from different parts of the globe – and we’re only halfway through the year, Sääksjärvi announces cheerfully.
The discoveries of the research group were published in the Biodiversity Data Journal and ZooKeys.
Featured image: The Dolichomitus meii wasp was discovered in western Amazonia. Its body looks black but glitters electric blue in light. The wasp lays its eggs on insect larvae living deep in wood. It reaches the host larvae with a long ovipositor. Picture: Filippo De Giovanni and Rodrigo Araújo
The first commercially scalable integrated laser and microcomb on a single chip
Fifteen years ago, UC Santa Barbara electrical and materials professor John Bowers pioneered a method for integrating a laser onto a silicon wafer. The technology has since been widely deployed in combination with other silicon photonics devices to replace the copper-wire interconnects that formerly linked servers at data centers, dramatically increasing energy efficiency — an important endeavor at a time when data traffic is growing by roughly 25% per year.
For several years, the Bowers group has collaborated with the group of Tobias J. Kippenberg at the Swiss Federal Institute of Technology (EPFL), within the Defense Advanced Research Projects Agency (DARPA) Direct On-Chip Digital Optical Synthesizer (DODOS) program. The Kippenberg group discovered “microcombs,” a series of parallel, low-noise, highly stable laser lines. Each of the many lines of the laser comb can carry information, extensively multiplying the amount of data that can be sent by a single laser.
Recently, several teams demonstrated very compact combs by placing a semiconductor laser chip and a separate silicon nitride ring-resonator chip very close together. However, the laser and the resonator were still separate devices, made independently and then placed in close proximity to each other and perfectly aligned, a costly and time-consuming process that is not scalable.
The Bowers lab has worked with the Kippenberg lab to develop an integrated on-chip semiconductor laser and resonator capable of producing a laser microcomb. A paper titled “Laser soliton microcombs heterogeneously integrated on silicon,” published in the new issue of the journal Science, describes the labs’ success in becoming the first to achieve that goal.
Soliton microcombs are optical frequency combs that emit mutually coherent laser lines — that is, lines that are in constant, unchanging phase relative to each other. The technology is applied in the areas of optical timing, metrology and sensing. Recent field demonstrations include multi-terabit-per-second optical communications, ultrafast light detection and ranging (LiDAR), neuromorphic computing, and astrophysical spectrometer calibration for planet searching, to name several. It is a powerful tool that normally requires exceptionally high power and expensive lasers and sophisticated optical coupling to function.
The working principle of a laser microcomb, explained lead author Chao Xiang, a postdoctoral researcher and newly minted Ph.D. in Bowers’s lab, is that a distributed feedback (DFB) laser produces one laser line. That line then passes through an optical phase controller and enters the micro-ring resonator, causing the power intensity to increase as the light travels around the ring. If the intensity reaches a certain threshold, non-linear optical effects occur, causing the one laser line to create two additional, identical lines on either side. Each of those two “side lines” creates others, leading to a cascade of laser-line generation. “You end up with a series of mutually coherent frequency combs,” Xiang said — and a vastly expanded ability to transmit data.
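The cascade described above ends in a set of equally spaced lines. As a rough illustrative sketch (the function and all numbers below are invented for illustration and are not taken from the paper), the resulting comb frequencies can be written as f_n = f_pump + n × FSR, where FSR is the resonator's free spectral range:

```python
# Illustrative sketch: an idealized frequency comb is a ladder of lines
# spaced by the resonator's free spectral range (FSR) around the pump.
def comb_lines(f_pump_thz, fsr_ghz, n_lines):
    """Return n_lines comb frequencies (in THz) centered on the pump."""
    half = n_lines // 2
    return [f_pump_thz + n * fsr_ghz / 1000.0 for n in range(-half, half + 1)]

# Assumed example values: a 193.4 THz pump (~1550 nm) and a 100 GHz FSR.
lines = comb_lines(f_pump_thz=193.4, fsr_ghz=100.0, n_lines=21)
# Each line is a potential data channel; adjacent lines differ by one FSR.
```

Because every line sits a fixed multiple of the FSR from the pump, the lines stay mutually coherent, which is what makes the comb usable for multiplexed data transmission.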
This research enables semiconductor lasers to be seamlessly integrated with low-loss nonlinear optical micro-resonators — “low-loss” because the light can travel in the waveguide without losing a significant amount of its intensity over distance. No optical coupling is required, and the device is entirely electrically controlled. Importantly, the new technology lends itself to commercial-scale production, because thousands of devices can be made from a single wafer using industry standard complementary metal oxide semiconductor (CMOS)-compatible techniques. “Our approach paves the way for large-volume, low-cost manufacturing of chip-based frequency combs for next-generation high-capacity transceivers, datacenters, space and mobile platforms,” the researchers stated.
The key challenge in making the device was that the semiconductor laser and the resonator, which generates the comb, had to be built on different material platforms. The lasers can be made only with materials from the III and V groups on the Periodic Table, such as indium phosphide, and the best combs can be made only from silicon nitride. “So, we had to find a way to put them together on a single wafer,” Xiang explained.
Working sequentially on the same wafer, the researchers leveraged UCSB’s heterogeneous integration process for making high-performance lasers on silicon substrate and the ability of their EPFL collaborators to make record ultra-low-loss high-Q silicon nitride micro-resonators using the “photonic damascene process” they developed. The wafer-scale process — in contrast to making individual devices and then combining them one by one — enables thousands of devices to be made from a single 100-mm-diameter wafer, a production level that can be scaled up further from the industry standard 200-mm- or 300-mm-diameter substrate.
For the device to function properly, the laser, the resonator and the optical phase between them must be controlled to create a coupled system based on the “self-injection locking” phenomenon. Xiang explained that the laser output is partially back-reflected by the micro-resonator. When a certain phase condition is achieved between the light from the laser and the back-reflected light from the resonator, the laser is said to be locked to the resonator.
Normally, back-reflected light harms laser performance, but here it is crucial for generating the microcomb. The locked laser light triggers soliton formation in the resonator and reduces the laser light noise, or frequency instability, at the same time. Thus, something harmful is transformed into a benefit. As a result, the team was able to create not only the first laser soliton microcomb integrated on a single chip, but also the first narrow-linewidth laser sources with multiple available channels on one chip.
“The field of optical comb generation is very exciting and moving very fast. It is finding applications in optical clocks, high-capacity optical networks and many spectroscopic applications,” said Bowers, the Fred Kavli Chair in Nanotechnology and the director of the College of Engineering’s Institute for Energy Efficiency. “The missing element has been a self-contained chip that includes both the pump laser and the optical resonator. We demonstrated that key element, which should open up rapid adoption of this technology.”
“I think this work is going to become very big,” said Xiang. The potential of this new technology, he added, reminds him of the way putting lasers on silicon 15 years ago advanced both research and industrial commercialization of silicon photonics. “That transformative technology has been commercialized, and Intel ships millions of transceiver products per year,” he said. “Future silicon photonics using co-packaged optics will likely be a strong driver for higher-capacity transceivers using a large number of optical channels.”
Xiang explained that the current comb produces about twenty to thirty usable comb lines and that the goal going forward will be to increase that number, “hopefully to get one hundred combined lines from each laser-resonator, with low power consumption.”
Based on the soliton microcombs’ low energy use and their ability to provide a large number of high-purity optical comb lines for data communications, said Xiang, “We believe that our achievement could become the backbone of efforts to apply optical frequency comb technologies in many areas, including efforts to keep up with fast-growing data traffic and, hopefully, slow the growth of energy consumption in mega-scale datacenters.”
It contains over 150 thousand compact objects, with star formation either in progress or yet to come: this is the final catalog created with data from the Herschel space telescope as part of the Hi-Gal survey. Media Inaf interviewed Davide Elia of the National Institute of Astrophysics, first author of the article presenting the new results, published in Monthly Notices of the Royal Astronomical Society
At the beginning there were almost 101 thousand. Now there are more than 150 thousand. We are talking about the compact pre- and proto-stellar sources in our galaxy, the Milky Way, cataloged as part of the Herschel infrared Galactic Plane Survey, or Hi-Gal, a project led by researchers from the National Institute of Astrophysics and based on observations made by the Herschel satellite of the European Space Agency (ESA).
The first version of the catalog, published in 2017, mainly included sources in the inner part of the Milky Way, observed looking in the direction of the galactic center from our position – the Sun is located roughly halfway between the center and the periphery. Now the new catalog adds the view of the outer part of the galaxy, allowing us to study, for the first time, the distribution of these objects on a galactic scale in unprecedented detail. An article published in June in Monthly Notices of the Royal Astronomical Society presents the content and early results of the scientific analysis of this massive dataset. Media Inaf interviewed the first author of the article, Davide Elia, a researcher at the National Institute of Astrophysics in Rome.
Doctor Elia, what is new in this work compared to the one published four years ago?
«This article presents the final catalog of all the compact sources, i.e. point-like or nearly point-like ones, which can be sites of star formation – in progress or in the future – identified by the Hi-Gal survey in the far infrared, between 70 and 500 microns. The survey was conducted in pieces: the first corresponded to the inner part of the Milky Way, about 140 degrees straddling the galactic center, which is the most densely populated region of the galaxy, also in terms of matter that can give rise to star formation. So we released the catalog for that part first. Now we have the complete survey of the galactic plane, on a slice of the sky 2 degrees wide in latitude and 360 degrees in longitude».
What has changed since then?
«Numerically, the portion of the catalog presented previously continues to be preponderant, because it includes the central part of the galaxy, the most populated one. However, qualitatively we added an important piece of information, because we observed the outer part of the galaxy, which is less populated – the ratio between the areas observed in the new study and in the previous one is almost 2:1, but the number of objects is about half – yet has characteristics that can differ considerably from those of the inner galaxy».
What can be found in the new catalog?
«First of all, the distance of over 150 thousand compact sources, which allows us to study their distribution in the galaxy. Then there are the physical properties that depend on distance: mass and brightness. If we see a source with a certain apparent brightness, the estimate we give for its intrinsic brightness changes according to the distance we attribute to the source, and the same goes for the mass we calculate. And then there is the physical size: normally we look at these maps in 2D, but only if we know the distance can we estimate how physically extended an object is. Another important novelty is that, in these four years, there has been an impressive preparatory work on the estimation of distances, which has been refined compared to the 2017 catalog not only for the 50 thousand new objects but also for the previous 100 thousand».
This is a very large dataset. How complex is it to create a catalog of this size?
«Both the previous work and this one are datasets of great size and complexity. Furthermore, there is an additional complication for Herschel – a happy complication, we could say: Herschel observed at five different wavelengths, and the same source does not necessarily appear in all five. It may appear in some and not in others, for example depending on its temperature. Even where it appears in multiple adjacent bands, it can look very different from band to band. The sky changes its appearance with the wavelength, and this also applies to individual sources. Putting all this information together, which is necessary to bring out the physical characteristics of these objects, is a fairly complex preparatory work».
So what is all this work for?
«We finally have the possibility of making a comparison between the inner part of the galaxy, the one inside an ideal circle corresponding to the orbit of the Sun around the galactic center, and the outer part, from which almost all of the objects introduced ex novo in this version of the catalog come. If the inner part of the galaxy is the most populated and most efficient in forming new stars, the outer part puts us in front of a series of other questions: for example, the metallicity is lower – in astronomical jargon, this means a lower abundance of chemical elements heavier than helium – which can determine a different behavior of the interstellar medium, compact objects and star formation».
Measuring distances, a fundamental aspect in this work, is notoriously a thorny subject in astronomy. What method did you use to estimate the distances from your sources?
«The estimation of distances, which is by no means trivial, was presented in an article led by our colleagues from the Laboratoire d’Astrophysique de Marseille, to which our group also contributed substantially, and which came out shortly before our article. The method used is that of kinematic distances: spectroscopy is used, if available, and we start from the molecular lines emitted by a nebula, in particular from the part of the nebula corresponding to the compact region that interests us. We identify a line – this is not trivial either, because there could be several lines along the same line of sight – and then, once that line has been identified, we measure the Doppler effect».
What does it mean?
«The frequency at which we measure a line is slightly offset from what we would expect to measure in a ground-based laboratory from a gas emitting inside a stationary instrument. Since there are movements due to the rotation of the whole galaxy, and at each distance from the galactic center each object has its own rotation speed, we must invoke a model that describes the rotation of the galaxy and that gives us the distance of each object we observe in a certain direction with a certain relative speed with respect to us.
However, when we solve this equation in the direction of the inner galaxy, we get two solutions: it is an intrinsic geometric problem. We have to decide which of the two solutions to choose – they are generally radically different from each other – and to resolve this ambiguity we use secondary indications from other datasets. This, on the other hand, does not happen for objects in the outer galaxy, for geometric reasons, so the distances measured for objects in the outer galaxy are not affected by this ambiguity».
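The near/far ambiguity has a simple geometric origin: toward the inner galaxy, a line of sight crosses the circle of a given galactocentric radius twice. A minimal sketch, assuming a flat rotation curve and illustrative Galactic constants (this is not the actual Marseille pipeline, which draws on spectroscopic databases and ancillary catalogs):

```python
import math

# Assumed Galactic constants (illustrative values): Sun's galactocentric
# radius R0 in kpc and circular speed V0 in km/s, with a flat rotation curve.
R0, V0 = 8.15, 236.0

def kinematic_distances(l_deg, v_lsr):
    """Heliocentric kinematic distance(s), in kpc, for galactic longitude
    l_deg and observed radial velocity v_lsr (km/s)."""
    l = math.radians(l_deg)
    # Galactocentric radius of the emitting gas from the observed velocity:
    # v_lsr = V0 * R0 * sin(l) / R - V0 * sin(l)  (flat rotation curve)
    R = V0 * R0 * math.sin(l) / (v_lsr + V0 * math.sin(l))
    # Law of cosines: R^2 = R0^2 + d^2 - 2*R0*d*cos(l), a quadratic in d.
    disc = (R0 * math.cos(l)) ** 2 - (R0 ** 2 - R ** 2)
    if disc < 0:
        return []
    root = math.sqrt(disc)
    return [d for d in (R0 * math.cos(l) - root, R0 * math.cos(l) + root) if d > 0]

# Toward the inner galaxy (e.g. l = 30 deg), a positive velocity yields
# two positive roots: the "near" and "far" kinematic distances.
near_far = kinematic_distances(30.0, 40.0)
```

For sight lines through the outer galaxy the quadratic has only one positive root, which is why the ambiguity disappears there, exactly as described in the interview.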
It seems like a very laborious process. Is that why four years passed between the two catalogs being published?
«Yes, the estimation of the distances has certainly complicated the job. The software prepared by our colleagues in Marseille is a big “machine” that does a lot of calculations: it draws from all the spectroscopic survey databases, compares each source with its surroundings to extract the most likely line to associate with the source, then calculates the distance, resolves the ambiguity related to the double solution, and also takes known distance catalogs into account. The latest version of the distance catalog was produced in summer 2020. Only at that point were we able to consolidate the catalog of the physical properties of the sources and conclude its scientific analysis».
What are the main results that you have extracted from the catalog?
«In addition to distance-dependent parameters, it is also convenient for us to discuss distance-independent quantities, such as temperature, which can be derived from the shape of the continuum spectrum and the position of its peak. Another distance-independent parameter is the ratio of luminosity to mass, because both depend on distance equally. This ratio (L/M) is very important because it is an indicator of the evolutionary state of a source».
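Why L/M is distance-independent is quick to verify: luminosity is derived from the observed flux scaled by the distance squared, and (for optically thin dust emission) so is the mass. A minimal sketch with made-up numbers, illustrative only and not the Hi-Gal pipeline:

```python
import math

# Illustrative sketch: L and M both carry the same d^2 factor, so L/M
# cancels the assumed distance entirely.
def luminosity(bolometric_flux, d_kpc):
    # L proportional to F * d^2 (inverse-square law); constants omitted
    return 4 * math.pi * d_kpc ** 2 * bolometric_flux

def mass(submm_flux, d_kpc, kappa=1.0):
    # M proportional to S_nu * d^2 for optically thin dust emission;
    # kappa stands in for the (assumed) opacity and temperature terms
    return submm_flux * d_kpc ** 2 / kappa

f_bol, s_nu = 2.5, 0.8  # made-up fluxes in arbitrary units
ratios = [luminosity(f_bol, d) / mass(s_nu, d) for d in (1.0, 2.0, 5.0)]
# every entry of `ratios` is identical: L/M does not depend on distance
```

This is why L/M can rank sources evolutionarily even when their distances are uncertain.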
What is meant by the evolutionary state of a source?
«Our sources are concentrations of gas and dust called clumps, which can form one or more stars, or are already forming them. The less evolved ones have, for the same mass, a lower luminosity. As star formation progresses inside such an envelope, its luminosity increases, while the mass remains nearly constant because only a small part of the clump collapses to form stars. If the luminosity in the L/M ratio increases, the whole fraction increases. At a certain point, the stars that are forming begin to dissipate the surrounding matter, thanks to the pressure exerted by the radiation they emit. These clumps therefore lose mass, so not only does the luminosity increase but the mass begins to decrease, and the L/M ratio continues to grow. In this way, we can use the L/M ratio, which does not depend on distance, to draw up a sort of evolutionary ranking, from pre-stellar objects to those hosting star formation, up to those hosting well-evolved, massive stars that already heavily influence the surrounding environment».
And what did you discover by drawing up this evolutionary ranking?
«An interesting result is that, representing this quantity as a function of the distance from the center of the galaxy to the periphery, we do not notice any particular trend. This is interesting because the galaxy’s disk does not have a uniform distribution of objects, but shows strong concentrations of clouds near the spiral arms. We do not see any particular dependence of the average evolutionary values on the position of the spiral arms, so the passage of a spiral arm – which is a kind of wave that travels through the galaxy – does not seem to speed up the star formation process. More than anything else, it seems that the arms act as collectors: they are like density waves that “accumulate” more clouds of gas and more stars as they pass, but without shortening the time of star formation».
Were you surprised by this result?
«We actually expected the absence of an evolutionary trend from the 2017 article, but we did not have the coverage of the entire galaxy needed to affirm it with the same authority as today. There were also indications from other surveys, but of lower quality than Herschel’s. We think we have provided a rather clear observational constraint for the theories that intend to explain the link between spiral arms and star formation».
Have you noticed any other interesting aspects?
«Another interesting thing is that even the “inter-arm” areas, between one arm and the other, are not as depopulated as one might think. This is evident both from the simple analysis of the distances and when we combine the distances with the evolutionary indicators. So we should not expect star formation only on the spiral arms, where surely there are more objects».
How much other science is hidden in this catalog?
«There is still a lot of it. The numbers are large, so you can study from small to large scales, extracting information either on individual regions or on the entire galaxy. And then we also included objects from the Far Outer Galaxy, which are located at distances of over 40-45 thousand light-years from the center of the galaxy – though they are closer to us, because we in turn are over 27 thousand light-years from the center. We have identified a few hundred of them, and these lend themselves to further studies for those who deal with this extreme periphery of our galaxy».
What kind of studies can be done with the new catalog?
«The catalog lists the physical properties of these objects and therefore leaves the community the opportunity to study them further, starting from a very broad statistical base: for those who are interested, for example, in a piece of the galaxy, in a particular region, or in all the objects with temperatures higher than a certain value, or the most massive, or the farthest from the galactic center, and so on. Furthermore, the catalog can be used, in its entirety, to characterize our galaxy in a single number – the star formation rate, or how much matter converts into stars in a year – to then compare it with distant galaxies. It is possible to study the link between compact sources and filaments, the elongated structures in molecular clouds which, especially after Herschel, are thought to play an important role in star formation. Our research group at Iaps is strongly committed to this matter. A series of selections can also be made for follow-up observations, for example of a spectroscopic type, also by means of interferometers».
What are the ideal tools to continue observing these sources?
«They are sources that Herschel, despite being a truly marvelous instrument, observed with a resolution of a few to a few tens of arcseconds. Today we have instruments such as Alma in the southern hemisphere and Noema in the northern hemisphere that allow observations at higher resolution, below one arcsecond. We can observe these objects, which with Herschel appear as large blobs, to see if they really host a single forming star or a small cluster of cores from which single stars are forming or will form.
A “son” of Hi-Gal is AlmaGal, a large project approved with Alma whose Principal Investigator is Sergio Molinari, who was also the principal investigator of Hi-Gal: we have extracted a thousand sources from the 150 thousand of the catalog to study their structure in more detail thanks to the resolution of Alma. Clearly, with Alma it would be unthinkable to observe 150,000 objects, so we have selected the best candidates for the formation of massive stars, in which we are particularly interested. The approach is always to study star formation in a statistical way across the entire galaxy, observing a variety of physical and environmental conditions in which star formation can take place. And in any case, even a thousand sources are certainly not few!».
Besides observations with large radio telescopes and interferometers like Alma, what else is there in the future of infrared astronomy after Herschel?
«At the frequencies of our catalog we are a bit stuck: Herschel observed from 70 to 500 microns, and in this domain there was the prospect of Spica, which unfortunately was set aside by ESA. For this catalog, in addition to Herschel, we also used photometric data at shorter wavelengths, around 20-25 microns, from previous missions such as Spitzer, Wise and Msx. In the future this band will be covered by the Miri instrument on board Jwst, which should launch in the fall and will surely help us find, at these wavelengths, the counterparts of our dust clumps seen in the far infrared with Herschel. It will observe them with good resolution and sensitivity, so if there are stars already formed inside these objects, it will be Jwst’s task to reveal these populations to us and therefore confirm, or not, the proto-stellar or pre-stellar nature that we have established for these objects on the basis of our far-infrared data. The legacy value of this catalog is very important, also because there will be no facilities like Herschel in the coming decades. After all, even Spica, had it been built, would have been too similar to Herschel in terms of capacity and observed wavelength range. We expect this catalog to become a reference point for a long time, a bit like the Iras catalog was in the Eighties, before the advent of Herschel».
Featured image: The Herschel Space Telescope (2009-2013) observed the sky in the infrared, allowing us to get a fascinating glimpse into the early life stages of stars. Credits: ESA
To learn more:
Read the article in Monthly Notices of the Royal Astronomical Society, “The Hi-Gal compact source catalog – II. The 360° catalog of clump physical properties”, by Davide Elia, M. Merello, S. Molinari, E. Schisano, A. Zavagno, D. Russeil, P. Mège, PG Martin, L. Olmi, M. Pestalozzi, R. Plume, SE Ragan, M. Benedettini, DJ Eden, TJT Moore, A. Noriega-Crespo, R. Paladini, P. Palmeirim, S. Pezzuto, GL Pilbratt, KLJ Rygl, P. Schilke, F. Strafella, JC Tan, A. Traficante, A. Baldeschi, J. Bally, AM di Giorgio, E. Fiorellino, SJ Liu, L. Piazzo and D. Polychroni
What is consciousness? It is not Descartes’ “Cartesian doubt”, which says “I think, therefore I am”: Descartes tried to doubt his own existence, but found that even his doubting showed he existed, since he could not doubt if he did not exist. Now Kauffman and Roli have answered this question, not through Cartesian doubt, but through how organisms find their way in their world.
According to them, finding one’s way involves finding possible uses of features of the world that might be beneficial or avoiding those that might be harmful.
“Possible uses of X to accomplish Y” are “affordances”. The number of uses of X is indefinite; the different uses are unordered and are not deducible from one another. All biological adaptations are affordances, seized either by heritable variation and selection or, far faster, by the organism acting in its world and finding uses of X to accomplish Y.
Based on this, they make four major claims:
Strong AI is not possible.
Brain-mind is not purely classical.
Brain-mind must be partly quantum.
Qualia are experienced and arise with our collapse of the wave function.
“Our Brain-Mind entangles with the world in a vast superposition. We try to and do collapse the wave function to a single state. We experience that state as a “qualia”.”
But do we have any evidence that qualia are associated with the collapse of the wave function? Yes, not just one piece but many. First, qualia are never superpositions; from this, one can say that consciousness plays some role in the collapse of the wave function. Second, finding novel affordances is not deductive, and the collapse of the wave function is also not deductive. Our experienced qualia are not deductions: our ideas, our grasping of a point, our creativity are not deductions. Third, the authors’ analysis of the incapacity of universal Turing machines, and of any classical system, to see affordances has a further implication.
“Artificial Intelligence is wonderful, but algorithmic. We are not algorithmic. Mind is almost certainly quantum, and it is a plausible hypothesis that we collapse the wave function, and thereby perceive affordances as qualia and seize them by preferring, choosing and acting to do so. We, with our minds, play an active role in evolution. The complexity of mind can have evolved with and furthered the complexity of life. At last, since Descartes lost his Res Cogitans, Mind can act in the world. Free at last”
— conclude the authors of the study
Reference: Stuart A. Kauffman and Andrea Roli, “What Is Consciousness? Artificial Intelligence, Real Intelligence, Quantum Mind, And Qualia”, arXiv, 2021. https://arxiv.org/abs/2106.15515
Sluggish cognitive tempo, a collection of symptoms including persistent dreaminess, fatigue, and slow working speed, has long been the subject of debate over whether it is part of, or separate from, ADHD.
Researchers at NYU Grossman School of Medicine and Icahn School of Medicine at Mount Sinai who led the study say the stimulant lisdexamfetamine (Vyvanse®) reduced self-reported symptoms of sluggish cognitive tempo by 30 percent. It also lowered symptoms of ADHD by more than 40 percent and significantly corrected deficits in executive brain function, including fewer episodes of procrastination, improvements in keeping things in mind, and strengthened prioritization skills.
The team interpreted that outcome to mean that decreases in ADHD-related incidents of physical restlessness, behaving impulsively, and/or moments of not paying attention were linked to some but not all of the improvements in sluggish cognitive tempo.
“Our study provides further evidence that sluggish cognitive tempo may be distinct from attention deficit hyperactivity disorder and that the stimulant lisdexamfetamine treats both conditions in adults, and when they occur together,” says lead study investigator and psychiatrist Lenard A. Adler, MD.
Dr. Adler, who directs the Adult ADHD Program at NYU Langone Health, says until now stimulants have only been shown to improve sluggish cognitive tempo symptoms in children with ADHD. The NYU Langone–Mount Sinai team’s findings, he adds, are the first to show that such treatments also work in adults.
A professor in the Department of Psychiatry and the Department of Child and Adolescent Psychiatry at NYU Langone, Dr. Adler says sluggish cognitive tempo is likely a subset of symptoms commonly seen in some patients with ADHD and other psychiatric disorders. However, it remains unclear if sluggish cognitive tempo is a distinct psychiatric condition on its own and if stimulant medications will improve sluggish cognitive tempo in patients without ADHD.
Some specialists have been seeking to qualify sluggish cognitive tempo as distinct, but critics say more research is needed to settle the question. “These findings highlight the importance of assessing symptoms of sluggish cognitive tempo and executive brain function in patients when they are initially diagnosed with ADHD,” says Dr. Adler.
For the study, funded by the drug manufacturer, Takeda Pharmaceuticals of Cambridge, Massachusetts, several dozen volunteer participants received daily doses of either lisdexamfetamine or a placebo sugar pill for one month. Researchers then carefully tracked their psychiatric health on a weekly basis through standardized tests for signs and symptoms of sluggish cognitive tempo and ADHD, as well as through other measures of brain function. Study participants then switched roles: the half who had been taking the placebo started taking daily doses of lisdexamfetamine, while the half who had been on the drug during the study’s first phase started taking the placebo.
Dr. Adler has received grant and/or research support from Sunovion Pharmaceuticals, Enymotec, Shire Pharmaceuticals (now part of Takeda), Otsuka, and Lundbeck. He has also served as a paid consultant to these companies, in addition to Bracket, SUNY, the National Football League, and Major League Baseball. He has also received royalty payments since 2004 from NYU for adult ADHD diagnostic and training materials. All of these relationships are being managed in accordance with the policies and procedures of NYU Langone.
Besides Dr. Adler, other NYU Langone researchers involved in the study are Terry Leon, MS, RN; Taylor Sardoff, BA; and Michael Silverstein, MS. Other investigators include Beth Krone, PhD, and Jeffrey Newcorn, MD, at Icahn School of Medicine at Mount Sinai in New York City; and Stephen Faraone, PhD, at SUNY Upstate Medical University in Syracuse, New York.
Featured image: The stimulant lisdexamfetamine reduced self-reported symptoms of sluggish cognitive tempo in adults with attention deficit hyperactivity disorder. PHOTO: HAILSHADOW/GETTY
By identifying the mechanism of toxicity induced by immunotherapies, scientists from UNIGE and from the Harvard Medical School are paving the way for cancer treatments with fewer side effects.
In recent years, immunotherapy has revolutionised the field of cancer treatment. However, inflammatory reactions in healthy tissues frequently trigger side effects that can be serious and lead to the permanent discontinuation of treatment. This toxicity is still poorly understood and is a major obstacle to the use of immunotherapy. Scientists from the University of Geneva (UNIGE), Switzerland, and Harvard Medical School, United States, have succeeded in establishing the differences between deleterious immune reactions and those targeting tumour cells that are sought after. It appears that while the immune mechanisms are similar, the cell populations involved are different. This work, published in the journal Science Immunology, makes it possible to envisage better targeted, more effective, and less dangerous treatments for cancer patients.
Based on massive stimulation of the patient’s immune system, immunotherapies have saved many lives. Unfortunately, they are not without consequences. “When the immune system is activated so intensively, the resulting inflammatory reaction can have harmful effects and sometimes cause significant damage to healthy tissue”, says Mikaël Pittet, holder of the ISREC Foundation Chair in Onco-Immunology at UNIGE Faculty of Medicine Department of Pathology and Immunology and Centre for Translational Research in Onco-Haematology, and a member of the Swiss Cancer Centre Leman. “Therefore, we wanted to know if there are differences between a desired immune response, which aims to eliminate cancer, and an unwanted response, which can affect healthy tissue. The identification of distinctive elements between these two immune reactions would indeed allow the development of new, more effective and less toxic therapeutic approaches.”
Using liver biopsy samples from patients treated at the CHUV and the HUG who had suffered such toxic reactions, the scientists studied the cellular and molecular mechanisms at work to reveal similarities and dissimilarities.
A similar response, but with different cells
In an immunotherapy-related toxic response, two types of immune cells — macrophage and neutrophil populations — appear to be responsible for attacking healthy tissue, but are not involved in killing cancer cells. In contrast, another cell type — a population of dendritic cells — is not involved in attacking healthy tissue but is essential for eliminating cancer cells. “Immunotherapies can trigger the production of specialised proteins that alert the immune system and trigger an inflammatory response”, explains Mikaël Pittet. “In a tumour, these proteins are welcome because they allow the immune system to destroy cancerous cells. In healthy tissue, however, the presence of these same proteins can lead to the destruction of healthy cells. The fact that these inflammatory proteins are produced by such different cells in tumours and healthy tissue is therefore an interesting finding.”
Dendritic cells are very rare, whereas macrophages and neutrophils are much more common. Some macrophages are present in most of our organs from embryonic development stages and remain there throughout our lives. Contrary to what was previously thought, these macrophages do not necessarily inhibit inflammation but, stimulated by immunotherapies, can trigger a harmful inflammatory response in the healthy tissue where they reside, thus explaining why toxicity can affect different organs.
Neutralising neutrophils for a double benefit
When macrophages are activated by drugs, they produce inflammatory proteins. These in turn activate neutrophils, which execute the toxic reaction. “This opens the possibility of limiting immunotherapy’s side effects by manipulating neutrophils”, says Mikaël Pittet.
The research team confirmed their discovery by studying the immune reactions of mice whose cell activity was modulated with genetic tools. They were able to identify a loophole that could be exploited to eliminate these side effects. Indeed, neutrophils produce some factors that are important for the development of toxicity, including TNF-α, which could be a therapeutic target. TNF-α inhibitors are already used to modulate the immune response in people with arthritis and could perhaps be useful in the cancer setting to inhibit the toxic effects of neutrophils during immunotherapy. “Furthermore, inhibiting neutrophils could be a more effective way to fight cancer: in addition to triggering a toxic response, some of these cells also promote tumour growth. Thus, by managing to control them, we could have a double beneficial effect: overcome the toxicity in healthy tissues, and limit the growth of cancerous cells”, concludes Mikaël Pittet.
The Swiss Cancer Centre Léman is a network that brings together the universities of Geneva (UNIGE) and Lausanne (UNIL), the EPFL, the HUG and the CHUV under the same banner. This alliance brings together, under a federating and regional identity, all the specialists in the chain leading from the laboratory to the bedside.
Data in a study by Mayo Clinic Cancer Center researchers indicate that the level of tumor cell PD-L1, a protein that acts as a brake to keep the body’s immune responses under control, may be an important factor for sensitivity to chemotherapy in colorectal cancer treatment. The study was published Friday, July 2, in Oncogene.
“We have identified a mechanism by which absent or low levels of tumor cell PD-L1, which is commonly found in solid tumors, can confer resistance to chemotherapy in colorectal cancer,” says Frank Sinicrope, M.D., a Mayo Clinic medical oncologist and gastroenterologist, and the study’s author.
PD-L1 is a protein that is increased on some human cancer cells and is a target for immunotherapy, but its role in response to chemotherapy is poorly understood.
“Our study found that the loss of PD-L1 in tumor cells was shown to enhance JNK signaling that modifies a protein called BIM, resulting in its inactivation such that it cannot mediate the killing of cancer cells,” says Dr. Sinicrope. He says targeting JNK may be a promising strategy to overcome drug resistance in cancer cells with low or absent tumor cell PD-L1 expression, which is typical in most colorectal cancers.
“Our results identify an important mechanism by which low or absent levels of PD-L1 protein may contribute to lack of response to chemotherapy,” says Dr. Sinicrope. He says the findings suggest a potential new strategy to target the JNK pathway, thereby sensitizing colon cancer cells to chemotherapy.
“We have identified an important role of PD-L1 in colon cancer cells that is independent of it serving as a target for cancer immunotherapy,” says Dr. Sinicrope. “Our findings demonstrate that frequently identified low or absent PD-L1 levels in human colorectal cancer cells can be a cause of resistance to chemotherapy.” He says the study findings indicate that the mechanism of this effect is mediated by enhanced JNK signaling, and inhibiting this signaling may be a promising strategy to overcome resistance to drug therapy in colorectal cancer treatment.
Reference: Sun, L., Patai, Á.V., Hogenson, T.L. et al. Irreversible JNK blockade overcomes PD-L1-mediated resistance to chemotherapy in colorectal cancer. Oncogene (2021). https://doi.org/10.1038/s41388-021-01910-6
This image taken with the NASA/ESA Hubble Space Telescope depicts the open star cluster NGC 330, which lies around 180,000 light-years away inside the Small Magellanic Cloud. The cluster – which is in the constellation Tucana (the Toucan) – contains a multitude of stars, many of which are scattered across this striking image.
Because star clusters form from a single primordial cloud of gas and dust, all the stars they contain are roughly the same age. This makes them useful natural laboratories for astronomers to learn how stars form and evolve. This image uses observations from Hubble’s Wide Field Camera 3 and incorporates data from two very different astronomical investigations. The first aimed to understand why stars in star clusters appear to evolve differently from stars elsewhere, a peculiarity first observed with Hubble. The second aimed to determine how large stars can be before they become doomed to end their lives in cataclysmic supernova explosions.
Hubble images usually show us something new about the universe. This image, however, also contains clues about the inner workings of Hubble itself. The crisscross patterns surrounding the stars in this image, known as diffraction spikes, were created when starlight interacted with the four thin vanes supporting Hubble’s secondary mirror.
Text credit: European Space Agency (ESA) Image credit: ESA/Hubble & NASA, J. Kalirai, A. Milone