Frozen in time: Transparent worms keep genes in sync for 20 million years

Two species of worms have retained remarkably similar patterns in the way they switch their genes on and off despite having split from a common ancestor 20 million years ago, a new study finds.

The findings appear in the June 19 issue of the journal Science.

"It was just remarkable, with this evolutionary distance, that we should see such coherence in gene expression patterns," said Dr. Robert Waterston, professor of genome sciences at the University of Washington School of Medicine in Seattle and a co-senior author of the paper. "I was surprised how well everything lined up."

Gene expression patterns tended to remain unchanged — or what evolutionary biologists call conserved — when a change might affect many cell types, Waterston said.

"If the gene is broadly expressed in many cell types across the organism, it may be difficult to change expression," he said. "But if the gene expression is limited to a single cell type or a few cell types, maybe it can succeed."

When gene expression did diverge between the two worms, the changes were more likely to occur in specialized cell types. For example, expression patterns in cells involved in basic functions like muscle or gut tended to be conserved, while expression patterns in more specialized cells involved in sensing and responding to the worm's environment were more likely to diverge.

"Genes related to neuronal function, for example, seem to diverge more rapidly — perhaps because changes were needed to adapt to new environments — but for now, that's speculation," said Christopher R. L. Large, a postdoctoral fellow in the Department of Genetics at the Perelman School of Medicine at the University of Pennsylvania and the paper's lead author. Large earned his Ph.D. in genome sciences from the UW School of Medicine.

In the study, researchers compared gene expression patterns in two soil-dwelling roundworms, Caenorhabditis elegans and Caenorhabditis briggsae. Both species are ideal for studying development: They are small, about a millimeter long; simple, made up of about 550 cells when fully developed; and transparent. These characteristics allow scientists to observe their cells divide and develop in real time. Importantly, these worms share many of their roughly 20,000 genes with more complex organisms, including humans.

All the cells in both worms have been identified and mapped. Despite 20 million years of evolution, the two worms retain nearly identical body plans and cell types, with an almost one-to-one correspondence that makes them ideal subjects for comparison.

The goal of the study was to compare gene expression in every cell type of the two worms to determine what changes had occurred since they split from their common ancestor.

To do this, the researchers measured levels of messenger RNA in every cell at various stages of embryonic development by using a technique called single-cell RNA sequencing.

Messenger RNA, or mRNA, carries the instructions for making proteins from active genes to the cell's protein-making machinery. High levels of mRNA from a gene indicate that it is active. Low levels mean it's inactive.

With the single-cell RNA sequencing technique, the researchers tracked changes in individual cells during the worms' embryonic development, from the stage when the embryo was a ball of 28 mostly undifferentiated cells to the point when most cell types had developed into their near-final form, a process that takes about 12 hours.

"We've been studying the evolution of development since the 1970s," said Dr. Junhyong Kim, professor of biology and director of the Penn Genome Frontiers Institute, a co-senior author of the study. "But this is the first time that we've been able to compare development in every single cell of two different organisms."

Kim said the finding that some gene expression was conserved wasn't surprising, because of how similar the worms' bodies are. But it was surprising that when there were changes, those changes appeared to have no effect on the body plan.

The study describes where and when gene expression patterns differ between the species but doesn't yet explain why, said Dr. John Isaac Murray, associate professor of genetics at the Perelman School of Medicine and the study's third senior author.

"It's hard to say whether any of the differences we observed were due to evolutionary adaptation or simply the result of genetic drift, where changes happen randomly," he said. "But this approach will allow us to explore many unanswered questions about evolution."

This study was supported by the National Institutes of Health (HD105819, HG010478, HG007355) and the National Science Foundation (PRFB2305513).

Materials provided by University of Washington School of Medicine/UW Medicine. Original written by Michael McCarthy. Note: Content may be edited for style and length.

Photon-powered alchemy: How light is rewriting fossil fuel chemistry

Colorado State University researchers have published a paper in Science that describes a new, more efficient light-based process for transforming fossil fuels into useful modern chemicals. In it, they report that their organic photoredox catalysis system is effective even at room temperature. That advantage could lower the energy demands of chemical manufacturing in a variety of settings and could also reduce associated pollution, among other benefits.

The work is led at CSU by professors Garret Miyake and Robert Paton from the Department of Chemistry and the Center for Sustainable Photoredox Catalysis (SuPRCat). Their system – inspired by photosynthesis – uses visible light to gently alter the properties of chemical compounds. It does this by exposing them to two separate photons (light particles) to generate energy needed for the desired reactions. A single photon does not normally carry enough energy for these processes, said Miyake. By combining energy from two light particles, the team's system can perform super-reducing reactions – chemical changes that require a lot of energy to break tough bonds or add electrons – easily.
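The photon arithmetic behind the two-photon scheme follows the standard relation E = hc/λ. A minimal sketch; the 450 nm wavelength below is an illustrative visible-light value, not a figure from the paper:

```python
PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of one photon at the given wavelength: E = hc / lambda."""
    return PLANCK_EV_NM / wavelength_nm

# One blue visible photon carries under 3 eV; pooling two roughly doubles
# the energy available to drive a bond-breaking "super-reducing" step.
single = photon_energy_ev(450.0)
pooled = 2 * single
```

This is why a single visible photon typically falls short for these reductions: the pooled energy of two is what crosses the threshold.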

Miyake said their system was tested on a group of chemical compounds called aromatic hydrocarbons – otherwise known as arenes. These compounds are usually resistant to change.

"This technology is the most efficient system currently available for reducing arenes – such as benzene in fossil fuels – for the production of chemicals needed for plastics and medicine," Miyake said. "Usually generating these reactions is difficult and energy intensive because the original bonds are so strong."

The research continues work being done through the U.S. National Science Foundation Center for Sustainable Photoredox Catalysis at CSU. Miyake is the director of that multi-institution research effort to transform chemical synthesis processes across many uses.

Katharine Covert, program director for the NSF Centers for Chemical Innovation program, which supported this research, said photoredox catalysis has become indispensable for many industries.

"Photoredox catalysis has become indispensable for pharmaceutical development and other industries," said Covert. "Through the NSF Center for Sustainable Photoredox Catalysis, synthetic and computational chemists have teamed up to understand the fundamental chemical nature of how those catalysts function and, in so doing, found a new path that requires less heat and energy."

Miyake said researchers across the center are developing catalysis systems similar to the one described in this paper to support energy efficient production of ammonia for fertilizers, the breakdown of PFAS forever chemicals, and the upcycling of plastics.

"We built an all-star team of chemists to address these challenges and make a more sustainable future for this world," Miyake said. "The world has a timeclock that is expiring, and we must meet the urgent need for developing sustainable technologies before our current ways of doing things puts us to a place that we can't recover from."

CU Boulder Professor Niels Damrauer is also an author on the paper and member of the center. Other CSU authors include Amreen Bains, Brandon Portela, Alexander Green, Anna Wolff and Ludovic Patin.

Materials provided by Colorado State University. Note: Content may be edited for style and length.

MIT’s tiny 5G receiver could make smart devices last longer and work anywhere

MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.

The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.

The researchers' chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver's amplifier from unwanted wireless signals that could jam the device.

Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.

The receiver's capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.

"This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks," says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.

He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.

A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.

Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.

But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.

"This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment," Araei says.

To reduce the size, cost, and power consumption of an IoT device, engineers can't rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.

One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.

In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.

Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.

"This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit," Araei says.

Their receiver has an active area of less than 0.05 square millimeters.

One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.

In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.

To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.

Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.

"Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small," Araei adds.

Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.

Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.

This research is supported, in part, by the National Science Foundation.

Materials provided by Massachusetts Institute of Technology. Original written by Adam Zewe. Note: Content may be edited for style and length.

Diabetes drug cuts migraines in half by targeting brain pressure

A diabetes medication that lowers brain fluid pressure has cut monthly migraine days by more than half, according to a new study presented today at the European Academy of Neurology (EAN) Congress 2025.

Researchers at the Headache Center of the University of Naples "Federico II" gave the glucagon-like peptide-1 (GLP-1) receptor agonist liraglutide to 26 adults with obesity and chronic migraine (defined as ≥15 headache days per month). Patients reported an average of 11 fewer headache days per month, while disability scores on the Migraine Disability Assessment Test dropped by 35 points, indicating a clinically meaningful improvement in work, study, and social functioning.

GLP-1 agonists have gained widespread attention in recent years, reshaping treatment approaches for several diseases, including diabetes and cardiovascular disease. In the treatment of type 2 diabetes, liraglutide helps lower blood sugar levels and reduce body weight by suppressing appetite and reducing energy intake.

Importantly, while participants' body-mass index declined slightly (from 34.01 to 33.65), this change was not statistically significant. An analysis of covariance confirmed that BMI reduction had no effect on headache frequency, strengthening the hypothesis that pressure modulation, not weight loss, drives the benefit.

"Most patients felt better within the first two weeks and reported quality of life improved significantly," said lead researcher Dr Simone Braca. "The benefit lasted for the full three-month observation period, even though weight loss was modest and statistically non-significant."

Patients were screened to exclude papilledema (optic disc swelling resulting from increased intracranial pressure) and sixth nerve palsy, ruling out idiopathic intracranial hypertension (IIH) as a confounding factor. Growing evidence closely links subtle increases in intracranial pressure to migraine attacks. GLP-1-receptor agonists such as liraglutide reduce cerebrospinal fluid secretion and have already proved effective in treating IIH. Building on these observations, Dr Braca and colleagues hypothesized that exploiting the same mechanism might dampen the cortical and trigeminal sensitization that underlies migraine.

"We think that, by modulating cerebrospinal fluid pressure and reducing intracranial venous sinuses compression, these drugs produce a decrease in the release of calcitonin gene-related peptide (CGRP), a key migraine-promoting peptide," Dr Braca explained. "That would pose intracranial pressure control as a brand-new, pharmacologically targetable pathway."

Mild gastrointestinal side effects (mainly nausea and constipation) occurred in 38% of participants but did not lead to treatment discontinuation.

Following this exploratory 12-week pilot study, a randomized, double-blind trial with direct or indirect intracranial pressure measurement is now being planned by the same research team in Naples, led by Professor Roberto De Simone. "We also want to determine whether other GLP-1 drugs can deliver the same relief, possibly with even fewer gastrointestinal side effects," Dr Braca noted.

If confirmed, GLP-1-receptor agonists could offer a new treatment option for the estimated one in seven people worldwide who live with migraine, particularly those who do not respond to current preventives. Given liraglutide's established use in type 2 diabetes and obesity, it may represent a promising case of drug repurposing in neurology.

Dr Simone Braca is a neurology resident and clinical research fellow at the Headache Centre of the University of Naples "Federico II," Italy. His work focuses on the interplay between applied pharmacodynamics, intracranial-pressure regulation and primary headache disorders. Dr Braca has authored or co-authored several peer-reviewed papers on migraine therapeutics and serves as an early-career representative in the European Academy of Neurology (EAN) Headache Scientific Panel. He combines hands-on patient care with translational research, aiming to bring novel, mechanism-based treatments from bench to bedside.

Materials provided by Beyond. Note: Content may be edited for style and length.

The Atlantic’s chilling secret: A century of data reveals ocean current collapse

For more than a century, a patch of cold water south of Greenland has resisted the Atlantic Ocean's overall warming, fueling debate among scientists. A new study identifies the cause as the long-term weakening of a major ocean circulation system.

Researchers from the University of California, Riverside show that only one explanation fits both observed ocean temperatures and salinity patterns: the Atlantic Meridional Overturning Circulation, or AMOC, is slowing down. This massive current system helps regulate climate by moving warm, salty water northward and cooler water southward at depth.

"People have been asking why this cold spot exists," said UCR climate scientist Wei Liu, who led the study with doctoral student Kai-Yuan Li. "We found the most likely answer is a weakening AMOC."

The AMOC acts like a giant conveyor belt, delivering heat and salt from the tropics to the North Atlantic. A slowdown in this system means less warm, salty water reaches the sub-polar region, resulting in the cooling and freshening observed south of Greenland.

Because a slowdown leaves the surface waters of the North Atlantic cooler and fresher, salinity and temperature records can be used to gauge the strength of the AMOC.

Liu and Li analyzed a century's worth of this data, as direct AMOC observations go back only about 20 years. From these long-term records, they reconstructed changes in the circulation system and compared those with nearly 100 different climate models.

As the paper published in Communications Earth & Environment shows, only the models simulating a weakened AMOC matched the real-world data. Models that assumed a stronger circulation didn't come close.

"It's a very robust correlation," Li said. "If you look at the observations and compare them with all the simulations, only the weakened-AMOC scenario reproduces the cooling in this one region."

The study also found that the weakening of the AMOC correlates with decreased salinity. This is another clear sign that less warm, salty water is being transported northward.

The consequences are broad. The South Greenland anomaly matters not just because it's unusual, but because it's one of the most sensitive regions to changes in ocean circulation. It affects weather patterns across Europe, altering rainfall and shifting the jet stream, which is a high-altitude air current that steers weather systems and helps regulate temperatures across North America and Europe.

The slowdown may also disturb marine ecosystems, as changes in salinity and temperature influence where species can live.

This result may help settle a dispute among climate modelers about whether the South Greenland cooling is driven primarily by ocean dynamics or by atmospheric factors such as aerosol pollution. Many newer models suggested the latter, predicting a strengthened AMOC due to declining aerosol emissions. But those models failed to recreate the actual, observed cooling.

"Our results show that only the models with a weakening AMOC get it right," Liu said. "That means many of the recent models are too sensitive to aerosol changes, and less accurate for this region."

By resolving that mismatch, the study strengthens future climate forecasts, especially those concerning Europe, where the influence of the AMOC is most pronounced.

The study also highlights the ability to draw clear conclusions from indirect evidence. With limited direct data on the AMOC, temperature and salinity records provide a valuable alternative for detecting long-term change, and for helping to predict future climate scenarios.

"We don't have direct observations going back a century, but the temperature and salinity data let us see the past clearly," Li said. "This work shows the AMOC has been weakening for more than a century, and that trend is likely to continue if greenhouse gases keep rising."

As the climate system shifts, the South Greenland cold spot may grow in influence. The hope is that by unlocking its origins, scientists can better prepare societies for what lies ahead.

"The technique we used is a powerful way to understand how the system has changed, and where it is likely headed if greenhouse gases keep rising," Li said.

Materials provided by University of California – Riverside. Note: Content may be edited for style and length.

Gravity, flipped: How tiny, porous particles sink faster in ocean snowstorms

The deep ocean can often look like a real-life snow globe. As organic particles from plant and animal matter on the surface sink downward, they combine with dust and other material to create "marine snow," a beautiful display of ocean weather that plays a crucial role in cycling carbon and other nutrients through the world's oceans.

Now, researchers from Brown University and the University of North Carolina at Chapel Hill have found surprising new insights into how particles sink in stratified fluids like oceans, where the density of the fluid changes with depth. In a study published in Proceedings of the National Academy of Sciences, they show that the speed at which particles sink is determined not only by resistive drag forces from the fluid, but also by the rate at which they can absorb salt relative to their volume.

"It basically means that smaller particles can sink faster than bigger ones," said Robert Hunt, a postdoctoral researcher in Brown's School of Engineering who led the work. "That's exactly the opposite of what you'd expect in a fluid that has uniform density."

The researchers hope the new insights could aid in understanding the ocean nutrient cycle, as well as the settling of other porous particulates, including microplastics.

"We ended up with a pretty simple formula where you can plug in estimates for different parameters — the size of the particles or speed at which the liquid density changes — and get reasonable estimates of the sinking speed," said Daniel Harris, an associate professor of engineering at Brown who oversaw the work. "There's value in having predictive power that's readily accessible."

The study grew out of prior work by Hunt and Harris investigating neutrally buoyant particles — those that sink to a certain depth and then stop. Hunt noticed some strange behavior that seemed to be related to the porosity of the particles.

"We were testing a theory under the assumption that these particles would remain neutrally buoyant," Hunt said. "But when we observed them, they kept sinking, which was actually kind of frustrating."

That led to a new theoretical model of how porosity — specifically, the ability to absorb salt — would affect the rate at which the particles sank. The model predicts that the more salt a particle can absorb relative to its size, the faster it sinks. That means, somewhat counterintuitively, that small porous particles sink faster than larger ones.
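One intuition for why smaller particles can win: if salt uptake happens through the particle's surface (scaling as r²) while the buoyancy it must shed scales with its volume (r³), the uptake-to-volume ratio grows as the particle shrinks. This is an illustrative scaling argument under that surface-uptake assumption, not the paper's actual formula:

```python
import math

def uptake_to_volume_ratio(radius: float) -> float:
    """Surface-area-to-volume ratio of a sphere, a stand-in for salt
    uptake rate relative to particle volume; works out to 3 / radius."""
    surface = 4.0 * math.pi * radius ** 2
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return surface / volume

# Halving the radius doubles the ratio, so the smaller sphere
# equilibrates its density with the surroundings faster and keeps sinking.
ratio_1mm = uptake_to_volume_ratio(0.001)
ratio_2mm = uptake_to_volume_ratio(0.002)
```

The same surface-versus-volume logic is consistent with the flat-particle result: a thin particle's smallest dimension controls how quickly salt can reach its interior.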

To test the model, the researchers developed a way to make a linearly stratified body of water in which the density of the liquid increased gradually with depth. To do that, they fed a large tub with water sourced from two smaller tubs, one with fresh water and the other with salt water. Controllable pumps from each tub enabled them to carefully control the density profile of the larger tub.
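The two-pump filling scheme can be sketched numerically: choose a target density for each depth, then set the salt-water fraction of the blend (and hence the relative pump rates) accordingly. The fresh- and salt-water densities below are typical illustrative values, not figures from the paper:

```python
RHO_FRESH = 998.0   # kg/m^3, fresh water (illustrative value)
RHO_SALT = 1025.0   # kg/m^3, salty source water (illustrative value)

def target_density(depth: float, tank_depth: float) -> float:
    """Linearly stratified profile: freshest at the surface (depth 0),
    saltiest and densest at the tank bottom."""
    return RHO_FRESH + (RHO_SALT - RHO_FRESH) * depth / tank_depth

def salt_fraction(rho: float) -> float:
    """Volume fraction of salt water to blend with fresh water (assuming
    ideal mixing) to hit a target density; this sets the two pump rates."""
    return (rho - RHO_FRESH) / (RHO_SALT - RHO_FRESH)

# Water destined for mid-depth should be a 50/50 blend of the two tubs.
mid_rho = target_density(0.5, 1.0)
```

Ramping the salt fraction from 0 to 1 as the tank fills from the bottom up yields the linear density gradient the experiments required.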

Using 3D-printed molds, the team then created particles of varying shapes and sizes made from agar, a gelatinous material derived from seaweed. Cameras imaged individual particles as they sank.

The experiments confirmed the predictions of the model. For spherical particles, smaller ones tended to sink faster. For thinner or flatter particles, their settling speed was primarily determined by their smallest dimension. That means that elongated particles actually sink faster than spherical ones of the same volume.

The results are surprising, the researchers said, and could provide important insights into how particles settle in more complex ecological settings — either for understanding natural carbon cycling or for engineering ways of speeding up carbon capture in large bodies of water.

"We're not trying to replicate full oceanic conditions," Harris said. "The approach in our lab is to boil things down to their simplest form and think about the fundamental physics involved in these complex phenomena. Then we can work back and forth with people measuring these things in the field to understand where these fundamentals are relevant."

Harris says he hopes to connect with oceanographers and climate scientists to see what insights these new findings might provide.

Other co-authors of the research were Roberto Camassa and Richard McLaughlin from UNC Chapel Hill. The research was funded by the National Science Foundation (DMS-1909521, DMS-1910824, DMS-2308063) and the Office of Naval Research (N00014-18-1-2490 and N00014-23-1-2478).

Materials provided by Brown University. Note: Content may be edited for style and length.

Hydrogen fuel at half the cost? Scientists reveal a game-changing catalyst

To reduce greenhouse gas emissions and combat climate change, the world urgently needs clean and renewable energy sources. Hydrogen is one such clean energy source that has zero carbon content and stores much more energy by weight than gasoline. One promising method to produce hydrogen is electrochemical water-splitting, a process that uses electricity to break down water into hydrogen and oxygen. In combination with renewable energy sources, this method offers a sustainable way to produce hydrogen and can contribute to the reduction of greenhouse gases.

Unfortunately, large-scale production of hydrogen using this method is currently unfeasible due to the need for catalysts made from expensive rare earth metals. Consequently, researchers are exploring more affordable electrocatalysts, such as those made from diverse transition metals and compounds. Among these, transition metal phosphides (TMPs) have attracted considerable attention as catalysts for the hydrogen generating side of the process, known as hydrogen evolution reaction (HER), due to their favorable properties. However, they perform poorly in the oxygen evolution reaction (OER), which reduces overall efficiency. Previous studies suggest that Boron (B)-doping into TMPs can enhance both HER and OER performance, but until now, making such materials has been a challenge.

In a recent breakthrough, a research team led by Professor Seunghyun Lee, including Mr. Dun Chan Cha, from the Hanyang University ERICA campus in South Korea, has developed a new type of tunable electrocatalyst using B-doped cobalt phosphide (CoP) nanosheets. Prof. Lee explains, "We have successfully developed cobalt phosphides-based nanomaterials by adjusting boron doping and phosphorus content using metal-organic frameworks. These materials have better performance and lower cost than conventional electrocatalysts, making them suitable for large-scale hydrogen production." Their study was published in the journal Small on March 19, 2025.

The researchers used an innovative strategy to create these materials, using cobalt (Co) based metal-organic frameworks (MOFs). "MOFs are excellent precursors for designing and synthesizing nanomaterials with the required composition and structures," notes Mr. Cha. First, they grew Co-MOFs on nickel foam (NF). They then subjected this material to a post-synthesis modification (PSM) reaction with sodium borohydride (NaBH4), resulting in the integration of B. This was followed up by a phosphorization process using different amounts of sodium hypophosphite (NaH2PO2), resulting in the formation of three different samples of B-doped cobalt phosphide nanosheets (B-CoP@NC/NF).

Experiments revealed that all three samples had a large surface area and a mesoporous structure, key features that improve electrocatalytic activity. As a result, all three samples exhibited excellent OER and HER performance, with the sample made using 0.5 grams of NaH2PO2 (B-CoP0.5@NC/NF) demonstrating the best results. Interestingly, this sample exhibited overpotentials of 248 and 95 mV for OER and HER, respectively, much lower than previously reported electrocatalysts.

An alkaline electrolyzer developed using the B-CoP0.5@NC/NF electrodes showed a cell potential of just 1.59 V at a current density of 10 mA cm-2, lower than many recent electrolyzers. Additionally, at high current densities above 50 mA cm-2, it even outperformed the state-of-the-art RuO2/NF(+) and 20% Pt-C/NF(−) electrolyzer, while also demonstrating long-term stability, maintaining its performance for over 100 hours.
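To put the 1.59 V figure in context, splitting water has a thermodynamic minimum of 1.23 V at standard conditions, and a common rough gauge is the voltage efficiency: the thermodynamic minimum divided by the operating cell voltage. A quick sketch using that standard metric (the paper itself does not report this number):

```python
THERMO_MIN_V = 1.23  # thermodynamic minimum for water splitting at 25 degrees C

def voltage_efficiency(cell_voltage: float) -> float:
    """Voltage efficiency of a water electrolyzer: E_thermo / E_cell.
    Every volt above the minimum is dissipated as overpotential losses."""
    return THERMO_MIN_V / cell_voltage

# The reported 1.59 V cell runs at roughly 77% voltage efficiency.
eff = voltage_efficiency(1.59)
```

Lowering the required cell potential is therefore a direct lever on the electricity cost per kilogram of hydrogen produced.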

Density functional theory (DFT) calculations supported these findings and clarified the role of B-doping and adjusting P content. Specifically, B-doping and optimal P content led to effective interaction with reaction intermediates, leading to exceptional electrocatalytic performance.

"Our findings offer a blueprint for designing and synthesizing next-generation high-efficiency catalysts that can drastically reduce hydrogen production costs," says Prof. Lee. "This is an important step towards making large-scale green hydrogen production a reality, which will ultimately help in reducing global carbon emissions and mitigating climate change.

Materials provided by Industrial Cooperation & research Planning team, Hanyang University ERICA. Note: Content may be edited for style and length.

Plants’ secret second roots rewrite the climate playbook

Plants and trees extend their roots into the earth to draw nutrients and water from the soil; these roots are generally thought to thin out with depth. But a new study by a multi-institutional team of scientists shows that many plants develop a second, deeper layer of roots — often more than three feet underground — to access additional nourishment.

Published in the journal Nature Communications, the study reveals previously unrecognized rooting patterns, altering our understanding of how ecosystems respond to changing environmental conditions. More importantly, the study suggests that plants might transport and store fixed carbon deeper than currently thought — welcome news at a time when CO2 levels are at an 800,000-year high, according to the World Meteorological Organization's "State of the Global Climate Report" issued in March.

"Understanding where plants grow roots is vital, as deeper roots could mean safer and longer-term carbon storage. Harsher conditions at depth may prevent detritus-feeding microbes from releasing carbon back to the atmosphere," says Mingzhen Lu, an assistant professor at New York University's Department of Environmental Studies and the paper's lead author. "Our current ecological observations and models typically stop at shallow depths; by not looking deep enough, we may have overlooked a natural carbon storage mechanism deep underground."

The research team used data from the National Ecological Observatory Network (NEON) to examine rooting depth. The NEON database includes samples collected from soil 6.5 feet below the surface, far deeper than the one-foot depth of traditional ecological studies. This unprecedented depth allowed researchers to detect additional root patterns, spanning diverse climate zones and ecosystem types from the Alaskan tundra to Puerto Rico's rainforests.

The scientists' work focused on three questions, all with the aim of better understanding plants' resource acquisition strategies and their resilience in response to environmental change.

The researchers found that nearly 20 percent of the studied ecosystems had roots that peaked twice across depth — a phenomenon called "bimodality." In these cases, plants developed a second, deeper layer of roots, often more than three feet underground and aligning with nutrient-rich soil layers. This suggests that plants grew — in previously unknown ways — to exploit additional sustenance.

"The current understanding of roots is literally too shallow. Above ground, we have eagle vision — thanks to satellites and remote sensing. But below ground, we have mole vision," observes Lu, a former Omidyar Fellow who conducted part of this research at the Santa Fe Institute and as a postdoctoral affiliate at Stanford University. "Our limited belowground vision means that we cannot estimate the full ability of plants to store carbon deep in the soil."

"Deep plant roots may increase soil carbon storage under some conditions or lead to losses under others due to a stimulation of soil microbes," suggests coauthor Avni Malhotra, the lead author of a companion study that investigated the connection between root distribution and soil carbon stock. "This discovery opens a new avenue of inquiry into how bimodal rooting patterns impact the dynamics of nutrient flow, water cycling, and the long-term capacity of soils to store carbon."

"Scientists and policymakers need to look deeper beneath the Earth's surface as these overlooked deep soil layers may hold critical keys for understanding and managing ecosystems in a rapidly changing climate," concludes Lu. "The good news is plants may already be naturally mitigating climate change more actively than we've realized — we just need to dig deeper to fully understand their potential."

The study also included researchers from Boston College, Columbia University, Dartmouth College, the Morton Arboretum, the National Ecological Observatory Network-Battelle, Pacific Northwest National Laboratory, and Stanford University.

Materials provided by New York University. Note: Content may be edited for style and length.

Iron overload: The hidden culprit behind early Alzheimer’s in Down syndrome

Scientists at the USC Leonard Davis School of Gerontology have discovered a key connection between high levels of iron in the brain and increased cell damage in people who have both Down syndrome and Alzheimer's disease.

In the study, researchers found that the brains of people diagnosed with Down syndrome and Alzheimer's disease (DSAD) had twice as much iron and more signs of oxidative damage in cell membranes compared to the brains of individuals with Alzheimer's disease alone or those with neither diagnosis. The results point to a specific cellular death process that is mediated by iron, and the findings may help explain why Alzheimer's symptoms often appear earlier and more severely in individuals with Down syndrome.

"This is a major clue that helps explain the unique and early changes we see in the brains of people with Down syndrome who develop Alzheimer's," said Max Thorwald, lead author of the study and a postdoctoral fellow in the laboratory of University Professor Emeritus Caleb Finch at the USC Leonard Davis School. "We've known for a long time that people with Down syndrome are more likely to develop Alzheimer's disease, but now we're beginning to understand how increased iron in the brain might be making things worse."

Down syndrome is caused by a third copy, or trisomy, of chromosome 21. This chromosome includes the gene for amyloid precursor protein, or APP, which is involved in the production of amyloid-beta (Aβ), the sticky protein that forms telltale plaques in the brains of people with Alzheimer's disease.

Because people with Down syndrome have three copies of the APP gene instead of two, they tend to produce more of this protein. By the age of 60, about half of all people with Down syndrome show signs of Alzheimer's disease, which is approximately 20 years earlier than in the general population.

"This makes understanding the biology of Down syndrome incredibly important for Alzheimer's research," said Finch, the study's senior author.

Key findings point to ferroptosis

The research team studied donated brain tissue from individuals with Alzheimer's, individuals with DSAD, and individuals with neither diagnosis. They focused on the prefrontal cortex — an area of the brain involved in thinking, planning, and memory — and made several important discoveries.

Together, these findings indicate increased ferroptosis, a type of cell death characterized by iron-dependent lipid peroxidation, Thorwald explained: "Essentially, iron builds up, drives the oxidation that damages cell membranes, and overwhelms the cell's ability to protect itself."

Lipid rafts: a hotspot for brain changes

The researchers paid close attention to lipid rafts — tiny parts of the brain cell membrane that play crucial roles in cell signaling and regulate how proteins like APP are processed. They found that in DSAD brains, lipid rafts had much more oxidative damage and fewer protective enzymes compared to Alzheimer's or healthy brains.

Notably, these lipid rafts also showed increased activity of the enzyme β-secretase, which interacts with APP to produce Aβ proteins. The combination of more damage and more Aβ production may promote the growth of amyloid plaques, thus speeding up Alzheimer's progression in people with Down syndrome, Finch explained.

Rare Down syndrome variants offer insight

The researchers also studied rare cases of individuals with "mosaic" or "partial" Down syndrome, in which the third copy of chromosome 21 is only present in a smaller subset of the body's cells. These individuals had lower levels of APP and iron in their brains and tended to live longer. In contrast, people with full trisomy 21 and DSAD had shorter lifespans and higher levels of brain damage.

"These cases really support the idea that the amount of APP — and the iron that comes with it — matters a lot in how the disease progresses," Finch said.

The team says their findings could help guide future treatments, especially for people with Down syndrome who are at high risk of Alzheimer's. Early research in mice suggests that iron-chelating treatments, in which a drug binds the metal ions so they can be excreted from the body, may reduce indicators of Alzheimer's pathology, Thorwald noted.

"Medications that remove iron from the brain or help strengthen antioxidant systems might offer new hope," Thorwald said. "We're now seeing how important it is to treat not just the amyloid plaques themselves but also the factors that may be hastening the development of those plaques."

The study was supported by the National Institute on Aging, National Institutes of Health (P30-AG066519, R01-AG051521, P50-AG05142, P01-AG055367, R01AG079806, P50-AG005142, P30-AG066530, P30-AG066509, U01-AG006781, T32AG052374, R01AG079806-02S1, and T32-AG000037); Cure Alzheimer's Fund; Simons Collaboration on Plasticity in the Aging Brain (SF811217); Larry L. Hillblom Foundation (2022-A-010-SUP); Glenn Foundation for Medical Research; and the Navigage Foundation Award.

Materials provided by University of Southern California. Note: Content may be edited for style and length.

Scientists create living building material that captures CO₂ from the air

The idea seems futuristic: At ETH Zurich, various disciplines are working together to combine conventional materials with bacteria, algae and fungi. The common goal: to create living materials that acquire useful properties thanks to the metabolism of microorganisms — "such as the ability to bind CO2 from the air by means of photosynthesis," says Mark Tibbitt, Professor of Macromolecular Engineering at ETH Zurich.

An interdisciplinary research team led by Tibbitt has now turned this vision into reality: it has stably incorporated photosynthetic bacteria — known as cyanobacteria — into a printable gel and developed a material that is alive, grows and actively removes carbon from the air. The researchers recently presented their "photosynthetic living material" in a study in the journal Nature Communications.

Key characteristic: Dual carbon sequestration

The material can be shaped using 3D printing and only requires sunlight and artificial seawater with readily available nutrients in addition to CO2 to grow. "As a building material, it could help to store CO2 directly in buildings in the future," says Tibbitt, who co-initiated the research into living materials at ETH Zurich.

What makes it special: the living material absorbs much more CO2 than it binds through organic growth alone. "This is because the material can store carbon not only in biomass, but also in the form of minerals — a special property of these cyanobacteria," reveals Tibbitt.

Yifan Cui, one of the two lead authors of the study, explains: "Cyanobacteria are among the oldest life forms in the world. They are highly efficient at photosynthesis and can utilize even the weakest light to produce biomass from CO2 and water."

At the same time, the bacteria change their chemical environment outside the cell as a result of photosynthesis, so that solid carbonates (such as lime) precipitate. These minerals represent an additional carbon sink and — in contrast to biomass — store CO2 in a more stable form.

Cyanobacteria as master builders

"We utilize this ability specifically in our material," says Cui, who is a doctoral student in Tibbitt's research group. A practical side effect: the minerals are deposited inside the material and reinforce it mechanically. In this way, the cyanobacteria slowly harden the initially soft structures.

Laboratory tests showed that the material continuously binds CO2 over a period of 400 days, most of it in mineral form — around 26 milligrams of CO2 per gram of material. This is significantly more than many biological approaches and comparable to the chemical mineralization of recycled concrete (around 7 mg CO2 per gram).
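To put those figures in perspective, a quick back-of-the-envelope conversion helps. The 26 and 7 mg-per-gram values come from the article; the one-tonne mass below is a hypothetical example for scale:

```python
# Illustrative arithmetic based on the uptake figures quoted in the article.
uptake_living = 26.0    # mg CO2 per g of material over ~400 days (reported)
uptake_concrete = 7.0   # mg CO2 per g, mineralization of recycled concrete

mass_kg = 1000.0        # hypothetical: one tonne of printed material
co2_kg = uptake_living * 1e-3 * mass_kg   # mg/g equals g/kg, so scale to kg

print(f"~{co2_kg:.0f} kg CO2 bound per tonne of material")           # ~26 kg
print(f"~{uptake_living / uptake_concrete:.1f}x recycled concrete")  # ~3.7x
```

In other words, each tonne of the living material would mineralize on the order of 26 kg of CO2 over the test period, a bit under four times the quoted figure for recycled concrete.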

The carrier material that harbours the living cells is a hydrogel — a gel made of cross-linked polymers with a high water content. Tibbitt's team selected the polymer network so that it can transport light, CO2, water and nutrients and allows the cells to spread evenly inside without leaving the material.

To ensure that the cyanobacteria live as long as possible and remain efficient, the researchers have also optimised the geometry of the structures using 3D printing processes to increase the surface area, increase light penetration and promote the flow of nutrients.

Co-first author Dalia Dranseike: "In this way, we created structures that enable light penetration and passively distribute nutrient fluid throughout the body by capillary forces." Thanks to this design, the encapsulated cyanobacteria lived productively for more than a year, reports the materials researcher in Tibbitt's team.

Infrastructure as a carbon sink

The researchers see their living material as a low-energy and environmentally friendly approach that can bind CO2 from the atmosphere and supplement existing chemical processes for carbon sequestration. "In the future, we want to investigate how the material can be used as a coating for building façades to bind CO2 throughout the entire life cycle of a building," says Tibbitt.

There is still a long way to go, but colleagues from the field of architecture have already taken up the concept and realised initial experimental interpretations.

Two installations in Venice and Milan

Thanks to ETH doctoral student Andrea Shin Ling, basic research from the ETH laboratories has made it onto the big stage at the Architecture Biennale in Venice. "It was particularly challenging to scale up the production process from laboratory format to room dimensions," says the architect and bio-designer, who is also involved in this study.

Ling is doing her doctorate at ETH Professor Benjamin Dillenburger's Chair of Digital Building Technologies. In her dissertation, she developed a platform for biofabrication that can print living structures containing functional cyanobacteria on an architectural scale.

For the Picoplanktonics installation in the Canada Pavilion, the project team used the printed structures as living building blocks to construct two tree-trunk-like objects, the largest around three metres high. Thanks to the cyanobacteria, these can each bind up to 18 kg of CO2 per year — about as much as a 20-year-old pine tree in the temperate zone.

"The installation is an experiment — we have adapted the Canada Pavilion so that it provides enough light, humidity and warmth for the cyanobacteria to thrive, and then we watch how they behave," says Ling. This is a commitment: the team monitors and maintains the installation on site daily until November 23.

At the 24th Triennale di Milano, Dafne's Skin is investigating the potential of living materials for future building envelopes. On a structure covered with wooden shingles, microorganisms form a deep green patina that changes the wood over time: A sign of decay becomes an active design element that binds CO2 and emphasises the aesthetics of microbial processes. Dafne's Skin is a collaboration between MAEID Studio and Dalia Dranseike. It is part of the exhibition "We the Bacteria: Notes Toward Biotic Architecture" and runs until November 9.

The photosynthetic living material was created thanks to an interdisciplinary collaboration within the framework of ALIVE (Advanced Engineering with Living Materials). The ETH Zurich initiative promotes collaboration between researchers from different disciplines in order to develop new living materials for a wide range of applications.

Materials provided by ETH Zurich. Note: Content may be edited for style and length.
