MIT uncovers the hidden playbook your brain uses to outsmart complicated problems

The human brain is very good at solving complicated problems. One reason for that is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.

This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.

While there is a great deal of behavioral evidence demonstrating humans' skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.

In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.

The researchers were also able to determine the circumstances under which people choose each of those strategies.

"What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don't have the means to solve a complex problem, we manage by using simpler heuristics that get the job done," says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT's McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.

Mahdi Ramadan PhD '24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behaviour. Nicholas Watters PhD '25 is also a co-author.

When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.

Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don't know much about how the brain decides which one to use in a given situation.

"This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?" Jazayeri says.

To get at this question, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.

The task requires participants to predict which of four possible trajectories a ball will take as it moves through a maze. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball's path is a task that is impossible for humans to solve with perfect accuracy.

"It requires four parallel simulations in your mind, and no human can do that. It's analogous to having four conversations at a time," Jazayeri says. "The task allows us to tap into this set of algorithms that the humans use, because you just can't solve it optimally."

The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.

For each participant, the researchers created computational models that predicted the error patterns that participant would produce (based on their timing skill) if they were running parallel simulations, relying on hierarchical reasoning alone, relying on counterfactual reasoning alone, or combining the two reasoning strategies.

The researchers compared the subjects' performance with the models' predictions and found that every subject's performance matched most closely a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.

That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn't compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.

Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.

"People rely on counterfactuals to the degree that it's helpful," Jazayeri says. "People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who's really good at retrieving information from the recent past, you may go back to the other side."

To further validate their results, the researchers built a neural network and trained it to complete the task. Left unconstrained, such a model tracks the ball's path accurately and makes the correct prediction every time.

When the researchers added cognitive limitations similar to those faced by humans, the model altered its strategies. When they eliminated the model's ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. And when they reduced the model's memory recall ability, it switched to counterfactual reasoning only when it judged its recall would be good enough to get the right answer — just as humans do.

"What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior," Jazayeri says. "This is really saying that humans are acting rationally under the constraints that they have to function under."

By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.

The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.

Materials provided by Massachusetts Institute of Technology. Note: Content may be edited for style and length.

This tiny patch could replace biopsies—and revolutionize how we detect cancer

A patch containing tens of millions of microscopic nanoneedles could soon replace traditional biopsies, scientists have found.

The patch offers a painless and less invasive alternative for millions of patients worldwide who undergo biopsies each year to detect and monitor diseases like cancer and Alzheimer's.

Biopsies are among the most common diagnostic procedures worldwide, performed millions of times every year to detect diseases. However, they are invasive, can cause pain and complications, and can deter patients from seeking early diagnosis or follow-up tests. Traditional biopsies also remove small pieces of tissue, limiting how often and how comprehensively doctors can analyse diseased organs like the brain.

Now, scientists at King's College London have developed a nanoneedle patch that painlessly collects molecular information from tissues without removing or damaging them. This could allow healthcare teams to monitor disease in real time and perform multiple, repeatable tests from the same area – something impossible with standard biopsies.

Because the nanoneedles are 1,000 times thinner than a human hair and do not remove tissue, they cause no pain or damage, making the procedure far easier on patients than a standard biopsy. For many, this could mean earlier diagnosis and more regular monitoring, transforming how diseases are tracked and treated.

Dr Ciro Chiappini, who led the research published today in Nature Nanotechnology, said: "We have been working on nanoneedles for twelve years, but this is our most exciting development yet. It opens a world of possibilities for people with brain cancer, Alzheimer's, and for advancing personalised medicine. It will allow scientists – and eventually clinicians – to study disease in real time like never before."

The patch is covered in tens of millions of nanoneedles. In preclinical studies, the team applied the patch to brain cancer tissue taken from human biopsies and mouse models. The nanoneedles extracted molecular 'fingerprints' — including lipids, proteins, and mRNAs — from cells, without removing or harming the tissue.

The tissue imprint is then analysed using mass spectrometry and artificial intelligence, giving healthcare teams detailed insights into whether a tumour is present, how it is responding to treatment, and how disease is progressing at the cellular level.

Dr Chiappini said: "This approach provides multidimensional molecular information from different types of cells within the same tissue. Traditional biopsies simply cannot do that. And because the process does not destroy the tissue, we can sample the same tissue multiple times, which was previously impossible."

This technology could be used during brain surgery to help surgeons make faster, more precise decisions. For example, by applying the patch to a suspicious area, results could be obtained within 20 minutes and guide real-time decisions about removing cancerous tissue.

Made using the same manufacturing techniques as computer chips, the nanoneedles can be integrated into common medical devices such as bandages, endoscopes and contact lenses.

Dr Chiappini added: "This could be the beginning of the end for painful biopsies. Our technology opens up new ways to diagnose and monitor disease safely and painlessly – helping doctors and patients make better, faster decisions."

The breakthrough was possible through close collaboration across nanoengineering, clinical oncology, cell biology, and artificial intelligence — each field bringing essential tools and perspectives that, together, unlocked a new approach to non-invasive diagnostics.

The study was supported by the European Research Council through its flagship Starting Grant programme, Wellcome Leap, and UKRI's EPSRC and MRC, which enabled acquisition of key analytical instrumentation.

Materials provided by King's College London. Note: Content may be edited for style and length.

Forever chemicals’ toxic cousin: MCCPs detected in U.S. air for first time

Once in a while, scientific research resembles detective work. Researchers head into the field with a hypothesis and high hopes of finding specific results, but sometimes, there's a twist in the story that requires a deeper dive into the data.

That was the case for the University of Colorado Boulder researchers who led a field campaign in an agricultural region of Oklahoma. Using a high-tech instrument to measure how aerosol particles form and grow in the atmosphere, they stumbled upon something unexpected: the first-ever airborne measurements of Medium Chain Chlorinated Paraffins (MCCPs), a kind of toxic organic pollutant, in the Western Hemisphere. Their results were published today in ACS Environmental Au.

"It's very exciting as a scientist to find something unexpected like this that we weren't looking for," said Daniel Katz, CU Boulder chemistry PhD student and lead author of the study. "We're starting to learn more about this toxic, organic pollutant that we know is out there, and which we need to understand better."

MCCPs are currently under consideration for regulation by the Stockholm Convention, a global treaty to protect human health from long-lasting and widespread chemicals. While these toxic pollutants have been measured in Antarctica and Asia, researchers had not documented them in the Western Hemisphere's atmosphere until now.

MCCPs are used in metalworking fluids and in the manufacture of PVC and textiles. They are often found in wastewater and, as a result, can end up in biosolid fertilizer, also called sewage sludge, which is created when liquid is removed from wastewater in a treatment plant. In Oklahoma, researchers suspect the MCCPs they identified came from biosolid fertilizer in the fields near where they set up their instrument.

"When sewage sludges are spread across the fields, those toxic compounds could be released into the air," Katz said. "We can't show directly that that's happening, but we think it's a reasonable way that they could be winding up in the air. Sewage sludge fertilizers have been shown to release similar compounds."

MCCPs' little cousins, Short Chain Chlorinated Paraffins (SCCPs), are currently regulated by the Stockholm Convention and, since 2009, by the EPA in the United States. Regulation came after studies found that these toxic pollutants, which travel far and last a long time in the atmosphere, were harmful to human health. But researchers hypothesize that the regulation of SCCPs may have increased MCCPs in the environment.

"We always have these unintended consequences of regulation, where you regulate something, and then there's still a need for the products that those were in," said Ellie Browne, CU Boulder chemistry professor, CIRES Fellow, and co-author of the study. "So they get replaced by something."

Measurement of aerosols led to a new and surprising discovery

Using a nitrate chemical ionization mass spectrometer, which allows scientists to identify chemical compounds in the air, the team measured air at the agricultural site 24 hours a day for one month. As Katz cataloged the data, he documented the isotopic patterns of the compounds and noticed new patterns that he immediately recognized as different from those of known chemical compounds. With some additional research, he identified them as the chlorinated paraffins that make up MCCPs.
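Chlorine's isotopes are what make such patterns recognizable: a molecule with n chlorine atoms produces a comb of mass peaks spaced two mass units apart, with relative intensities set by the binomial statistics of the natural abundances of 35Cl and 37Cl. The short Python sketch below computes that textbook envelope; it illustrates the general principle, not the study's actual identification pipeline.

```python
from math import comb

P37 = 0.242  # natural abundance of 37Cl (35Cl makes up the remaining ~75.8%)

def chlorine_envelope(n_cl):
    """Relative intensity of the peak containing k heavy (37Cl) atoms."""
    return [comb(n_cl, k) * P37**k * (1 - P37)**(n_cl - k) for k in range(n_cl + 1)]

# Chlorinated paraffins carry many chlorine atoms, so their isotope envelopes
# are broad and distinctive compared with a single-chlorine compound.
for n in (1, 5, 10):
    print(n, [round(p, 3) for p in chlorine_envelope(n)])
```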

Katz says the makeup of MCCPs is similar to that of PFAS, long-lasting toxic chemicals that break down slowly over time. The presence of these "forever chemicals" in soils recently led the Oklahoma Senate to ban biosolid fertilizer.

Now that researchers know how to measure MCCPs, the next step might be to measure the pollutants at different times throughout the year to understand how levels change each season. Many unknowns surrounding MCCPs remain, and there's much more to learn about their environmental impacts.

"We identified them, but we still don't know exactly what they do when they are in the atmosphere, and they need to be investigated further," Katz said. "I think it's important that we continue to have governmental agencies that are capable of evaluating the science and regulating these chemicals as necessary for public health and safety."

Materials provided by University of Colorado at Boulder. Note: Content may be edited for style and length.

From shortage to supremacy: How Sandia and the CHIPS Act aim to reboot US chip power

Sandia National Laboratories has joined a new partnership aimed at helping the United States regain its leadership in semiconductor manufacturing.

While the U.S. was considered a powerhouse in chip production in the 1990s, fabricating more than 35% of the world's semiconductors, that share has since dropped to 12%. Today, the U.S. manufactures none of the world's most advanced chips, which power technologies like smartphones (owned by 71% of the world's population), self-driving cars, quantum computers, and artificial intelligence-powered devices and programs.

Sandia hopes to help change that. It recently became the first national lab to join the U.S. National Semiconductor Technology Center. The NSTC was established under the CHIPS and Science Act to accelerate innovation and address some of the country's most pressing technology challenges.

"We have pioneered the way for other labs to join," said Mary Monson, Sandia's senior manager of Technology Partnerships and Business Development. "The CHIPS Act has brought the band back together, you could say. By including the national labs, U.S. companies, and academia, it's really a force multiplier."

Sandia has a long history of contributing to the semiconductor industry through research and development partnerships, its Microsystems Engineering, Science and Applications facility known as MESA, and its advanced cleanrooms for developing next-generation technologies. Through its NSTC partnerships, Sandia hopes to strengthen U.S. semiconductor manufacturing and research and development, enhance national security production, and foster the innovation of new technologies that set the nation apart globally.

"The big goal is to strengthen capabilities. Industry is moving fast, so we are keeping abreast of everything happening and incorporating what will help us deliver more efficiently on our national security mission. It's about looking at innovative ways of partnering and expediting the process," Monson said.

The urgency of the effort is evident. The pandemic provided a perfect example, as car lots were left bare and manufacturers sat idle, waiting for chips to be produced to build new vehicles.

"An average car contains 1,400 chips and electric vehicles use more than 3,000," said Rick McCormick, Sandia's senior scientist for semiconductor technology strategy. McCormick is helping lead Sandia's new role. "Other nations around the globe are investing more than $300 billion to be leaders in semiconductor manufacturing. The U.S. CHIPS Act is our way of 'keeping up with the Joneses.' One goal is for the U.S. to have more than 25% of the global capacity for state-of-the-art chips by 2032."

Sandia is positioned to play a key role in creating the chips of the future.

"More than $12 billion in research and development spending is planned under CHIPS, including a $3 billion program to create an ecosystem for packaging assemblies of chiplets," McCormick said. "These chiplets communicate at low energy and high speed as if they were a large expensive chip."

Modern commercial AI processors use this approach, and Sandia's resources and partnerships can help expand access to small companies and national security applications. MESA already fabricates high-reliability chiplet assembly products for the stockpile and nonproliferation applications.

McCormick said Sandia could also play a major role in training the workforce of the future. The government has invested billions of dollars in new factories, all of which will need to be staffed by STEM-trained workers.

"There is a potential crisis looming," McCormick said. "The Semiconductor Industry Association anticipates that the U.S. will need 60,000 to 70,000 more workers, so we need to help engage the STEM workforce. That effort will also help Sandia bolster its staffing pipeline."

As part of its membership, Sandia will offer access to some of its facilities to other NSTC members, fostering collaboration and partnerships. Tech transfer is a core part of Sandia's missions, and this initiative will build on that by helping private partners increase their stake in the industry while enabling Sandia to build on its own mission.

"We will be helping develop suppliers and strengthen our capabilities," Monson said. "We are a government resource for semiconductor knowledge. We are in this evolving landscape and have a front row seat to what it will look like over the next 20 years. We are helping support technology and strengthening our national security capabilities and mission delivery."

Materials provided by DOE/Sandia National Laboratories. Note: Content may be edited for style and length.

AI sniffs earwax and detects Parkinson’s with 94% accuracy

Most treatments for Parkinson's disease (PD) only slow disease progression. Early intervention in this progressive neurological disease is therefore critical to optimize care, but that requires early diagnosis. Current tests, like clinical rating scales and neural imaging, can be subjective and costly. Now, researchers reporting in ACS' Analytical Chemistry describe the initial development of a system that inexpensively screens for PD using the odors in a person's earwax.

Previous research has shown that changes in sebum, an oily substance secreted by the skin, could help identify people with PD. Specifically, sebum from people with PD may have a characteristic smell because volatile organic compounds (VOCs) released by sebum are altered by disease progression — including neurodegeneration, systemic inflammation and oxidative stress. However, when sebum on the skin is exposed to environmental factors like air pollution and humidity, its composition can be altered, making it an unreliable testing medium. But the skin inside the ear canal is kept away from the elements. So, Hao Dong, Danhua Zhu and colleagues wanted to focus their PD screening efforts on ear wax, which mostly consists of sebum and is easily sampled.

To identify potential PD-related VOCs in ear wax, the researchers swabbed the ear canals of 209 human subjects (108 of whom had been diagnosed with PD). They analyzed the collected secretions using gas chromatography and mass spectrometry. Four of the VOCs found in ear wax from people with PD differed significantly from those in ear wax from people without the disease. The researchers concluded that these four VOCs (ethylbenzene, 4-ethyltoluene, pentanal, and 2-pentadecyl-1,3-dioxolane) are potential biomarkers for PD.

Dong, Zhu and colleagues then trained an artificial intelligence olfactory (AIO) system with their ear wax VOC data. The resulting AIO-based screening model categorized ear wax samples from people with and without PD with 94% accuracy. The AIO system, the researchers say, could be used as a first-line screening tool for early PD detection, paving the way for early medical intervention and improved patient care.
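The article does not describe the AIO model's internals, so as a generic illustration of the screening idea, here is a minimal scikit-learn sketch that trains a linear classifier on four VOC features. The data are synthetic stand-ins with an invented effect size; only the group sizes (108 PD, 101 controls) and the number of markers come from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: rows are subjects, columns are abundances of the
# four candidate VOC biomarkers (ethylbenzene, 4-ethyltoluene, pentanal,
# 2-pentadecyl-1,3-dioxolane). The separation between groups is invented.
rng = np.random.default_rng(0)
n_pd, n_ctrl = 108, 101  # group sizes reported in the study
X = np.vstack([
    rng.normal(loc=1.0, scale=0.5, size=(n_pd, 4)),    # PD subjects
    rng.normal(loc=0.0, scale=0.5, size=(n_ctrl, 4)),  # controls
])
y = np.array([1] * n_pd + [0] * n_ctrl)  # 1 = PD, 0 = control

# Cross-validated accuracy of a simple linear screen on the four markers.
print("CV accuracy:", cross_val_score(LogisticRegression(), X, y, cv=5).mean())
```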

"This method is a small-scale single-center experiment in China," says Dong. "The next step is to conduct further research at different stages of the disease, in multiple research centers and among multiple ethnic groups, in order to determine whether this method has greater practical application value."

The authors acknowledge funding from the National Natural Science Foundation of China, the Pioneer and Leading Goose R&D Program of Zhejiang Province, and the Fundamental Research Funds for the Central Universities.

Materials provided by American Chemical Society. Note: Content may be edited for style and length.

A thousand colors, one galaxy: Astronomers reveal a cosmic masterpiece

Astronomers have created a galactic masterpiece: an ultra-detailed image that reveals previously unseen features in the Sculptor Galaxy. Using the European Southern Observatory's Very Large Telescope (ESO's VLT), they observed this nearby galaxy in thousands of colors simultaneously. By capturing vast amounts of data at every single location, they created a galaxy-wide snapshot of the lives of stars within Sculptor.

"Galaxies are incredibly complex systems that we are still struggling to understand," says ESO researcher Enrico Congiu, who led a new Astronomy & Astrophysics study on Sculptor. Reaching hundreds of thousands of light-years across, galaxies are extremely large, but their evolution depends on what's happening at much smaller scales. "The Sculptor Galaxy is in a sweet spot," says Congiu. "It is close enough that we can resolve its internal structure and study its building blocks with incredible detail, but at the same time, big enough that we can still see it as a whole system."

A galaxy's building blocks — stars, gas and dust — emit light at different colors. Therefore, the more shades of color there are in an image of a galaxy, the more we can learn about its inner workings. While conventional images contain only a handful of colors, this new Sculptor map comprises thousands. This tells astronomers everything they need to know about the stars, gas and dust within, such as their age, composition, and motion.

To create this map of the Sculptor Galaxy, which is 11 million light-years away and is also known as NGC 253, the researchers observed it for over 50 hours with the Multi Unit Spectroscopic Explorer (MUSE) instrument on ESO's VLT. The team had to stitch together over 100 exposures to cover an area of the galaxy about 65,000 light-years wide.
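As a back-of-envelope check (our arithmetic, not the paper's), the small-angle approximation turns those two numbers into the mapped region's apparent size on the sky:

$$\theta \approx \frac{65{,}000\ \text{ly}}{11{,}000{,}000\ \text{ly}} \approx 5.9\times10^{-3}\ \text{rad} \approx 0.34^{\circ},$$

roughly two-thirds the apparent diameter of the full Moon (about 0.5 degrees), which is why covering it took more than 100 stitched exposures.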

According to co-author Kathryn Kreckel from Heidelberg University, Germany, this makes the map a potent tool: "We can zoom in to study individual regions where stars form at nearly the scale of individual stars, but we can also zoom out to study the galaxy as a whole."

In their first analysis of the data, the team uncovered around 500 planetary nebulae, regions of gas and dust cast off from dying Sun-like stars, in the Sculptor Galaxy. Co-author Fabian Scheuermann, a doctoral student at Heidelberg University, puts this number into context: "Beyond our galactic neighborhood, we usually deal with fewer than 100 detections per galaxy."

Because of the properties of planetary nebulae, they can be used as distance markers to their host galaxies. "Finding the planetary nebulae allows us to verify the distance to the galaxy — a critical piece of information on which the rest of the studies of the galaxy depend," says Adam Leroy, a professor at The Ohio State University, USA, and study co-author.

Future projects using the map will explore how gas flows, changes its composition, and forms stars all across this galaxy. "How such small processes can have such a big impact on a galaxy whose entire size is thousands of times bigger is still a mystery," says Congiu.

This research was presented in a paper accepted for publication inAstronomy & Astrophysics.

The team is composed of E. Congiu (European Southern Observatory, Chile [ESO Chile]), F. Scheuermann (Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Germany [ARI-ZAH]), K. Kreckel (ARI-ZAH), A. Leroy (Department of Astronomy and Center for Cosmology and Astroparticle Physics, The Ohio State University [OSU], USA), E. Emsellem (European Southern Observatory, Germany [ESO Garching] and Univ. Lyon, Univ. Lyon1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon, France), F. Belfiore (INAF – Osservatorio Astrofisico di Arcetri, Italy), J. Hartke (Finnish Centre for Astronomy with ESO [FINCA] and Tuorla Observatory, Department of Physics and Astronomy [Tuorla], University of Turku, Finland), G. Anand (Space Telescope Science Institute, USA), O. V. Egorov (ARI-ZAH), B. Groves (International Centre for Radio Astronomy Research, University of Western Australia, Australia), T. Kravtsov (Tuorla and FINCA), D. Thilker (Department of Physics and Astronomy, The Johns Hopkins University, USA), C. Tovo (Dipartimento di Fisica e Astronomia 'G. Galilei', Università di Padova, Italy), F. Bigiel (Argelander-Institut für Astronomie, Universität Bonn, Germany), G. A. Blanc (Observatories of the Carnegie Institution for Science, USA, and Departamento de Astronomía, Universidad de Chile, Chile), A. D. Bolatto and S. A. Cronin (Department of Astronomy, University of Maryland, USA), D. A. Dale (Department of Physics and Astronomy, University of Wyoming, USA), R. McClain (OSU), J. E. Méndez-Delgado (Instituto de Astronomía, Universidad Nacional Autónoma de México, Mexico), E. K. Oakes (Department of Physics, University of Connecticut, USA), R. S. Klessen (Universität Heidelberg, Zentrum für Astronomie, Institut für Theoretische Astrophysik and Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, Germany, Center for Astrophysics Harvard & Smithsonian, USA, and Elizabeth S. and Richard M. Cashin Fellow at the Radcliffe Institute for Advanced Studies at Harvard University, USA), E. Schinnerer (Max-Planck-Institut für Astronomie, Germany), T. G. Williams (Sub-department of Astrophysics, Department of Physics, University of Oxford, UK).

The European Southern Observatory (ESO) enables scientists worldwide to discover the secrets of the Universe for the benefit of all. We design, build and operate world-class observatories on the ground — which astronomers use to tackle exciting questions and spread the fascination of astronomy — and promote international collaboration for astronomy. Established as an intergovernmental organisation in 1962, today ESO is supported by 16 Member States (Austria, Belgium, Czechia, Denmark, France, Finland, Germany, Ireland, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom), along with the host state of Chile and with Australia as a Strategic Partner. ESO's headquarters and its visitor centre and planetarium, the ESO Supernova, are located close to Munich in Germany, while the Chilean Atacama Desert, a marvelous place with unique conditions to observe the sky, hosts our telescopes. ESO operates three observing sites: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope and its Very Large Telescope Interferometer, as well as survey telescopes such as VISTA. Also at Paranal ESO will host and operate the Cherenkov Telescope Array South, the world's largest and most sensitive gamma-ray observatory. Together with international partners, ESO operates ALMA on Chajnantor, a facility that observes the skies in the millimeter and submillimeter range. At Cerro Armazones, near Paranal, we are building "the world's biggest eye on the sky" — ESO's Extremely Large Telescope. From our offices in Santiago, Chile we support our operations in the country and engage with Chilean partners and society.

Materials provided by ESO. Note: Content may be edited for style and length.

Winter sea ice supercharges Southern Ocean’s CO2 uptake

New research reveals the importance of winter sea ice in the year-to-year variability of the amount of atmospheric CO2 absorbed by a region of the Southern Ocean.

In years when sea ice lasts longer in winter, the ocean will overall absorb 20% more CO2 from the atmosphere than in years when sea ice forms late or disappears early. This is because sea ice protects the ocean from strong winter winds that drive mixing between the surface of the ocean and its deeper, carbon-rich layers.

The findings, based on data collected in a coastal system along the west Antarctic Peninsula, show that what happens in winter is crucial in explaining this variability in CO2 uptake.

The study was led by scientists at the University of East Anglia (UEA), in collaboration with colleagues from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI, Germany), British Antarctic Survey (BAS, UK) and Institute of Marine Research (IMR, Norway). It is published today in the journal Communications Earth & Environment.

The global ocean takes up about a quarter of all CO2 that humans emit into the atmosphere. The Southern Ocean is responsible for about 40% of this, and the researchers wanted to know why it varies so much from year to year.

Lead author Dr Elise Droste, of UEA's School of Environmental Sciences, said: "Our picture of the Southern Ocean's carbon cycle is incomplete, and so we cannot predict whether its atmospheric CO2 uptake – and therefore its contribution to climate change mitigation – will increase, decrease, or remain the same in the future.

"Whatever it does, it will affect what our climate will look like and how fast it will change. To improve predictions, our work suggests that we need to look at how sea ice affects the exchange of carbon between the deep and shallow parts of the ocean. To do this, we need more wintertime observations in the Southern Ocean."

Much of the Southern Ocean surrounding the west Antarctic Peninsula is covered by sea ice in winter, which disappears in spring and summer. In spring and summer, phytoplankton growth and melt water lead to low CO2 concentrations at the ocean surface. This allows the Southern Ocean to absorb large amounts of atmospheric CO2, significantly reducing the global impact of anthropogenic emissions.

In winter, as sea ice forms, the ocean underneath mixes with deeper waters that contain lots of 'natural' carbon that has been in the ocean for centuries. This can cause CO2 at the ocean surface to increase to the point where it can be released into the atmosphere.

Sea ice blocks a large amount of this CO2 'outgassing'. However, it is part of the natural seasonal cycle that some CO2 does escape the ocean. This seasonal balance means that the total amount of CO2 absorbed by the Southern Ocean within one year often depends on how much CO2 is absorbed in summer and how much is released in winter.
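Schematically (our notation, not the paper's), that balance can be written as

$$F_{\text{annual}} \approx F_{\text{summer}}^{\text{uptake}} - (1 - c)\,F_{\text{winter}}^{\text{outgassing}},$$

where $c$ is the fraction of winter outgassing blocked by sea ice. Longer-lasting ice raises $c$ and also suppresses the wind-driven mixing that feeds the outgassing term, consistent with the roughly 20% higher net uptake reported for long-ice years.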

"We don't have a good grasp on what is driving this year-to-year variability, which is making it difficult to fully understand the system and to improve the predictability of how the ocean's CO2uptake will change in the future," said Dr Droste. "One major reason is because we have relatively little data on the Southern Ocean, particularly in the wintertime.

"It is extremely challenging to collect observations in the harsh weather and sea conditions of the Southern Ocean, not to mention sea ice cover making much of it inaccessible, even for the strongest icebreaker. However, this study takes us a step in the right direction."

The study draws on data for 2010-2020, a time series led and maintained by BAS, which collects year-round measurements along the west Antarctic Peninsula. At Rothera, the UK's Antarctic research station, ocean scientists measured physical aspects of the seawater in Ryder Bay and collected samples for nutrient and CO2 analysis, carried out at both Rothera and UEA.

Using other physical and chemical data collected at the same time, the team was able to study why years with long sea ice duration differed from those with short sea ice duration.

Dr Hugh Venables, from BAS, said: "A series of ocean scientists have wintered at Rothera on the Antarctic Peninsula to collect these and other samples, from either a small boat or a sea ice sledge, to build a unique time series of year-round oceanographic conditions for the last 25 years.

"This important result shows the importance of this winter sampling and will hopefully lead to more year-round sampling in the Southern Ocean, both by humans and autonomous technology."

Prof Dorothee Bakker, Professor in Marine Biogeochemistry at UEA, added: "The fact that this data has been collected throughout the year at the same location allows us to investigate which mechanisms are important to explain the year-to-year variability of CO2 uptake by the ocean at this particular location, but we can also use these insights to better understand how the rest of the Southern Ocean works."

The study also involved scientists from the National Institute of Oceanography and Applied Geophysics (Italy) and University of Gothenburg (Sweden). It was supported by funding from the UK's Natural Environment Research Council and European Union's Horizon 2020 research and innovation program.

Materials provided by University of East Anglia. Note: Content may be edited for style and length.

Defying Darwin: Scientists discover worms rewrote their DNA to survive on land

In 1859, Darwin imagined evolution as a slow, gradual process, with species accumulating small changes over time. But even he was surprised to find the fossil record offered no missing links: the intermediate forms that should have told this story step by step were simply not there. His explanation was as uncomfortable as it was unavoidable: the fossil record is an archive in which most of the pages have been torn out.

In 1972, the scarcity of intermediate forms led the palaeontologists Stephen Jay Gould and Niles Eldredge to propose a provocative idea: punctuated equilibrium. According to this theory, rather than changing slowly, species remain stable for millions of years and then suddenly make rapid, radical evolutionary jumps. This model would explain why the fossil record seems so silent between species: large changes would happen suddenly and in small, isolated populations, well off the palaeontological radar. Although some fossils support this pattern, the scientific community remains divided: is this a rule of evolution, or an eye-catching exception?

Now a research team led by the Institute of Evolutionary Biology (IBE), a joint research centre of the Spanish National Research Council (CSIC) and Pompeu Fabra University (UPF), points for the first time to a mechanism of rapid, massive genomic reorganisation that could have played a part in the transition of animals from sea to land 200 million years ago. The team has shown that marine annelids (worms) reorganised their genome from top to bottom, leaving it unrecognisable, when they left the oceans. Their observations are consistent with a punctuated equilibrium model, and could indicate that not only gradual but also sudden changes in the genome occurred as these animals adapted to terrestrial settings. The genetic mechanism identified could transform our concept of animal evolution and revolutionise the established laws of genome evolution.

An unprecedented invertebrate genomic library

The team sequenced for the first time the high-quality genomes of various earthworms and compared them to those of other closely related annelid species (leeches and bristle worms, or polychaetes). The level of precision was the same as for sequencing human genomes, although in this case starting from scratch, with no existing references for the studied species. Until now, the lack of complete genomes had prevented the study of chromosomal-level patterns and characteristics for many species, limiting research to smaller-scale phenomena – population studies of a handful of genes, rather than macroevolutionary changes at the full-genome level.
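Chromosome-level assemblies are what make such macroevolutionary comparisons possible. As a toy illustration (ours, not the study's pipeline), one crude measure of genome-scale conservation is whether orthologous genes that share a chromosome in one species still share a chromosome in another:

```python
from itertools import combinations

# Hypothetical ortholog table: gene -> (chromosome in species A, in species B).
# Real analyses use thousands of orthologs from whole-genome comparisons.
orthologs = {
    "g1": ("chr1", "chrX"), "g2": ("chr1", "chrX"),
    "g3": ("chr1", "chr4"),                      # relocated: linkage broken
    "g4": ("chr2", "chr7"), "g5": ("chr2", "chr7"),
}

pairs = kept = 0
for (_, (a1, b1)), (_, (a2, b2)) in combinations(orthologs.items(), 2):
    if a1 == a2:              # the two genes are linked in species A...
        pairs += 1
        kept += (b1 == b2)    # ...are they still linked in species B?
print(f"linked gene pairs retained: {kept}/{pairs}")
```

In a genome shuffled as thoroughly as the earthworms' reportedly were, this retention fraction collapses toward chance, whereas between most animal genomes it stays high.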

After putting together each of the genomic jigsaw puzzles, the team was able to travel back in time with great precision more than 200 million years, to when the ancestors of the sequenced species were alive. "This is an essential episode in the evolution of life on our planet, given that many species, such as worms and vertebrates, which had been living in the ocean, now ventured onto land for the first time," comments Rosa Fernández, lead researcher of the IBE's Metazoa Phylogenomics and Genome Evolution Lab.

The analysis of these genomes has revealed an unexpected result: the annelids' genomes were not transformed gradually, as Neo-Darwinian theory would predict, but in isolated explosions of deep genetic remodelling. "The enormous reorganisation of the genomes we observed in the worms as they moved from the ocean to land cannot be explained with the parsimonious mechanism Darwin proposed; our observations chime much more with Gould and Eldredge's theory of punctuated equilibrium," Fernández adds.

A radical genetic mechanism which could provide evolutionary responses

The team has discovered that marine worms broke their genome into a thousand pieces only to reconstruct it and continue their evolutionary path on land. This phenomenon challenges the models of genome evolution known to date, given that if we observe almost any species, whether a sponge, a coral, or a mammal, many of their genomic structures are almost perfectly conserved. "The entire genome of the marine worms was broken down and then reorganised in a completely random way, in a very short period on the evolutionary scale," Fernández says. "I made my team repeat the analysis again and again, because I just couldn't believe it."

The reason why this drastic deconstruction did not lead to extinction could be in the 3D structure of the genome. Fernández's team has discovered that the chromosomes of these modern worms are much more flexible than those of vertebrates and other model organisms. Thanks to this flexibility, it is possible that genes in different parts of the genome could change places and continue working together.

Major changes in their DNA could have helped the worms adapt quickly to life on land, reorganising their genes to respond better to new challenges such as breathing air or being exposed to sunlight. The study suggests that these adjustments not only moved genes around, but also joined fragments that had been separated, creating new "genetic chimeras" which would have driven their evolution. "You could think that this chaos would mean the lineage would die out, but it's possible that some species' evolutionary success is based on that superpower," comments Fernández.

The observations in the study are consistent with a punctuated equilibrium model, where we observe an explosion of genomic changes after a long period of stability. However, the lack of experimental data for or against — in this case, 200-million-year-old fossils — makes it difficult to validate this theory.

Chromosomal chaos: problem or solution?

It seems from this study that conserving the genomic structure at the linear level — i.e., where the genes are more or less in the same place in different species — may not be as essential as had been thought. "In fact, stability could be the exception and not the rule in animals, which could benefit from a more fluid genome," Fernández says.

This phenomenon of extreme genetic reorganisation had previously been observed in the progression of cancer in humans. The term chromoanagenesis covers several mechanisms which break down and reorganise chromosomes in cancerous cells, where we see changes similar to those observed in the earthworms. The only difference is that while these genomic breakdowns and reorganisations are tolerated by the worms, in humans they lead to disease. The results of this study open the door to a better understanding of the potency of this radical genomic mechanism, with implications for human health.

The study has also reawakened one of the liveliest scientific debates of our time. "Both visions, Darwin's and Gould's, are compatible and complementary. While Neo-Darwinism can explain the evolution of populations perfectly, it has not yet been able to explain some exceptional and crucial episodes in the history of life on Earth, such as the initial explosion of animal life in the oceans over 500 million years ago, or the transition from the sea to land 200 million years ago in the case of earthworms," Fernández notes. "This is where the punctuated equilibrium theory could offer some answers."

In the future, a larger investigation of the genomic architecture of less-studied invertebrates could shed light on the genomic mechanisms shaping the evolution of the species. "There is a great diversity we know nothing about, hidden in the invertebrates, and studying them could bring new discoveries about the diversity and plasticity of genomic organisation, and challenge dogmas on how we think genomes are organised," Fernández concludes.

The study involved the collaboration of research staff from the Universitat Autònoma de Barcelona, Trinity College, the Universidad Complutense de Madrid, the University of Köln, and the Université Libre de Bruxelles.

The study received support from SEA2LAND (Starting Grant funded by the European Research Council), and from the Catalan Biogenome Project, which funded the sequencing of one of the worm genomes.

Materials provided by Spanish National Research Council (CSIC). Note: Content may be edited for style and length.

You hear the beep, but can’t find the car: The hidden flaw in electric vehicle safety

As electric cars become more common, vulnerable road users are encountering more and more warning signals from them. Now, new research from Chalmers University of Technology in Sweden shows that one of the most common signal types is very difficult for humans to locate, especially when multiple similar vehicles are in motion simultaneously.

In a recently published study, researchers from Chalmers investigated how well people can locate three common types of AVAS (Acoustic Vehicle Alerting System) warning signals from hybrid and electric vehicles moving at low speeds. The researchers' tests showed that all the signal types were harder to locate than the sound of an internal combustion engine. For one of the signals, the majority of test subjects were unable to distinguish the direction of the sound or determine whether they were hearing one, two or more vehicles simultaneously.

"The requirements placed on car manufacturers relate to detection, or detectability, not about locating sound direction or the number of vehicles involved. But if you imagine, say, a supermarket carpark, it's not inconceivable that several similar car models with the same AVAS signal will be moving at the same time and in different directions," says Leon Müller, a doctoral student at the Department of Architecture and Civil Engineering at Chalmers.

Today's electric and hybrid vehicles meet the requirements set for acoustic warning systems by international standards. In Europe, China and Japan, for example, vehicles travelling below 20 kph must emit a warning signal consisting of tones or noise, to allow pedestrians, cyclists and other non-car users to detect them. In the United States, warning signals are required from vehicles traveling at speeds of up to 30 kph.

"The way the requirements are worded allows car manufacturers to design their own signature sounds. These warning signals are often tested without the complication of background noise. But in a real traffic environment there are usually many different types of sound," says Wolfgang Kropp, professor of acoustics at the Department of Architecture and Civil Engineering at Chalmers.

Trying multiple different signals

The experiments involved 52 test subjects and were conducted in Chalmers' acoustics laboratory in soundproofed, anechoic chambers. The aim of the tests was to emulate real conditions in, say, larger carparks. Each subject was placed at the center of the room, surrounded by 24 loudspeakers arranged in a ring at chest height. Three types of simulated vehicle sounds were played on the loudspeakers, corresponding to the signals from one, two or more electric and hybrid vehicles, plus the sound of an internal combustion engine. One of the signals consisted of two tones, one had multiple tones and one was just noise. The test subjects heard a vehicle warning signal as if from about 7.5 meters away, mixed with pre-recorded background noise from a quiet city carpark. When they heard the signal, the subjects had to mark the direction it was coming from as quickly as possible. The two-tone signal coming from three vehicles simultaneously proved the most difficult: none of the test subjects managed to locate all the two-tone signals within the ten-second time limit.
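To make the setup concrete, here is a minimal sketch (our illustration, not the study's analysis code) of how a response on such a ring can be scored: with 24 evenly spaced loudspeakers, each step on the ring spans 15 degrees, and the error is the wrapped angular distance between the speaker that played and the speaker the subject marked.

```python
N_SPEAKERS = 24
SPACING = 360 / N_SPEAKERS  # 15 degrees between neighbouring loudspeakers

def angular_error(true_idx, marked_idx):
    """Wrapped angular distance (degrees) between two speaker positions."""
    diff = abs(true_idx - marked_idx) * SPACING
    return min(diff, 360 - diff)  # take the short way around the circle

print(angular_error(0, 1))   # adjacent speaker -> 15.0
print(angular_error(2, 23))  # wraps past zero -> 45.0, not 315.0
```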

The test subjects were easily able to locate the sound corresponding to an internal combustion engine. Leon Müller says this sound consists of short pulses spanning a broad range of frequencies, which is easier for the ear to pinpoint than a fixed tone at a single frequency. People may also perceive this type of sound more easily simply because it is familiar.

"Naturally, as acousticians, we welcome the fact that electric cars are significantly quieter than internal combustion engines but it's important to find a balance," says Müller.

Existing research has focused mainly on detectability and what is usually referred to as "detection distance." No previous studies have investigated what happens when two or three cars emit the same type of signal. The researchers see a major need for further knowledge of how people react in traffic situations involving electric vehicles.

"From a traffic safety point of view, it would be desirable to find a signal that's as effective as possible in terms of detection and localisation but which doesn't affect people negatively; something our previous research has shown to be true of traffic noise," says Kropp.

In a follow-up study, the researchers have begun investigating how AVAS signals are perceived and what effect they may have on non-car users.

The article, "Auditory Localization of Multiple Stationary Electric Vehicles," is published in The Journal of the Acoustical Society of America.

The authors are Leon Müller, Jens Forssén and Wolfgang Kropp, all working at the Division of Engineering Acoustics, Department of Architecture and Civil Engineering at Chalmers University of Technology in Sweden.

Materials provided by Chalmers University of Technology. Note: Content may be edited for style and length.

How can we make fewer mistakes? US Navy invests $860k in placekeeping

A team of cognitive psychologists from the Michigan State University Department of Psychology has received an $860,000 grant from the Office of Naval Research to develop assessments for identifying people who are good at performing complex procedural tasks, even under challenging conditions like sleep deprivation and frequent interruptions.

"If we develop the right tools, we can identify people who are going to be better at performing a wide range of procedures. This is important because Navy personnel are increasingly called upon to do lots of different tasks as military systems become more complex," said Erik Altmann, lead investigator of the study and professor in MSU's psychology department. "The goal is to get the right person in the right job at the right time."

This multiyear study will look at individual differences in placekeeping, which is the cognitive ability to remember what step you are on in a procedural sequence. The researchers will also test whether incorporating task interruptions during training can help personnel develop cognitive strategies for placekeeping during deployment, when personnel may be sleep-deprived.
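As a toy illustration of the concept (ours, not the MSU team's task or model), the sketch below simulates an agent stepping through a fixed procedure; after an interruption it must recall which step it was on, and unreliable recall produces the skips and repeats that count as placekeeping errors. All probabilities are invented.

```python
import random

STEPS = 7  # length of a hypothetical procedure

def run_procedure(p_interrupt=0.3, p_misplace=0.05):
    """Count placekeeping errors in one run of the procedure."""
    errors, i = 0, 0
    while i < STEPS:
        interrupted = random.random() < p_interrupt
        if interrupted and random.random() < p_misplace:
            i += random.choice([-1, 1])  # resume one step off: repeat or skip
            i = max(i, 0)
            errors += 1
        i += 1
    return errors

trials = 10_000
rested = sum(run_procedure(p_misplace=0.05) for _ in range(trials)) / trials
tired = sum(run_procedure(p_misplace=0.20) for _ in range(trials)) / trials
print(f"mean errors per run: rested {rested:.2f}, sleep-deprived {tired:.2f}")
```

Raising p_misplace stands in for sleep deprivation degrading recall; the error rate climbs accordingly, which is the pattern the study's training interventions aim to blunt.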

"We know that under conditions of sleep deprivation, people make more procedural errors, especially when they're interrupted in the middle of a task. Procedural errors can be catastrophic, so the Navy is interested in reducing them," said Altmann.

The research team, which also includes cognitive psychologists Kimberly Fenn and Zach Hambrick, has been funded since 2016 by the Office of Naval Research, with past studies looking at multitasking and the effect of sleep on cognitive performance.

The results of this study could improve personnel selection and classification in the Navy and in other fields where procedural accuracy is critical; the results also could suggest approaches to training that make people more resilient to the effects of stressors like sleep deprivation and task interruption.

Materials provided by Michigan State University. Original written by Shelly DeJong. Note: Content may be edited for style and length.