Sun unleashes monster solar storm: Rare G4 alert issued for Earth

Local weather alerts are familiar warnings for potentially dangerous conditions, but an alert that puts all of Earth on warning is rare.

On May 31, the U.S. Naval Research Laboratory's (NRL) space-based instrumentation captured real-time observations of a powerful coronal mass ejection (CME) that erupted from the Sun, initiating a "severe geomagnetic storm" alert for Earth.

"Our observations demonstrated that the eruption was a so-called 'halo CME,' meaning it was Earth-directed, with our preliminary analysis of the data showing an apparent velocity of over 1,700 kilometers per second for the event," stated Karl Battams, Ph.D., computational scientist for NRL's Heliospheric Science Division.

A geomagnetic storm is a major disturbance of Earth's magnetosphere caused by the highly efficient transfer of energy from the solar wind into the space environment surrounding our planet. These disruptions are primarily driven by sustained periods of high-speed solar wind and, crucially, by a southward-directed solar wind magnetic field that can peel back Earth's field on the dayside of the magnetosphere, opening the planet's magnetic shield to energy from the solar wind.

The National Oceanic and Atmospheric Administration's (NOAA) Space Weather Prediction Center rated the recent solar storm G4 (severe), the second-highest level on its five-point geomagnetic storm scale, which runs from G1 (minor) to G5 (extreme).

Powerful storms such as this are typically associated with CMEs. The repercussions can range from temporary outages and data corruption to permanent damage to satellites, increased atmospheric drag on low-Earth orbit spacecraft altering their trajectories, and disruptions to high-frequency radio communications.

"Such disturbances can compromise situational awareness, hinder command and control, affect precision-guided systems, and even impact the electrical power grid, directly affecting military readiness and operational effectiveness," Battams said.

CMEs are colossal expulsions of plasma and magnetic field from the Sun's corona, often carrying billions of tons of material. While CMEs generally take several days to reach Earth, the most intense events have been observed to arrive in as little as 18 hours.
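As a rough sanity check on those arrival times (a back-of-the-envelope sketch, assuming the ejection coasts at a constant speed and ignoring any acceleration or drag in the solar wind), the transit time follows directly from the Sun-Earth distance:

```python
# Back-of-the-envelope CME transit times, assuming constant speed the whole way.
AU_KM = 1.496e8  # mean Sun-Earth distance in kilometers

def transit_hours(speed_km_s: float) -> float:
    """Hours needed to cover one astronomical unit at the given speed."""
    return AU_KM / speed_km_s / 3600.0

# ~500 km/s is used here only as an illustrative "typical" CME speed.
for label, speed in [("typical ~500 km/s", 500), ("this event, ~1,700 km/s", 1700)]:
    hours = transit_hours(speed)
    print(f"{label}: about {hours:.0f} hours (~{hours / 24:.1f} days)")
```

At a steady 1,700 kilometers per second the trip takes roughly a day, which is why fast, Earth-directed CMEs leave forecasters far less lead time than the multi-day average.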

"CMEs are the explosive release of mass from the Sun's low corona and are a primary driver of space weather, playing a central role in understanding the conditions of the Earth's magnetosphere, ionosphere, and thermosphere," explained Arnaud Thernisien, Ph.D., a research physicist from the Advanced Sensor Technology Section within NRL's Space Science Division.

The May 30 event saw a relatively slow but powerful solar flare erupt from the Earth-facing side of the Sun. The energy released blasted a CME directly toward Earth, leading to the geomagnetic storm that has produced auroras as far south as New Mexico.

NRL's space-based instrumentation, operating on NASA and NOAA spacecraft, provided vital real-time observations of this event. Notably, NRL's venerable Large Angle and Spectrometric Coronagraph (LASCO), which has been in operation since 1996, and the Compact Coronagraph 1 (CCOR-1), launched in 2024, both relayed critical data.

Such observations are paramount for operational space weather monitoring, allowing forecasters to predict the timing of the event's arrival at Earth and the potential geomagnetic storm it could induce. While precisely predicting the severity, exact timing, or duration of a geomagnetic storm remains challenging, these advance warnings are vital for enabling the Department of Defense (DoD) and other agencies to prepare.

The potential impacts of severe geomagnetic storms on DoD and Department of the Navy missions are significant and far-reaching. These events can disrupt or degrade critical systems and capabilities, including satellite communications, Global Positioning System (GPS) navigation and timing, and various remote sensing systems.

"NRL has been a pioneer in heliophysics and space weather research since the very inception of the field, dating back to the first discovery of CMEs through NRL space-based observations in 1971," Battams said. "Since then, NRL has consistently maintained its position at the forefront of coronal imaging with a portfolio of groundbreaking instrumentation that has driven heliospheric and space weather studies."

These assets, particularly instruments like LASCO and CCOR-1, are indispensable for providing the crucial real-time imagery necessary for forecasters to analyze and assess CMEs, determine Earth-impact likelihood, and issue timely warnings.

"They form the backbone of our ability to anticipate and mitigate the effects of space weather. As the G4 severe geomagnetic storm watch continues, the public and critical infrastructure operators are encouraged to visit NOAA's Space Weather Prediction Center for the latest information and updates," Thernisien said.

The journey of the CME, from its fierce eruption on the Sun to its arrival at Earth, approximately 93 million miles away, highlights the dynamic nature of our solar system and the ongoing importance of NRL's vital contributions to heliophysics research and space weather preparedness. The data collected from events such as this will be instrumental in future research, further enhancing our understanding and predictive capabilities and ultimately bolstering the resilience of national security and critical infrastructure.

Materials provided by Naval Research Laboratory. Note: Content may be edited for style and length.

Sharper than lightning: Oxford’s one-in-6.7-million quantum breakthrough

Physicists at the University of Oxford have set a new global benchmark for the accuracy of controlling a single quantum bit, achieving the lowest-ever error rate for a quantum logic operation — just 0.000015%, or one error in 6.7 million operations. This record-breaking result represents nearly an order of magnitude improvement over the previous benchmark, set by the same research group a decade ago.

To put the result in perspective: a person is more likely to be struck by lightning in a given year (a roughly 1 in 1.2 million chance) than one of Oxford's quantum logic gates is to make a mistake.

The findings, published in Physical Review Letters, are a major advance toward robust and useful quantum computers.

"As far as we are aware, this is the most accurate qubit operation ever recorded anywhere in the world," said Professor David Lucas, co-author on the paper, from the University of Oxford's Department of Physics. "It is an important step toward building practical quantum computers that can tackle real-world problems."

To perform useful calculations on a quantum computer, millions of operations will need to be run across many qubits. This means that if the error rate is too high, the final result of the calculation will be meaningless. Although error correction can be used to fix mistakes, this comes at the cost of requiring many more qubits. By reducing the error, the new method reduces the number of qubits required and consequently the cost and size of the quantum computer itself.
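To make those numbers concrete, here is a small illustrative calculation (it uses only the figures quoted in this article and assumes, for simplicity, that errors strike independently):

```python
# Illustrative arithmetic based on the error rate quoted in the article.
error_rate = 0.000015 / 100  # 0.000015 % per operation, i.e. 1.5e-7

print(f"one error in roughly {1 / error_rate:,.0f} operations")  # ~6.7 million

# Probability that a long run of operations finishes with no error at all,
# assuming errors occur independently (a simplifying assumption).
for n_ops in (1_000_000, 10_000_000, 100_000_000):
    p_clean = (1 - error_rate) ** n_ops
    print(f"{n_ops:>11,} operations: {p_clean:.1%} chance of zero errors")
```

Even at this record-low rate, a computation with hundreds of millions of operations would almost certainly hit an error somewhere, which is why error correction, and the qubit overhead it brings, remains part of the picture.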

Co-lead author Molly Smith (Graduate Student, Department of Physics, University of Oxford), said: "By drastically reducing the chance of error, this work significantly reduces the infrastructure required for error correction, opening the way for future quantum computers to be smaller, faster, and more efficient. Precise control of qubits will also be useful for other quantum technologies such as clocks and quantum sensors."

This unprecedented level of precision was achieved using a trapped calcium ion as the qubit (quantum bit). Trapped ions are a natural choice for storing quantum information because of their long lifetimes and robustness. Unlike the conventional approach, which uses lasers, the Oxford team controlled the quantum state of the ion using electronic (microwave) signals.

This method offers greater stability than laser control and also has other benefits for building a practical quantum computer. For instance, electronic control is much cheaper and more robust than lasers, and easier to integrate in ion trapping chips. Furthermore, the experiment was conducted at room temperature and without magnetic shielding, thus simplifying the technical requirements for a working quantum computer.

The previous best single-qubit error rate, set by the same Oxford team in 2014, was 1 in 1 million. The group's expertise led to the launch of the spinout company Oxford Ionics in 2019, which has become an established leader in high-performance trapped-ion qubit platforms.

Whilst this record-breaking result marks a major milestone, the research team caution that it is part of a larger challenge. Quantum computing requires both single- and two-qubit gates to function together. Currently, two-qubit gates still have significantly higher error rates — around 1 in 2000 in the best demonstrations to date — so reducing these will be crucial to building fully fault-tolerant quantum machines.

The experiments were carried out at the University of Oxford's Department of Physics by Molly Smith, Aaron Leu, Dr Mario Gely and Professor David Lucas, together with a visiting researcher, Dr Koichiro Miyanishi, from the University of Osaka's Centre for Quantum Information and Quantum Biology.

The Oxford scientists are part of the UK Quantum Computing and Simulation (QCS) Hub, which was part of the ongoing UK National Quantum Technologies Programme.

Materials provided by University of Oxford. Note: Content may be edited for style and length.

Scientists uncover why “stealth” volcanoes stay silent until eruption

When volcanoes are preparing to erupt, scientists rely on typical signs to warn people living nearby: deformation of the ground and earthquakes, caused by underground chambers filling up with magma and volcanic gas. But some volcanoes, called 'stealthy' volcanoes, don't give obvious warning signs. Now scientists studying Veniaminof, Alaska, have developed a model which could explain and predict stealthy eruptions.

"Despite major advances in monitoring, some volcanoes erupt with little or no detectable precursors, significantly increasing the risk to nearby populations," said Dr Yuyu Li of the University of Illinois, lead author of the study inFrontiers in Earth Science. "Some of these volcanoes are located near major air routes or close to communities: examples include Popocatépetl and Colima in Mexico, Merapi in Indonesia, Galeras in Colombia, and Stromboli in Italy.

"Our work helps explain how this happens, by identifying the key internal conditions — such as low magma supply and warm host rock — that make eruptions stealthy."

Veniaminof is an ice-clad volcano in the Aleutian Arc of Alaska. It's carefully monitored, but only two of its 13 eruptions since 1993 have been preceded by enough signs to alert observing scientists. In fact, a 2021 eruption wasn't caught until three days after it had started.

"Veniaminof is a case study in how a volcano can appear quiet while still being primed to erupt," said Li. "It is one of the most active volcanoes in Alaska. In recent decades, it has produced several VEI 3 eruptions — moderate-sized explosive events that can send ash up to 15 km high, disrupt air traffic, and pose regional hazards to nearby communities and infrastructure — often without clear warning signs."

To understand Veniaminof better, the scientists used monitoring data from the three summer seasons immediately preceding the 2018 stealthy eruption, which produced only ambiguous warning signs just before it began. They created a model of the volcano's behavior under different conditions that would change the impact of a filling magma reservoir on the ground above: six potential magma reservoir volumes, a range of magma flow rates and reservoir depths, and three reservoir shapes. They then compared the models to the data to see which matched best, and which conditions produced eruptions, stealthy or otherwise.

They found that a high flow of magma into a chamber increases the deformation of the ground and the likelihood of an eruption. If magma is flowing quickly into a large chamber, an eruption may not occur, but if one does, the ground will deform enough to warn scientists first. Similarly, a high flow of magma into a small chamber is likely to produce an eruption, but not a stealthy one. Stealthy eruptions become likely when a low flow of magma enters a relatively small chamber. When the models were compared with the observational data, the best match suggested that Veniaminof has a small magma chamber and a low flow of magma.

The model also suggests that different conditions could produce different warning signs. Magma flowing into larger, flatter chambers may cause minimal earthquakes, while smaller, more elongated chambers may produce little deformation of the ground. But stealthy eruptions only happen when all the conditions are in place — the right magma flow and the right chamber size, shape, and depth.

However, when the scientists added temperature to their model, they found that if magma is consistently present over time so that the rock of the chamber is warm, size and shape matter less. If the rock is warm, it's less likely to fail in ways that cause detectable earthquakes or deformation of the ground when magma flows into the chamber, increasing the likelihood of a stealthy eruption.

"To mitigate the impact of these potential surprise eruptions, we need to integrate high-precision instruments like borehole tiltmeters and strainmeters and fiber optic sensing, as well as newer approaches such as infrasound and gas emission monitoring," said Li. "Machine learning has also shown promise in detecting subtle changes in volcanic behavior, especially in earthquake signal picking."

At Veniaminof, taking measures to improve the coverage of satellite monitoring and adding tiltmeters and strainmeters could improve the rate of detection. In the meantime, scientists now know which volcanoes they need to watch most closely: volcanoes with small, warm reservoirs and slow magma flows.

"Combining these models with real-time observations represents a promising direction for improving volcano forecasting," said Li. "In the future, this approach can enable improved monitoring for these stealthy systems, ultimately leading to more effective responses to protect nearby communities."

Materials provided by Frontiers. Note: Content may be edited for style and length.

What a dinosaur ate 100 million years ago—Preserved in a fossilized time capsule

Plant fossils found in the abdomen of a sauropod support the long-standing hypothesis that these dinosaurs were herbivores, finds a study published on June 9 in the Cell Press journal Current Biology. The dinosaur, which was alive an estimated 94 to 101 million years ago, ate a variety of plants and relied almost entirely on its gut microbes for digestion.

"No genuine sauropod gut contents had ever been found anywhere before, despite sauropods being known from fossils found on every continent and despite the group being known to span at least 130 million years of time," says lead author Stephen Poropat of Curtin University. "This finding confirms several hypotheses about the sauropod diet that had been made based on studies of their anatomy and comparisons with modern-day animals."

Knowledge of the diet of dinosaurs is critical for understanding their biology and the role they played in ancient ecosystems, say the researchers. However, very few dinosaur fossils have been found with cololites, or preserved gut contents. Sauropod cololites have remained particularly elusive, even though these dinosaurs may have been the most ecologically impactful terrestrial herbivores worldwide throughout much of the Jurassic and Cretaceous periods, given their gigantic sizes. Due to this lack of direct evidence when it comes to diet, the specifics of sauropod herbivory — including the plant taxa they ate — have been largely inferred based on anatomical features such as tooth wear, jaw morphology, and neck length.

In the summer of 2017, the staff and volunteers at the Australian Age of Dinosaurs Museum of Natural History were excavating a relatively complete subadult skeleton of the sauropod Diamantinasaurus matildae from the mid-Cretaceous period, which was found in the Winton Formation of Queensland, Australia. During this process, they noticed an unusual, fractured rock layer that appeared to contain the sauropod's cololite, which consisted of many well-preserved plant fossils.

Analysis of the plant specimens within the cololite showed that sauropods likely only engaged in minimal oral processing of their food, relying instead on fermentation and their gut microbiota for digestion. The cololite consisted of a variety of plants, including foliage from conifers (cone-bearing seed plants), seed-fern fruiting bodies (plant structures that hold seeds), and leaves from angiosperms (flowering plants), indicating that Diamantinasaurus was an indiscriminate, bulk feeder.

"The plants within show evidence of having been severed, possibly bitten, but have not been chewed, supporting the hypothesis of bulk feeding in sauropods," says Poropat.

The researchers also found chemical biomarkers of both angiosperms and gymnosperms — a group of woody, seed-producing plants that include conifers. "This implies that at least some sauropods were not selective feeders, instead eating whatever plants they could reach and safely process," Poropat says. "These findings largely corroborate past ideas regarding the enormous influence that sauropods must have had on ecosystems worldwide during the Mesozoic Era."

Although it was not unexpected that the gut contents provided support for sauropod herbivory and bulk feeding, Poropat was surprised to find angiosperms in the dinosaur's gut. "Angiosperms became approximately as diverse as conifers in Australia around 100 to 95 million years ago, when this sauropod was alive," he says. "This suggests that sauropods had successfully adapted to eat flowering plants within 40 million years of the first evidence of the presence of these plants in the fossil record."

Based on these findings, the team suggests that Diamantinasaurus likely fed on both low- and high-growing plants, at least before adulthood. As hatchlings, sauropods could only access plants found close to the ground, but as they grew, so did their viable dietary options. In addition, the prevalence of small shoots, bracts, and seed pods in the cololite implies that subadult Diamantinasaurus targeted new growth portions of conifers and seed ferns, which are easier to digest.

According to the authors, the strategy of indiscriminate bulk feeding seems to have served sauropods well for 130 million years and might have enabled their success and longevity as a clade. Despite the importance of this discovery, Poropat pointed out a few caveats.

"The primary limitation of this study is that the sauropod gut contents we describe constitute a single data point," he explains. "These gut contents only tell us about the last meal or several meals of a single subadult sauropod individual," says Poropat. "We don't know if the plants preserved in our sauropod represent its typical diet or the diet of a stressed animal. We also don't know how indicative the plants in the gut contents are of juvenile or adult sauropods, since ours is a subadult, and we don't know how seasonality might have affected this sauropod's diet."

This research was supported by funding from the Australian Research Council.

Materials provided by Cell Press. Note: Content may be edited for style and length.

AI sees through chaos—and reaches the edge of what physics allows

No image is infinitely sharp. For 150 years, it has been known that no matter how ingeniously you build a microscope or a camera, there are always fundamental resolution limits that cannot be exceeded in principle. The position of a particle can never be measured with infinite precision; a certain amount of blurring is unavoidable. This limit does not result from technical weaknesses, but from the physical properties of light and the transmission of information itself.

TU Wien (Vienna), the University of Glasgow and the University of Grenoble therefore posed the question: Where is the absolute limit of precision that is possible with optical methods? And how can this limit be approached as closely as possible? And indeed, the international team succeeded in pinning down the ultimate limit on the theoretically achievable precision and in developing neural-network algorithms that, after appropriate training, come very close to this limit. This strategy is now set to be employed in imaging procedures, such as those used in medicine.

"Let's imagine we are looking at a small object behind an irregular, cloudy pane of glass," says Prof Stefan Rotter from the Institute of Theoretical Physics at TU Wien. "We don't just see an image of the object, but a complicated light pattern consisting of many lighter and darker patches of light. The question now is: how precisely can we estimate where the object actually is based on this image — and where is the absolute limit of this precision?"

Such scenarios are important in biophysics and medical imaging, for example. When light is scattered by biological tissue, information about deeper tissue structures appears to be lost. But how much of this information can be recovered in principle? This question is not merely technical: physics itself sets fundamental limits here.

The answer to this question is provided by a theoretical measure: the so-called Fisher information. This measure describes how much information an optical signal contains about an unknown parameter — such as the object position. If the Fisher information is low, precise determination is no longer possible, no matter how sophisticatedly the signal is analysed. Based on this Fisher information concept, the team was able to calculate an upper limit for the theoretically achievable precision in different experimental scenarios.
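For readers who want the formal statement behind this limit, it is the standard Cramér-Rao bound from estimation theory (a general textbook result, stated here schematically rather than in the form derived in the paper):

```latex
% Cramér-Rao bound (general estimation theory; schematic, not the paper's derivation).
% \hat{x} is any unbiased estimate of the object position x obtained from the detected
% light pattern, F(x) is the Fisher information that pattern carries about x, and N is
% the number of independent detection events.
\[
  \operatorname{Var}(\hat{x}) \;\geq\; \frac{1}{N\,F(x)}
  \qquad\Longrightarrow\qquad
  \delta x \;\geq\; \frac{1}{\sqrt{N\,F(x)}} .
\]
```

Low Fisher information therefore means that no analysis method, however clever, can localize the object more precisely than this bound allows.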

Neural networks learn from chaotic light patterns

While the team at TU Wien was providing theoretical input, a corresponding experiment was designed and implemented by Dorian Bouchet from the University of Grenoble (F) together with Ilya Starshynov and Daniele Faccio from the University of Glasgow (UK). In this experiment, a laser beam was directed at a small, reflective object located behind a turbid liquid, so that the recorded images only showed highly distorted light patterns. The measurement conditions varied depending on the turbidity — and therefore also the difficulty of obtaining precise position information from the signal.

"To the human eye, these images look like random patterns," says Maximilian Weimar (TU Wien), one of the authors of the study. "But if we feed many such images — each with a known object position — into a neural network, the network can learn which patterns are associated with which positions." After sufficient training, the network was able to determine the object position very precisely, even with new, unknown patterns.

Particularly noteworthy: the precision of the prediction was only minimally worse than the theoretically achievable maximum, calculated using Fisher information. "This means that our AI-supported algorithm is not only effective, but almost optimal," says Stefan Rotter. "It achieves almost exactly the precision that is permitted by the laws of physics."

This realisation has far-reaching consequences: With the help of intelligent algorithms, optical measurement methods could be significantly improved in a wide range of areas — from medical diagnostics to materials research and quantum technology. In future projects, the research team wants to work with partners from applied physics and medicine to investigate how these AI-supported methods can be used in specific systems.

Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

Sand clouds and moon nurseries: Webb’s dazzling exoplanet reveal

Astrophysicists have gained precious new insights into how distant "exoplanets" form and what their atmospheres can look like, after using the James Webb Space Telescope to image two young exoplanets in extraordinary detail. Among the headline findings were the presence of silicate clouds in one planet's atmosphere and, around the other, a circumplanetary disk thought to feed material that can form moons.

In broader terms, understanding how the "YSES-1" super-solar system formed offers further insight into the origins of our own solar system, and gives us the opportunity to watch and learn as a planet similar to Jupiter forms in real time.

"Directly imaged exoplanets — planets outside our own Solar System — are the only exoplanets that we can truly take photos of," said Dr Evert Nasedkin, a Postdoctoral Fellow in Trinity College Dublin's School of Physics, who is a co-author of the research article just published in leading international journal,Nature."These exoplanets are typically still young enough that they are still hot from their formation and it is this warmth, seen in the thermal infrared, that we as astronomers observe."

Using spectroscopic instruments on board the James Webb Space Telescope (JWST), Dr Kielan Hoch and a large international team, including astronomers at Trinity College Dublin, obtained broad spectra of two young, giant exoplanets which orbit a sun-like star, YSES-1. These planets are several times larger than Jupiter, and orbit far from their host star, highlighting the diversity of exoplanet systems even around stars like our own sun.

The main goal of measuring the spectra of these exoplanets was to understand their atmospheres. Different molecules and cloud particles all absorb different wavelengths of light, imparting a characteristic fingerprint onto the emission spectrum from the planets.

Dr Nasedkin said: "When we looked at the smaller, farther-out companion, known as YSES 1-c, we found the tell-tale signature of silicate clouds in the mid-infrared. Essentially made of sand-like particles, this is the strongest silicate absorption feature observed in an exoplanet yet."

"We believe this is linked to the relative youth of the planets: younger planets are slightly larger in radius, and this extended atmosphere may allow the cloud to absorb more of the light emitted by the planet. Using detailed modelling, we were able to identify the chemical composition of these clouds, as well as details about the shapes and sizes of the cloud particles."

The inner planet, YSES-1b, offered up other surprises: while the whole planetary system is young, at 16.7 million years old, it is too old for the planet-forming disk around the host star to still be visible. But around YSES-1b itself the team observed a disk, thought to feed material onto the planet and serve as the birthplace of moons – similar to those seen around Jupiter. Only three other such disks have been identified to date, all around objects significantly younger than YSES-1b, raising new questions as to how this disk could be so long-lived.

Dr Nasedkin added: "Overall, this work highlights the incredible abilities of JWST to characterise exoplanet atmospheres. With only a handful of exoplanets that can be directly imaged, the YSES-1 system offers unique insights into the atmospheric physics and formation processes of these distant giants."

Understanding how long it takes to form planets, and the chemical makeup at the end of formation, is important for learning what the building blocks of our own solar system looked like. Scientists can compare these young systems to our own, which provides hints of how our own planets have changed over time.

Dr Kielan Hoch, Giacconi Fellow at the Space Telescope Science Institute, said: "This program was proposed before the launch of JWST. It was unique, as we hypothesized that the NIRSpec instrument on the future telescope should be able to observe both planets in its field of view in a single exposure, essentially giving us two for the price of one. Our simulations ended up being correct post-launch, providing the most detailed dataset of a multi-planet system to date."

"The YSES-1 system planets are also too widely separated to be explained through current formation theories, so the additional discoveries of distinct silicate clouds around YSES-1 c and small hot dusty material around YSES-1 b leads to more mysteries and complexities for determining how planets form and evolve."

"This research was also led by a team of early career researchers such as postdocs and graduate students who make up the first five authors of the paper. This work would not have been possible without their creativity and hard work, which is what aided in making these incredible multidisciplinary discoveries."

Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.

Ginger vs. Cancer: Natural compound targets tumor metabolism

Looking to nature for answers to complex questions can reveal new and unprecedented results, down to effects on cells at the molecular level.

For instance, human cells normally oxidize glucose to produce ATP (adenosine triphosphate), an energy source necessary for life. Cancer cells, by contrast, produce ATP through glycolysis, converting glucose into pyruvic acid and lactic acid without utilizing oxygen, even when oxygen is present. This method of producing ATP, known as the Warburg effect, is considered inefficient (glycolysis yields only about two ATP molecules per glucose, compared with roughly 30 via oxidative phosphorylation), raising the question of why cancer cells choose this energy pathway to fuel their proliferation and survival.

In search of this energy catalyst, Associate Professor Akiko Kojima-Yuasa's team at Osaka Metropolitan University's Graduate School of Human Life and Ecology analyzed the cinnamic acid ester ethyl p-methoxycinnamate, a main component of kencur ginger, and its mechanism of action. In previous research, the team discovered that ethyl p-methoxycinnamate has inhibitory effects on cancer cells. Furthering that work, the team administered the acid ester to Ehrlich ascites tumor cells to assess which component of the cancer cells' energy pathway was being affected.

Results revealed that the acid ester inhibits ATP production by disrupting de novo fatty acid synthesis and lipid metabolism, rather than by acting on glycolysis, as commonly theorized. Further, the researchers discovered that this inhibition triggered increased glycolysis, which acted as a possible survival mechanism in the cells. This adaptability was thought to be linked to ethyl p-methoxycinnamate's inability to induce cell death.

"These findings not only provide new insights that supplement and expand the theory of the Warburg effect, which can be considered the starting point of cancer metabolism research, but are also expected to lead to the discovery of new therapeutic targets and the development of new treatment methods," stated Professor Kojima-Yuasa.

Materials provided by Osaka Metropolitan University. Note: Content may be edited for style and length.

The global rule that predicts where life thrives—and where it fails

A simple rule that seems to govern how life is organized on Earth is described in a new study published on June 4 in Nature Ecology & Evolution.

The research team, led by Umeå University and involving the University of Reading, believes this rule helps explain why species are spread the way they are across the planet. The discovery will help scientists understand life on Earth, including how ecosystems respond to global environmental changes.

The rule is simple: in every region on Earth, most species cluster together in small 'hotspot' areas, then gradually spread outward with fewer and fewer species able to survive farther away from these hotspots.

Rubén Bernardo-Madrid, lead author and researcher at Umeå University (Sweden), said: "In every bioregion, there is always a core area where most species live. From that core, species expand into surrounding areas, but only a subset manages to persist. It seems these cores provide optimal conditions for species survival and diversification, acting as a source from which biodiversity radiates outward."

This pattern highlights the disproportionate ecological role these small areas play in sustaining the biodiversity of entire bioregions. Jose Luis Tella, from the Estación Biológica de Doñana-CSIC (Spain), said: "Safeguarding these core zones is therefore essential, as they represent critical priorities for conservation strategies."

Researchers studied bioregions across the world, examining species from very different life forms: amphibians, birds, dragonflies, mammals, marine rays, reptiles, and trees. Given the vast differences in life strategies — some species fly, others crawl, swim, or remain rooted — and the contrasting environmental and historical backgrounds of each bioregion, the researchers expected that species distribution would vary widely across bioregions. Surprisingly, they found the same pattern everywhere.

The pattern points to a general process known as environmental filtering. Environmental filtering has long been considered a key theoretical principle in ecology for explaining species distribution on Earth. Until now, however, global empirical evidence has been scarce. This study provides broad confirmation across multiple branches of life and at a planetary scale.

Professor Manuela González-Suárez, co-author of the study at the University of Reading, said: "It doesn't matter whether the limiting factor is heat, cold, drought, or salinity. The result is always the same: only species able to tolerate local conditions establish and persist, creating a predictable distribution of life on Earth."

The existence of a universal organising mechanism has profound implications for our understanding of life on Earth. Joaquín Calatayud, co-author from the Rey Juan Carlos University (Spain), said: "This pattern suggests that life on Earth may be, to some extent, predictable." Such predictable patterns can help scientists trace how life has diversified through time and offer valuable insights into how ecosystems might react to global environmental changes.

Materials provided by University of Reading. Note: Content may be edited for style and length.

Unusual carbon build-up found in lungs of COPD patients

Cells taken from the lungs of people with chronic obstructive pulmonary disease (COPD) have a larger accumulation of soot-like carbon deposits compared to cells taken from people who smoke but do not have COPD, according to a study published today, June 10, in ERJ Open Research. Carbon can enter the lungs via cigarette smoke, diesel exhaust and polluted air.

The cells, called alveolar macrophages, normally protect the body by engulfing any particles or bacteria that reach the lungs. But, in their new study, researchers found that when these cells are exposed to carbon they grow larger and encourage inflammation.

The research was led by Dr James Baker and Dr Simon Lea from the University of Manchester, UK. Dr Baker said: "COPD is a complex disease that has a number of environmental and genetic risk factors. One factor is exposure to carbon from smoking or breathing polluted air.

"We wanted to study what happens in the lungs of COPD patients when this carbon builds up in alveolar macrophage cells, as this may influence the cells' ability to protect the lungs."

The researchers used samples of lung tissue taken during surgery for suspected lung cancer. They studied samples (that did not contain any cancer cells) from 28 people who had COPD and 15 people who were smokers but did not have COPD.

Looking specifically at alveolar macrophage cells under a microscope, the researchers measured the sizes of the cells and the amount of carbon accumulated in the cells.

They found that the average amount of carbon was more than three times greater in alveolar macrophage cells from COPD patients compared to smokers. Cells containing carbon were consistently larger than cells with no visible carbon.

Patients with larger deposits of carbon in their alveolar macrophages had worse lung function, according to a measure called FEV1%, which quantifies how much and how forcefully patients can breathe out.

When the researchers exposed macrophages to carbon particles in the lab, they saw the cells become much larger and found that they were producing higher levels of proteins that lead to inflammation.

Dr Lea said: "As we compared cells from COPD patients with cells from smokers, we can see that this build-up of carbon is not a direct result of cigarette smoking. Instead, we show alveolar macrophages in COPD patients contain more carbon and are inherently different in terms of their form and function compared to those in smokers.

"Our research raises an interesting question as to the cause of the increased levels of carbon in COPD patients' macrophages. It could be that people with COPD are less able to clear the carbon they breathe in. It could also be that people exposed to more particulate matter are accumulating this carbon and developing COPD as a result.

"In future, it would be interesting to study how this carbon builds up and how lung cells respond over a longer period of time."

Professor Fabio Ricciardolo is Chair of the European Respiratory Society's group on monitoring airway disease, based at the University of Torino, Italy, and was not involved in the research. He said: "This set of experiments suggest that people with COPD accumulate unusually large amounts of carbon in the cells of their lungs. This build-up seems to be altering those cells, potentially causing inflammation in the lungs and leading to worse lung function.

"In addition, this research offers some clues about why polluted air might cause or worsen COPD. However, we know that smoking and air pollution are risk factors for COPD and other lung conditions, so we need to reduce levels of pollution in the air we breathe and we need to help people to quit smoking."

Materials provided by European Respiratory Society. Note: Content may be edited for style and length.

This mind-bending physics breakthrough could redefine timekeeping

How can the strange properties of quantum particles be exploited to perform extremely accurate measurements? This question is at the heart of the research field of quantum metrology. One example is the atomic clock, which uses the quantum properties of atoms to measure time much more accurately than would be possible with conventional clocks.

However, the fundamental laws of quantum physics always involve a certain degree of uncertainty. Some randomness, or a certain amount of statistical noise, has to be accepted. This results in fundamental limits to the accuracy that can be achieved. Until now, it seemed to be an immutable law that a clock twice as accurate requires at least twice as much energy. But now a team of researchers from TU Wien, Chalmers University of Technology in Sweden, and the University of Malta has demonstrated that special tricks can be used to increase accuracy exponentially. The crucial point is using two different time scales – similar to how a clock has a second hand and a minute hand.

"We have analyzed in principle, which clocks could be theoretically possible," says Prof. Marcus Huber from the Atomic Institute at the TU Wien. "Every clock needs two components: first, a time base generator – such as a pendulum in a pendulum clock, or even a quantum oscillation. And second, a counter – any element that counts how many time units defined by the time base generator have already passed."

The time base generator can always return to exactly the same state. After one complete oscillation, the pendulum of a pendulum clock is exactly where it was before. After a certain number of oscillations, the caesium atom in an atomic clock returns to exactly the same state it was in before. The counter, on the other hand, must change – otherwise the clock is useless.

"This means that every clock must be connected to an irreversible process," says Florian Meier from TU Wien. "In the language of thermodynamics, this means that every clock increases the entropy in the universe; otherwise, it is not a clock." The pendulum of a pendulum clock generates a little heat and disorder among the air molecules around it, and every laser beam that reads the state of an atomic clock generates heat, radiation and thus entropy.

"We can now consider how much entropy a hypothetical clock with extremely high precision would have to generate – and, accordingly, how much energy such a clock would need," says Marcus Huber. "Until now, there seemed to be a linear relationship: if you want a thousand times the precision, you have to generate at least a thousand times as much entropy and expend a thousand times as much energy."

Quantum time and classical time

However, the research team at TU Wien, together with the Austrian Academy of Sciences (ÖAW) in Vienna and the teams from Chalmers University of Technology, Sweden, and University of Malta, has now shown that this apparent rule can be circumvented by using two different time scales.

"For example, you can use particles that move from one area to another to measure time, similar to how grains of sand indicate the time by falling from the top of the glass to the bottom," says Florian Meier. You can connect a whole series of such time-measuring devices in series and count how many of them have already passed through – similar to how one clock hand counts how many laps the other clock hand has already completed.

"This way, you can increase accuracy, but not without investing more energy," says Marcus Huber. "Because every time one clock hand completes a full rotation and the other clock hand is measured at a new location – you could also say every time the environment around it notices that this hand has moved to a new location – the entropy increases. This counting process is irreversible."

However, quantum physics also allows for another kind of particle transport: the particles can also travel through the entire structure, i.e. across the entire clock dial, without being measured anywhere. In a sense, the particle is then everywhere at once during this process; it has no clearly defined location until it finally arrives – and only then is it actually measured, in an irreversible process that increases entropy.

Like second and minute clock hands

"So we have a fast process that does not cause entropy – quantum transport – and a slow one, namely the arrival of the particle at the very end," explains Yuri Minoguchi, TU Wien. "The crucial thing about our method is that one hand behaves purely in terms of quantum physics, and only the other, slower hand actually has an entropy-generating effect."

The team has now been able to show that this strategy enables an exponential increase in accuracy per increase in entropy. This means that much higher precision can be achieved than would have been thought possible according to previous theories. "What's more, the theory could be tested in the real world using superconducting circuits, one of the most advanced quantum technologies currently available," says Simone Gasparinetti, co-author of the study and leader of the experimental team at Chalmers. "This is an important result for research into high-precision quantum measurements and suppression of unwanted fluctuations," says Marcus Huber, "and at the same time it helps us to better understand one of the great unsolved mysteries of physics: the connection between quantum physics and thermodynamics."
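Restated schematically (using only the scaling relations described above, not the detailed model in the paper, and with c standing for an unspecified constant), the contrast between the two regimes is:

```latex
% Schematic scaling only; the precise relations depend on the clock model in the study.
% "Precision" means how finely the clock resolves time; \Delta S is the entropy the
% clock must generate to reach that precision.
\[
  \text{irreversible counting:}\quad \text{precision} \;\propto\; \Delta S
  \qquad\qquad
  \text{coherent transport, single readout:}\quad \text{precision} \;\propto\; e^{\,c\,\Delta S}
\]
```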

Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.