All News

What a dinosaur ate 100 million years ago—preserved in a fossilized time capsule

Plant fossils found in the abdomen of a sauropod support the long-standing hypothesis that these dinosaurs were herbivores, finds a study published on June 9 in the Cell Press journal Current Biology. The dinosaur, which was alive an estimated 94 to 101 million years ago, ate a variety of plants and relied almost entirely on its gut microbes for digestion.

"No genuine sauropod gut contents had ever been found anywhere before, despite sauropods being known from fossils found on every continent and despite the group being known to span at least 130 million years of time," says lead author Stephen Poropat of Curtin University. "This finding confirms several hypotheses about the sauropod diet that had been made based on studies of their anatomy and comparisons with modern-day animals."

Knowledge of the diet of dinosaurs is critical for understanding their biology and the role they played in ancient ecosystems, say the researchers. However, very few dinosaur fossils have been found with cololites, or preserved gut contents. Sauropod cololites have remained particularly elusive, even though these dinosaurs, given their gigantic sizes, may have been the most ecologically impactful terrestrial herbivores worldwide throughout much of the Jurassic and Cretaceous periods. Due to this lack of direct dietary evidence, the specifics of sauropod herbivory — including the plant taxa they ate — have largely been inferred from anatomical features such as tooth wear, jaw morphology, and neck length.

In the summer of 2017, staff and volunteers at the Australian Age of Dinosaurs Museum of Natural History were excavating a relatively complete subadult skeleton of the mid-Cretaceous sauropod Diamantinasaurus matildae from the Winton Formation of Queensland, Australia. During this process, they noticed an unusual, fractured rock layer that appeared to contain the sauropod's cololite, consisting of many well-preserved plant fossils.

Analysis of the plant specimens within the cololite showed that sauropods likely only engaged in minimal oral processing of their food, relying instead on fermentation and their gut microbiota for digestion. The cololite consisted of a variety of plants, including foliage from conifers (cone-bearing seed plants), seed-fern fruiting bodies (plant structures that hold seeds), and leaves from angiosperms (flowering plants), indicating that Diamantinasaurus was an indiscriminate, bulk feeder.

"The plants within show evidence of having been severed, possibly bitten, but have not been chewed, supporting the hypothesis of bulk feeding in sauropods," says Poropat.

The researchers also found chemical biomarkers of both angiosperms and gymnosperms — a group of woody, seed-producing plants that include conifers. "This implies that at least some sauropods were not selective feeders, instead eating whatever plants they could reach and safely process," Poropat says. "These findings largely corroborate past ideas regarding the enormous influence that sauropods must have had on ecosystems worldwide during the Mesozoic Era."

Although it was not unexpected that the gut contents provided support for sauropod herbivory and bulk feeding, Poropat was surprised to find angiosperms in the dinosaur's gut. "Angiosperms became approximately as diverse as conifers in Australia around 100 to 95 million years ago, when this sauropod was alive," he says. "This suggests that sauropods had successfully adapted to eat flowering plants within 40 million years of the first evidence of the presence of these plants in the fossil record."

Based on these findings, the team suggests that Diamantinasaurus likely fed on both low- and high-growing plants, at least before adulthood. As hatchlings, sauropods could only access plants found close to the ground, but as they grew, so did their viable dietary options. In addition, the prevalence of small shoots, bracts, and seed pods in the cololite implies that subadult Diamantinasaurus targeted new growth portions of conifers and seed ferns, which are easier to digest.

According to the authors, the strategy of indiscriminate bulk feeding seems to have served sauropods well for 130 million years and might have enabled their success and longevity as a clade. Despite the importance of this discovery, Poropat pointed out a few caveats.

"The primary limitation of this study is that the sauropod gut contents we describe constitute a single data point," he explains. "These gut contents only tell us about the last meal or several meals of a single subadult sauropod individual," says Poropat. "We don't know if the plants preserved in our sauropod represent its typical diet or the diet of a stressed animal. We also don't know how indicative the plants in the gut contents are of juvenile or adult sauropods, since ours is a subadult, and we don't know how seasonality might have affected this sauropod's diet."

This research was supported by funding from the Australian Research Council.

Materials provided by Cell Press. Note: Content may be edited for style and length.

AI sees through chaos—and reaches the edge of what physics allows

No image is infinitely sharp. For 150 years, it has been known that no matter how ingeniously you build a microscope or a camera, there are always fundamental resolution limits that cannot be exceeded in principle. The position of a particle can never be measured with infinite precision; a certain amount of blurring is unavoidable. This limit does not result from technical weaknesses, but from the physical properties of light and the transmission of information itself.
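
To make this concrete, here is a minimal Python sketch of the classic Abbe criterion, the textbook resolution limit d = wavelength / (2 x NA); the wavelength and numerical aperture below are illustrative values, not numbers from the study.

```python
# Minimal sketch: the Abbe diffraction limit, d = wavelength / (2 * NA).
# Illustrative textbook formula and values, not the bound derived in the study.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable separation for a conventional microscope."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (~550 nm) through a high-quality oil-immersion objective (NA ~ 1.4):
print(f"{abbe_limit_nm(550, 1.4):.0f} nm")  # ~196 nm, i.e. roughly 0.2 micrometres
```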

Researchers at TU Wien (Vienna), the University of Glasgow and the University of Grenoble therefore posed the question: Where is the absolute limit of precision that is possible with optical methods? And how can this limit be approached as closely as possible? And indeed, the international team succeeded in deriving a fundamental limit on the theoretically achievable precision and in developing neural-network algorithms that, after appropriate training, come very close to this limit. This strategy is now set to be employed in imaging procedures, such as those used in medicine.

"Let's imagine we are looking at a small object behind an irregular, cloudy pane of glass," says Prof Stefan Rotter from the Institute of Theoretical Physics at TU Wien. "We don't just see an image of the object, but a complicated light pattern consisting of many lighter and darker patches of light. The question now is: how precisely can we estimate where the object actually is based on this image — and where is the absolute limit of this precision?"

Such scenarios are important in biophysics and medical imaging, for example. When light is scattered by biological tissue, information about deeper tissue structures appears to be lost. But how much of this information can be recovered in principle? This question is not merely technical: physics itself sets fundamental limits here.

The answer to this question is provided by a theoretical measure: the so-called Fisher information. This measure describes how much information an optical signal contains about an unknown parameter — such as the object position. If the Fisher information is low, precise determination is no longer possible, no matter how sophisticated the analysis of the signal. Based on this Fisher information concept, the team was able to calculate an upper limit for the theoretically achievable precision in different experimental scenarios.
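
As a rough illustration of the concept, the following Python sketch computes the Fisher information, and from it the Cramér-Rao bound on position uncertainty, for a blurred spot in Gaussian pixel noise; the spot model, noise level and pixel grid are hypothetical choices, not the team's actual scenarios.

```python
# Minimal sketch: Fisher information and the Cramer-Rao bound for estimating
# the position x0 of a blurred spot from noisy pixel readings (hypothetical numbers).
import numpy as np

pixels = np.linspace(-10, 10, 201)       # detector coordinate (micrometres)
sigma_spot, sigma_noise = 2.0, 0.05      # spot width and per-pixel noise level

def mean_signal(x0):
    """Expected intensity profile of a spot centred at x0."""
    return np.exp(-(pixels - x0) ** 2 / (2 * sigma_spot ** 2))

# For independent Gaussian pixel noise: I(x0) = sum_i (d mu_i / d x0)^2 / sigma^2.
eps = 1e-6
dmu = (mean_signal(eps) - mean_signal(-eps)) / (2 * eps)  # derivative at x0 = 0
fisher = np.sum(dmu ** 2) / sigma_noise ** 2

# Cramer-Rao: no unbiased estimator can have a smaller standard deviation.
print(f"position uncertainty >= {1 / np.sqrt(fisher):.4f} micrometres")
```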

Neural networks learn from chaotic light patterns

While the team at TU Wien was providing theoretical input, a corresponding experiment was designed and implemented by Dorian Bouchet from the University of Grenoble (France) together with Ilya Starshynov and Daniele Faccio from the University of Glasgow (UK). In this experiment, a laser beam was directed at a small, reflective object located behind a turbid liquid, so that the recorded images only showed highly distorted light patterns. The measurement conditions varied depending on the turbidity — and with them the difficulty of obtaining precise position information from the signal.

"To the human eye, these images look like random patterns," says Maximilian Weimar (TU Wien), one of the authors of the study. "But if we feed many such images — each with a known object position — into a neural network, the network can learn which patterns are associated with which positions." After sufficient training, the network was able to determine the object position very precisely, even with new, unknown patterns.

Particularly noteworthy: the precision of the prediction was only minimally worse than the theoretically achievable maximum, calculated using Fisher information. "This means that our AI-supported algorithm is not only effective, but almost optimal," says Stefan Rotter. "It achieves almost exactly the precision that is permitted by the laws of physics."

This realisation has far-reaching consequences: With the help of intelligent algorithms, optical measurement methods could be significantly improved in a wide range of areas — from medical diagnostics to materials research and quantum technology. In future projects, the research team wants to work with partners from applied physics and medicine to investigate how these AI-supported methods can be used in specific systems.

Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

Sand clouds and moon nurseries: Webb’s dazzling exoplanet reveal

Astrophysicists have gained precious new insights into how distant "exoplanets" form and what their atmospheres can look like, after using the James Webb Space Telescope to image two young exoplanets in extraordinary detail. Among the headline findings were the presence of silicate clouds in one of the planets' atmospheres, and a circumplanetary disk around the other, thought to feed material that can form moons.

In broader terms, understanding how the "YSES-1" super-solar system formed offers further insight into the origins of our own solar system, and gives us the opportunity to watch and learn as a planet similar to Jupiter forms in real time.

"Directly imaged exoplanets — planets outside our own Solar System — are the only exoplanets that we can truly take photos of," said Dr Evert Nasedkin, a Postdoctoral Fellow in Trinity College Dublin's School of Physics, who is a co-author of the research article just published in leading international journal,Nature."These exoplanets are typically still young enough that they are still hot from their formation and it is this warmth, seen in the thermal infrared, that we as astronomers observe."

Using spectroscopic instruments on board the James Webb Space Telescope (JWST), Dr Kielan Hoch and a large international team, including astronomers at Trinity College Dublin, obtained broad spectra of two young, giant exoplanets which orbit a sun-like star, YSES-1. These planets are several times larger than Jupiter, and orbit far from their host star, highlighting the diversity of exoplanet systems even around stars like our own sun.

The main goal of measuring the spectra of these exoplanets was to understand their atmospheres. Different molecules and cloud particles all absorb different wavelengths of light, imparting a characteristic fingerprint onto the emission spectrum from the planets.
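
A schematic sketch of how such a fingerprint arises: a smooth thermal continuum multiplied by absorption dips at characteristic wavelengths. The feature positions and depths below are illustrative placeholders, not the measured YSES-1 spectra.

```python
# Schematic emission spectrum: continuum times molecular/cloud absorption dips.
import numpy as np

wavelength = np.linspace(1.0, 15.0, 1000)   # micrometres
continuum = wavelength ** -2.0              # stand-in for a smooth thermal continuum

def dip(centre_um, width_um, depth):
    """Fractional absorption feature centred at centre_um."""
    return 1.0 - depth * np.exp(-((wavelength - centre_um) / width_um) ** 2)

# e.g. a water band near 1.4 um and a broad silicate feature near 10 um (toy values):
spectrum = continuum * dip(1.4, 0.1, 0.3) * dip(10.0, 1.5, 0.5)
print(f"deepest feature near {wavelength[np.argmin(spectrum / continuum)]:.1f} um")
```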

Dr Nasedkin said: "When we looked at the smaller, farther-out companion, known as YSES-1c, we found the tell-tale signature of silicate clouds in the mid-infrared. Essentially made of sand-like particles, these clouds produce the strongest silicate absorption feature observed in an exoplanet yet."

"We believe this is linked to the relative youth of the planets: younger planets are slightly larger in radius, and this extended atmosphere may allow the cloud to absorb more of the light emitted by the planet. Using detailed modelling, we were able to identify the chemical composition of these clouds, as well as details about the shapes and sizes of the cloud particles."

The inner planet, YSES-1b, offered up other surprises: while the whole planetary system is young at 16.7 million years old, it is too old for signs of the planet-forming disk to remain around the host star. But around YSES-1b itself the team observed a disk, thought to feed material onto the planet and serve as the birthplace of moons – similar to those seen around Jupiter. Only three other such disks have been identified to date, all around objects significantly younger than YSES-1b, raising new questions as to how this disk could be so long-lived.

Dr Nasedkin added: "Overall, this work highlights the incredible abilities of JWST to characterise exoplanet atmospheres. With only a handful of exoplanets that can be directly imaged, the YSES-1 system offers unique insights into the atmospheric physics and formation processes of these distant giants."

Understanding how long it takes to form planets, and what their chemical makeup is at the end of formation, is key to learning what the building blocks of our own solar system looked like. Scientists can compare these young systems to our own, which provides hints of how our own planets have changed over time.

Dr Kielan Hoch, Giacconi Fellow at the Space Telescope Science Institute, said: "This program was proposed before the launch of JWST. It was unique, as we hypothesized that the NIRSpec instrument on the future telescope should be able to observe both planets in its field of view in a single exposure, essentially giving us two for the price of one. Our simulations ended up being correct post-launch, providing the most detailed dataset of a multi-planet system to date."

"The YSES-1 system planets are also too widely separated to be explained through current formation theories, so the additional discoveries of distinct silicate clouds around YSES-1 c and small hot dusty material around YSES-1 b leads to more mysteries and complexities for determining how planets form and evolve."

"This research was also led by a team of early career researchers such as postdocs and graduate students who make up the first five authors of the paper. This work would not have been possible without their creativity and hard work, which is what aided in making these incredible multidisciplinary discoveries."

Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.

Ginger vs. Cancer: Natural compound targets tumor metabolism

Looking to nature for answers to complex questions can reveal new and unprecedented results, even down to the molecular workings of cells.

For instance, human cells oxidize glucose to produce ATP (adenosine triphosphate), an energy source necessary for life. Cancer cells, by contrast, produce ATP through glycolysis, converting glucose into pyruvic acid and lactic acid without using oxygen, even under conditions where oxygen is present. This method of producing ATP, known as the Warburg effect, is considered inefficient, raising the question of why cancer cells choose this energy pathway to fuel their proliferation and survival.
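
A back-of-envelope comparison makes the apparent inefficiency concrete; the yields below are approximate textbook values (estimates vary by source), not figures from this study.

```python
# Approximate ATP yield per glucose molecule (textbook ballpark figures).
ATP_GLYCOLYSIS_ONLY = 2    # net ATP from glycolysis alone
ATP_WITH_OXIDATION = 30    # ~30-32 ATP with full oxidative phosphorylation

ratio = ATP_WITH_OXIDATION / ATP_GLYCOLYSIS_ONLY
print(f"A glycolysis-only cell needs ~{ratio:.0f}x more glucose for the same ATP.")
```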

In search of answers, Associate Professor Akiko Kojima-Yuasa's team at Osaka Metropolitan University's Graduate School of Human Life and Ecology analyzed the cinnamic acid ester ethyl p-methoxycinnamate, a main component of kencur ginger, and its mechanism of action. In previous research, the team discovered that ethyl p-methoxycinnamate has inhibitory effects on cancer cells. Furthering that work, they administered the acid ester to Ehrlich ascites tumor cells to assess which component of the cancer cells' energy pathway was being affected.

Results revealed that the acid ester inhibits ATP production by disrupting de novo fatty acid synthesis and lipid metabolism, rather than through glycolysis as commonly theorized. Further, the researchers discovered that this inhibition triggered increased glycolysis, which acted as a possible survival mechanism in the cells. They attributed ethyl p-methoxycinnamate's inability to induce outright cell death to this adaptability.

"These findings not only provide new insights that supplement and expand the theory of the Warburg effect, which can be considered the starting point of cancer metabolism research, but are also expected to lead to the discovery of new therapeutic targets and the development of new treatment methods," stated Professor Kojima-Yuasa.

Materials provided by Osaka Metropolitan University. Note: Content may be edited for style and length.

The global rule that predicts where life thrives—and where it fails

A simple rule that seems to govern how life is organized on Earth is described in a new study published on June 4 in Nature Ecology & Evolution.

The research team, led by Umeå University and involving the University of Reading, believes this rule helps explain why species are spread the way they are across the planet. The discovery will help scientists understand life on Earth – including how ecosystems respond to global environmental changes.

The rule is simple: in every region on Earth, most species cluster together in small 'hotspot' areas, then gradually spread outward with fewer and fewer species able to survive farther away from these hotspots.

Rubén Bernardo-Madrid, lead author and researcher at Umeå University (Sweden), said: "In every bioregion, there is always a core area where most species live. From that core, species expand into surrounding areas, but only a subset manages to persist. It seems these cores provide optimal conditions for species survival and diversification, acting as a source from which biodiversity radiates outward."

This pattern highlights the disproportionate ecological role these small areas play in sustaining the biodiversity of entire bioregions. Jose Luis Tella, from the Estación Biológica de Doñana-CSIC (Spain), said: "Safeguarding these core zones is therefore essential, as they represent critical priorities for conservation strategies."

Researchers studied bioregions across the world, examining species from very different life forms: amphibians, birds, dragonflies, mammals, marine rays, reptiles, and trees. Given the vast differences in life strategies — some species fly, others crawl, swim, or remain rooted — and the contrasting environmental and historical backgrounds of each bioregion, the researchers expected that species distribution would vary widely across bioregions. Surprisingly, they found the same pattern everywhere.

The pattern points to a general process known as environmental filtering. Environmental filtering has long been considered a key theoretical principle in ecology for explaining species distribution on Earth. Until now, however, global empirical evidence has been scarce. This study provides broad confirmation across multiple branches of life and at a planetary scale.
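
A toy simulation shows how environmental filtering can generate exactly this pattern: if every species tolerates a benign core plus some random breadth of conditions, richness automatically peaks at the core and decays outward. This is a schematic of the mechanism, not the study's analysis.

```python
# Toy environmental-filtering model: species persist only where an environmental
# gradient stays within their tolerance window; all windows include the benign core.
import numpy as np

rng = np.random.default_rng(1)
gradient = np.linspace(-5, 5, 201)     # e.g. increasing drought, cold or salinity
n_species = 500

# Each species tolerates the core (0) plus a random half-width of conditions:
half_widths = rng.exponential(scale=1.0, size=n_species)

richness = np.array([(np.abs(x) <= half_widths).sum() for x in gradient])
print(richness[100], richness[0])      # hundreds of species at the core, a handful at the edge
```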

Professor Manuela González-Suárez, co-author of the study at the University of Reading, said: "It doesn't matter whether the limiting factor is heat, cold, drought, or salinity. The result is always the same: only species able to tolerate local conditions establish and persist, creating a predictable distribution of life on Earth."

The existence of a universal organising mechanism has profound implications for our understanding of life on Earth. Joaquín Calatayud, co-author from the Rey Juan Carlos University (Spain), said: "This pattern suggests that life on Earth may be, to some extent, predictable." Such predictable patterns can help scientists trace how life has diversified through time and offer valuable insights into how ecosystems might react to global environmental changes.

Materials provided by University of Reading. Note: Content may be edited for style and length.

Unusual carbon build-up found in lungs of COPD patients

Cells taken from the lungs of people with chronic obstructive pulmonary disease (COPD) have a larger accumulation of soot-like carbon deposits compared to cells taken from people who smoke but do not have COPD, according to a study published today, June 10, in ERJ Open Research. Carbon can enter the lungs via cigarette smoke, diesel exhaust and polluted air.

The cells, called alveolar macrophages, normally protect the body by engulfing any particles or bacteria that reach the lungs. But, in their new study, researchers found that when these cells are exposed to carbon they grow larger and encourage inflammation.

The research was led by Dr James Baker and Dr Simon Lea from the University of Manchester, UK. Dr Baker said: "COPD is a complex disease that has a number of environmental and genetic risk factors. One factor is exposure to carbon from smoking or breathing polluted air.

"We wanted to study what happens in the lungs of COPD patients when this carbon builds up in alveolar macrophage cells, as this may influence the cells' ability to protect the lungs."

The researchers used samples of lung tissue taken during surgery for suspected lung cancer. They studied samples (that did not contain any cancer cells) from 28 people who had COPD and 15 people who were smokers but did not have COPD.

Looking specifically at alveolar macrophage cells under a microscope, the researchers measured the sizes of the cells and the amount of carbon accumulated in the cells.

They found that the average amount of carbon was more than three times greater in alveolar macrophage cells from COPD patients compared to smokers. Cells containing carbon were consistently larger than cells with no visible carbon.

Patients with larger deposits of carbon in their alveolar macrophages had worse lung function, according to a measure called FEV1%, which quantifies how much and how forcefully patients can breathe out.
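
For readers unfamiliar with the measure, a minimal sketch of how an FEV1 percent-predicted figure is computed; the predicted value below is a made-up placeholder, not a clinical reference equation.

```python
# FEV1%: measured one-second forced expiratory volume as a percentage of the
# value predicted for a healthy person of the same age, sex and height.
def fev1_percent(measured_litres: float, predicted_litres: float) -> float:
    return 100.0 * measured_litres / predicted_litres

# e.g. a patient exhaling 2.1 L against a hypothetical predicted 3.5 L:
print(f"{fev1_percent(2.1, 3.5):.0f}% predicted")   # 60%, i.e. reduced lung function
```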

When the researchers exposed macrophages to carbon particles in the lab, they saw the cells become much larger and found that they were producing higher levels of proteins that lead to inflammation.

Dr Lea said: "As we compared cells from COPD patients with cells from smokers, we can see that this build-up of carbon is not a direct result of cigarette smoking. Instead, we show alveolar macrophages in COPD patients contain more carbon and are inherently different in terms of their form and function compared to those in smokers.

"Our research raises an interesting question as to the cause of the increased levels of carbon in COPD patients' macrophages. It could be that people with COPD are less able to clear the carbon they breathe in. It could also be that people exposed to more particulate matter are accumulating this carbon and developing COPD as a result.

"In future, it would be interesting to study how this carbon builds up and how lung cells respond over a longer period of time."

Professor Fabio Ricciardolo is Chair of the European Respiratory Society's group on monitoring airway disease, based at the University of Torino, Italy, and was not involved in the research. He said: "This set of experiments suggests that people with COPD accumulate unusually large amounts of carbon in the cells of their lungs. This build-up seems to be altering those cells, potentially causing inflammation in the lungs and leading to worse lung function.

"In addition, this research offers some clues about why polluted air might cause or worsen COPD. However, we know that smoking and air pollution are risk factors for COPD and other lung conditions, so we need to reduce levels of pollution in the air we breathe and we need to help people to quit smoking."

Materials provided by European Respiratory Society. Note: Content may be edited for style and length.

This mind-bending physics breakthrough could redefine timekeeping

How can the strange properties of quantum particles be exploited to perform extremely accurate measurements? This question is at the heart of the research field of quantum metrology. One example is the atomic clock, which uses the quantum properties of atoms to measure time much more accurately than would be possible with conventional clocks.

However, the fundamental laws of quantum physics always involve a certain degree of uncertainty. Some randomness or a certain amount of statistical noise has to be accepted. This results in fundamental limits to the accuracy that can be achieved. Until now, it seemed to be an immutable law that a clock twice as accurate requires at least twice as much energy. But now a team of researchers from TU Wien, Chalmers University of Technology in Sweden, and the University of Malta has demonstrated that special tricks can be used to increase accuracy exponentially. The crucial point is using two different time scales – similar to how a clock has a second hand and a minute hand.

"We have analyzed in principle, which clocks could be theoretically possible," says Prof. Marcus Huber from the Atomic Institute at the TU Wien. "Every clock needs two components: first, a time base generator – such as a pendulum in a pendulum clock, or even a quantum oscillation. And second, a counter – any element that counts how many time units defined by the time base generator have already passed."

The time base generator can always return to exactly the same state. After one complete oscillation, the pendulum of a pendulum clock is exactly where it was before. After a certain number of oscillations, the caesium atom in an atomic clock returns to exactly the same state it was in before. The counter, on the other hand, must change – otherwise the clock is useless.

"This means that every clock must be connected to an irreversible process," says Florian Meier from TU Wien. "In the language of thermodynamics, this means that every clock increases the entropy in the universe; otherwise, it is not a clock." The pendulum of a pendulum clock generates a little heat and disorder among the air molecules around it, and every laser beam that reads the state of an atomic clock generates heat, radiation and thus entropy.

"We can now consider how much entropy a hypothetical clock with extremely high precision would have to generate – and, accordingly, how much energy such a clock would need," says Marcus Huber. "Until now, there seemed to be a linear relationship: if you want a thousand times the precision, you have to generate at least a thousand times as much entropy and expend a thousand times as much energy."

Quantum time and classical time

However, the research team at TU Wien, together with the Austrian Academy of Sciences (ÖAW) in Vienna and the teams from Chalmers University of Technology, Sweden, and University of Malta, has now shown that this apparent rule can be circumvented by using two different time scales.

"For example, you can use particles that move from one area to another to measure time, similar to how grains of sand indicate the time by falling from the top of the glass to the bottom," says Florian Meier. You can connect a whole series of such time-measuring devices in series and count how many of them have already passed through – similar to how one clock hand counts how many laps the other clock hand has already completed.

"This way, you can increase accuracy, but not without investing more energy," says Marcus Huber. "Because every time one clock hand completes a full rotation and the other clock hand is measured at a new location – you could also say every time the environment around it notices that this hand has moved to a new location – the entropy increases. This counting process is irreversible."

However, quantum physics also allows for another kind of particle transport: the particles can also travel through the entire structure, i.e. across the entire clock dial, without being measured anywhere. In a sense, the particle is then everywhere at once during this process; it has no clearly defined location until it finally arrives – and only then is it actually measured, in an irreversible process that increases entropy.

Like second and minute clock hands

"So we have a fast process that does not cause entropy – quantum transport – and a slow one, namely the arrival of the particle at the very end," explains Yuri Minoguchi, TU Wien. "The crucial thing about our method is that one hand behaves purely in terms of quantum physics, and only the other, slower hand actually has an entropy-generating effect."

The team has now been able to show that this strategy makes accuracy grow exponentially with the entropy generated. This means that much higher precision can be achieved than would have been thought possible according to previous theories. "What's more, the theory could be tested in the real world using superconducting circuits, one of the most advanced quantum technologies currently available," says Simone Gasparinetti, co-author of the study and leader of the experimental team at Chalmers. "This is an important result for research into high-precision quantum measurements and suppression of unwanted fluctuations," says Marcus Huber, "and at the same time it helps us to better understand one of the great unsolved mysteries of physics: the connection between quantum physics and thermodynamics."

Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

From the Andes to the beginning of time: Telescopes detect 13-billion-year-old signal

For the first time, scientists have used Earth-based telescopes to look back over 13 billion years to see how the first stars in the universe affected light emitted by the Big Bang.

Using telescopes high in the Andes mountains of northern Chile, astrophysicists have measured this polarized microwave light to create a clearer picture of one of the least understood epochs in the history of the universe, the Cosmic Dawn.

"People thought this couldn't be done from the ground. Astronomy is a technology-limited field, and microwave signals from the Cosmic Dawn are famously difficult to measure," said Tobias Marriage, project leader and a Johns Hopkins professor of physics and astronomy. "Ground-based observations face additional challenges compared to space. Overcoming those obstacles makes this measurement a significant achievement."

Cosmic microwaves are mere millimeters in wavelength and very faint; the signal from polarized microwave light is about a million times fainter still. On Earth, broadcast radio waves, radar, and satellites can drown out this signal, while changes in the atmosphere, weather, and temperature can distort it. Even in perfect conditions, measuring this type of microwave requires extremely sensitive equipment.

Scientists from the U.S. National Science Foundation's Cosmology Large Angular Scale Surveyor, or CLASS, project used telescopes uniquely designed to detect the fingerprints left by the first stars in the relic Big Bang light — a feat that previously had only been accomplished by technology deployed in space, such as NASA's Wilkinson Microwave Anisotropy Probe (WMAP) and the European Space Agency's Planck space telescopes.

The new research, led by Johns Hopkins University and the University of Chicago, was published today in The Astrophysical Journal.

By comparing the CLASS telescope data with the data from the Planck and WMAP space missions, the researchers identified interference and homed in on a common signal from the polarized microwave light.

Polarization happens when light waves run into something and then scatter.

"When light hits the hood of your car and you see a glare, that's polarization. To see clearly, you can put on polarized glasses to take away glare," said first author Yunyang Li, who was a PhD student at Johns Hopkins and then a fellow at University of Chicago during the research. "Using the new common signal, we can determine how much of what we're seeing is cosmic glare from light bouncing off the hood of the Cosmic Dawn, so to speak."

After the Big Bang, the universe was a fog of electrons so dense that light energy was unable to escape. As the universe expanded and cooled, protons captured the electrons to form neutral hydrogen atoms, and microwave light was then free to travel through the space in between. When the first stars formed during the Cosmic Dawn, their intense energy ripped electrons free from the hydrogen atoms. The research team measured the probability that a photon from the Big Bang encountered one of the freed electrons on its way through the cloud of ionized gas and skittered off course.
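
In cosmology this probability is usually expressed through the optical depth to reionization, tau, with the scattered fraction given by 1 - exp(-tau). The sketch below uses tau values in the range reported by recent CMB analyses as illustrative inputs, not the CLASS result.

```python
# Fraction of Big-Bang photons scattered by free electrons for a given
# optical depth tau (illustrative values of order those in recent CMB fits).
import math

for tau in (0.05, 0.06):
    print(f"tau = {tau}: ~{100 * (1 - math.exp(-tau)):.1f}% of photons scattered")
```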

The findings will help better define signals coming from the residual glow of the Big Bang, or the cosmic microwave background, and form a clearer picture of the early universe.

"Measuring this reionization signal more precisely is an important frontier of cosmic microwave background research," said Charles Bennett, a Bloomberg Distinguished Professor at Johns Hopkins who led the WMAP space mission. "For us, the universe is like a physics lab. Better measurements of the universe help to refine our understanding of dark matter and neutrinos, abundant but elusive particles that fill the universe. By analyzing additional CLASS data going forward, we hope to reach the highest possible precision that's achievable."

Building on research published last year that used the CLASS telescopes to map 75% of the night sky, the new results also help solidify the CLASS team's approach.

"No other ground-based experiment can do what CLASS is doing," says Nigel Sharp, program director in the NSF Division of Astronomical Sciences which has supported the CLASS instrument and research team since 2010. "The CLASS team has greatly improved measurement of the cosmic microwave polarization signal and this impressive leap forward is a testament to the scientific value produced by NSF's long-term support."

The CLASS observatory operates in the Parque Astronómico Atacama in northern Chile under the auspices of the Agencia Nacional de Investigación y Desarrollo.

Other collaborators are at Villanova University, the NASA Goddard Space Flight Center, the University of Chicago, the National Institute of Standards and Technology, the Argonne National Laboratory, the Los Alamos National Laboratory, the Harvard-Smithsonian Center for Astrophysics, the University of Oslo, Massachusetts Institute of Technology, and the University of British Columbia. Collaborators in Chile are at the Universidad de Chile, Pontificia Universidad Católica de Chile, Universidad de Concepción, and the Universidad Católica de la Santísima Concepción.

The observatory is funded by the National Science Foundation, Johns Hopkins, and private donors.

Materials provided by Johns Hopkins University. Note: Content may be edited for style and length.

Clean energy, dirty secrets: Inside the corruption plaguing California's solar market

Solar power is growing by leaps and bounds in the United States, propelled by climate mitigation policies and carbon-free energy goals — and California is leading the way as the nation's top producer of solar electricity. A new study in Energy Strategy Reviews has revealed a dark side to the state's breakneck pace for solar investment, deployment, and adoption, taking a first-time look at patterns of public and private sector corruption in the California solar market.

Researchers at the Boston University Institute for Global Sustainability (IGS) have identified seven distinct types of corruption abuses and risks in California solar energy. Among them is favoritism in project approvals, including a high-profile incident in the senior ranks of the U.S. Department of the Interior involving an intimate relationship with a solar company lobbyist. To fully realize a just energy transition, the authors call for major solar reforms in California as the U.S. increasingly relies on solar energy to decarbonize its electricity sector.

"It's a wake-up call that the solar industry cannot continue on its current trajectory of bad governance and bad behavior."

"In this groundbreaking study, we find that efforts to accelerate solar infrastructure deployment in California end up contributing to a sobering array of corruption practices and risks. These include shocking abuses of power in the approval and licensing phases as well as the displacement of Indigenous groups, and also nefarious patterns of tax evasion or the falsification of information about solar projects," says lead author Benjamin Sovacool, who is the director of IGS and a Boston University professor of earth and environment. "It's a wake-up call that the solar industry cannot continue on its current trajectory of bad governance and bad behavior."

Drawing on a literature review and original interviews and fieldwork, the study's authors arrive at a framework that helps explore the wider socio-political realities driving corruption at a time of explosive growth in the California solar market, from 2010 to 2024. During this period, the state's solar energy production increased exponentially, reaching 79,544 gigawatt hours in 2024, or enough to power approximately 7.4 million U.S. households for a year, according to the State of Renewable Energy dashboard.
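
A quick sanity check of those two figures (rough arithmetic, not part of the study):

```python
# 79,544 GWh spread over ~7.4 million households implies a per-household figure
# close to typical U.S. annual consumption of roughly 10,000-11,000 kWh.
generation_gwh = 79_544
households = 7_400_000

kwh_per_household = generation_gwh * 1_000_000 / households
print(f"~{kwh_per_household:,.0f} kWh per household per year")   # ~10,750
```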

The research implicates solar energy in numerous corruption practices and risks that have adversely affected communities, policymaking and regulation, and siting decisions and planning.

"The most eye-opening finding for me is how common corruption is at every level of solar development, from small-scale vendors to high-level government officials, even in a well-regulated, progressive state like California," says co-author Alexander Dunlap, an IGS research fellow.

Favoritism and other forms of corruption

To understand how corruption undermines the solar market, the researchers focused on numerous utility-scale deployments in Riverside County, the fourth most populous county in California. They set out to document patterns of perceived corruption from a broad range of voices, gaining insights through organized focus groups and observation at different solar sites, as well as conducting interviews door-to-door and in a local supermarket parking lot. Respondents included residents in Blythe and Desert Center, California, impacted by solar energy development, solar construction workers, non-governmental organizations, solar company employees, federal agencies, and state and local governments.

While the study's authors acknowledge the difficulty of confirming individual claims of corruption, their mixed-methods research approach combines these personal assertions with analysis of news stories, court testimony, and other official sources to support their findings.

They point to a blend of public, private, social, and political patterns of corruption in the California solar energy market.

Outside of a few headline-making scandals, corruption in California's renewable energy sector has gone largely unexamined, allowing the underlying dynamics at play to erode the potential of a just energy transition. To remedy this, the study's authors recommend corruption risk mapping to document problematic practices and entities, subsidy registers and sunset clauses to deter rent-seeking and tax evasion, transparency initiatives aimed at environmental changes and data production (for Environmental Impact Assessment), strong enforcement of anti-corruption laws, and shared ownership models for solar to improve accountability.

This newly published study, "Sex for Solar? Examining Patterns of Public and Private Sector Corruption within the Booming California Solar Energy Market," is part of a larger IGS research project looking at injustices in U.S. solar and wind energy supply chains.

Materials provided by Boston University. Note: Content may be edited for style and length.

Scientists discover natural cancer-fighting sugar in sea cucumbers

Sea cucumbers are the ocean's janitors, cleaning the seabed and recycling nutrients back into the water. But this humble marine invertebrate could also hold the key to stopping the spread of cancer.

A sugar compound found in sea cucumbers can effectively block Sulf-2, an enzyme that plays a major role in cancer growth, according to a University of Mississippi-led study published inGlycobiology.

"Marine life produces compounds with unique structures that are often rare or not found in terrestrial vertebrates," said Marwa Farrag, a fourth-year doctoral candidate in the UM Department of BioMolecular Sciences.

"And so, the sugar compounds in sea cucumbers are unique. They aren't commonly seen in other organisms. That's why they're worth studying."

Farrag, a native of Assiut, Egypt, and the study's lead author, worked with a team of researchers from Ole Miss and Georgetown University on the project.

Human cells, and those of most mammals, are covered in tiny, hairlike structures called glycans that help with cell communication, immune responses and the recognition of threats such as pathogens. Cancer cells alter the expression of certain enzymes, including Sulf-2, which in turn modifies the structure of glycans. This modification helps cancer spread.

"The cells in our body are essentially covered in 'forests' of glycans," said Vitor Pomin, associate professor of pharmacognosy. "And enzymes change the function of this forest – essentially prunes the leaves of that forest.

"If we can inhibit that enzyme, theoretically, we are fighting against the spread of cancer."

Using both computer modeling and laboratory testing, the research team found that the sugar – fucosylated chondroitin sulfate – from the sea cucumber Holothuria floridana can effectively inhibit Sulf-2.
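
For context, here is a minimal sketch of the standard single-site inhibition model often used to quantify results like this; the IC50 value below is a hypothetical placeholder, not a number measured in the study.

```python
# Single-site inhibition: remaining enzyme activity = 1 / (1 + [I] / IC50).
def remaining_activity(inhibitor_um: float, ic50_um: float) -> float:
    return 1.0 / (1.0 + inhibitor_um / ic50_um)

for conc in (0.1, 1.0, 10.0):   # inhibitor concentrations in micromolar (illustrative)
    print(f"[I] = {conc:>4} uM -> {remaining_activity(conc, ic50_um=1.0):.0%} activity")
```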

"We were able to compare what we generated experimentally with what the simulation predicted, and they were consistent," said Robert Doerksen, professor of medicinal chemistry. "That gives us more confidence in the results."

Unlike other Sulf-2 regulating medications, the sea cucumber compound does not interfere with blood clotting, said Joshua Sharp, UM associate professor of pharmacology.

"As you can imagine, if you are treating a patient with a molecule that inhibits blood coagulation, then one of the adverse effects that can be pretty devastating is uncontrolled bleeding," he said. "So, it's very promising that this particular molecule that we're working with doesn't have that effect."

As a marine-based cancer therapy, the sea cucumber compound may be easier to create and safer to use.

"Some of these drugs we have been using for 100 years, but we're still isolating them from pigs because chemically synthesizing it would be very, very difficult and very expensive," Sharp said. "That's why a natural source is really a preferred way to get at these carbohydrate-based drugs."

Unlike extracting carbohydrate-based drugs from pigs or other land mammals, extracting the compound from sea cucumbers does not carry a risk of transferring viruses and other harmful agents, Pomin said.

"It's a more beneficial and cleaner resource," he said. "The marine environment has many advantages compared to more traditional sources."

But sea cucumbers – some variants of which are a culinary delicacy in the Pacific Rim – aren't so readily abundant that scientists could go out and harvest enough to create a line of medication. The next step in the research is to find a way to synthesize the sugar compound for future testing.

"One of the problems in developing this as a drug would be the low yield, because you can't get tons and tons of sea cucumbers," Pomin said. "So, we have to have a chemical route, and when we've developed that, we can begin applying this to animal models."

The interdisciplinary nature of the scientific study, which featured researchers from chemistry, pharmacognosy and computational biology, underscored the importance of cross-disciplinary collaboration in tackling complex diseases like cancer, Pomin said.

"This research took multiple expertise – mass spectrometry, biochemistry, enzyme inhibition, computation," Pomin said. "It's the effort of the whole team."

This work is based on material supported by the National Institutes of Health grant nos. 1P20GM130460-01A1-7936, R01CA238455, P30CA51008 and S10OD028623.

Materials provided by University of Mississippi. Note: Content may be edited for style and length.