Sand clouds and moon nurseries: Webb’s dazzling exoplanet reveal

Astrophysicists have gained precious new insights into how distant "exoplanets" form and what their atmospheres can look like, after using the James Webb Space Telescope to image two young exoplanets in extraordinary detail. Among the headline findings were the presence of silicate clouds in one of the planets' atmospheres, and a circumplanetary disk around the other, thought to feed material that can form moons.

In broader terms, understanding how the "YSES-1" super-solar system formed offers further insight into the origins of our own solar system, and gives us the opportunity to watch and learn as a planet similar to Jupiter forms in real time.

"Directly imaged exoplanets — planets outside our own Solar System — are the only exoplanets that we can truly take photos of," said Dr Evert Nasedkin, a Postdoctoral Fellow in Trinity College Dublin's School of Physics, who is a co-author of the research article just published in leading international journal,Nature."These exoplanets are typically still young enough that they are still hot from their formation and it is this warmth, seen in the thermal infrared, that we as astronomers observe."

Using spectroscopic instruments on board the James Webb Space Telescope (JWST), Dr Kielan Hoch and a large international team, including astronomers at Trinity College Dublin, obtained broad spectra of two young, giant exoplanets which orbit a sun-like star, YSES-1. These planets are several times larger than Jupiter, and orbit far from their host star, highlighting the diversity of exoplanet systems even around stars like our own sun.

The main goal of measuring the spectra of these exoplanets was to understand their atmospheres. Different molecules and cloud particles all absorb different wavelengths of light, imparting a characteristic fingerprint onto the emission spectrum from the planets.
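To illustrate the idea (this is not the team's actual retrieval pipeline), the sketch below models a young planet's thermal emission as a blackbody and carves a hypothetical absorption band out of it. The temperature, band position, width and depth are placeholder values chosen only for illustration.

```python
import numpy as np

# Illustrative sketch: a hot young planet's thermal emission approximated as a
# blackbody, with a hypothetical absorption band (e.g. a silicate-like feature
# near 10 micrometres) carved out of it. All numbers are placeholders.

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wavelength_um, T):
    """Blackbody spectral radiance at a wavelength (micrometres) and temperature T (K)."""
    lam = wavelength_um * 1e-6
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

wavelengths = np.linspace(1, 15, 500)        # micrometres
continuum = planck(wavelengths, 1200.0)      # assumed ~1200 K young giant planet

# Hypothetical Gaussian absorption band: centre, width and depth are made up.
band = 1 - 0.4 * np.exp(-0.5 * ((wavelengths - 10.0) / 1.0) ** 2)
observed = continuum * band

# The ratio observed/continuum is the "fingerprint" a retrieval model would fit.
idx = np.argmin(np.abs(wavelengths - 10.0))
print(f"Relative flux at the band centre (~10 um): {observed[idx] / continuum[idx]:.2f}")
```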

Dr Nasedkin said: "When we looked at the smaller, farther-out companion, known as YSES 1-c, we found the tell-tale signature of silicate clouds in the mid-infrared. Essentially made of sand-like particles, this is the strongest silicate absorption feature observed in an exoplanet yet."

"We believe this is linked to the relative youth of the planets: younger planets are slightly larger in radius, and this extended atmosphere may allow the cloud to absorb more of the light emitted by the planet. Using detailed modelling, we were able to identify the chemical composition of these clouds, as well as details about the shapes and sizes of the cloud particles."

The inner planet, YSES-1b, offered up other surprises: while the whole planetary system is young, at 16.7 million years old, it is too old to find signs of the planet-forming disk around the host star. But around YSES-1b the team observed a disk around the planet itself, thought to feed material onto the planet and serve as the birthplace of moons – similar to those seen around Jupiter. Only three other such disks have been identified to date, all around objects significantly younger than YSES-1b, raising new questions as to how this disk could be so long-lived.

Dr Nasedkin added: "Overall, this work highlights the incredible abilities of JWST to characterise exoplanet atmospheres. With only a handful of exoplanets that can be directly imaged, the YSES-1 system offers unique insights into the atmospheric physics and formation processes of these distant giants."

In broad terms, understanding how this super-solar system formed offers further insight into the origins of our own solar system, giving us an opportunity to watch as a planet similar to Jupiter forms in real time. Understanding how long it takes to form planets, and the chemical makeup at the end of formation is important to learn what the building blocks of our own solar system looked like. Scientists can compare these young systems to our own, which provides hints of how our own planets have changed over time.

Dr Kielan Hoch, Giacconi Fellow at the Space Telescope Science Institute, said: "This program was proposed before the launch of JWST. It was unique, as we hypothesized that the NIRSpec instrument on the future telescope should be able to observe both planets in its field of view in a single exposure, essentially, giving us two for the price of one. Our simulations ended up being correct post-launch, providing the most detailed dataset of a multi-planet system to-date."

"The YSES-1 system planets are also too widely separated to be explained through current formation theories, so the additional discoveries of distinct silicate clouds around YSES-1 c and small hot dusty material around YSES-1 b leads to more mysteries and complexities for determining how planets form and evolve."

"This research was also led by a team of early career researchers such as postdocs and graduate students who make up the first five authors of the paper. This work would not have been possible without their creativity and hard work, which is what aided in making these incredible multidisciplinary discoveries."

Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.

Ginger vs. Cancer: Natural compound targets tumor metabolism

Looking to nature for answers to complex questions can reveal new and unprecedented results that can even affect cells on molecular levels.

For instance, human cells oxidize glucose to produce ATP (adenosine triphosphate), an energy source necessary for life. Cancer cells, by contrast, produce ATP through glycolysis, converting glucose into pyruvic acid and lactic acid without using oxygen, even when oxygen is present. This method of producing ATP, known as the Warburg effect, is considered inefficient, raising the question of why cancer cells choose this energy pathway to fuel their proliferation and survival.
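A rough back-of-envelope comparison, using textbook ATP yields (roughly 2 ATP per glucose from glycolysis alone versus around 30 or more from full oxidation; exact figures vary by source), shows why the Warburg effect is usually called inefficient:

```python
# Rough textbook numbers; exact yields vary between sources and conditions.
ATP_GLYCOLYSIS_ONLY = 2     # net ATP per glucose from glycolysis to lactate
ATP_FULL_OXIDATION = 32     # approximate ATP per glucose with oxidative phosphorylation

glucose_needed_ratio = ATP_FULL_OXIDATION / ATP_GLYCOLYSIS_ONLY
print(f"A cell relying on glycolysis alone needs roughly {glucose_needed_ratio:.0f}x "
      "more glucose to make the same ATP as full oxidation.")
```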

In search of an explanation, Associate Professor Akiko Kojima-Yuasa's team at Osaka Metropolitan University's Graduate School of Human Life and Ecology analyzed the cinnamic acid ester ethyl p-methoxycinnamate, a main component of kencur ginger, and its mechanism of action. In previous research, the team discovered that ethyl p-methoxycinnamate has inhibitory effects on cancer cells. Building on that work, they administered the acid ester to Ehrlich ascites tumor cells to assess which component of the cancer cells' energy pathway was being affected.

Results revealed that the acid ester inhibits ATP production by disrupting de novo fatty acid synthesis and lipid metabolism, rather than by acting on glycolysis as commonly theorized. Further, the researchers discovered that this inhibition triggered increased glycolysis, which acted as a possible survival mechanism in the cells. This adaptability was theorized to stem from ethyl p-methoxycinnamate's inability to induce cell death.

"These findings not only provide new insights that supplement and expand the theory of the Warburg effect, which can be considered the starting point of cancer metabolism research, but are also expected to lead to the discovery of new therapeutic targets and the development of new treatment methods," stated Professor Kojima-Yuasa.

Materials provided by Osaka Metropolitan University. Note: Content may be edited for style and length.

The global rule that predicts where life thrives—and where it fails

A simple rule that seems to govern how life is organized on Earth is described in a new study published on June 4 in Nature Ecology & Evolution.

The research team, led by Umeå University and involving the University of Reading, believes this rule helps explain why species are spread the way they are across the planet. The discovery will help scientists understand life on Earth – including how ecosystems respond to global environmental changes.

The rule is simple: in every region on Earth, most species cluster together in small 'hotspot' areas, then gradually spread outward with fewer and fewer species able to survive farther away from these hotspots.

Rubén Bernardo-Madrid, lead author and researcher at Umeå University (Sweden), said: "In every bioregion, there is always a core area where most species live. From that core, species expand into surrounding areas, but only a subset manages to persist. It seems these cores provide optimal conditions for species survival and diversification, acting as a source from which biodiversity radiates outward."

This pattern highlights the disproportionate ecological role these small areas play in sustaining the biodiversity of entire bioregions. Jose Luis Tella, from the Estación Biológica de Doñana-CSIC (Spain), said: "Safeguarding these core zones is therefore essential, as they represent critical priorities for conservation strategies."

Researchers studied bioregions across the world, examining species from very different life forms: amphibians, birds, dragonflies, mammals, marine rays, reptiles, and trees. Given the vast differences in life strategies — some species fly, others crawl, swim, or remain rooted — and the contrasting environmental and historical backgrounds of each bioregion, the researchers expected that species distribution would vary widely across bioregions. Surprisingly, they found the same pattern everywhere.

The pattern points to a general process known as environmental filtering. Environmental filtering has long been considered a key theoretical principle in ecology for explaining species distribution on Earth. Until now, however, global empirical evidence has been scarce. This study provides broad confirmation across multiple branches of life and at a planetary scale.

Professor Manuela González-Suárez, co-author of the study at the University of Reading, said: "It doesn't matter whether the limiting factor is heat, cold, drought, or salinity. The result is always the same: only species able to tolerate local conditions establish and persist, creating a predictable distribution of life on Earth."

The existence of a universal organising mechanism has profound implications for our understanding of life on Earth. Joaquín Calatayud, co-author from the Rey Juan Carlos University (Spain), said: "This pattern suggests that life on Earth may be, to some extent, predictable." Such predictable patterns can help scientists trace how life has diversified through time and offer valuable insights into how ecosystems might react to global environmental changes.

Materials provided by University of Reading. Note: Content may be edited for style and length.

Unusual carbon build-up found in lungs of COPD patients

Cells taken from the lungs of people with chronic obstructive pulmonary disease (COPD) have a larger accumulation of soot-like carbon deposits compared to cells taken from people who smoke but do not have COPD, according to a study published today, June 10, in ERJ Open Research. Carbon can enter the lungs via cigarette smoke, diesel exhaust and polluted air.

The cells, called alveolar macrophages, normally protect the body by engulfing any particles or bacteria that reach the lungs. But, in their new study, researchers found that when these cells are exposed to carbon they grow larger and encourage inflammation.

The research was led by Dr James Baker and Dr Simon Lea from the University of Manchester, UK. Dr Baker said: "COPD is a complex disease that has a number of environmental and genetic risk factors. One factor is exposure to carbon from smoking or breathing polluted air.

"We wanted to study what happens in the lungs of COPD patients when this carbon builds up in alveolar macrophage cells, as this may influence the cells' ability to protect the lungs."

The researchers used samples of lung tissue taken during surgery for suspected lung cancer. They studied samples (that did not contain any cancer cells) from 28 people who had COPD and 15 people who were smokers but did not have COPD.

Looking specifically at alveolar macrophage cells under a microscope, the researchers measured the sizes of the cells and the amount of carbon accumulated in the cells.

They found that the average amount of carbon was more than three times greater in alveolar macrophage cells from COPD patients compared to smokers. Cells containing carbon were consistently larger than cells with no visible carbon.

Patients with larger deposits of carbon in their alveolar macrophages had worse lung function, according to a measure called FEV1%, which quantifies how much and how forcefully patients can breathe out.
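FEV1% is not defined further in the article; if it refers, as is common in COPD studies, to FEV1 expressed as a percentage of the value predicted for a healthy person of the same age, sex and height, the calculation is a simple ratio. The values below are made up purely for illustration.

```python
def fev1_percent_predicted(measured_fev1_l: float, predicted_fev1_l: float) -> float:
    """FEV1 as a percentage of the value predicted for a comparable healthy person
    (the reference equations that give the predicted value are not shown here)."""
    return 100.0 * measured_fev1_l / predicted_fev1_l

# Hypothetical example: a measured FEV1 of 1.8 L against a predicted 3.0 L.
print(f"FEV1% = {fev1_percent_predicted(1.8, 3.0):.0f}%")  # -> 60%
```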

When the researchers exposed macrophages to carbon particles in the lab, they saw the cells become much larger and found that they were producing higher levels of proteins that lead to inflammation.

Dr Lea said: "As we compared cells from COPD patients with cells from smokers, we can see that this build-up of carbon is not a direct result of cigarette smoking. Instead, we show alveolar macrophages in COPD patients contain more carbon and are inherently different in terms of their form and function compared to those in smokers.

"Our research raises an interesting question as to the cause of the increased levels of carbon in COPD patients' macrophages. It could be that people with COPD are less able to clear the carbon they breathe in. It could also be that people exposed to more particulate matter are accumulating this carbon and developing COPD as a result.

"In future, it would be interesting to study how this carbon builds up and how lung cells respond over a longer period of time."

Professor Fabio Ricciardolo is Chair of the European Respiratory Society's group on monitoring airway disease, based at the University of Torino, Italy, and was not involved in the research. He said: "This set of experiments suggests that people with COPD accumulate unusually large amounts of carbon in the cells of their lungs. This build-up seems to be altering those cells, potentially causing inflammation in the lungs and leading to worse lung function.

"In addition, this research offers some clues about why polluted air might cause or worsen COPD. However, we know that smoking and air pollution are risk factors for COPD and other lung conditions, so we need to reduce levels of pollution in the air we breathe and we need to help people to quit smoking."

Materials provided by European Respiratory Society. Note: Content may be edited for style and length.

This mind-bending physics breakthrough could redefine timekeeping

How can the strange properties of quantum particles be exploited to perform extremely accurate measurements? This question is at the heart of the research field of quantum metrology. One example is the atomic clock, which uses the quantum properties of atoms to measure time much more accurately than would be possible with conventional clocks.

However, the fundamental laws of quantum physics always involve a certain degree of uncertainty: some randomness, or a certain amount of statistical noise, has to be accepted. This results in fundamental limits to the accuracy that can be achieved. Until now, it seemed to be an immutable law that a clock twice as accurate requires at least twice as much energy. But now a team of researchers from TU Wien, Chalmers University of Technology in Sweden, and the University of Malta has demonstrated that special tricks can be used to increase accuracy exponentially. The crucial point is using two different time scales – similar to how a clock has a second hand and a minute hand.

"We have analyzed in principle, which clocks could be theoretically possible," says Prof. Marcus Huber from the Atomic Institute at the TU Wien. "Every clock needs two components: first, a time base generator – such as a pendulum in a pendulum clock, or even a quantum oscillation. And second, a counter – any element that counts how many time units defined by the time base generator have already passed."

The time base generator can always return to exactly the same state. After one complete oscillation, the pendulum of a pendulum clock is exactly where it was before. After a certain number of oscillations, the caesium atom in an atomic clock returns to exactly the same state it was in before. The counter, on the other hand, must change – otherwise the clock is useless.

"This means that every clock must be connected to an irreversible process," says Florian Meier from TU Wien. "In the language of thermodynamics, this means that every clock increases the entropy in the universe; otherwise, it is not a clock." The pendulum of a pendulum clock generates a little heat and disorder among the air molecules around it, and every laser beam that reads the state of an atomic clock generates heat, radiation and thus entropy.

"We can now consider how much entropy a hypothetical clock with extremely high precision would have to generate – and, accordingly, how much energy such a clock would need," says Marcus Huber. "Until now, there seemed to be a linear relationship: if you want a thousand times the precision, you have to generate at least a thousand times as much entropy and expend a thousand times as much energy."

Quantum time and classical time

However, the research team at TU Wien, together with the Austrian Academy of Sciences (ÖAW) in Vienna and the teams from Chalmers University of Technology, Sweden, and University of Malta, has now shown that this apparent rule can be circumvented by using two different time scales.

"For example, you can use particles that move from one area to another to measure time, similar to how grains of sand indicate the time by falling from the top of the glass to the bottom," says Florian Meier. You can connect a whole series of such time-measuring devices in series and count how many of them have already passed through – similar to how one clock hand counts how many laps the other clock hand has already completed.

"This way, you can increase accuracy, but not without investing more energy," says Marcus Huber. "Because every time one clock hand completes a full rotation and the other clock hand is measured at a new location – you could also say every time the environment around it notices that this hand has moved to a new location – the entropy increases. This counting process is irreversible."

However, quantum physics also allows for another kind of particle transport: the particles can also travel through the entire structure, i.e. across the entire clock dial, without being measured anywhere. In a sense, the particle is then everywhere at once during this process; it has no clearly defined location until it finally arrives – and only then is it actually measured, in an irreversible process that increases entropy.

Like second and minute clock hands

"So we have a fast process that does not cause entropy – quantum transport – and a slow one, namely the arrival of the particle at the very end," explains Yuri Minoguchi, TU Wien. "The crucial thing about our method is that one hand behaves purely in terms of quantum physics, and only the other, slower hand actually has an entropy-generating effect."

The team has now been able to show that this strategy enables an exponential increase in accuracy per increase in entropy. This means that much higher precision can be achieved than would have been thought possible according to previous theories. "What's more, the theory could be tested in the real world using superconducting circuits, one of the most advanced quantum technologies currently available," says Simone Gasparinetti, co-author of the study and leader of the experimental team at Chalmers. "This is an important result for research into high-precision quantum measurements and suppression of unwanted fluctuations," says Marcus Huber, "and at the same time it helps us to better understand one of the great unsolved mysteries of physics: the connection between quantum physics and thermodynamics."
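A toy numerical comparison (not the paper's model) of how clock precision might scale with a given entropy budget under the old linear rule versus an exponential one:

```python
import math

# Purely illustrative: precision as a function of the entropy budget S,
# contrasting a linear rule with the exponential scaling described above.
for S in [1, 2, 5, 10, 20]:
    linear_precision = S                  # "old" rule: precision grows in step with entropy spent
    exponential_precision = math.exp(S)   # the kind of scaling the two-time-scale scheme allows
    print(f"S = {S:2d}: linear ~ {linear_precision:5.0f}, exponential ~ {exponential_precision:.2e}")
```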

Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

From the Andes to the beginning of time: Telescopes detect 13-billion-year-old signal

For the first time, scientists have used Earth-based telescopes to look back over 13 billion years to see how the first stars in the universe affect light emitted from the Big Bang.

Using telescopes high in the Andes mountains of northern Chile, astrophysicists have measured this polarized microwave light to create a clearer picture of one of the least understood epochs in the history of the universe, the Cosmic Dawn.

"People thought this couldn't be done from the ground. Astronomy is a technology-limited field, and microwave signals from the Cosmic Dawn are famously difficult to measure," said Tobias Marriage, project leader and a Johns Hopkins professor of physics and astronomy. "Ground-based observations face additional challenges compared to space. Overcoming those obstacles makes this measurement a significant achievement."

Cosmic microwaves are mere millimeters in wavelength and very faint. The signal from polarized microwave light is about a million times fainter. On Earth, broadcast radio waves, radar, and satellites can drown out their signal, while changes in the atmosphere, weather, and temperature can distort it. Even in perfect conditions, measuring this type of microwave requires extremely sensitive equipment.

Scientists from the U.S. National Science Foundation's Cosmology Large Angular Scale Surveyor, or CLASS, project used telescopes uniquely designed to detect the fingerprints left by the first stars in the relic Big Bang light — a feat that previously had only been accomplished by technology deployed in space, such as the U.S. National Aeronautics and Space Administration Wilkinson Microwave Anisotropy Probe (WMAP) and European Space Agency Planck space telescopes.

The new research, led by Johns Hopkins University and the University of Chicago, was published today in The Astrophysical Journal.

By comparing the CLASS telescope data with the data from the Planck and WMAP space missions, the researchers identified interference and narrowed in on a common signal from the polarized microwave light.

Polarization happens when light waves run into something and then scatter.

"When light hits the hood of your car and you see a glare, that's polarization. To see clearly, you can put on polarized glasses to take away glare," said first author Yunyang Li, who was a PhD student at Johns Hopkins and then a fellow at University of Chicago during the research. "Using the new common signal, we can determine how much of what we're seeing is cosmic glare from light bouncing off the hood of the Cosmic Dawn, so to speak."

After the Big Bang, the universe was a fog of electrons so dense that light energy was unable to escape. As the universe expanded and cooled, protons captured the electrons to form neutral hydrogen atoms, and microwave light was then free to travel through the space in between. When the first stars formed during the Cosmic Dawn, their intense energy ripped electrons free from the hydrogen atoms. The research team measured the probability that a photon from the Big Bang encountered one of the freed electrons on its way through the cloud of ionized gas and skittered off course.
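The quantity usually used to express this scattering probability is the reionization optical depth, often written tau; the chance that a given photon scattered at least once is 1 - exp(-tau). The sketch below uses an illustrative optical depth of about 0.055, in the range reported by earlier space missions, purely as an example input.

```python
import math

def scatter_probability(tau: float) -> float:
    """Probability that a CMB photon scattered at least once off free electrons,
    given the reionization optical depth tau."""
    return 1.0 - math.exp(-tau)

# Illustrative value only: an optical depth of ~0.055, roughly the range
# reported by space missions such as Planck, used here as an example input.
print(f"P(scatter) ~ {scatter_probability(0.055):.3f}")  # about 5%
```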

The findings will help better define signals coming from the residual glow of the Big Bang, or the cosmic microwave background, and form a clearer picture of the early universe.

"Measuring this reionization signal more precisely is an important frontier of cosmic microwave background research," said Charles Bennett, a Bloomberg Distinguished Professor at Johns Hopkins who led the WMAP space mission. "For us, the universe is like a physics lab. Better measurements of the universe help to refine our understanding of dark matter and neutrinos, abundant but elusive particles that fill the universe. By analyzing additional CLASS data going forward, we hope to reach the highest possible precision that's achievable."

Building on research published last year that used the CLASS telescopes to map 75% of the night sky, the new results also help solidify the CLASS team's approach.

"No other ground-based experiment can do what CLASS is doing," says Nigel Sharp, program director in the NSF Division of Astronomical Sciences which has supported the CLASS instrument and research team since 2010. "The CLASS team has greatly improved measurement of the cosmic microwave polarization signal and this impressive leap forward is a testament to the scientific value produced by NSF's long-term support."

The CLASS observatory operates in the Parque Astronómico Atacama in northern Chile under the auspices of the Agencia Nacional de Investigación y Desarrollo.

Other collaborators are at Villanova University, the NASA Goddard Space Flight Center, the University of Chicago, the National Institute of Standards and Technology, the Argonne National Laboratory, the Los Alamos National Laboratory, the Harvard-Smithsonian Center for Astrophysics, the University of Oslo, Massachusetts Institute of Technology, and the University of British Columbia. Collaborators in Chile are at the Universidad de Chile, Pontificia Universidad Católica de Chile, Universidad de Concepción, and the Universidad Católica de la Santísima Concepción.

The observatory is funded by the National Science Foundation, Johns Hopkins, and private donors.

Materials provided by Johns Hopkins University. Note: Content may be edited for style and length.

Clean energy, dirty secrets: Inside the corruption plaguing California's solar market

Solar power is growing by leaps and bounds in the United States, propelled by climate mitigation policies and carbon-free energy goals — and California is leading the way as the nation's top producer of solar electricity. A new study in Energy Strategy Reviews has revealed a dark side to the state's breakneck pace of solar investment, deployment, and adoption, taking a first-time look at patterns of public and private sector corruption in the California solar market.

Researchers at the Boston University Institute for Global Sustainability (IGS) have identified seven distinct types of corruption abuses and risks in California solar energy. Among them is favoritism in project approvals, including a high-profile incident at the senior ranks of the U.S. Department of the Interior involving an intimate relationship with a solar company lobbyist. To fully realize a just energy transition, the authors call for major solar reforms in California as the U.S. increasingly relies on solar energy to decarbonize its electricity sector.

"It's a wake-up call that the solar industry cannot continue on its current trajectory of bad governance and bad behavior."

"In this groundbreaking study, we find that efforts to accelerate solar infrastructure deployment in California end up contributing to a sobering array of corruption practices and risks. These include shocking abuses of power in the approval and licensing phases as well as the displacement of Indigenous groups, and also nefarious patterns of tax evasion or the falsification of information about solar projects," says lead author Benjamin Sovacool, who is the director of IGS and a Boston University professor of earth and environment. "It's a wake-up call that the solar industry cannot continue on its current trajectory of bad governance and bad behavior."

Drawing on a literature review and original interviews and fieldwork, the study's authors arrive at a framework that helps explore the wider socio-political realities driving corruption at a time of explosive growth in the California solar market, from 2010 to 2024. During this period, the state's solar energy production increased exponentially, reaching 79,544 gigawatt hours in 2024, or enough to power approximately 7.4 million U.S. households for a year, according to the State of Renewable Energy dashboard.
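As a quick sanity check of those figures, dividing the quoted generation by the quoted number of households implies roughly 10,700 kWh per household per year, in line with typical U.S. household electricity use:

```python
# Sanity check of the figures quoted above.
generation_gwh = 79_544   # California solar generation in 2024 (from the article)
households = 7.4e6        # households reportedly powered for a year

kwh_per_household = generation_gwh * 1e6 / households  # 1 GWh = 1e6 kWh
print(f"~{kwh_per_household:,.0f} kWh per household per year")  # ~10,750 kWh
```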

The research implicates solar energy in numerous corruption practices and risks that have adversely affected communities, policymaking and regulation, and siting decisions and planning.

"The most eye-opening finding for me is how common corruption is at every level of solar development, from small-scale vendors to high-level government officials, even in a well-regulated, progressive state like California," says co-author Alexander Dunlap, an IGS research fellow.

Favoritism and other forms of corruption

To understand how corruption undermines the solar market, the researchers focused on numerous utility-scale deployments in Riverside County, the fourth most populous county in California. They set out to document patterns of perceived corruption from a broad range of voices, gaining insights through organized focus groups and observation at different solar sites, as well as conducting interviews door-to-door and in a local supermarket parking lot. Respondents included residents in Blythe and Desert Center, California, impacted by solar energy development, solar construction workers, non-governmental organizations, solar company employees, federal agencies, and state and local governments.

While the study's authors acknowledge the difficulty of confirming individual claims of corruption, their mixed-methods research approach combines these personal assertions with analysis of news stories, court testimony, and other official sources to support their findings.

They point to a blend of public, private, social, and political patterns of corruption in the California solar energy market.

Outside of a few headline-making scandals, corruption in California's renewable energy sector has gone largely unexamined, allowing the underlying dynamics at play to erode the potential of a just energy transition. To remedy this, the study's authors recommend corruption risk mapping to document problematic practices and entities, subsidy registers and sunset clauses to deter rent-seeking and tax evasion, transparency initiatives aimed at environmental changes and data production (for Environmental Impact Assessment), strong enforcement of anti-corruption laws, and shared ownership models for solar to improve accountability.

This newly published study, "Sex for Solar? Examining Patterns of Public and Private Sector Corruption within the Booming California Solar Energy Market," is part of a larger IGS research project looking at injustices in U.S. solar and wind energy supply chains.

Materials provided by Boston University. Note: Content may be edited for style and length.

Scientists discover natural cancer-fighting sugar in sea cucumbers

Sea cucumbers are the ocean's janitors, cleaning the seabed and recycling nutrients back into the water. But this humble marine invertebrate could also hold the key to stopping the spread of cancer.

A sugar compound found in sea cucumbers can effectively block Sulf-2, an enzyme that plays a major role in cancer growth, according to a University of Mississippi-led study published in Glycobiology.

"Marine life produces compounds with unique structures that are often rare or not found in terrestrial vertebrates," said Marwa Farrag, a fourth-year doctoral candidate in the UM Department of BioMolecular Sciences.

"And so, the sugar compounds in sea cucumbers are unique. They aren't commonly seen in other organisms. That's why they're worth studying."

Farrag, a native of Assiut, Egypt, and the study's lead author, worked with a team of researchers from Ole Miss and Georgetown University on the project.

Human cells, and those of most mammals, are covered in tiny, hairlike structures called glycans that help with cell communication, immune responses and the recognition of threats such as pathogens. Cancer cells alter the expression of certain enzymes, including Sulf-2, which in turn modifies the structure of glycans. This modification helps cancer spread.

"The cells in our body are essentially covered in 'forests' of glycans," said Vitor Pomin, associate professor of pharmacognosy. "And enzymes change the function of this forest – essentially prunes the leaves of that forest.

"If we can inhibit that enzyme, theoretically, we are fighting against the spread of cancer."

Using both computer modeling and laboratory testing, the research team found that the sugar – fucosylated chondroitin sulfate – from the sea cucumber Holothuria floridana can effectively inhibit Sulf-2.

"We were able to compare what we generated experimentally with what the simulation predicted, and they were consistent," said Robert Doerksen, professor of medicinal chemistry. "That gives us more confidence in the results."

Unlike other Sulf-2 regulating medications, the sea cucumber compound does not interfere with blood clotting, said Joshua Sharp, UM associate professor of pharmacology.

"As you can imagine, if you are treating a patient with a molecule that inhibits blood coagulation, then one of the adverse effects that can be pretty devastating is uncontrolled bleeding," he said. "So, it's very promising that this particular molecule that we're working with doesn't have that effect."

As a marine-based cancer therapy, the sea cucumber compound may be easier to create and safer to use.

"Some of these drugs we have been using for 100 years, but we're still isolating them from pigs because chemically synthesizing it would be very, very difficult and very expensive," Sharp said. "That's why a natural source is really a preferred way to get at these carbohydrate-based drugs."

Unlike extracting carbohydrate-based drugs from pigs or other land mammals, extracting the compound from sea cucumbers does not carry a risk of transferring viruses and other harmful agents, Pomin said.

"It's a more beneficial and cleaner resource," he said. "The marine environment has many advantages compared to more traditional sources."

But sea cucumbers – some variants of which are a culinary delicacy in the Pacific Rim – aren't so readily abundant that scientists could go out and harvest enough to create a line of medication. The next step in the research is to find a way to synthesize the sugar compound for future testing.

"One of the problems in developing this as a drug would be the low yield, because you can't get tons and tons of sea cucumbers," Pomin said. "So, we have to have a chemical route, and when we've developed that, we can begin applying this to animal models."

The interdisciplinary nature of the scientific study, which featured researchers from chemistry, pharmacognosy and computational biology, underscored the importance of cross-disciplinary collaboration in tackling complex diseases like cancer, Pomin said.

"This research took multiple expertise – mass spectrometry, biochemistry, enzyme inhibition, computation," Pomin said. "It's the effort of the whole team."

This work is based on material supported by the National Institutes of Health grant nos. 1P20GM130460-01A1-7936, R01CA238455, P30CA51008 and S10OD028623.

Materials provided by University of Mississippi. Note: Content may be edited for style and length.

This “Healthy” Fat May Secretly Be Fueling Obesity

Eating a high-fat diet containing a large amount of oleic acid – a type of fatty acid commonly found in olive oil – could drive obesity more than other types of dietary fats, according to a study published in the journal Cell Reports.

The study found that oleic acid, a monounsaturated fat associated with obesity, causes the body to make more fat cells. By boosting a signaling protein called AKT2 and reducing the activity of a regulating protein called LXR, high levels of oleic acid resulted in faster growth of the precursor cells that form new fat cells.

"We know that the types of fat that people eat have changed during the obesity epidemic. We wanted to know whether simply overeating a diet rich in fat causes obesity, or whether the composition of these fatty acids that make up the oils in the diet is important. Do specific fat molecules trigger responses in the cells?" said Michael Rudolph, Ph.D., assistant professor of biochemistry and physiology at the University of Oklahoma College of Medicine and member of OU Health Harold Hamm Diabetes Center.

Rudolph and his team, including Matthew Rodeheffer, Ph.D., of Yale University School of Medicine and other collaborators at Yale and New York University School of Medicine, fed mice a variety of specialized diets enriched in specific individual fatty acids, including those found in coconut oil, peanut oil, milk, lard and soybean oil. Oleic acid was the only one that caused the precursor cells that give rise to fat cells to proliferate more than other fatty acids.

"You can think of the fat cells as an army," Rudolph said. "When you give oleic acid, it initially increases the number of 'fat cell soldiers' in the army, which creates a larger capacity to store excess dietary nutrients. Over time, if the excess nutrients overtake the number of fat cells, obesity can occur, which can then lead to cardiovascular disease or diabetes if not controlled."

Unfortunately, it's not quite so easy to isolate different fatty acids in a human diet. People generally consume a complex mixture if they have cream in their coffee, a salad for lunch and meat and pasta for dinner. However, Rudolph said, there are increasing levels of oleic acid in the food supply, particularly when access to food variety is limited and fast food is an affordable option.

"I think the take-home message is moderation and to consume fats from a variety of different sources," he said. "Relatively balanced levels of oleic acid seem to be beneficial, but higher and prolonged levels may be detrimental. If someone is at risk for heart disease, high levels of oleic acid may not be a good idea."

Materials provided by University of Oklahoma. Note: Content may be edited for style and length.

Scientists found the brain glitch that makes you think you’re still hungry

Researchers identify "meal memory" neurons in laboratory rats that could explain why forgetting lunch leads to overeating.

Scientists have discovered a specific group of brain cells that create memories of meals, encoding not just what food was eaten but when it was eaten. The findings, published today in Nature Communications, could explain why people with memory problems often overeat and why forgetting about a recent meal can trigger excessive hunger and lead to disordered eating.

During eating, neurons in the ventral hippocampus region of the brain become active and form what the team of researchers call "meal engrams" — specialized memory traces that store information about the experience of food consumption. While scientists have long studied engrams for their role in storing memories and other experiences in the brain, the new study identified engrams dedicated to meal experiences.

"An engram is the physical trace that a memory leaves behind in the brain," said Scott Kanoski, professor of biological sciences at the USC Dornsife College of Letters, Arts and Sciences and corresponding author of the study. "Meal engrams function like sophisticated biological databases that store multiple types of information such as where you were eating, as well as the time that you ate."

The discovery has immediate relevance for understanding human eating disorders. Patients with memory impairments, such as those with dementia or brain injuries that affect memory formation, may often consume multiple meals in quick succession because they cannot remember eating.

Furthermore, distracted eating — such as mindlessly snacking while watching television or scrolling on a phone — may impair meal memories and contribute to overconsumption.

Based on the experiment's findings, meal engrams are formed during brief pauses between bites, when the brains of laboratory rats naturally survey the eating environment. These moments of awareness allow specialized hippocampal neurons to integrate multiple streams of information.

Kanoski said it can be assumed a human's brain would undergo a similar phenomenon. When someone's attention is focused elsewhere — on phone or television screens — these critical encoding moments are compromised. "The brain fails to properly catalog the meal experience," said Lea Decarie-Spain, postdoctoral scholar at USC Dornsife and the study's first author, "leading to weak or incomplete meal engrams."

The research team used advanced neuroscience techniques to observe the brain activity of laboratory rats as they ate, providing the first real-time view of how meal memories form.

The meal memory neurons are distinct from brain cells involved in other types of memory formation. When researchers selectively destroyed these neurons, lab rats showed impaired memory for food locations but retained normal spatial memory for non-food-related tasks, indicating a specialized system dedicated to meal-related information processing. The study revealed that meal memory neurons communicate with the lateral hypothalamus, a brain region long known to control hunger and eating behavior. When this hippocampus-hypothalamus connection was blocked, the lab rats overate and could not remember where meals were consumed.

Kanoski said the findings could eventually inform new clinical approaches for treating obesity and weight management. Current weight management strategies often focus on restricting food intake or increasing exercise, but the new research suggests that enhancing meal memory formation could be equally important.

"We're finally beginning to understand that remembering what and when you ate is just as crucial for healthy eating as the food choices themselves," Kanoski said.

In addition to Kanoski, other study authors include Lea Decarie-Spain, Cindy Gu, Logan Tierno Lauer, Alicia E. Kao, Iris Deng, Molly E. Klug, Alice I. Waldow, Ashyah Hewage Galbokke, Olivia Moody, Kristen N. Donohue, Keshav S. Subramanian, Serena X. Gao, Alexander G. Bashaw and Jessica J. Rea of USC; and Samar N. Chehimi, Richard C. Crist, Benjamin C. Reiner and Matthew R. Hayes from the University of Pennsylvania's Perelman School of Medicine; and Mingxin Yang and Guillaume de Lartigue from the Monell Chemical Senses Center; and Kevin P. Myers from the Department of Psychology at Bucknell University.

The study was supported by a Quebec Research Funds Postdoctoral Fellowship (315201), an Alzheimer's Association Research Fellowship (AARFD-22-972811), a National Science Foundation Graduate Research Fellowship (DK105155), and a National Institute of Diabetes and Digestive and Kidney Diseases grant (K104897).

Materials provided by University of Southern California. Note: Content may be edited for style and length.