All News

The gene that hijacks fear: How PTEN rewires the brain’s anxiety circuit

Key points:
- PTEN connection to autism: Up to 25% of people with brain overgrowth and autism spectrum disorder (ASD) carry variations in a gene called PTEN; PTEN-deficient mouse models exhibit ASD-like characteristics.
- Cell-type-specific model: PTEN loss in specific neurons leads to circuit imbalance and altered behavior.
- Excitation-inhibition imbalance: Strengthened excitatory drive and loss of local inhibitory connections in an amygdala circuit.
- Behavioral effects: This circuit imbalance results in increased fear learning and anxiety in mice, core traits seen in ASD.

Researchers at the Max Planck Florida Institute for Neuroscience have discovered how loss of a gene strongly associated with autism and macrocephaly (large head size) rewires circuits and alters behavior. Their findings, published in Frontiers in Cellular Neuroscience, reveal specific circuit changes in the amygdala resulting from PTEN loss in inhibitory neurons, providing new insights into the underlying circuit alterations that contribute to heightened fear and anxiety.

PTEN has emerged as one of the most significant autism risk genes. Variations in this gene are found in a significant proportion of people with autism who also exhibit brain overgrowth, making it a key player in understanding differences in brain function. To investigate the impact of PTEN misregulation, researchers have turned to animal models, where global reduction of PTEN results in altered sociability, repetitive behaviors, and increased anxiety that are often associated with ASD in humans.

But understanding how PTEN dysfunction results in specific circuit and behavioral changes has been difficult in animal models that disrupt PTEN throughout the nervous system. Therefore, MPFI research group leader Dr. McLean Bolton and her team have focused on the changes in the central lateral amygdala driven by loss of PTEN in a critical neuronal population — somatostatin-expressing inhibitory neurons.

Alterations in the function of inhibitory neurons in the development of ASD have been seen through both human tissue studies and genetic mouse models. Moreover, the PTEN gene is known to regulate the development of inhibitory neurons. Therefore, a cell-type-specific disruption of PTEN in inhibitory neurons was a valuable target for understanding specific circuit changes associated with ASD.

"Although a cell-type specific disruption does not replicate the genome-wide changes seen in humans, it is essential to examine how genetic risk factors operate within distinct neural circuits," explained Dr. Bolton. "Understanding these mechanisms is a crucial step toward targeted interventions for specific traits such as severe anxiety."

The team, led by Dr. Tim Holford, combined a genetic model that disrupted PTEN only in somatostatin-containing inhibitory neurons with a unique circuit mapping approach previously developed in the lab. This approach measured the electrical responses of individual neurons to the sequential optogenetic activation of hundreds of nearby neurons, allowing rapid mapping of connectivity and strength with the precision of electrical recordings and the scale of imaging approaches.
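To make the mapping idea concrete, here is a minimal sketch (not the lab's actual pipeline; the array sizes, detection threshold, and variable names are assumptions) of how responses recorded in one neuron during sequential optogenetic stimulation of its neighbors could be turned into estimates of connection probability and strength:

```python
import numpy as np

# Illustrative sketch only: given peak postsynaptic current amplitudes recorded
# in one patched neuron while hundreds of nearby neurons are optogenetically
# activated one at a time, estimate which inputs are connected and how strongly.
# Shapes, units, and the threshold below are hypothetical.

rng = np.random.default_rng(0)
n_stimulated = 300                                 # neurons activated sequentially
responses_pA = rng.exponential(5.0, n_stimulated)  # peak response per stimulation site

noise_floor_pA = 10.0                              # hypothetical detection threshold
connected = responses_pA > noise_floor_pA

connection_probability = connected.mean()
mean_strength_pA = responses_pA[connected].mean() if connected.any() else 0.0

print(f"connected inputs: {connected.sum()} / {n_stimulated}")
print(f"connection probability: {connection_probability:.2%}")
print(f"mean strength of detected connections: {mean_strength_pA:.1f} pA")
```

In the study itself, per-cell maps of this kind would then be compared between control and PTEN-deficient tissue to quantify the reported loss of inhibitory connectivity and strength.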

"This is a powerful method that we can use to determine changes in local neuron connectivity and strength resulting from genetic variations. We were interested in uncovering how the disruption of PTEN signaling in a single cell type would change the way the brain processes information and contribute to the broad ASD phenotype," described Dr. Holford.

The scientists focused on the circuits in the central amygdala (CeL) – a brain region known to serve as an inhibitory gate on the downstream expression of fear responses – and found striking results. Deleting PTEN specifically in somatostatin-containing interneurons disrupted local inhibitory connectivity in the CeL by roughly 50% and reduced the strength of the inhibitory connections that remained. This diminished connectivity between inhibitory connections within the CeL was contrasted by an increase in the strength of excitatory inputs received from the basolateral amygdala (BLA), a nearby brain region that relays emotionally-relevant sensory information to the CeL.

Behavioral analysis of the genetic model demonstrated that this imbalance in neural signaling was linked to heightened anxiety and increased fear learning, but not alterations in social behavior or repetitive behavior traits commonly observed in ASD. The results not only confirm that PTEN loss in this specific cell type is sufficient to induce specific ASD-like behaviors, but also provide one of the most detailed maps to date of how local inhibitory networks in the amygdala are affected by genetic variations associated with neurological disorders. Importantly, the altered circuitry did not affect all ASD-relevant behaviors — social interactions remained largely intact — suggesting that PTEN-related anxiety and fear behaviors may stem from specific microcircuit changes.

As Dr. Holford explains, "By teasing out the local circuitry underlying specific traits, we hope to differentiate the roles of specific microcircuits within the umbrella of neurological disorders, which may one day help in developing targeted therapeutics for specific cognitive and behavioral characteristics. In future studies, we hope to evaluate these circuits in different genetic models to determine if these microcircuit alterations are convergent changes that underlie heightened fear and anxiety expression across diverse genetic profiles."

Materials provided by Max Planck Florida Institute for Neuroscience.

Scientists just reconstructed half the Neanderthal genome — thanks to Indian DNA

India's population is genetically one of the most diverse in the world, yet it remains underrepresented in global datasets. In a study publishing in the Cell Press journal Cell, researchers analyzed genomic data from more than 2,700 people from across India, capturing genetic variation from most geographic regions, linguistic groups, and communities. They found that most modern-day Indian people's ancestry can be traced back to Neolithic Iranian farmers, Eurasian Steppe pastoralists, and South Asian hunter-gatherers.

"This study fills a critical gap and reshapes our understanding of how ancient migrations, archaic admixture, and social structures have shaped Indian genetic variation," says senior author Priya Moorjani of the University of California, Berkeley. "Studying these subpopulations allows us to explore how ancient ancestry, geography, language, and social practices interacted to shape genetic variation. We hope our study will provide a deeper understanding of the origin of functional variation and inform precision health strategies in India."

The researchers used data from the Longitudinal Aging Study in India, Diagnostic Assessment of Dementia (LASI-DAD) and generated whole-genome sequences from 2,762 individuals in India, including people who spoke a range of different languages. They used these data to reconstruct the evolutionary history of India over the past 50,000 years at fine scale, showing how history impacts adaptation and disease in present-day Indians. They showed that most Indians derive ancestry from populations related to three ancestral groups: Neolithic Iranian farmers, Eurasian Steppe pastoralists, and South Asian hunter-gatherers.

"In India, genetic and linguistic variation often go hand in hand, shaped by ancient migrations and social practices," says lead author Elise Kerdoncuff of UC-Berkeley. "Ensuring linguistic variation among the people whose genomes we include helps prevent biased interpretations of genetic patterns and uncover functional variation related to all major communities to inform both evolutionary research and future biomedical surveys."

One of the key goals of the study was to understand how India's complex population history has shaped genetic variation related to disease. In India, many subpopulations have an increased risk of recessive genetic disorders, which is due largely to historical isolation and marrying within communities.

Another focus was on the impact of archaic hominin ancestry — specifically, Neanderthal and Denisovan — on disease susceptibility. For example, some of the genes inherited from these archaic groups have an impact on immune functions.

"One of the most striking and unexpected findings was that India harbors the highest variation in Neanderthal ancestry among non-Africans," says co-lead author Laurits Skov, also of UC-Berkeley. "This allowed us to reconstruct around 50% of the Neanderthal genome and 20% of the Denisovan genome from Indian individuals, more than any other previous archaic ancestry study."

One constraint of this work was the limited availability of ancient DNA from South and Central Asia. As more ancient genomes become available, the researchers will be able to refine this work and identify the source of Neolithic Iranian farmer and Steppe pastoralist-related ancestry in contemporary Indians. They also plan to continue studying the LASI-DAD cohort to enable a closer look at the source of the genetic adaptations and disease variants across India.

Materials provided by Cell Press.

Buried for 23,000 years: These footprints are rewriting American history

Vance Holliday jumped at the invitation to go do geology at New Mexico's White Sands. The landscape, just west of Alamogordo, looks surreal – endless, rolling dunes of fine beige gypsum, left behind by ancient seas. It's one of the most unique geologic features in the world.

But a national park protects much of the area's natural resources, and the U.S. Army uses an adjacent swath as a missile range, making research at White Sands impossible much of the time. So it was an easy call for Holliday, a University of Arizona archaeologist and geologist, to accept an invitation in 2012 to do research in the park. While he was there, he asked, skeptically, if he could look at a site on the missile range.

"Well, next thing I know, there we were on the missile range," he said.

Holliday and a graduate student spent several days examining geologic layers in trenches, dug by previous researchers, to piece together a timeline for the area. They had no idea that, about 100 yards away, were footprints, preserved in ancient clay and buried under gypsum, that would help spark a wholly new theory about when humans arrived in the Americas.

Researchers from Bournemouth University in the United Kingdom and the U.S. National Park Service excavated those footprints in 2019 and published their paper in 2021. Holliday did not participate in the excavation but became a co-author after some of his 2012 data helped date the footprints.

The tracks showed human activity in the area occurred between 23,000 and 21,000 years ago – a timeline that would upend anthropologists' understanding of when cultures developed in North America. It would make the prints about 10,000 years older than remains found 90 years ago at a site near Clovis, New Mexico, which gave its name to an artifact assemblage long understood by archaeologists to represent the earliest known culture in North America. Critics have spent the last four years questioning the 2021 findings, largely arguing that the ancient seeds and pollen in the soil used to date the footprints were unreliable markers.

Now, Holliday leads a new study that supports the 2021 findings – this time relying on ancient mud to radiocarbon date the footprints, not seeds and pollen, and an independent lab to make the analysis. The paper was published today in the journal Science Advances.

Specifically, the new paper finds that the mud is between 20,700 and 22,400 years old – which correlates with the original finding that the footprints are between 21,000 and 23,000 years old. The new study now marks the third type of material – mud in addition to seeds and pollen – used to date the footprints, and by three different labs. Two separate research groups now have a total of 55 consistent radiocarbon dates.

"It's a remarkably consistent record," said Holliday, a professor emeritus in the School of Anthropology and Department of Geosciences who has studied the "peopling of the Americas" for nearly 50 years, focusing largely on the Great Plains and the Southwest.

"You get to the point where it's really hard to explain all this away," he added. "As I say in the paper, it would be serendipity in the extreme to have all these dates giving you a consistent picture that's in error."

Millennia ago, White Sands was a series of lakes that eventually dried up. Wind erosion piled the gypsum into the dunes that define the area today. The footprints were excavated in the beds of a stream that flowed into one such ancient lake.

"The wind erosion destroyed part of the story, so that part is just gone," Holliday said. "The rest is buried under the world's biggest pile of gypsum sand."

For the latest study, Holliday and Jason Windingstad, a doctoral candidate in environmental science, returned to White Sands in 2022 and 2023 and dug a new series of trenches for a closer look at the geology of the lake beds. Windingstad had worked at White Sands as a consulting geoarchaeologist for other research teams when he agreed to join Holliday's study.

"It's a strange feeling when you go out there and look at the footprints and see them in person," Windingstad said. "You realize that it basically contradicts everything that you've been taught about the peopling of North America."

Holliday acknowledges that the new study doesn't address a question he's heard from critics since 2021: Why are there no signs of artifacts or settlements left behind by those who made the footprints?

It's a fair question, Holliday and Windingstad said, and Holliday still does not have a peer-reviewed answer. Some of the footprints uncovered for the 2021 study were part of trackways that would have taken just a few seconds to walk, Holliday estimates. It's perfectly reasonable, he said, to assume that hunter-gatherers would be careful not to leave behind any resources in such a short time frame.

"These people live by their artifacts, and they were far away from where they can get replacement material. They're not just randomly dropping artifacts," he said. "It's not logical to me that you're going to see a debris field."

Even though he was confident in the 2021 findings to begin with, Holliday said, he's glad to have more data to support them.

"I really had no doubt from the outset because the dating we had was already consistent," Holliday said. "We have direct data from the field – and a lot of it now."

Materials provided by University of Arizona.

Fighting fire with fire: How prescribed burns reduce wildfire damage and pollution

As wildfires increasingly threaten lives, landscapes, and air quality across the U.S., a Stanford-led study published in AGU Advances on June 26 finds that prescribed burns can help reduce risks. The research reveals that prescribed burns can reduce the severity of subsequent wildfires by an average of 16% and net smoke pollution by an average of 14%.

"Prescribed fire is often promoted as a promising toolin theoryto dampen wildfire impacts, but we show clear empirical evidence that prescribed burning worksin practice," said lead author Makoto Kelp, a postdoctoral fellow in Earth system science at the Stanford Doerr School of Sustainability. "It's not a cure-all, but it's a strategy that can reduce harm from extreme wildfires when used effectively."

Experts consider prescribed burns an effective strategy to reduce the threat of wildfires, and nearly $2 billion of federal funding had been set aside to conduct these and similar treatments to reduce hazardous fuel. Still, the use of prescribed burning in western states has expanded only slightly in recent years. Little research exists to quantify its effectiveness, and public opinion remains mixed amid concerns that prescribed burns can lead to smoky air and escaped fires.

At Stanford, Kelp is working with climate scientist Noah Diffenbaugh and environmental economist Marshall Burke through the National Oceanic and Atmospheric Administration Climate and Global Change Postdoctoral Fellowship Program. Using high-resolution satellite imagery, land management records, and smoke emissions inventories, the research team compared areas treated with prescribed fire between late 2018 and spring 2020 to adjacent untreated areas that both later burned in the extreme 2020 fire season. The analysis found that areas treated with prescribed fire burned less severely and produced significantly less smoke.

That finding is particularly important given the growing recognition of wildfire smoke as a major public health threat. Fine particulate matter (PM2.5) from wildfires has been linked to respiratory and cardiovascular problems and is increasingly driving poor air quality across the U.S.

"People often think of wildfires just in terms of flames and evacuations," said Burke, an associate professor of environmental social sciences in the Doerr School of Sustainability. "But the smoke is a silent and far-reaching hazard, and prescribed fire may be one of the few tools that actually reduces total smoke exposure."

The study also highlights a key nuance: the authors found that prescribed fires were significantly more effective outside of the wildland-urban interface (WUI) — the zones where homes meet wildland vegetation — than within it. In WUI areas, where agencies often rely on mechanical thinning due to concerns about smoke and safety, fire severity was reduced by just 8.5%, compared to 20% in non-WUI zones.

"We already know that population is growing fastest in the areas of the wildland-urban interface where the vegetation is most sensitive to climate-induced intensification of wildfire risk," said Diffenbaugh, the Kara J Foundation Professor in the Stanford Doerr School of Sustainability and the Kimmelman Family Senior Fellow at the Stanford Woods Institute for the Environment. "So, understanding why the prescribed fire treatments are less effective in those areas is a key priority for effectively managing that intensifying risk."

Smoke tradeoffs and policy implications

The study addresses concerns about smoky air from prescribed burning, finding that the approach produces only about 17% of the PM2.5 smoke that would be emitted by a wildfire in the same area. The researchers estimate that if California met its goal of treating one million acres annually with prescribed fire, it could cut PM2.5 emissions by 655,000 tons over five years — more than half of the total smoke pollution from the state's devastating 2020 wildfire season.
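As a back-of-the-envelope illustration of that tradeoff (not a calculation from the paper), the sketch below combines the reported ~17% emission ratio with a purely hypothetical per-acre wildfire PM2.5 factor and the simplifying assumption that every treated acre would otherwise burn:

```python
# Back-of-the-envelope sketch of the smoke tradeoff described above.
# The only figure taken from the study is the ~17% ratio of prescribed-burn
# PM2.5 to wildfire PM2.5 for the same area; the per-acre emission factor and
# the assumption that every treated acre would otherwise burn are hypothetical.

PRESCRIBED_TO_WILDFIRE_PM25 = 0.17      # ratio reported in the study
wildfire_pm25_tons_per_acre = 0.16      # hypothetical emission factor
acres_treated_per_year = 1_000_000      # California's stated annual goal
years = 5

wildfire_emissions = acres_treated_per_year * years * wildfire_pm25_tons_per_acre
prescribed_emissions = wildfire_emissions * PRESCRIBED_TO_WILDFIRE_PM25
net_avoided = wildfire_emissions - prescribed_emissions

print(f"PM2.5 avoided over {years} years: {net_avoided:,.0f} tons")
```

Under these assumptions the net benefit scales linearly with treated area, which is why a statewide one-million-acre annual goal translates into a large cumulative reduction.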

The authors note that their findings likely represent a conservative estimate of the benefits of prescribed fire, as such treatments can have protective spillover effects on surrounding untreated areas.

"This kind of empirical evidence is critical for effective policy," said Kelp. "My hope is that it helps inform the ongoing conversation around prescribed fire as a potential wildfire mitigation strategy in California."

Coauthors of the study also include Minghao Qiu of Stony Brook University; Iván Higuera-Mendieta, a PhD student in Earth system science at Stanford; and Tianjia Liu of the University of British Columbia.

Burke is also a senior fellow at the Stanford Woods Institute for the Environment, the Freeman Spogli Institute for International Studies, and the Stanford Institute for Economic Policy Research (SIEPR); an associate professor (by courtesy) of Earth system science; and a member of Bio-X and the Wu Tsai Neurosciences Institute.

The study was funded by the National Oceanic and Atmospheric Administration and Stanford University.

Materials provided by Stanford University.

Fire smoke exposure leaves toxic metals and lasting immune changes

Exposure to fire smoke — which can be composed of particulate matter, gases, materials from buildings such as perfluoroalkyl and polyfluoroalkyl substances (PFAS), toxic metals, and carcinogenic compounds — may alter the immune system on a cellular level, according to a new study led by researchers at Harvard T.H. Chan School of Public Health. The study is the first to examine the specific cellular changes associated with fire smoke exposure, documenting how smoke can damage the body through the immune system.

"We've known that smoke exposure causes poor respiratory, cardiac, neurological, and pregnancy outcomes, but we haven't understood how," said corresponding author Kari Nadeau, John Rock Professor of Climate and Population Studies and chair of the Department of Environmental Health. "Our study fills in this knowledge gap, so that clinicians and public health leaders are better equipped to respond to the growing threat of difficult to contain, toxic wildfires."

The study was published on June 26 in Nature Medicine.

The researchers collected blood from two cohorts matched by age, sex, and socioeconomic status: 31 smoke-exposed adults, both firefighters and civilians, and 29 non-smoke-exposed adults. None of the participants had an acute or chronic condition, and none were taking immunomodulatory drugs at or before the time of the blood draw, which took place within one month after participants had been exposed to fire smoke.

Using cutting-edge single-cell omic techniques — epigenetic assays and mass cytometry — and bioinformatic analytical tools, the researchers examined and analyzed individual cells within each blood sample.

The study found several cellular-level changes in the smoke-exposed individuals compared to the non-smoke-exposed individuals. Smoke-exposed individuals showed an increase in memory CD8+ T cells (a type of immune cell critical to long-term immunity against pathogens) and elevated activation and chemokine receptor biomarkers (indicators of inflammation and immune activity) within multiple cell types. Additionally, those who had been exposed to smoke showed changes in 133 genes related to allergies and asthma, and more of their immune cells were bound with toxic metals, including mercury and cadmium.

"Our findings demonstrate that the immune system is extremely sensitive to environmental exposures like fire smoke, even in healthy individuals," said lead author Mary Johnson, principal research scientist in the Department of Environmental Health. "Knowing exactly how may help us detect immune dysfunction from smoke exposure earlier and could pave the way for new therapeutics to mitigate, or prevent altogether, the health effects of smoke exposure and environmental contaminants."

The researchers also noted that the study could help inform environmental and public health policies and investments. "Knowing more about exactly how smoke exposure is harming the body, we may increase public health campaigns about the dangers of smoke exposure and the importance of following evacuation procedures during wildfires," Nadeau said. "We may also reconsider what levels of smoke exposure we consider toxic."

Other Harvard Chan co-authors included Abhinav Kaushik, Olivia Kline, Xiaoying Zhou, and Elisabeth Simonin.

The study was funded by the National Institute of Environmental Health Sciences (R01 ES032253), the National Heart, Lung, and Blood Institute (P01 HL152953, T32HL007118), the National Institute of Allergy and Infectious Diseases (U19AI167903), the San Francisco Cancer Prevention Foundation, the Asthma and Allergic Diseases Cooperative Research Center, and the Keck Foundation.

Materials provided by Harvard T.H. Chan School of Public Health.

Quantum computers just beat classical ones — exponentially and unconditionally

Quantum computers have the potential to speed up computation, help design new medicines, break codes, and discover exotic new materials — but that's only when they are truly functional.

One key thing gets in the way: noise, the errors produced during computations on a quantum machine, which until recently made quantum computers less powerful than classical ones.

Daniel Lidar, holder of the Viterbi Professorship in Engineering and Professor of Electrical & Computer Engineering at the USC Viterbi School of Engineering, has been iterating on quantum error correction, and in a new study with collaborators at USC and Johns Hopkins has demonstrated an exponential quantum scaling advantage using two 127-qubit IBM Quantum Eagle processor-powered quantum computers over the cloud. The paper, "Demonstration of Algorithmic Quantum Speedup for an Abelian Hidden Subgroup Problem," was published in the APS flagship journal Physical Review X.

"There have previously been demonstrations of more modest types of speedups like a polynomial speedup, says Lidar, who is also the cofounder of Quantum Elements, Inc. "But an exponential speedup is the most dramatic type of speed up that we expect to see from quantum computers."

The key milestone for quantum computing, Lidar says, has always been to demonstrate that we can execute entire algorithms with a scaling speedup relative to ordinary "classical" computers.

He clarifies that a scaling speedup doesn't mean that you can do things, say, 100 times faster. "Rather, it's that as you increase a problem's size by including more variables, the gap between the quantum and the classical performance keeps growing. And an exponential speedup means that the performance gap roughly doubles for every additional variable. Moreover, the speedup we demonstrated is unconditional."

What makes a speedup "unconditional," Lidar explains, is that it doesn't rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of "Simon's problem," an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.

Simon's problem involves finding a hidden repeating pattern in a mathematical function and is considered the precursor to what's known as Shor's factoring algorithm, which can be used to break codes and launched the entire field of quantum computing. Simon's problem is like a guessing game, where the players try to guess a secret number known only to the game host (the "oracle"). Once a player guesses two numbers for which the answers returned by the oracle are identical, the secret number is revealed, and that player wins. Quantum players can win this game exponentially faster than classical players.
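A purely classical toy version of that guessing game (an illustration, not the team's quantum implementation) makes the gap concrete: the sketch below builds a hypothetical oracle with a hidden XOR secret and counts how many queries a classical player needs before two guesses collide and reveal it.

```python
import random

# Classical sketch of the "guessing game" form of Simon's problem described
# above: an oracle hides a secret bitstring s, and f(x) == f(y) exactly when
# y == x XOR s. A classical player needs roughly 2**(n/2) queries to find a
# collision; Simon's quantum algorithm needs only about n oracle queries.
# The oracle construction and names here are illustrative.

def make_oracle(s: int):
    # f(x) = min(x, x ^ s) is a 2-to-1 function whose collisions reveal s.
    return lambda x: min(x, x ^ s)

def classical_guess(n_bits: int, oracle) -> tuple[int, int]:
    seen = {}                                 # maps f(x) -> x
    queries = 0
    while True:
        x = random.randrange(2 ** n_bits)
        y = oracle(x)
        queries += 1
        if y in seen and seen[y] != x:
            return seen[y] ^ x, queries       # XOR of colliding inputs = secret
        seen[y] = x

n = 16
secret = random.randrange(1, 2 ** n)
found, queries = classical_guess(n, make_oracle(secret))
print(f"secret recovered: {found == secret}, after {queries} classical queries")
```

For n bits, the classical collision search typically takes on the order of 2^(n/2) queries, while Simon's quantum algorithm needs only about n oracle queries; that widening gap is the exponential separation the experiment targets.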

So, how did the team achieve their exponential speedup? Phattharaporn Singkanipa, USC doctoral researcher and first author, says, "The key was squeezing every ounce of performance from the hardware: shorter circuits, smarter pulse sequences, and statistical error mitigation."

The researchers achieved this in four different ways:

First, they limited the data input by restricting how many secret numbers would be allowed (technically, by limiting the number of 1's in the binary representation of the set of secret numbers). This resulted in fewer quantum logic operations than would be needed otherwise, which reduced the opportunity for error buildup.

Second, they compressed the number of required quantum logic operations as much as possible using a method known as transpilation.

Third, and most crucially, the researchers applied a method called "dynamical decoupling," which means applying sequences of carefully designed pulses to detach the behavior of qubits within the quantum computer from their noisy environment and keep the quantum processing on track. Dynamical decoupling had the most dramatic impact on their ability to demonstrate a quantum speedup.

Finally, they applied "measurement error mitigation," a method that finds and corrects certain errors that are left over after dynamical decoupling due to imperfections in measuring the qubits' state at the end of the algorithm.

Says Lidar, who is also a professor of Chemistry and Physics at the USC Dornsife College of Letters, Arts and Sciences, "The quantum computing community is showing how quantum processors are beginning to outperform their classical counterparts in targeted tasks and are stepping into a territory classical computing simply can't reach. Our result shows that today's quantum computers already firmly lie on the side of a scaling quantum advantage."

He adds that with this new research, "the performance separation cannot be reversed because the exponential speedup we've demonstrated is, for the first time, unconditional." In other words, the quantum performance advantage is becoming increasingly difficult to dispute.

Lidar cautions that "this result doesn't have practical applications beyond winning guessing games, and much more work remains to be done before quantum computers can be claimed to have solved a practical real-world problem."

This will require demonstrating speedups that don't rely on "oracles" that know the answer in advance and making significant advances in methods for further reducing noise and decoherence in ever larger quantum computers. Nevertheless, quantum computers' previously "on-paper promise" to provide exponential speedups has now been firmly demonstrated.

Disclosure: USC is an IBM Quantum Innovation Center. Quantum Elements, Inc. is a startup in the IBM Quantum Network.

Materials provided by University of Southern California.

Scientists discover ‘off switch’ enzyme that could stop heart disease and diabetes

Scientists at The University of Texas at Arlington have identified a new enzyme that can be "turned off" to help the body maintain healthy cholesterol levels — a significant development that could lead to new treatments for diseases that affect millions of Americans.

"We found that by blocking the enzyme IDO1, we are able to control the inflammation in immune cells called macrophages," said Subhrangsu S. Mandal, lead author of a new peer-reviewed study and professor of chemistry at UT Arlington. "Inflammation is linked to so many conditions — everything from heart disease to cancer to diabetes to dementia. By better understanding IDO1 and how to block it, we have the potential to better control inflammation and restore proper cholesterol processing, stopping many of these diseases in their tracks."

Inflammation plays a crucial role in the immune system, helping the body fight infections and heal injuries. But when inflammation becomes abnormal — due to stress, injury or infection — it can damage cells, disrupt normal functions and increase the risk of serious diseases. During these periods, macrophages can't absorb cholesterol properly, which can lead to chronic disease.

The team — Dr. Mandal, postdoctoral researcher Avisankar Chini; doctoral students Prarthana Guha, Ashcharya Rishi and Nagashree Bhat; master's student Angel Covarrubias; and undergraduate researchers Valeria Martinez, Lucine Devejian and Bao Nhi Nguyen — found that the enzyme IDO1 becomes activated during inflammation, producing a substance called kynurenine that interferes with how macrophages process cholesterol.

When IDO1 is blocked, however, macrophages regain their ability to absorb cholesterol. This suggests that reducing IDO1 activity could offer a new way to help prevent heart disease by keeping cholesterol levels in check.

The researchers also found that nitric oxide synthase (NOS), another enzyme linked to inflammation, worsens the effects of IDO1. They believe that inhibiting NOS could provide another potential therapy for managing cholesterol problems driven by inflammation.

"These findings are important because we know too much cholesterol buildup in macrophages can lead to clogged arteries, heart disease and a host of other illnesses," Mandal said. "Understanding how to prevent the inflammation affecting cholesterol regulation could lead to new treatments for conditions like heart disease, diabetes, cancer and others."

Next, the research team plans to dig deeper into how IDO1 interacts with cholesterol regulation and whether other enzymes play a role. If they can find a safe way to block IDO1, it could open the door for more effective drugs to prevent inflammation-related diseases.

This work is supported by the National Institutes of Health (1 R15 HL170257-01), National Science Foundation (NSF AGEP Award 2243017), and the Schwartzberg Companies.

Materials provided by University of Texas at Arlington.

Planets may start forming before stars even finish growing

Signs of planet formation may appear earlier than expected around still-forming baby stars, according to higher-resolution images produced by reanalyzing radio astronomy archive data with improved processing techniques. These newly discovered signs will provide a better understanding of when planet formation begins around a young star, helping to elucidate the process that leads to planets, including habitable ones like Earth.

Planets form in disks composed of low-temperature molecular gas and dust, known as protoplanetary disks, found around protostars. Protostars are stars still in the process of forming. The nascent planets are too small to observe directly, but the gravity from a planet can create detectable patterns like rings or spirals in a protoplanetary disk. However, it is difficult to know when these patterns first appeared due to the limited number of protoplanetary disks that are close enough to Earth to be observed in high resolution.

A research team led by Ayumu Shoshi of Kyushu University and the Academia Sinica Institute of Astronomy and Astrophysics (ASIAA) used improved data processing techniques to search for previously overlooked signs of planet formation in archive data from the ALMA (Atacama Large Millimeter/submillimeter Array) radio telescope. The team reanalyzed data for 78 disks in the Ophiuchus star-forming region, located 460 light-years away in the direction of the constellation Ophiuchus. More than half of the images produced in this study achieved a resolution over three times better than that of previous images.

The new high-resolution images show ring or spiral patterns in 27 of the disks. Of these, 15 were identified for the first time in this study. Combining this new sample with previous work for a different star-forming region, the team found that the characteristic disk substructures emerge in disks larger than 30 au (astronomical units, 1 au = 149,597,870,700 m, the distance between the Earth and the Sun) around stars in the early stage of star formation, just a few hundred thousand years after a star was born. This suggests that planets begin to form at a much earlier stage than previously believed, when the disk still possesses abundant gas and dust. In other words, planets grow together with their very young host stars.
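To give a rough sense of scale (a simple arithmetic check, not a figure from the study), the snippet below uses the astronomical-unit definition and the 460 light-year distance quoted above to estimate the angular size a 30 au structure subtends on the sky:

```python
import math

# Quick arithmetic check using the figures quoted above: the angular size of a
# 30 au structure at the ~460 light-year distance of the Ophiuchus region.
# This illustrates the resolution scale involved; it is not from the paper.

AU_M = 149_597_870_700            # 1 astronomical unit in meters (as defined above)
LY_M = 9.4607e15                  # 1 light-year in meters

structure_m = 30 * AU_M
distance_m = 460 * LY_M

angle_rad = structure_m / distance_m              # small-angle approximation
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"30 au at 460 light-years subtends ~{angle_arcsec:.2f} arcseconds")
```

The result, roughly 0.2 arcseconds, gives a sense of the angular resolution the archive reanalysis had to reach to reveal substructures of this size.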

Materials provided by National Institutes of Natural Sciences.

Ancient DNA reveals leprosy hit the Americas long before colonization

Hansen's Disease, more commonly known as leprosy, is a chronic disease that can lead to physical impairment. Today it exists in over 100 countries, and while the infection is treatable, access to treatment varies widely with socioeconomic conditions. Its mention in historical texts gives us a glimpse into its past impact on population health in Europe and Asia. Prolonged untreated infection can result in characteristic changes in bone, and these have been documented in archaeological skeletons as early as 5000 years ago in Europe, Asia, and Oceania. So far, the absence of these characteristic changes in pre-contact American contexts suggests that leprosy was one of the many diseases introduced to the continent in the colonial period. Thereafter it afflicted humans and curiously also armadillos.

From a genetic perspective, Hansen's Disease is caused by either the globally dominant Mycobacterium leprae or the newly identified and rare Mycobacterium lepromatosis. The recovery of M. leprae from archaeological bone in Europe suggests the disease originated in Eurasia sometime during the Neolithic transition about 7000 years ago. Similar emergence estimates have been proposed for other notorious diseases such as plague, tuberculosis, and typhoid fever. Ancient genomes for M. lepromatosis have remained elusive, and these may hold important clues on the history of Hansen's Disease.

Past disease in the American continent

We know comparatively little about the infectious disease experience of the diverse communities of people living in the Americas before the colonial period. This accounts for almost 20,000 years of human history, and the diverse ecosystems into which humans integrated across the continent would have presented challenges to the immune system not otherwise encountered in other parts of the world. We know very little of these diseases, as they were smothered by the onslaught of pathogens that Europeans later brought with them. Archaeological study of human bones from the pre-contact Americas confirms that the period was far from disease-free, but often the traces seen on the bones aren't specific enough to assign to a known disease.

"Ancient DNA has become a great tool that allows us to dig deeper into diseases that have had a long history in the Americas," says Kirsten Bos, group leader for Molecular Paleopathology at the Max Planck Institute for Evolutionary Anthropology. She and her team have been studying pathological bone from American context for over a decade. Some diseases that the group has found were expected – just last year they found evidence that the family of diseases closely related to syphilis had its roots in the Americas, which many had suspected. "The advanced techniques now used to study ancient pathogen DNA allows us to look beyond the suspects and into other diseases that might not be expected from the context," she adds.

Re-writing the history of Hansen's Disease

Bos's team worked closely with researchers from Argentina and Chile to both identify bones suitable for analysis and to carry out the meticulous work of isolating the DNA of ancient pathogens. Doctoral candidate Darío Ramirez of the University of Córdoba, Argentina, worked extensively with such material, and was the first to identify a genetic signature related to leprosy in some 4000-year-old skeletons from Chile. "We were initially suspicious, since leprosy is regarded as a colonial-era disease, but more careful evaluation of the DNA revealed the pathogen to be of the lepromatosis form." This provided the first clue that M. lepromatosis and M. leprae, though nominally both pathogens that cause Hansen's Disease, might have very different histories. Reconstruction of the genome was key in looking into this. While putting the molecular puzzle of an ancient genome back together is never an easy task, these pathogens in particular had "amazing preservation, which is uncommon in ancient DNA, especially from specimens of that age," comments Lesley Sitter, a postdoctoral researcher in Bos's team who carried out the analysis.

The pathogen is related to all known modern forms of M. lepromatosis, but as there are so few genomes available for comparison, there is still much to be learned about it. This work has shown that a pathogen considered rare in a modern context caused disease for thousands of years in the Americas. Rodrigo Nores, professor of Anthropology at the University of Córdoba, Argentina, is convinced that more cases, both ancient and modern, will be identified in the coming years: "this disease was present in Chile as early as 4000 years ago, and now that we know it was there, we can specifically look for it in other contexts." Once more genomes surface, we'll be able to piece together further details of its history and better understand its global distribution today. The pathogen has recently been discovered in squirrel populations from the United Kingdom and Ireland, but in the Americas it has yet to be found in any species other than humans. With so little data, mystery still surrounds its origin. "It remains to be determined if the disease originated in the Americas, or if it joined some of the first settlers from Eurasia," adds Bos. "So far the evidence points in the direction of an American origin, but we'll need more genomes from other time periods and contexts to be sure."

Materials provided by Max Planck Institute for Evolutionary Anthropology.

This AI tracks lung tumors as you breathe — and it might save lives

In radiation therapy, precision can save lives. Oncologists must carefully map the size and location of a tumor before delivering high-dose radiation to destroy cancer cells while sparing healthy tissue. But this process, called tumor segmentation, is still done manually, takes time, varies between doctors — and can lead to critical tumor areas being overlooked.

Now, a team of Northwestern Medicine scientists has developed an AI tool called iSeg that not only matches doctors in accurately outlining lung tumors on CT scans but can also identify areas that some doctors may miss, reports a large new study.

Unlike earlier AI tools that focused on static images, iSeg is the first 3D deep learning tool shown to segment tumors as they move with each breath — a critical factor in planning radiation treatment, which half of all cancer patients in the U.S. receive during their illness.

"We're one step closer to cancer treatments that are even more precise than any of us imagined just a decade ago," said senior author Dr. Mohamed Abazeed, chair and professor of radiation oncology at Northwestern University Feinberg School of Medicine.

"The goal of this technology is to give our doctors better tools," added Abazeed, who leads a research team developing data-driven tools to personalize and improve cancer treatment and is a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

The study was published today (June 30) in the journal npj Precision Oncology.

The Northwestern scientists trained iSeg using CT scans and doctor-drawn tumor outlines from hundreds of lung cancer patients treated at nine clinics within the Northwestern Medicine and Cleveland Clinic health systems. That's far beyond the small, single-hospital datasets used in many past studies.

After training, the AI was tested on patient scans it hadn't seen before. Its tumor outlines were then compared to those drawn by physicians. The study found that iSeg consistently matched expert outlines across hospitals and scan types. It also flagged additional areas that some doctors missed — and those missed areas were linked to worse outcomes if left untreated. This suggests iSeg may help catch high-risk regions that often go unnoticed.
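The article does not say which overlap metric was used, but a common way to compare an AI-generated contour against a physician-drawn one on a 3D CT grid is the Dice similarity coefficient; the sketch below (with entirely hypothetical masks) shows that computation:

```python
import numpy as np

# Illustrative sketch: the Dice similarity coefficient is a standard way to
# quantify agreement between two 3D segmentation masks. The masks below are
# hypothetical stand-ins for real CT tumor contours, not data from the study.

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Hypothetical 3D masks (depth x height x width), e.g. from a CT volume.
rng = np.random.default_rng(42)
physician_mask = rng.random((64, 128, 128)) > 0.97
ai_mask = physician_mask.copy()
ai_mask[32, 60:70, 60:70] = True       # AI flags an extra region the physician missed

print(f"Dice overlap: {dice_coefficient(ai_mask, physician_mask):.3f}")
```

A score near 1.0 indicates close agreement, while regions flagged only by the model, like the extra block above, lower the score and can be reviewed as potentially missed tumor areas.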

"Accurate tumor targeting is the foundation of safe and effective radiation therapy, where even small errors in targeting can impact tumor control or cause unnecessary toxicity," Abazeed said.

"By automating and standardizing tumor contouring, our AI tool can help reduce delays, ensure fairness across hospitals and potentially identify areas that doctors might miss — ultimately improving patient care and clinical outcomes," added first author Sagnik Sarkar, a senior research technologist at Feinberg who holds a Master of Science in artificial intelligence from Northwestern.

Clinical deployment possible 'within a couple years'

The research team is now testing iSeg in clinical settings, comparing its performance to physicians in real time. They are also integrating features like user feedback and working to expand the technology to other tumor types, such as liver, brain and prostate cancers. The team also plans to adapt iSeg to other imaging methods, including MRI and PET scans.

"We envision this as a foundational tool that could standardize and enhance how tumors are targeted in radiation oncology, especially in settings where access to subspecialty expertise is limited," said co- author Troy Teo, instructor of radiation oncology at Feinberg.

"This technology can help support more consistent care across institutions, and we believe clinical deployment could be possible within a couple of years," Teo added.

This study is titled "Deep learning for automated, motion-resolved tumor segmentation in radiotherapy."

Materials provided by Northwestern University.