Heavy particles, big secrets: What happened right after the Big Bang

An international team of scientists has published a new report that moves towards a better understanding of the behaviour of some of the heaviest particles in the universe under extreme conditions, which are similar to those just after the Big Bang. The paper, published in the journal Physics Reports, is signed by physicists Juan M. Torres-Rincón, from the Institute of Cosmos Sciences at the University of Barcelona (ICCUB), Santosh K. Das, from the Indian Institute of Technology Goa (India), and Ralf Rapp, from Texas A&M University (United States).

The authors have published a comprehensive review that explores how particles containing heavy quarks (known as charm and bottom hadrons) interact in a hot, dense environment called hadronic matter. This environment is created in the last phase of high-energy collisions of atomic nuclei, such as those taking place at the Large Hadron Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC). The new study highlights the importance of including hadronic interactions in simulations to accurately interpret data from experiments at these large scientific infrastructures.

The study broadens the perspective on how matter behaves under extreme conditions and helps to solve some great unknowns about the origin of the universe.

Reproducing the primordial universe

When two atomic nuclei collide at near-light speeds, they generate temperatures more than 1,000 times higher than those at the centre of the Sun. These collisions briefly produce a state of matter called a quark-gluon plasma (QGP), a soup of fundamental particles that existed microseconds after the Big Bang. As this plasma cools, it transforms into hadronic matter, a phase composed of particles such as protons and neutrons, as well as other baryons and mesons.
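
As a quick sanity check on that comparison (a back-of-the-envelope sketch using standard constants, not figures from the paper), the QGP crossover temperature commonly quoted in the field, about 155 MeV, can be converted to kelvin and compared with the Sun's core:

```python
# Back-of-envelope check (illustrative; standard constants, not values
# from the paper): convert the commonly cited QGP crossover temperature
# of ~155 MeV into kelvin and compare with the Sun's core temperature.
K_B = 1.380649e-23        # Boltzmann constant, J/K
MEV = 1.602176634e-13     # 1 MeV in joules

T_qgp = 155 * MEV / K_B   # ~1.8e12 K
T_sun_core = 1.5e7        # K, approximate solar-core temperature

print(f"QGP crossover: {T_qgp:.2e} K")
# ~120,000x, comfortably "more than 1,000 times" hotter
print(f"Ratio to solar core: {T_qgp / T_sun_core:.0f}x")
```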

The study focuses on what happens to heavy-flavour hadrons (particles containing charm or bottom quarks, such as D and B mesons) during this transition and the hadronic phase expansion that follows it.

Heavy quarks are like tiny sensors. Being so massive, they are produced just after the initial nuclear collision and move more slowly, thus interacting differently with the surrounding matter. Knowing how they scatter and spread is key to learning about the properties of the medium through which they travel.

Researchers have reviewed a wide range of theoretical models and experimental data to understand how heavy hadrons, such as D and B mesons, interact with light particles in the hadronic phase. They have also examined how these interactions affect observable quantities such as particle flux and momentum loss.
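
To make "scatter and spread" concrete: transport studies of this kind commonly describe a heavy hadron in the medium with Langevin dynamics, a steady drag plus random thermal kicks, with the two tied together by the fluctuation-dissipation relation. Below is a minimal one-dimensional sketch; the mass and temperature are standard ballpark values, while the drag coefficient is an assumed placeholder rather than a number from the review:

```python
import numpy as np

# Minimal 1D Langevin sketch of heavy-hadron diffusion in a thermal medium.
# Placeholder parameters; the actual drag/diffusion coefficients are what
# reviews like this one compile from theory and experiment.
rng = np.random.default_rng(0)

M = 1.87      # GeV, D-meson mass
T = 0.15      # GeV, temperature near the QGP-hadronic transition
gamma = 0.05  # 1/(fm/c), assumed drag coefficient (illustrative)
dt = 0.01     # fm/c, time step

# Fluctuation-dissipation: momentum-diffusion coefficient kappa = 2*gamma*M*T,
# so that the momentum distribution relaxes to the thermal one.
kappa = 2.0 * gamma * M * T

p = 0.0  # start the heavy particle at rest
for _ in range(10_000):
    p += -gamma * p * dt + rng.normal(0.0, np.sqrt(kappa * dt))

# After many relaxation times, <p^2> approaches M*T (nonrelativistic estimate).
print(f"final p = {p:.3f} GeV; thermal scale sqrt(M*T) = {np.sqrt(M*T):.3f} GeV")
```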

"To really understand what we see in the experiments, it is crucial to observe how the heavy particles move and interact also during the later stages of these nuclear collisions," says Juan M. Torres-Rincón, member of the Department of Quantum Physics and Astrophysics and ICCUB.

"This phase, when the system has already cooled down, still plays an important role in how the particles lose energy and flow together. It is also necessary to address the microscopic and transport properties of these heavy systems right at the transition point to the quark-gluon plasma," he continues. "This is the only way to achieve the degree of precision required by current experiments and simulations."

A simple analogy can be used to better understand these results: when we drop a heavy ball into a crowded pool, even after the biggest waves have dissipated, the ball continues to move and collide with people. Similarly, heavy particles created in nuclear collisions continue to interact with other particles around them, even after the hottest and most chaotic phase. These continuous interactions subtly modify the motion of particles, and studying these changes helps scientists to better understand the conditions of the early universe. Ignoring this phase would therefore mean missing an important part of the story.

Understanding how heavy particles behave in hot matter is fundamental to mapping the properties of the early universe and the fundamental forces that rule it. The findings also pave the way for future experiments at lower energies, such as those planned at CERN's Super Proton Super Synchrotron (SPS) and the future FAIR facility in Darmstadt, Germany.

Materials provided by University of Barcelona. Note: Content may be edited for style and length.

Saving energy: New method guides magnetism without magnets

Researchers at Paul Scherrer Institute PSI have demonstrated an innovative method to control magnetism in materials using an energy-efficient electric field. The discovery focuses on materials known as magnetoelectrics, which offer promise for next-generation energy technologies, data storage, energy conversion, and medical devices. The findings are published in the journal Nature Communications.

With AI and data centers demanding more and more energy, scientists are searching for smarter, greener technologies. That's where magnetoelectric materials come in — special compounds where electric and magnetic properties are linked. This connection lets researchers control magnetism using electric fields, which could pave the way for super-energy-efficient memory and computing devices.

One such magnetoelectric material is the olive-green crystal copper oxyselenide (Cu₂OSeO₃). At low temperatures, the atomic spins arrange themselves into exotic magnetic textures, forming structures such as helices and cones. These patterns are much larger than the underlying atomic lattice and not fixed to its geometry, making them highly tuneable.

Neutrons watch as electric fields redirect magnetism

Now, scientists at PSI have demonstrated that an electric field can steer these magnetic textures inside copper oxyselenide. In typical materials, magnetic structures – formed from the twisting and alignment of atomic spins – are locked in specific orientations. In copper oxyselenide, with the right voltage, the researchers could nudge and reorient them.

This is the first time that the propagation direction of a magnetic texture could be continuously reoriented in a material using an electric field – an effect known as magnetoelectric deflection.

To investigate the magnetic structures, the team used the SANS-I beamline at the Swiss Spallation Neutron Source SINQ, a facility that uses beams of neutrons to map the arrangement and orientation of magnetic structures within a solid at the nanoscale. A custom-designed sample environment enabled the researchers to apply a high electric field whilst simultaneously probing the magnetisation inside the crystal with small-angle neutron scattering (SANS).

"The ability to steer such large magnetic textures with electric fields shows what's possible when creative experiments are paired with world-class research infrastructures," says Jonathan White, beamline scientist at PSI. "The reason we can capture such a subtle effect as magnetoelectric deflection is due to the exceptional resolution and versatility of SANS-I."

The newly discovered magnetoelectric deflection response prompted a deeper investigation into its underlying physics. What they found was intriguing: the magnetic structures didn't just respond — they behaved in three distinct ways depending on the strength of the electric field. Low electric fields gently deflected the magnetic structures with a linear response. Medium fields brought in more complex, non-linear behaviour. High fields caused dramatic 90-degree flips in the direction of propagation of the magnetic texture.
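
As a purely illustrative toy (not the paper's model), the three regimes can be pictured as a piecewise response of the deflection angle to the applied field; the thresholds and functional forms below are invented for illustration only:

```python
import numpy as np

# Toy illustration (not the study's physics): a piecewise response sketching
# the three reported regimes of magnetoelectric deflection. Thresholds,
# slopes, and units are invented purely to visualize the qualitative picture.
def deflection_deg(E, E_lin=1.0, E_flip=3.0, slope=10.0):
    """Deflection angle of the texture's propagation direction vs field E (arb. units)."""
    if E < E_lin:                                   # low field: linear deflection
        return slope * E
    elif E < E_flip:                                # medium field: non-linear growth
        return slope * E_lin + 15.0 * (E - E_lin) ** 2
    else:                                           # high field: abrupt 90-degree flip
        return 90.0

for E in np.linspace(0, 4, 9):
    print(f"E = {E:.1f} -> {deflection_deg(E):5.1f} deg")
```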

"Each of these regimes present unique signatures that could be integrated into sensing and storage devices," says Sam Moody, postdoctoral researcher at PSI and lead author of the study. "One particularly exciting possibility is hybrid devices that use the ability to tune the onset of these regimes by varying the strength of the applied magnetic field."

The magnetoelectric deflection response offers a powerful new tool to control magnetism without relying on energy-intensive magnetic fields. The high level of flexibility with which the researchers could manipulate the magnetism makes their discovery an exciting prospect for applications in sustainable technology.

Materials provided by Paul Scherrer Institute. Note: Content may be edited for style and length.

Monster salamander with powerful jaws unearthed in Tennessee fossil find

A giant, strong-jawed salamander once tunneled through ancient Tennessee soil.

And thanks to a fossil unearthed near East Tennessee State University, scientists now better understand how it helped shape Appalachian amphibian diversity.

The giant plethodontid salamander now joins the remarkable roster of fossils from the Gray Fossil Site & Museum.

The findings appeared in the journal Historical Biology, authored by a team of researchers from the Gray Fossil Site & Museum and ETSU: Assistant Collections Manager Davis Gunnin, Director and Professor of Geosciences Dr. Blaine Schubert, Head Curator and Associate Professor of Geosciences Dr. Joshua Samuels, Museum Specialist Keila Bredehoeft and Assistant Collections Manager Shay Maden.

"Our researchers are not only uncovering ancient life, they are modeling the kind of collaboration and curiosity that define ETSU," said Dr. Joe Bidwell, dean of the College of Arts and Sciences. "This exciting find underscores the vital role our university plays in preserving and exploring Appalachia's deep natural history."

Today, Southern Appalachian forests are renowned for their diversity and abundance of salamander species, especially lungless salamanders of the family Plethodontidae. Tennessee alone is home to more than 50 different salamanders – one in eight of all living salamander species.

Dusky salamanders, common in Appalachian Mountain streams, likely evolved from burrowing ancestors, relatives of Alabama's Red Hills salamander, a large, underground-dwelling species with a worm-like body and small limbs. Their explosive diversification began around 12 million years ago, shaping much of the region's salamander diversity today.

Dynamognathus robertsoni, the powerful, long-extinct salamander recently discovered at the site, had a bite to match its name. Roughly 16 inches long, it ranked among the largest salamanders ever to crawl across the region's ancient forests.

"Finding something that looks like a Red Hills salamander here in East Tennessee was a bit of a surprise," Gunnin said. "Today they're only found in a few counties in southern Alabama, and researchers thought of them as a highly specialized dead-end lineage not particularly relevant to the evolution of the dusky salamanders. Discovery of Dynamognathus robertsoni here in Southern Appalachia shows that these types of relatively large, burrowing salamanders were once more widespread in eastern North America and may have had a profound impact on the evolution of Appalachian salamander communities."

Dynamognathus robertsoni is "the largest plethodontid salamander and one of the largest terrestrial salamanders in the world," Gunnin said. Dusky salamanders in the Appalachians today reach only seven inches long at their largest.

Researchers believe predators like this one may have driven the rapid evolution of Appalachian stream-dwelling salamanders, highlighting the region's key role in salamander diversification.

"The warmer climate in Tennessee 5 million years ago, followed by cooling during the Pleistocene Ice Ages, may have restricted large, burrowing salamanders to lower latitudes, like southern Alabama, where the Red Hills salamander lives today," said Samuels.

Maden explained the naming of this new salamander.

"This group of salamanders has unusual cranial anatomy that gives them a strong bite force, so the genus name – Dynamognathus – Greek for 'powerful jaw,' is given to highlight the great size and power of the salamander compared to its living relatives," said Maden.

The species name robertsoni honors longtime Gray Fossil Site volunteer Wayne Robertson, who discovered the first specimen of the new salamander and has personally sifted through more than 50 tons of fossil-bearing sediment since 2000.

From volunteers and students to staff to faculty, the ETSU Gray Fossil Site & Museum is represented by a dynamic team of lifelong learners and is one of the many reasons ETSU is the flagship institution of Appalachia.

"The latest salamander publication is a testament to this teamwork and search for answers," said Schubert. "When Davis Gunnin, the lead author, began volunteering at the museum as a teenager with an interest in fossil salamanders, I was thrilled, because this region is known for its salamander diversity today, and we know so little about their fossil record. Thus, the possibility of finding something exciting seemed imminent."

Materials provided by East Tennessee State University. Note: Content may be edited for style and length.

Biggest boom since the Big Bang? Astronomers record 25x supernova brightness

Astronomers from the University of Hawaiʻi’s Institute for Astronomy (IfA) have discovered the most energetic cosmic explosions yet recorded, naming the new class of events “extreme nuclear transients” (ENTs). These extraordinary phenomena occur when massive stars—at least three times heavier than our Sun—are torn apart after wandering too close to a supermassive black hole. Their disruption releases vast amounts of energy visible across enormous distances. The team's findings were recently detailed in the journal Science Advances.

"We’ve observed stars getting ripped apart as tidal disruption events for over a decade, but these ENTs are different beasts, reaching brightnesses nearly ten times more than what we typically see," said Jason Hinkle, who led the study as the final piece of his doctoral research at IfA. “Not only are ENTs far brighter than normal tidal disruption events, but they remain luminous for years, far surpassing the energy output of even the brightest known supernova explosions.”

The immense luminosities and energies of these ENTs are truly unprecedented. The most energetic ENT studied, named Gaia18cdj, emitted an astonishing 25 times more energy than the most energetic supernovae known. While typical supernovae emit as much energy in just one year as the Sun does in its 10 billion-year lifetime, ENTs radiate the energy of 100 Suns over a single year.
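
Taking the article's comparisons at face value, the energy scales can be checked with standard solar constants (a rough sketch, not numbers quoted by the study; "the energy of 100 Suns" is read here as 100 solar-lifetime outputs):

```python
# Rough energy-scale arithmetic based on the comparisons in the text.
# Standard solar constants; everything else follows from the article's wording.
L_SUN = 3.828e26          # W, solar luminosity
GYR10 = 1e10 * 3.156e7    # 10 billion years in seconds

e_sun_lifetime = L_SUN * GYR10    # ~1.2e44 J emitted over the Sun's 10-Gyr life
e_typical_sn = e_sun_lifetime     # article: a supernova emits this in about a year
e_ent = 100 * e_sun_lifetime      # article: "the energy of 100 Suns" in a single year

print(f"Sun over 10 Gyr:   {e_sun_lifetime:.1e} J")
print(f"Typical supernova: {e_typical_sn:.1e} J")
print(f"ENT in one year:   {e_ent:.1e} J  -> ~100x a typical supernova")
```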

ENTs were first uncovered when Hinkle began a systematic search of public transient surveys for long-lived flares emanating from the centers of galaxies. He identified two unusual flares in data from the European Space Agency’s Gaia mission that brightened over a timescale much longer than known transients and without characteristics common to known transients.

"Gaia doesn’t tell you what a transient is, just that something changed in brightness," said Hinkle. "But when I saw these smooth, long-lived flares from the centers of distant galaxies, I knew we were looking at something unusual."

The discovery launched a multi-year follow-up campaign to figure out what these sources were. With help from UH’s Asteroid Terrestrial-impact Last Alert System, the W. M. Keck Observatory, and other telescopes across the globe, the team gathered data across the electromagnetic spectrum. Because ENTs evolve slowly over several years, capturing their full story took patience and persistence. Recently, a third event with similar properties was discovered by the Zwicky Transient Facility and reported independently by two teams, adding strong support that ENTs are a distinct new class of extreme astrophysical events.

The authors determined these extraordinary events could not be supernovae because they release far more energy than any known stellar explosion. The sheer energy budget, combined with their smooth and prolonged light curves, firmly pointed to an alternative mechanism: accretion onto a supermassive black hole.

However, ENTs differ significantly from normal black hole accretion, which typically shows irregular and unpredictable changes in brightness. The smooth and long-lived flares of ENTs indicated a distinct physical process—the gradual accretion of a disrupted star by a supermassive black hole.

Benjamin Shappee, Associate Professor at IfA and study co-author, emphasized the implications: "ENTs provide a valuable new tool for studying massive black holes in distant galaxies. Because they're so bright, we can see them across vast cosmic distances—and in astronomy, looking far away means looking back in time. By observing these prolonged flares, we gain insights into black hole growth when the universe was half its current age, when galaxies were happening places—forming stars and feeding their supermassive black holes 10 times more vigorously than they do today."

The rarity of ENTs, occurring at least 10 million times less frequently than supernovae, makes their detection challenging and dependent on sustained monitoring of the cosmos. Future observatories like the Vera C. Rubin Observatory and NASA’s Roman Space Telescope promise to uncover many more of these spectacular events, revolutionizing our understanding of black hole activity in the distant, early universe.

"These ENTs don’t just mark the dramatic end of a massive star’s life. They illuminate the processes responsible for growing the largest black holes in the universe," concluded Hinkle.

Materials provided by University of Hawaii at Manoa. Note: Content may be edited for style and length.

These beetles can see a color most insects can’t

Insect eyes are generally sensitive to ultraviolet, blue and green light. With the exception of some butterflies, they cannot see the color red. Nevertheless, bees and other insects are also attracted to red flowers such as poppies. In this case, however, they are not attracted by the red color itself but by the UV light reflected by the poppy flower.

However, two beetle species from the eastern Mediterranean region can indeed perceive the color red, as an international research team was able to show. The beetles are Pygopleurus chrysonotus and Pygopleurus syriacus from the family Glaphyridae. They feed mainly on pollen and prefer to visit plants with red flowers, such as poppies, anemones and buttercups.

Beetles Have Photoreceptors for Long-Wave Light

'To our knowledge, we are the first to have experimentally demonstrated that beetles can actually perceive the color red,' says Dr Johannes Spaethe from the Chair of Zoology II at the Biocentre of Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany. He gained the new insights together with Dr Elena Bencúrová from the Würzburg Bioinformatics Chair and researchers from the Universities of Ljubljana (Slovenia) and Groningen (Netherlands). The study has been published in the Journal of Experimental Biology.

The scientists used electrophysiology, behavioral experiments and color trapping. Among other things, they found that the two Mediterranean beetles possess four types of photoreceptors in their retinas that respond to UV light as well as blue, green and deep red light. Field experiments also showed that the animals use true color vision to identify red targets and that they have a clear preference for red colors.

New Model System for Ecological and Evolutionary Questions

The researchers consider the Glaphyrid family to be a promising new model system for investigating the visual ecology of beetles and the evolution of flower signals and flower detection by pollinators.

'The prevailing opinion in science is that flower colors have adapted to the visual systems of pollinators over the course of evolution,' says Johannes Spaethe. However, based on the new findings, it is now possible to speculate whether this evolutionary scenario also applies to Glaphyrid beetles and the flowers they visit.

Why do the researchers think this? The three genera of this beetle family (Eulasia, Glaphyrus and Pygopleurus) show considerable differences in their preferences for flower colors, which vary between red, violet, white and yellow. This suggests that the physiological and/or behavioral basis for seeing red and other colors is relatively labile.

The great variety of flower colors in the Mediterranean region and the considerable variation in the color preferences of the beetles make it plausible that the visual systems of these pollinators adapt to flower colors more readily than is commonly assumed.

Materials provided by University of Würzburg. Note: Content may be edited for style and length.

MIT uncovers the hidden playbook your brain uses to outsmart complicated problems

The human brain is very good at solving complicated problems. One reason for that is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.

This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.

While there is a great deal of behavioral evidence demonstrating humans' skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.

In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.

The researchers were also able to determine the circumstances under which people choose each of those strategies.

"What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don't have the means to solve a complex problem, we manage by using simpler heuristics that get the job done," says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT's McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.

Mahdi Ramadan PhD '24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behavior. Nicholas Watters PhD '25 is also a co-author.

When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.

Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don't know much about how the brain decides which one to use in a given situation.

"This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?" Jazayeri says.

To tackle this question, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.

The task requires participants to predict the path of a ball as it moves through four possible trajectories in a maze. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball's path is a task that is impossible for humans to solve with perfect accuracy.

"It requires four parallel simulations in your mind, and no human can do that. It's analogous to having four conversations at a time," Jazayeri says. "The task allows us to tap into this set of algorithms that the humans use, because you just can't solve it optimally."

The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.

For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.

The researchers compared the subjects' performance with the models' predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.

That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, then continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn't compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.
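
A minimal sketch of that strategy, with an invented two-junction maze, noisy interval timing, and a memory-quality parameter (all values hypothetical, purely to illustrate hierarchical tracking with occasional counterfactual revision, not the paper's models):

```python
import random

# Toy sketch of hierarchical reasoning with occasional counterfactual
# revision in a two-junction maze. All parameters are invented.
random.seed(1)

def run_trial(timing_noise=0.15, memory_quality=0.7):
    true_path = [random.choice("LR"), random.choice("LR")]

    # Hierarchical step: commit to a single branch at the first junction
    # instead of simulating all trajectories in parallel.
    guess = [random.choice("LR")]

    # The auditory cue's timing is consistent with the true first branch,
    # but it is perceived through noisy interval estimation.
    cue_matches_guess = (guess[0] == true_path[0])
    observed_match = cue_matches_guess if random.random() > timing_noise else not cue_matches_guess

    # Counterfactual step: revise the first choice only when the cue seems
    # inconsistent AND the remembered timing is judged reliable enough.
    if not observed_match and random.random() < memory_quality:
        guess[0] = "L" if guess[0] == "R" else "R"

    # Second junction handled the same way in spirit; random here for brevity.
    guess.append(random.choice("LR"))
    return guess == true_path

accuracy = sum(run_trial() for _ in range(10_000)) / 10_000
print(f"toy accuracy: {accuracy:.2f}")  # above chance (0.25), far from perfect
```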

Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.

"People rely on counterfactuals to the degree that it's helpful," Jazayeri says. "People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who's really good at retrieving information from the recent past, you may go back to the other side."

To further validate their results, the researchers created a machine-learning neural network and trained it to complete the task. A machine-learning model trained on this task will track the ball's path accurately and make the correct prediction every time, unless the researchers impose limitations on its performance.

When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model's ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. If the researchers reduced the model's memory recall ability, it began to resort to counterfactual reasoning only if it thought its recall would be good enough to get the right answer — just as humans do.

"What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior," Jazayeri says. "This is really saying that humans are acting rationally under the constraints that they have to function under."

By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.

The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.

Materials provided by Massachusetts Institute of Technology. Note: Content may be edited for style and length.

This tiny patch could replace biopsies—and revolutionize how we detect cancer

A patch containing tens of millions of microscopic nanoneedles could soon replace traditional biopsies, scientists have found.

The patch offers a painless and less invasive alternative for millions of patients worldwide who undergo biopsies each year to detect and monitor diseases like cancer and Alzheimer's.

Biopsies are among the most common diagnostic procedures worldwide, performed millions of times every year to detect diseases. However, they are invasive, can cause pain and complications, and can deter patients from seeking early diagnosis or follow-up tests. Traditional biopsies also remove small pieces of tissue, limiting how often and how comprehensively doctors can analyse diseased organs like the brain.

Now, scientists at King's College London have developed a nanoneedle patch that painlessly collects molecular information from tissues without removing or damaging them. This could allow healthcare teams to monitor disease in real time and perform multiple, repeatable tests from the same area – something impossible with standard biopsies.

Because the nanoneedles are 1,000 times thinner than a human hair and do not remove tissue, they cause no pain or damage, a marked contrast with standard biopsies. For many, this could mean earlier diagnosis and more regular monitoring, transforming how diseases are tracked and treated.

Dr Ciro Chiappini, who led the research published today in Nature Nanotechnology, said: "We have been working on nanoneedles for twelve years, but this is our most exciting development yet. It opens a world of possibilities for people with brain cancer, Alzheimer's, and for advancing personalised medicine. It will allow scientists – and eventually clinicians – to study disease in real time like never before."

The patch is covered in tens of millions of nanoneedles. In preclinical studies, the team applied the patch to brain cancer tissue taken from human biopsies and mouse models. The nanoneedles extracted molecular 'fingerprints' — including lipids, proteins, and mRNAs — from cells, without removing or harming the tissue.

The tissue imprint is then analysed using mass spectrometry and artificial intelligence, giving healthcare teams detailed insights into whether a tumour is present, how it is responding to treatment, and how disease is progressing at the cellular level.

Dr Chiappini said: "This approach provides multidimensional molecular information from different types of cells within the same tissue. Traditional biopsies simply cannot do that. And because the process does not destroy the tissue, we can sample the same tissue multiple times, which was previously impossible."

This technology could be used during brain surgery to help surgeons make faster, more precise decisions. For example, by applying the patch to a suspicious area, results could be obtained within 20 minutes and guide real-time decisions about removing cancerous tissue.

Made using the same manufacturing techniques as computer chips, the nanoneedles can be integrated into common medical devices such as bandages, endoscopes and contact lenses.

Dr Chiappini added: "This could be the beginning of the end for painful biopsies. Our technology opens up new ways to diagnose and monitor disease safely and painlessly – helping doctors and patients make better, faster decisions."

The breakthrough was possible through close collaboration across nanoengineering, clinical oncology, cell biology, and artificial intelligence — each field bringing essential tools and perspectives that, together, unlocked a new approach to non-invasive diagnostics.

The study was supported by the European Research Council through its flagship Starting Grant programme, Wellcome Leap, and UKRI's EPSRC and MRC, which enabled acquisition of key analytical instrumentation.

Materials provided by King's College London. Note: Content may be edited for style and length.

‘Forever chemicals’ toxic cousin: MCCPs detected in U.S. air for first time

Once in a while, scientific research resembles detective work. Researchers head into the field with a hypothesis and high hopes of finding specific results, but sometimes, there's a twist in the story that requires a deeper dive into the data.

That was the case for the University of Colorado Boulder researchers who led a field campaign in an agricultural region of Oklahoma. Using a high-tech instrument to measure how aerosol particles form and grow in the atmosphere, they stumbled upon something unexpected: the first-ever airborne measurements of Medium Chain Chlorinated Paraffins (MCCPs), a kind of toxic organic pollutant, in the Western Hemisphere. Their results published today in ACS Environmental Au.

"It's very exciting as a scientist to find something unexpected like this that we weren't looking for," said Daniel Katz, CU Boulder chemistry PhD student and lead author of the study. "We're starting to learn more about this toxic, organic pollutant that we know is out there, and which we need to understand better."

MCCPs are currently under consideration for regulation by the Stockholm Convention, a global treaty to protect human health from long-standing and widespread chemicals. While the toxic pollutants have been measured in Antarctica and Asia, researchers haven't been sure how to document them in the Western Hemisphere's atmosphere until now.

MCCPs are used in fluids for metal working and in the construction of PVC and textiles. They are often found in wastewater and as a result, can end up in biosolid fertilizer, also called sewage sludge, which is created when liquid is removed from wastewater in a treatment plant. In Oklahoma, researchers suspect the MCCPs they identified came from biosolid fertilizer in the fields near where they set up their instrument.

"When sewage sludges are spread across the fields, those toxic compounds could be released into the air," Katz said. "We can't show directly that that's happening, but we think it's a reasonable way that they could be winding up in the air. Sewage sludge fertilizers have been shown to release similar compounds."

MCCPs' little cousins, Short Chain Chlorinated Paraffins (SCCPs), are currently regulated by the Stockholm Convention and, since 2009, by the EPA in the United States. Regulation came after studies found the toxic pollutants, which travel far and last a long time in the atmosphere, were harmful to human health. But researchers hypothesize that the regulation of SCCPs may have increased MCCPs in the environment.

"We always have these unintended consequences of regulation, where you regulate something, and then there's still a need for the products that those were in," said Ellie Browne, CU Boulder chemistry professor, CIRES Fellow, and co-author of the study. "So they get replaced by something."

Measurement of aerosols led to a new and surprising discovery

Using a nitrate chemical ionization mass spectrometer, which allows scientists to identify chemical compounds in the air, the team measured air at the agricultural site 24 hours a day for one month. As Katz cataloged the data, he noticed isotopic patterns that he immediately recognized as different from those of known chemical compounds. With some additional research, he identified them as the chlorinated paraffins characteristic of MCCPs.
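
Those distinctive isotopic patterns come from chlorine's two stable isotopes (roughly 76% ³⁵Cl and 24% ³⁷Cl): a molecule carrying many chlorine atoms spreads into a characteristic binomial envelope of masses. A short sketch of that generic calculation (standard chemistry, not the study's analysis code):

```python
from math import comb

# Chlorine isotope envelope for a molecule with n_cl chlorine atoms.
# Natural 35Cl/37Cl abundances; the binomial peak pattern is what makes
# chlorinated paraffins stand out in mass-spectrometry data.
P35, P37 = 0.7576, 0.2424

def cl_isotope_pattern(n_cl):
    """Relative intensities of the M, M+2, M+4, ... peaks from chlorine alone."""
    return [comb(n_cl, k) * P35 ** (n_cl - k) * P37 ** k for k in range(n_cl + 1)]

# A medium-chain chlorinated paraffin might carry ~6-8 chlorines:
for k, intensity in enumerate(cl_isotope_pattern(7)):
    print(f"M+{2*k:<2d} {intensity:.3f}")
```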

Katz says the makeup of MCCPs is similar to that of PFAS, long-lasting toxic chemicals that break down slowly over time. The presence of PFAS, known as "forever chemicals," in soils recently led the Oklahoma Senate to ban biosolid fertilizer.

Now that researchers know how to measure MCCPs, the next step might be to measure the pollutants at different times throughout the year to understand how levels change each season. Many unknowns surrounding MCCPs remain, and there's much more to learn about their environmental impacts.

"We identified them, but we still don't know exactly what they do when they are in the atmosphere, and they need to be investigated further," Katz said. "I think it's important that we continue to have governmental agencies that are capable of evaluating the science and regulating these chemicals as necessary for public health and safety."

Materials provided by University of Colorado at Boulder. Note: Content may be edited for style and length.

From shortage to supremacy: How Sandia and the CHIPS Act aim to reboot US chip power

Sandia National Laboratories has joined a new partnership aimed at helping the United States regain its leadership in semiconductor manufacturing.

While the U.S. was considered a powerhouse in chip production in the 1990s, fabricating more than 35% of the world's semiconductors, that share has since dropped to 12%. Today, the U.S. manufactures none of the world's most advanced chips, which power technologies like smartphones (owned by 71% of the world's population) as well as self-driving cars, quantum computers, and artificial intelligence-powered devices and programs.

Sandia hopes to help change that. It recently became the first national lab to join the U.S. National Semiconductor Technology Center. The NSTC was established under the CHIPS and Science Act to accelerate innovation and address some of the country's most pressing technology challenges.

"We have pioneered the way for other labs to join," said Mary Monson, Sandia's senior manager of Technology Partnerships and Business Development. "The CHIPS Act has brought the band back together, you could say. By including the national labs, U.S. companies, and academia, it's really a force multiplier."

Sandia has a long history of contributing to the semiconductor industry through research and development partnerships, its Microsystems Engineering, Science and Applications facility known as MESA, and its advanced cleanrooms for developing next-generation technologies. Through its NSTC partnerships, Sandia hopes to strengthen U.S. semiconductor manufacturing and research and development, enhance national security production, and foster the innovation of new technologies that set the nation apart globally.

"The big goal is to strengthen capabilities. Industry is moving fast, so we are keeping abreast of everything happening and incorporating what will help us deliver more efficiently on our national security mission. It's about looking at innovative ways of partnering and expediting the process," Monson said.

The urgency of the effort is evident. The pandemic provided a perfect example, as car lots were left bare and manufacturers sat idle, waiting for chips to be produced to build new vehicles.

"An average car contains 1,400 chips and electric vehicles use more than 3,000," said Rick McCormick, Sandia's senior scientist for semiconductor technology strategy. McCormick is helping lead Sandia's new role. "Other nations around the globe are investing more than $300 billion to be leaders in semiconductor manufacturing. The U.S. CHIPS Act is our way of 'keeping up with the Joneses.' One goal is for the U.S. to have more than 25% of the global capacity for state-of-the-art chips by 2032."

Sandia is positioned to play a key role in creating the chips of the future.

"More than $12 billion in research and development spending is planned under CHIPS, including a $3 billion program to create an ecosystem for packaging assemblies of chiplets," McCormick said. "These chiplets communicate at low energy and high speed as if they were a large expensive chip."

Modern commercial AI processors use this approach, and Sandia's resources and partnerships can help expand access to small companies and national security applications. MESA already fabricates high-reliability chiplet assembly products for the stockpile and nonproliferation applications.

McCormick said Sandia could also play a major role in training the workforce of the future. The government has invested billions of dollars in new factories, all of which need to be staffed by STEM-trained workers.

"There is a potential crisis looming," McCormick said. "The Semiconductor Industry Association anticipates that the U.S. will need 60,000 to 70,000 more workers, so we need to help engage the STEM workforce. That effort will also help Sandia bolster its staffing pipeline."

As part of its membership, Sandia will offer access to some of its facilities to other NSTC members, fostering collaboration and partnerships. Tech transfer is a core part of Sandia's missions, and this initiative will build on that by helping private partners increase their stake in the industry while enabling Sandia to build on its own mission.

"We will be helping develop suppliers and strengthen our capabilities," Monson said. "We are a government resource for semiconductor knowledge. We are in this evolving landscape and have a front row seat to what it will look like over the next 20 years. We are helping support technology and strengthening our national security capabilities and mission delivery."

Materials provided by DOE/Sandia National Laboratories. Note: Content may be edited for style and length.

AI sniffs earwax and detects Parkinson’s with 94% accuracy

Most treatments for Parkinson's disease (PD) only slow disease progression. Early intervention for the neurological disease that worsens over time is therefore critical to optimize care, but that requires early diagnosis. Current tests, like clinical rating scales and neural imaging, can be subjective and costly. Now, researchers in ACS' Analytical Chemistry report the initial development of a system that inexpensively screens for PD from the odors in a person's earwax.

Previous research has shown that changes in sebum, an oily substance secreted by the skin, could help identify people with PD. Specifically, sebum from people with PD may have a characteristic smell because volatile organic compounds (VOCs) released by sebum are altered by disease progression — including neurodegeneration, systemic inflammation and oxidative stress. However, when sebum on the skin is exposed to environmental factors like air pollution and humidity, its composition can be altered, making it an unreliable testing medium. But the skin inside the ear canal is kept away from the elements. So, Hao Dong, Danhua Zhu and colleagues wanted to focus their PD screening efforts on ear wax, which mostly consists of sebum and is easily sampled.

To identify potential VOCs related to PD in ear wax, the researchers swabbed the ear canals of 209 human subjects (108 of whom were diagnosed with PD). They analyzed the collected secretions using gas chromatography and mass spectrometry techniques. Four of the VOCs found in ear wax from people with PD differed significantly from those in ear wax from people without the disease. They concluded that these four VOCs, namely ethylbenzene, 4-ethyltoluene, pentanal, and 2-pentadecyl-1,3-dioxolane, are potential biomarkers for PD.

Dong, Zhu and colleagues then trained an artificial intelligence olfactory (AIO) system with their ear wax VOC data. The resulting AIO-based screening model categorized ear wax samples from people with and without PD with 94% accuracy. The AIO system, the researchers say, could be used as a first-line screening tool for early PD detection and could pave the way for early medical intervention, thereby improving patient care.
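
As a rough illustration of what such a screening model does, here is a generic classifier over four VOC features trained on synthetic data; this is a hypothetical stand-in, not the authors' AIO system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the AIO screening step: a classifier over the
# four VOC biomarkers. The data here are synthetic; the real study used
# gas chromatography/mass spectrometry measurements from 209 subjects.
rng = np.random.default_rng(42)
y = np.r_[np.ones(108), np.zeros(101)]   # 108 PD, 101 controls (per the article)
X = rng.normal(size=(209, 4))            # columns: ethylbenzene, 4-ethyltoluene,
X[y == 1] += 0.8                         # pentanal, 2-pentadecyl-1,3-dioxolane
                                         # (synthetic shift for the PD group)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```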

"This method is a small-scale single-center experiment in China," says Dong. "The next step is to conduct further research at different stages of the disease, in multiple research centers and among multiple ethnic groups, in order to determine whether this method has greater practical application value."

The authors acknowledge funding from the National Natural Science Foundation of China, the Pioneer and Leading Goose R&D Program of Zhejiang Province, and the Fundamental Research Funds for the Central Universities.

Materials provided by American Chemical Society. Note: Content may be edited for style and length.
