Physics World

Physics is full of captivating stories, from ongoing endeavours to explain the cosmos to ingenious innovations that shape the world around us. In the Physics World Stories podcast, Andrew Glester talks to the people behind some of the most intriguing and inspiring scientific stories. Listen to the podcast to hear from a diverse mix of scientists, engineers, artists and other commentators. Find out more about the stories in this podcast by visiting the Physics World website. If you enjoy what you hear, then also check out the Physics World Weekly podcast, a science-news podcast presented by our award-winning science journalists.

Peter Hirst: MIT Sloan Executive Education develops leadership skills in STEM employees
https://physicsworld.com/a/peter-hirst-mit-sloan-executive-education-develops-leadership-skills-in-stem-employees/ | 31 October 2024

Physicists and others with STEM backgrounds are sought after in industry for their analytical skills. However, traditional training in STEM subjects is often lacking when it comes to nurturing the soft skills that are needed to succeed in managerial and leadership positions.

Our guest in this podcast is Peter Hirst, who is Senior Associate Dean, Executive Education at the MIT Sloan School of Management. He explains how MIT Sloan works with executives to ensure that they efficiently and effectively acquire the skills and knowledge needed to succeed as leaders.

This podcast is sponsored by the MIT Sloan School of Management

Bursts of embers play outsized role in wildfire spread, say physicists
Experiments on tracking firebrands could improve predictions of spot-fire risks
https://physicsworld.com/a/bursts-of-embers-play-outsized-role-in-wildfire-spread-say-physicists/ | 31 October 2024

New field experiments carried out by physicists in California’s Sierra Nevada mountains suggest that intermittent bursts of embers play an unexpectedly large role in the spread of wildfires, calling into question some aspects of previous fire models. While this is not the first study to highlight the importance of embers, it does indicate that standard modelling tools used to predict wildfire spread may need to be modified to account for these rare but high-impact events.

Embers form during a wildfire due to a combination of heat, wind and flames. Once lofted into the air, they can travel long distances and may trigger new “spot fires” when they land. Understanding ember behaviour is therefore important for predicting how a wildfire will spread and helping emergency services limit infrastructure damage and prevent loss of life.

Watching it burn

In their field experiments, Tirtha Banerjee and colleagues at the University of California Irvine built a “pile fire” – essentially a bonfire fuelled by a representative mixture of needles, branches, pinecones and pieces of wood from ponderosa pine and Douglas fir trees – in the foothills of the Sierra Nevada mountains. A high-speed (120 frames per second) camera recorded the fire’s behaviour for 20 minutes, and the researchers placed aluminium baking trays around it to collect the embers it ejected.

After they extinguished the pile fire, the researchers brought the ember samples back to the laboratory and measured their size, shape and density. Footage from the camera enabled them to estimate the fire’s intensity based on its height. They also used a technique called particle tracking velocimetry to follow firebrands and calculate their trajectories, velocities and accelerations.
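
In essence, particle tracking velocimetry matches detections of the same firebrand across consecutive frames and then differentiates each position track with respect to time. A minimal sketch of that final step is shown below; the frame rate matches the camera used in the study, but the positions are invented for illustration and nothing here reproduces the team's image-processing pipeline.

```python
import numpy as np

# Illustrative only: positions (in metres) of one tracked firebrand,
# sampled at the camera frame rate quoted in the study (120 fps).
fps = 120.0
dt = 1.0 / fps
x = np.array([0.00, 0.01, 0.03, 0.06, 0.10, 0.15])   # horizontal position
y = np.array([0.50, 0.54, 0.59, 0.65, 0.72, 0.80])   # vertical position

# Finite differences along the track give velocity and acceleration.
vx, vy = np.gradient(x, dt), np.gradient(y, dt)
ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)

speed = np.hypot(vx, vy)
accel = np.hypot(ax, ay)
print("speed (m/s):", np.round(speed, 2))
print("acceleration magnitude (m/s^2):", np.round(accel, 1))
```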

Highly intermittent ember generation

Based on the footage, the team concluded that ember generation is highly intermittent, with occasional bursts containing orders of magnitude more embers than were ejected at baseline. Existing models do not capture such behaviour well, says Alec Petersen, an experimental fluid dynamicist at UC Irvine and lead author of a Physics of Fluids paper on the experiment. In particular, he explains that models with a low computational cost often make simplifications in characterizing embers, especially with regards to fire plumes and ember shapes. This means that while they can predict how far an average firebrand with a certain size and shape will travel, the accuracy of those predictions is poor.

“Although we care about the average behaviour, we also want to know more about outliers,” he says. “It only takes a single ember to ignite a spot fire.”

As an example of such an outlier, Petersen notes that sometimes a strong updraft from a fire plume coincides with the fire emitting a large number of embers. Similar phenomena occur in many types of turbulent flows, including atmospheric winds as well as buoyant fire plumes, and they are characterized by statistically infrequent but extreme fluctuations in velocity. While these fluctuations are rare, they could partially explain why the team observed large (>1 mm) firebrands travelling further than models predict, he tells Physics World.

This is important, Petersen adds, because large embers are precisely the ones with enough thermal energy to start spot fires. “Given enough chances, even statistically unlikely events can become probable, and we need to take such events into account,” he says.

New models, fresh measurements

The researchers now hope to reformulate operational models to do just this, but they acknowledge that this will be challenging. “Predicting spot fire risk is difficult and we’re only just scratching the surface of what needs to be included for accurate and useful predictions that can help first responders,” Petersen says.

They also plan to do more experiments in conjunction with a consortium of fire researchers that Banerjee set up. Beginning in November, when temperatures in California are cooler and the wildfire risk is lower, members of the new iFirenet consortium plan to collaborate on a large-scale field campaign at the UC Berkeley Research Forests. “We’ll have tonnes of research groups out there, measuring all sorts of parameters for our various projects,” Petersen says. “We’ll be trying to refine our firebrand tracking experiments too, using multiple cameras to track them in 3D, hopefully supplemented with a thermal camera to measure their temperatures.

“My background is in measuring and describing the complex dynamics of particles carried by turbulent flows,” Petersen continues. “I don’t have the same deep expertise studying fires that I do in experimental fluid dynamics, so it’s always a challenge to learn the best practices of a new field and to familiarize yourself with the great research folks have done in the past and are doing now. But that’s what makes studying fluid dynamics so satisfying – it touches so many corners of our society and world, there’s always something new to learn.”

IHEP-SDU in search of ‘quantum advantage’ to open new frontiers in high-energy physics
Opportunities in quantum science and technology a high priority for China’s high-energy physicists
https://physicsworld.com/a/ihep-sdu-in-search-of-quantum-advantage-to-open-new-frontiers-in-high-energy-physics/ | 31 October 2024

The particle physics community is in the vanguard of a global effort to realize the potential of quantum computing hardware and software for all manner of hitherto intractable research problems across the natural sciences. The end-game? A paradigm shift – dubbed “quantum advantage” – where calculations that are unattainable or extremely expensive on classical machines become possible, and practical, with quantum computers.

A case study in this regard is the Institute of High Energy Physics (IHEP), the largest basic science laboratory in China and part of the Chinese Academy of Sciences. Headquartered in Beijing, IHEP hosts a multidisciplinary scientific programme spanning elementary particle physics and astrophysics, as well as the planning, design and construction of large-scale accelerator projects – among them the China Spallation Neutron Source, which launched in 2018, and the High Energy Photon Source, due to come online in 2025.

Quantum opportunity

Notwithstanding its ongoing investment in experimental infrastructure, IHEP is increasingly turning its attention to the application of quantum computing and quantum machine-learning technologies to accelerate research discovery. In short, exploring use-cases in theoretical and experimental particle physics where quantum approaches promise game-changing scientific breakthroughs. A core partner in this endeavour is Shandong University (SDU) Institute of Frontier and Interdisciplinary Science, home to another of China’s top-tier research programmes in high-energy physics (HEP).

With senior backing from Weidong Li and Xingtao Huang – physics professors at IHEP and SDU, respectively – the two laboratories began collaborating on the applications of quantum science and technology in summer 2022. This was followed by the establishment of a joint working group 12 months later. Operationally, the Quantum Computing for Simulation and Reconstruction (QC4SimRec) initiative comprises eight faculty members (drawn from both institutes) and is supported by a multidisciplinary team of two postdoctoral scientists and five PhD students.


“QC4SimRec is part of IHEP’s at-scale quantum computing effort, tapping into cutting-edge resource and capability from a network of academic and industry partners across China,” explains Hideki Okawa, a professor who heads up quantum applications research at IHEP (as well as co-chairing QC4SimRec alongside Teng Li, an associate professor in SDU’s Institute of Frontier and Interdisciplinary Science). “The partnership with SDU is a logical progression,” he adds, “building on a track-record of successful collaboration between the two centres in areas like high-performance computing, offline software and machine-learning applications for a variety of HEP experiments.”

Right now, Okawa, Teng Li and the QC4SimRec team are set on expanding the scope of their joint research activity. One principal line of enquiry focuses on detector simulation – i.e. simulating the particle shower development in the calorimeter, which is one of the most demanding tasks for the central processing unit (CPU) in collider experiments. Other early-stage applications include particle tracking, particle identification, and analysis of the fundamental physics of particle dynamics and collision.

“Working together in QC4SimRec,” explains Okawa, “IHEP and SDU are intent on creating a global player in the application of quantum computing and quantum machine-learning to HEP problems.”

Sustained scientific impact, of course, is contingent on recruiting the brightest and best talent in quantum hardware and software, with IHEP’s near-term focus directed towards engaging early-career scientists, whether from domestic or international institutions. “IHEP is very supportive in this regard,” adds Okawa, “and provides free Chinese language courses to fast-track the integration of international scientists. It also helps that our bi-weekly QC4SimRec working group meetings are held in English.”

A high-energy partnership

Around 700 km south-east of Beijing, the QC4SimRec research effort at SDU is overseen by Xingtao Huang, dean of the university’s Institute of Frontier and Interdisciplinary Science and an internationally recognized expert in machine-learning technologies and offline software for data processing and analysis in particle physics.

“There’s huge potential upside for quantum technologies in HEP,” he explains. In the next few years, for example, QC4SimRec will apply innovative quantum approaches to build on SDU’s pre-existing interdisciplinary collaborations with IHEP across a range of HEP initiatives – including the Beijing Spectrometer III (BESIII), the Jiangmen Underground Neutrino Observatory (JUNO) and the Circular Electron-Positron Collider (CEPC).


One early-stage QC4SimRec project evaluated quantum machine-learning techniques for the identification and discrimination of muon and pion particles within the BESIII detector. Comparison with traditional machine-learning approaches shows equivalent performance on the same datasets and, by extension, the feasibility of applying quantum machine-learning to data analysis in next-generation collider experiments.

“This is a significant result,” explains Huang, “not least because particle identification – the identification of charged-particle species in the detector – is one of the biggest challenges in HEP experiments.”


Huang is currently seeking to recruit senior-level scientists with quantum and HEP expertise from Europe and North America, building on a well-established faculty team of 48 staff members (32 of them full professors) working on HEP. “We have several open faculty positions at SDU in quantum computing and quantum machine-learning,” he notes. “We’re also interested in recruiting talented postdoctoral researchers with quantum know-how.”

As a signal of intent, and to raise awareness of SDU’s global ambitions in quantum science and technology, Huang and colleagues hosted a three-day workshop (co-chaired by IHEP) last summer to promote the applications of quantum computing and classical/quantum machine-learning in particle physics. The inaugural event drew more than 100 attendees and speakers, including several prominent international participants; a successful follow-on workshop was held in Changchun earlier this year, and planning is well under way for the next instalment in 2025.

Along a related coordinate, SDU has launched a series of online tutorials to support aspiring Masters and PhD students keen to further their studies in the applications of quantum computing and quantum machine-learning within HEP.

“Quantum computing is a hot topic, but there’s still a relatively small community of scientists and engineers working on HEP applications,” concludes Huang. “Working together, IHEP and SDU are building the interdisciplinary capacity in quantum science and technology to accelerate frontier research in particle physics. Our long-term goal is to establish a joint national laboratory with dedicated quantum computing facilities across both campuses.”

One thing is clear: the QC4SimRec collaboration offers ambitious quantum scientists a unique opportunity to progress alongside China’s burgeoning quantum ecosystem – an industry, moreover, that’s being heavily backed by sustained public and private investment. “For researchers who want to be at the cutting edge in quantum science and HEP, China is as good a place as any,” Okawa concludes.

Quantum machine-learning for accelerated discovery

To understand the potential for quantum advantage in specific HEP contexts, QC4SimRec scientists are currently working on “rediscovering” the exotic particle Zc(3900) using quantum machine-learning techniques.

In terms of the back-story: Zc(3900) is an exotic subatomic particle made up of quarks (the building blocks of protons and neutrons) and believed to be the first tetraquark state observed experimentally – an observation that, in the process, deepened our understanding of quantum chromodynamics (QCD). The particle was discovered in 2013 using the BESIII detector at the Beijing Electron-Positron Collider (BEPCII), with independent observation by the Belle experiment at Japan’s KEK particle physics laboratory.

As part of their study, the IHEP–SDU team deployed the so-called quantum support vector machine algorithm (a quantum variant of a classical algorithm), training it on simulated Zc(3900) signals together with randomly selected events from real BESIII data as backgrounds.

The quantum machine-learning approach achieves performance competitive with classical machine-learning systems – though, crucially, with a smaller training dataset and fewer data features. Investigations are ongoing to demonstrate enhanced signal sensitivity with quantum computing – work that could ultimately point the way to the discovery of new exotic particles in future experiments.
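
The full training pipeline is not spelled out here, but the general shape of a kernel-based classifier of this kind is easy to sketch. The code below is a toy illustration only: a classical stand-in kernel takes the place of the quantum (fidelity) kernel that would be evaluated on quantum hardware or a simulator, and random numbers stand in for the simulated Zc(3900) signal and BESIII background events.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in data: a few kinematic features per event. Real inputs would be
# simulated Zc(3900) signal events and background events drawn from BESIII data.
n_events, n_features = 400, 4
signal = rng.normal(loc=0.5, scale=1.0, size=(n_events, n_features))
background = rng.normal(loc=-0.5, scale=1.0, size=(n_events, n_features))
X = np.vstack([signal, background])
y = np.hstack([np.ones(n_events), np.zeros(n_events)])

def kernel(A, B):
    """Stand-in for a quantum fidelity kernel: any positive-definite
    similarity measure between feature vectors can be plugged in here."""
    gamma = 0.5
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A support vector machine with a precomputed kernel matrix: in the quantum
# variant, the kernel entries are estimated on a quantum device or simulator.
clf = SVC(kernel="precomputed")
clf.fit(kernel(X_train, X_train), y_train)
accuracy = clf.score(kernel(X_test, X_train), y_test)
print(f"toy signal/background accuracy: {accuracy:.2f}")
```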


Chip-based optical tweezers manipulate microparticles and cells from a distance
Integrated optical phased array uses a tightly focused beam of light to trap and manipulate biological particles
https://physicsworld.com/a/chip-based-optical-tweezers-manipulate-microparticles-and-cells-from-a-distance/ | 31 October 2024

Optical traps and tweezers can be used to capture and manipulate particles using non-contact forces. A focused beam of light allows precise control over the position of and force applied to an object, at the micron scale or below, enabling particles to be pulled and captured by the beam.

Optical manipulation techniques are garnering increased interest for biological applications. Researchers from Massachusetts Institute of Technology (MIT) have now developed a miniature, chip-based optical trap that acts as a “tractor beam” for studying DNA, classifying cells and investigating disease mechanisms. The device – which is small enough to fit in your hand – is made from a silicon-photonics chip and can manipulate particles up to 5 mm away from the chip surface, while maintaining a sterile environment for cells.

The promise of integrated optical tweezers

Integrated optical trapping provides a compact route to accessible optical manipulation compared with bulk optical tweezers, and has already been demonstrated using planar waveguides, optical resonators and plasmonic devices. However, many such tweezers can only trap particles directly on (or within several microns of) the chip’s surface and only offer passive trapping.

To make optical traps sterile for cell research, 150-µm-thick glass coverslips are required. However, the short focal heights of many integrated optical tweezers mean that the light beams can’t penetrate into standard sample chambers. Because such devices can only trap particles a few microns above the chip, they are incompatible with biological research that requires particles and cells to be trapped at much larger distances from the chip’s surface.

With current approaches, the only way to overcome this is to remove the cells and place them on the surface of the chip itself. This process contaminates the chip, however, meaning that each chip must be discarded after use and a new chip used for every experiment.

Trapping device for biological particles

Lead author Tal Sneh and colleagues developed an integrated optical phased array (OPA) that can focus emitted light at a specific point in the radiative near field of the chip. To date, many OPA devices have been motivated by LiDAR and optical communications applications, so their capabilities have been limited to steering light beams in the far field using linear phase gradients. However, this approach does not generate the tightly focused beam required for optical trapping.

In their new approach, the MIT researchers used semiconductor manufacturing processes to fabricate a series of micro-antennas onto the chip. By creating specific phase patterns for each antenna, the researchers found that they could generate a tightly focused beam of light.
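
The principle behind such a phase pattern is that each antenna is driven with a phase that cancels its propagation delay to the desired focal spot, so that all the emitted wavelets arrive there in phase and add constructively. The sketch below illustrates that geometry with assumed numbers; the wavelength, antenna pitch, array size and focal position are illustrative, not the parameters of the MIT device.

```python
import numpy as np

wavelength = 1.55e-6            # metres (assumed telecom-band wavelength)
k = 2 * np.pi / wavelength      # free-space wavenumber

# A 1D array of antennas along x, emitting upwards towards the focus.
n_antennas = 64
pitch = 2e-6                    # assumed antenna spacing
x_antennas = (np.arange(n_antennas) - n_antennas / 2) * pitch

# Desired focal point: 1 mm to the side, 5 mm above the chip surface.
focus = np.array([1e-3, 5e-3])

# Each antenna gets a phase that compensates its path length to the focus,
# so all contributions arrive in phase there.
path_lengths = np.hypot(focus[0] - x_antennas, focus[1])
phases = (-k * path_lengths) % (2 * np.pi)

# Steering or refocusing the trap is then a matter of recomputing this
# phase profile for a new target point.
print(np.round(phases[:8], 3))
```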

Each antenna’s optical signal was also tightly controlled by varying the input laser wavelength to provide active spatial tuning for tweezing particles. The focused light beam emitted by the chip could therefore be shaped and steered to capture particles located millimetres above the surface of the chip, making it suitable for biological studies.

The researchers used the OPA tweezers to optically steer and non-mechanically trap polystyrene microparticles at up to 5 mm above the chip’s surface. They also demonstrated stretching of mouse lymphoblast cells, in the first known cell experiment to use single-beam integrated optical tweezers.

The researchers point out that this is the first demonstration of trapping particles over millimetre ranges, with the operating distance of the new device orders of magnitude greater than other integrated optical tweezers. Plasmonic, waveguide and resonator tweezers, for example, can only operate at 1 µm above the surface, while microlens-based tweezers have been able to operate at 20 µm distances.

Importantly, the device is completely reusable and biocompatible, because the biological samples can be trapped and undergo manipulation while remaining within a sterile coverslip. This ensures that both the biological media and the chip stay free from contamination without needing complex microfluidics packaging.

The work in this study provides a new modality for integrated optical tweezers, expanding their use into the biological domain to perform experiments on proteins and DNA, for example, as well as to sort and manipulate cells.

The researchers say that they hope to build on this research by creating a device with an adjustable focal height for the light beam, as well as by introducing multiple trap sites to manipulate biological particles in more complex ways and employing the device to examine more biological systems.

The optical trap is described in Nature Communications.

AI enters the fold with the 2024 Nobel Prize for Physics
Matin Durrani is pleased that the 2024 Nobel Prize for Physics brings AI under physicists' wing
https://physicsworld.com/a/ai-enters-the-fold-with-the-2024-nobel-prize-for-physics/ | 30 October 2024

I’ll admit that this year’s Nobel Prize for Physics took us here at Physics World by surprise. Trying to guess who might win a Nobel is always a mug’s game but with condensed-matter physics having missed out since 2016, our money was on research into, say, metamaterials or twisted graphene winning. We certainly weren’t expecting machine learning and artificial intelligence (AI) to come up trumps.

Machine learning these days has a huge influence in physics, where it’s used in everything from the very practical (designing new circuits for quantum optics experiments) to the esoteric (finding new symmetries in data from the Large Hadron Collider). But it would be wrong to think that machine learning itself isn’t physics or that the Nobel committee – in honouring John Hopfield and Geoffrey Hinton – has been misguidedly seduced by some kind of “AI hype”.

Hopfield, 91, is a fully fledged condensed-matter physicist, who in the 1970s began to study the dynamics of biochemical reactions and its applications in neuroscience. In particular, he showed that the physics of spin glasses can be used to build networks of neurons to store and retrieve information. Hopfield applied his work to the problem of “associative memories” – how hearing a fragment of a song, say, can unlock a memory of the occasion we first heard it.
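
The mechanics of an associative memory are compact enough to show directly: patterns are written into a symmetric weight matrix by a Hebbian rule, and a corrupted input is pulled back towards the closest stored pattern by repeated threshold updates. The sketch below is a generic textbook Hopfield network, not code connected to the prize-winning work itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def store(patterns):
    """Hebbian learning: weights are the sum of outer products of the
    stored +1/-1 patterns, with self-connections removed."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=5):
    """Asynchronous updates: each neuron aligns with its local field,
    which drives the network towards a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store two random 100-bit patterns, then recall one from a noisy fragment.
patterns = rng.choice([-1, 1], size=(2, 100))
W = store(patterns)
noisy = patterns[0].copy()
noisy[:30] *= -1                      # corrupt 30% of the bits
print("recovered:", np.array_equal(recall(W, noisy), patterns[0]))
```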

His work on the statistical physics and training of these “Hopfield networks” – and Hinton’s later on “Boltzmann machines” – paved the way for modern-day AI. Indeed, Hinton, a computer scientist, is often dubbed “the godfather of AI”. On the Physics World Weekly podcast, Anil Ananthaswamy – author of Why Machines Learn: the Elegant Maths Behind Modern AI – said Hinton’s contributions to AI were “immense”.

Of course, machine learning and AI are multidisciplinary endeavours, drawing on not just physics and mathematics, but neuroscience, computer science and cognitive science too. Imagine though, if Hinton and Hopfield had been given, say, a medicine Nobel prize. We’d have physicists moaning they’d been overlooked. Some might even say that this year’s Nobel Prize for Chemistry, which went to the application of AI to protein-folding, is really physics at heart.

We’re still in the early days for AI, which has its dangers. Indeed, Hinton quit Google last year so he could more freely express his concerns. But as this year’s Nobel prize makes clear, physics isn’t just drawing on machine learning and AI – it paved the way for these fields too.

Two distinct descriptions of nuclei unified for the first time
Hybrid approach focuses on short-range-correlated nucleon pairs
https://physicsworld.com/a/two-distinct-descriptions-of-nuclei-unified-for-the-first-time/ | 30 October 2024

In a new study, an international team of physicists has unified two distinct descriptions of atomic nuclei, taking a major step forward in our understanding of nuclear structure and strong interactions. For the first time, the particle physics perspective – where nuclei are seen as made up of quarks and gluons – has been combined with the traditional nuclear physics view that treats nuclei as collections of interacting nucleons (protons and neutrons). This innovative hybrid approach provides fresh insights into short-range correlated (SRC) nucleon pairs – fleeting configurations in which two nucleons come exceptionally close and interact strongly for mere femtoseconds. Although these interactions play a crucial role in the structure of nuclei, they have been notoriously difficult to describe theoretically.

“Nuclei (such as gold and lead) are not just a ‘bag of non-interacting protons and neutrons’,” explains Fredrick Olness at Southern Methodist University in the US, who is part of the international team. “When we put 208 protons and neutrons together to make a lead nucleus, they interact via the strong interaction force with their nearest neighbours; specifically, those neighbours within a ‘short range.’ These short-range interactions/correlations modify the composition of the nucleus and are a manifestation of the strong interaction force. An improved understanding of these correlations can provide new insights into both the properties of nuclei and the strong interaction force.”

To investigate the inner structure of atomic nuclei, physicists use parton distribution functions (PDFs). These functions describe how the momentum and energy of quarks and gluons are distributed within protons, neutrons, or entire nuclei. PDFs are typically obtained from high-energy experiments, such as those performed at particle accelerators, where nucleons or nuclei collide at close to the speed of light. By analysing the behaviour of the particles produced in these collisions, physicists can gain essential insights into their properties, revealing the complex dynamics of the strong interaction.
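
For orientation, the traditional way of connecting the two pictures is a convolution: the PDF of a nucleus is built from the PDFs of its constituent nucleons, weighted by the probability that a nucleon carries a fraction y of the nuclear momentum. Schematically (this is the textbook form, not necessarily the exact formulation used in the new study),

$$ f_{i/A}(x) = \sum_{N=p,n} \int_x^{A} \frac{\mathrm{d}y}{y}\, f_{N/A}(y)\, f_{i/N}\!\left(\frac{x}{y}\right), $$

where $f_{N/A}(y)$ is the light-cone momentum distribution of nucleons inside nucleus A and $f_{i/N}$ is the PDF of quark or gluon flavour i in a free nucleon. Broadly speaking, the hybrid framework lets nucleons bound in SRC pairs contribute to a sum of this kind with their own, modified partonic structure.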

Traditional focus

However, traditional nuclear physics often focuses on the interactions between protons and neutrons within the nucleus, without delving into the quark and gluon structure of nucleons. Until now, these two approaches – one based on fundamental particles and the other on nuclear dynamics – remained separate. Now researchers in the US, Germany, Poland, Finland, Australia, Israel and France have bridged this gap.

The team developed a unified framework that integrates both the partonic structure of nucleons and the interactions between nucleons in atomic nuclei. This approach is particularly useful for studying SRC nucleon pairs, whose interactions have long been recognized as crucial to understanding the structure of nuclei but have been notoriously difficult to describe using conventional theoretical models.

By combining particle and nuclear physics descriptions, the researchers were able to derive PDFs for SRC pairs, providing a detailed understanding of how quarks and gluons behave within these pairs.

“This framework allows us to make direct relations between the quark–gluon and the proton–neutron description of nuclei,” said Olness. “Thus, for the first time, we can begin to relate the general properties of nuclei (such as ‘magic number’ nuclei – those with a specific number of protons or neutrons that make them particularly stable – or ‘mirror nuclei’ with equal numbers of protons and neutrons) to the characteristics of the quarks and gluons inside the nuclei.”

Experimental data

The researchers applied their model to experimental data from scattering experiments involving 19 different nuclei, ranging from helium-3 (with two protons and one neutron) to lead-208 (with 208 protons and neutrons). By comparing their predictions with the experimental data, they were able to refine their model and confirm its accuracy.

The results showed a remarkable agreement between the theoretical predictions and the data, particularly when it came to estimating the fraction of nucleons that form SRC pairs. In light nuclei, such as helium, nucleons rarely form SRC pairs. However, in heavier nuclei like lead, nearly half of the nucleons participate in SRC pairs, highlighting the significant role these interactions play in shaping the structure of larger nuclei.

These findings not only validate the team’s approach but also open up new avenues for research.

“We can study what other nuclear characteristics might yield modifications of the short-ranged correlated pairs ratios,” explains Olness. “This connects us to the shell model of the nucleus and other theoretical nuclear models. With the new relations provided by our framework, we can directly relate elemental quantities described by nuclear physics to the fundamental quarks and gluons as governed by the strong interaction force.”

The new model can be further tested using data from future experiments, such as those planned at the Jefferson Lab and at the Electron–Ion Collider at Brookhaven National Laboratory. These facilities will allow scientists to probe quark–gluon dynamics within nuclei with even greater precision, providing an opportunity to validate the predictions made in this study.

The research is described in Physical Review Letters.

Reanimating the ‘living Earth’ concept for a more cynical world
James Dacey reviews Becoming Earth: How Our Planet Came to Life by Ferris Jabr
https://physicsworld.com/a/reanimating-the-living-earth-concept-for-a-more-cynical-world/ | 30 October 2024

Tie-dye, geopolitical tension and a digitized Abba back on stage. Our appetite for revisiting the 1970s shows no signs of waning. Science writer Ferris Jabr has now reanimated another idea that captured the era’s zeitgeist: the concept of a “living Earth”. In Becoming Earth: How Our Planet Came to Life Jabr makes the case that our planet is far more than a lump of rock that passively hosts complex life. Instead, he argues that the Earth and life have co-evolved over geological time and that appreciating these synchronies can help us to steer away from environmental breakdown.

“We, and all living things, are more than inhabitants of Earth – we are Earth, an outgrowth of its structure and an engine of its evolution.” If that sounds like something you might hear in the early hours at a stone circle gathering, don’t worry. Jabr fleshes out his case with the latest science and journalistic flair in what is an impressive debut from the Oregon-based writer.

Becoming Earth is a reappraisal of the Gaia hypothesis, proposed in 1972 by British scientist James Lovelock and co-developed over several decades by US microbiologist Lynn Margulis. This idea of the Earth functioning as a self-regulating living organism has faced scepticism over the years, with many feeling it is untestable and strays into the realm of pseudoscience. In a 1988 essay, the biologist and science historian Stephen Jay Gould called Gaia “a metaphor, not a mechanism”.

Though undoubtedly a prodigious intellect, Lovelock was not your typical academic. He worked independently across fields including medical research, inventing the electron capture detector and consulting for petrochemical giant Shell. Add that to Gaia’s hippyish name – evoking the Greek goddess of Earth – and it’s easy to see why the theory faced a branding issue within mainstream science. Lovelock himself acknowledged errors in the theory’s original wording, which implied the biosphere acted with intention.

Though he makes due reference to the Gaia hypothesis, Jabr’s book is a standalone work, and in revisiting the concept in 2024, he has one significant advantage: we now have a tonne of scientific evidence for tight coupling between life and the environment. For instance, microbiologists increasingly speak of soil as a living organism because of the interconnections between micro-organisms and soil’s structure and function. Physicists meanwhile happily speak of “complex systems” where collective behaviour emerges from interactions of numerous components – climate being the obvious example.

To simplify this sprawling topic, Becoming Earth is structured into three parts: Rock, Water and Air. Accessible scientific discussions are interspersed with reportage, based on Jabr’s visits to various research sites. We kick off at the Sanford Underground Research Facility in South Dakota (also home to neutrino experiments) as Jabr descends 1500 m in search of iron-loving microbes. We learn that perhaps 90% of all microbes live deep underground and they transform Earth wherever they appear, carving vast caverns and regulating the global cycling of carbon and nutrients. Crucially, microbes also created the conditions for complex life by oxygenating the atmosphere.

In the Air section, Jabr scales the 1500 narrow steps of the Amazon Tall Tower Observatory to observe the forest making its own rain. Plants are constantly releasing water into the air through their leaves, and this drives more than half of the 20 billion tonnes of rain that fall on its canopy daily – more than the volume discharged by the Amazon river. “It’s not that Earth is a single living organism in exactly the same way as a bird or bacterium, or even a superorganism akin to an ant colony,” explains Jabr. “Rather that the planet is the largest known living system – the confluence of all other ecosystems – with structures, rhythms, and self-regulating processes that resemble those of its smaller constituent life forms. Life rhymes at every scale.”

When it comes to life’s capacity to alter its environment, not all creatures are born equal. Humans are having a supersized influence on these planetary rhythms despite appearing in recent geological history. Jabr suggests the Anthropocene – a proposed epoch defined by humanity’s influence on the planet – may have started between 50,000 and 10,000 years ago. At that time, our ancestors hunted mammoths and other megafauna into extinction, altering grassland habitats that had preserved a relatively cool climate.

Some of the most powerful passages in Becoming Earth concern our relationship with hydrocarbons. “Fossil fuel is essentially an ecosystem in an urn,” writes Jabr to illustrate why coal and oil store such vast amounts of energy. Elsewhere, on a beach in Hawaii an earth scientist and artist scoop up “plastiglomerates” – rocks formed from the eroded remains of plastic pollution fused with natural sediments. Humans have “forged a material that had never existed before”.

A criticism of the original Gaia hypothesis is that its association with a self-regulating planet may have fuelled a type of climate denialism. Science historian Leah Aronowsky argued that Gaia created the conditions for people to deny humans’ unique capacity to tip the system.

Jabr doesn’t see it that way and is deeply concerned that we are hastening the end of a stable period for life on Earth. But he also suggests we have the tools to mitigate the worst impacts, though this will likely require far more than just cutting emissions. He visits the Orca project in Iceland, the world’s first and largest plant for removing carbon from the atmosphere and storing it over long periods – in this case injecting it into basalt deep below the surface.

In an epilogue, we finally meet a 100-year-old James Lovelock at his Dorset home three years before his death in 2022. Still cheerful and articulate, Lovelock thrived on humour and tackling the big questions. As pointed out by Jabr, Lovelock was also prone to contradiction and the occasional alarmist statement. For instance, in his 2006 book The Revenge of Gaia he claimed that the only few breeding humans left by the end of the century would be confined to the Arctic. Fingers crossed he’s wrong on that one!

Perhaps Lovelock was prone to the same phenomenon we see in quantum physics where even the sharpest scientific minds can end up shrouding the research in hype and woo. Once you strip away the new-ageyness, we may find that the idea of Gaia was never as “out there” as the cultural noise that surrounded it. Thanks to Jabr’s earnest approach, the living Earth concept is alive and kicking in 2024.

Superconductivity theorist Leon Cooper dies aged 94
Cooper carried out research into superconductivity and neuroscience
https://physicsworld.com/a/superconductivity-theorist-leon-cooper-dies-aged-94/ | 29 October 2024

The US condensed-matter physicist Leon Cooper, who shared the 1972 Nobel Prize for Physics, has died at the age of 94. In the late 1950s, Cooper, together with his colleagues Robert Schrieffer and John Bardeen, developed a theory of superconductivity that could explain why certain materials lose all electrical resistance at low temperatures.

Born on 28 February 1930 in New York City, US, Cooper graduated from the Bronx High School of Science in 1947 before earning a degree from Columbia University in 1951 and a PhD in 1954.

Cooper then spent time at the Institute for Advanced Study in Princeton, the University of Illinois and Ohio State University before heading to Brown University in 1958 where he remained for the rest of his career.

It was in Illinois that Cooper began to work on a theoretical explanation of superconductivity – a phenomenon that was first seen by the Dutch physicist Heike Kamerlingh Onnes when he discovered in 1911 that the electrical resistance of mercury suddenly disappeared beneath a temperature of 4.2 K.

However, there was no microscopic theory of superconductivity until 1957, when Bardeen, Cooper and Schrieffer – all based at Illinois – came up with their “BCS” theory. This described how an electron can deform the atomic lattice through which it moves, thereby pairing with a neighbouring electron to form what became known as a Cooper pair. Being paired allows all the electrons in a superconductor to move as a single cohort, known as a condensate, prevailing over thermal fluctuations that could cause the pairs to break.

Bardeen, Cooper and Schrieffer published their BCS theory in April 1957 (Phys. Rev. 106 162), which was then followed in December by a full-length paper (Phys. Rev. 108 1175). Cooper was in his late 20s when he made the breakthrough.

Not only did the BCS theory of superconductivity successfully account for the behaviour of “conventional” low-temperature superconductors such as mercury and tin but it also had application in particle physics by contributing to the notion of spontaneous symmetry breaking.

For their work the trio won the 1972 Nobel Prize for Physics “for their jointly developed theory of superconductivity, usually called the BCS-theory”.

From BCS to BCM

While Cooper continued to work in superconductivity, later in his career he turned to neuroscience. In 1973 he founded and directed Brown’s Institute for Brain and Neural Systems, which studied animal nervous systems and the human brain. In the 1980s he came up with a physical theory of learning in the visual cortex dubbed the “BCM” theory, named after Cooper and his colleagues Elie Bienenstock and Paul Munro.

He also founded the technology firm Nestor along with Charles Elbaum, which aimed to find commercial and military applications for artificial neural networks.

As well as the Nobel prize, Cooper was awarded the Comstock Prize from the US National Academy of Sciences in 1968 and the Descartes Medal from the Académie de Paris in 1977.

He also wrote numerous books including An Introduction to the Meaning and Structure of Physics in 1968 and Physics: Structure and Meaning in 1992. More recently, he published Science and Human Experience in 2014.

“Leon’s intellectual curiosity knew no boundaries,” notes Peter Bilderback, who worked with Cooper at Brown. “He was comfortable conversing on any subject, including art, which he loved greatly. He often compared the construction of physics to the building of a great cathedral, both beautiful human achievements accomplished by many hands over many years and perhaps never to be fully finished.”

From buckyballs to biological membranes: ISIS celebrates 40 years of neutron science
As ISIS – the UK’s muon and neutron source – turns 40, Rosie de Laune and colleagues from ISIS explore the past, present and future of neutron scattering
https://physicsworld.com/a/from-buckyballs-to-biological-membranes-isis-celebrates-40-years-of-neutron-science/ | 29 October 2024

When British physicist James Chadwick discovered the neutron in 1932, he supposedly said, “I am afraid neutrons will not be of any use to anyone.” The UK’s neutron user facility – the ISIS Neutron and Muon Source, now operated by the Science and Technology Facilities Council (STFC) – was opened 40 years ago. In that time, the facility has welcomed more than 60,000 scientists from around the world. ISIS supports a global community of neutron-scattering researchers, and the work that has been done there shows that Chadwick couldn’t have been more wrong.

By the time of Chadwick’s discovery, scientists knew that the atom was mostly empty space, and that it contained electrons and protons. However, there were some observations they couldn’t explain, such as the disparity between the mass and charge numbers of the helium nucleus.

The neutron was the missing piece of this puzzle. Chadwick’s work was fundamental to our understanding of the atom, but it also set the stage for a powerful new field of condensed-matter physics. Like other subatomic particles, neutrons have wave-like properties, and their wavelengths are comparable to the spacings between atoms. This means that when neutrons scatter off materials, they create characteristic interference patterns. In addition, because they are electrically neutral, neutrons can probe deeper into materials than X-rays or electrons.

Today, facilities like ISIS use neutron scattering to probe everything from spacecraft components and solar cells to the way cosmic-ray neutrons interact with electronics, work that helps to ensure the resilience of technology for driverless cars and aircraft.

The origins of neutron scattering

On 2 December 1942 a group of scientists at the University of Chicago in the US, led by Enrico Fermi, watched the world’s first self-sustaining nuclear chain reaction, an event that would reshape world history and usher in a new era of atomic science.

One of those in attendance was Ernest O Wollan, a physicist with a background in X-ray scattering. The neutron’s wave-like properties had been established in 1936 and Wollan recognized that he could use neutrons produced by a nuclear reactor like the one in Chicago to determine the positions of atoms in a crystal. Wollan later moved to Oak Ridge National Laboratory (ORNL) in Tennessee, where a second reactor was being built, and at the end of 1944 his team was able to observe Bragg diffraction of neutrons in sodium chloride and gypsum salts.
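
The measurement Wollan had in mind rests on the same relation that underpins X-ray crystallography, Bragg's law,

$$ n\lambda = 2d\sin\theta, $$

where λ is the neutron wavelength, d the spacing between planes of atoms, θ the scattering angle and n an integer. The angles at which diffraction peaks appear therefore reveal the spacings between atomic planes in the crystal.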

A few years later Wollan was joined by Clifford Shull, with whom he refined the technique and constructed the world’s first purpose-built neutron-scattering instrument. Shull won the Nobel Prize for Physics in 1994 for his work (with Bertram Brockhouse, who had pioneered the use of neutron scattering to measure excitations), but Wollan was ineligible because he had died 10 years previously.

The early reactors used for neutron scattering were multipurpose; the first to be designed specifically to produce neutron beams was the High Flux Beam Reactor (HFBR) at Brookhaven National Laboratory in the US in 1965. This was closely followed in 1972 by the Institut Laue–Langevin (ILL) in France, a facility that is still running today.

[Image: The first target station at the ISIS Neutron and Muon Source]

Rather than using a reactor, ISIS is based on an alternative technology called “spallation” that first emerged in the 1970s. In spallation, neutrons are produced by accelerating protons at a heavy metal target. The protons collide like bullets with nuclei in the target, which absorb them and then discharge high-energy particles, including neutrons.

The first such sources specifically designed for neutron scattering were the KENS source at the Institute of Materials Structure Science (IMSS) in Japan, which started operation in 1980, and the Intense Pulsed Neutron Source at the Argonne National Laboratory in the US, which started operation in 1981.

The pioneering development work on these sources and in other institutions was of great benefit during the design and development of what was to become ISIS. The facility was approved in 1977 and the first beam was produced on 16 December 1984. In October 1985 the source was formally named ISIS and opened by then UK prime minister Margaret Thatcher. Today around 20 reactor and spallation neutron sources are operational around the world and one – the European Spallation Source (ESS) – is under construction in Sweden.

The name ISIS was inspired by both the river that flows through Oxford and the Egyptian goddess of reincarnation. The relevance of the latter relates to the fact that ISIS was built on the site of the NIMROD proton synchrotron that operated between 1964 and 1978, reusing much of its infrastructure and components.

Producing neutrons and muons

At the heart of ISIS is an 800 MeV accelerator that produces intense pulses of protons 50 times a second. These pulses are then fired at two tungsten targets. Spallation of the tungsten by the proton beam produces neutrons that fly off in all directions.

Before the neutrons can be used, they must be slowed down, which is achieved by passing them through a material called a “moderator”. ISIS uses various moderators which operate at different temperatures, producing neutrons with varying wavelengths. This enables scientists to probe materials on length scales from fractions of an angstrom to hundreds of nanometres.
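
The link between moderator temperature and wavelength follows from the de Broglie relation: a neutron thermalized at temperature T has a most probable speed set by the Maxwell–Boltzmann distribution, giving a characteristic wavelength λ ≈ h/√(2m k_B T). The quick estimate below uses illustrative temperatures for cold, thermal and hot moderators rather than the specifications of ISIS's actual moderators.

```python
import numpy as np

h = 6.626e-34      # Planck constant, J s
m_n = 1.675e-27    # neutron mass, kg
k_B = 1.381e-23    # Boltzmann constant, J/K

def thermal_wavelength(T):
    """Characteristic de Broglie wavelength (in angstroms) of a neutron
    thermalized at temperature T, using the most probable speed."""
    return h / np.sqrt(2 * m_n * k_B * T) * 1e10

# Illustrative moderator temperatures only.
for label, T in [("cold, 20 K", 20.0), ("thermal, 300 K", 300.0), ("hot, 2000 K", 2000.0)]:
    print(f"{label:>15}: {thermal_wavelength(T):.1f} angstrom")
```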

Arrayed around the two neutron sources and the moderators are more than 25 beamlines that direct neutrons to one of ISIS’s specialized experiments. Many of these perform neutron diffraction, which is used to study the structure of crystalline and amorphous solids, as well as liquids.

When neutrons scatter, they also transfer a small amount of energy to the material and can excite vibrational modes in atoms and molecules. ISIS has seven beamlines dedicated to measuring this energy transfer, a technique called neutron spectroscopy. This can tell us about atomic and molecular bonds and is also used to study properties like specific heat and resistivity, as well as magnetic interactions.

Neutrons have spin, so they are also sensitive to the magnetic properties of materials. Neutron diffraction is used to investigate magnetic ordering such as ferrimagnetism, whereas spectroscopy is suited to the study of collective magnetic excitations.

Neutrons can sense short- and long-range magnetic ordering, but to understand localized effects with small magnetic moments, an alternative probe is needed. Since 1987, ISIS has also produced muon beams, which are used for this purpose as well as for other applications. In front of one of the neutron targets is a carbon foil, and when the proton beam passes through it, pions are produced, which rapidly decay into muons. Rather than scattering, muons become implanted in the material, where they decay into positrons. By analysing the decay positrons, scientists can study very weak and fluctuating magnetic fields in materials that may be inaccessible with neutrons. For this reason, muon and neutron techniques are often used together.

“The ISIS instrument suite now provides capability across a broad range of neutron and muon science,” says Roger Eccleston, ISIS director. “We’re constantly engaging our user community, providing feedback and consulting them on plans to develop ISIS. This continues as we begin our ‘Endeavour’ programme: the construction of four new instruments and five significant upgrades to deliver even more performance enhancements.

“ISIS has been a part of my career since I arrived as a placement student shortly before the inauguration. Although I have worked elsewhere, ISIS has always been part of my working life. I have seen many important scientific and technical developments and innovations that kept me inspired to keep coming back.”

Over the last 40 years, the samples studied at ISIS have become smaller and more complex, and measurements have become quicker. The kinetics of chemical reactions can be imaged in real-time, and extreme temperatures and pressures can be achieved. Early work from ISIS focused on physics and chemistry questions such as the properties of high-temperature superconductors, the structure of chemicals and the phase behaviour of water. More recent work includes “seeing” catalysis in real-time, studying biological systems such as bacterial membranes, and enhancing the reliability of circuits for driverless cars.

Understanding the building blocks of life

Unlike X-rays and electrons, neutrons scatter strongly from light nuclei including hydrogen, which means they can be used to study water and organic materials.

Water is the most ubiquitous liquid on the planet, but its molecular structure gives it complex chemical and physical properties. Significant work on the phase behaviour of water was performed at ISIS in the early 2000s by scientists from the UK and Italy, who showed that liquid water under pressure transitions between two distinct structures, one low density and one high density (Phys. Rev. Lett. 84 2881).

[Image: A cartoon of the model outer membrane of the bacterium used in ISIS experiments]

Water is the molecule of life, and as the technical capabilities of ISIS have advanced, it has become possible to study it inside cells, where it underpins vital functions from protein folding to chemical reactions. In 2023 a team from Portugal used the facilities at ISIS to investigate whether the water inside cells can be used as a biomarker for cancer.

Because it’s confined at the nanoscale, water in a cell will behave quite differently to bulk water. At these scales, water’s properties are highly sensitive to its environment, which changes when a cell becomes cancerous. The team showed that this can be measured with neutron spectroscopy, manifesting as an increased flexibility in the cancerous cells (Scientific Reports 13 21079).

If light is incident on an interface between two materials with different refractive indices, it may, if the angle is just right, be perfectly reflected. A similar effect is exhibited by neutrons that are directed at the surface of a material, and neutron reflectometry instruments at ISIS use this to measure the thickness, surface roughness, and chemical composition of thin films.
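
For neutrons, the analogue of the refractive index is set by a material's scattering length density ρ, and the standard small-angle result is that total reflection persists up to a critical momentum transfer

$$ Q_c = 4\sqrt{\pi\rho}, \qquad \theta_c \approx \lambda\sqrt{\frac{\rho}{\pi}}, $$

so the angle (or wavelength) at which the reflectivity starts to fall, together with the interference fringes above it, encodes the composition, thickness and roughness of the layers in a film.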

One recent application of this technique at ISIS was a 2018 project where a team from the UK studied the effect of a powerful “last resort” antibiotic on the outer membrane of a bacterium. This antibiotic is only effective at body temperature, and the researchers showed that this is because the thermal motion of molecules in the outer membrane makes it easier for the antibiotic to slip in and disrupt the bacterium’s structure (PNAS 115 E7587).

Exploring the quantum world

A year after ISIS became operational, physicists Georg Bednorz and Karl Alexander Müller, working at the IBM research laboratory in Switzerland, discovered superconductivity in a material at 35 K, 12 K higher than any other known superconductor at the time. This discovery would later win them the 1987 Nobel Prize for Physics.

High-temperature superconductivity was one of the most significant discoveries of the 1980s, and it was a focus of early work at ISIS. Another landmark came in 1987, when yttrium barium copper oxide (YBCO) was found to exhibit superconductivity above 77 K, meaning that instead of liquid helium, it can be cooled to a superconducting state with the much cheaper liquid nitrogen. The structure of this material was first fully characterized at ISIS by a team from the US and UK (Nature 327 310).

Illustration of several arrows floating on liquid

Another example of the quantum systems studied at ISIS is quantum spin liquids (QSLs). Most magnetic materials form an ordered phase like a ferromagnet when cooled, but a QSL is an interacting system of electron spins that is, in theory, disordered even when cooled to absolute zero.

QSLs are of great interest today because they are theorized to exhibit long-range entanglement, which could be applied to quantum computing and communications. QSLs have proven challenging to identify experimentally, but evidence from neutron scattering and muon spectroscopy at ISIS has characterized spin-liquid states in a number of materials (Nature 471 612).

Developing sustainable solutions and new materials

Over the years, experimental set-ups at ISIS have evolved to handle increasingly extreme and complex conditions. Almost 20 years ago, high-pressure neutron experiments performed by a UK team at ISIS showed that surfactants could be designed to enhance the solubility of liquid carbon dioxide, potentially unlocking a vast array of applications in the food and pharmaceutical industries as an environmentally friendly alternative to traditional petrochemical solvents (Langmuir 22 9832).

Today, further developments in sample environment, detector technology and data analysis software enable us to observe chemical processes in real time, with materials kept under conditions that closely mimic their actual use. Recently, neutron imaging was used by a team from the UK and Germany to monitor a catalyst used widely in the chemical industry to improve the efficiency of reactions (Chem. Commun. 59 12767). Few methods can observe what is happening during a reaction, but neutron imaging was able to visualize it in real time.

Another discovery made just after ISIS became operational was the molecule buckminsterfullerene, or “buckyball”. Buckyballs are a molecular form of carbon consisting of 60 carbon atoms arranged in a spherical structure resembling a football. The scientists who first synthesized this molecule were awarded the Nobel Prize for Chemistry in 1996, and in the years following this discovery, researchers have studied this form of carbon using a range of techniques, including neutron scattering.

Ensembles of buckyballs can form a crystalline solid, and in the early 1990s studies of crystalline buckminsterfullerene at ISIS revealed that, while adjacent molecules are oriented randomly at room temperature, they transition to an ordered structure below 249 K to minimize their energy (Nature 353 147).

buckyballs

Four decades on, fullerenes (the family of materials that includes buckyballs) continue to present many research opportunities. Through a process known as “molecular surgery”, synthetic chemists can create an opening in the fullerene cage, enabling them to insert an atom, ion or molecular cluster. Neutron-scattering studies at ISIS were recently used to characterize helium atoms trapped inside buckyballs (Phys. Chem. Chem. Phys. 25 20295). These endofullerenes are helping to improve our understanding of the quantum mechanics associated with confined particles and have potential applications ranging from photovoltaics to drug delivery.

Just as they shed light on materials of the future, neutrons and muons also offer a unique glimpse into the materials, methods and cultures of the past. At ISIS, the penetrative and non-destructive nature of neutrons and muons has been used to study many invaluable cultural heritage objects from ancient Egyptian lizard coffins (Sci. Rep. 13 4582) to Samurai helmets (Archaeol. Anthropol. Sci. 13 96), deepening our understanding of the past without damaging any of these precious artifacts.

Looking within, and to the future

If you want to understand how things structurally fail, you must get right inside and look, and the neutron’s ability to penetrate deep into materials allows engineers to do just that. ISIS’s Engin-X beamline measures the strain within a crystalline material by measuring the spacing between atomic lattice planes. This has been used by sectors including aerospace, oil and gas exploration, automotive, and renewable power.
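
As a hedged illustration of the principle behind this kind of strain measurement – not actual Engin-X instrument code – the Python sketch below uses Bragg’s law to convert a measured scattering angle into a lattice spacing and then into a strain; all numbers are made up for the example.

import math

wavelength = 1.6e-10          # neutron wavelength in metres (illustrative)
theta = math.radians(44.95)   # half the scattering angle, in radians (illustrative)
d0 = 1.13e-10                 # unstrained reference lattice spacing in metres (illustrative)

d = wavelength / (2 * math.sin(theta))   # Bragg's law: lambda = 2 d sin(theta)
strain = (d - d0) / d0                   # fractional change in lattice spacing
print(f"Lattice spacing {d:.4e} m, strain {strain:.1e}")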

Recently, ISIS has also been attracting electronics companies looking to use the facility to irradiate their chips with neutrons. This can mimic the high-energy neutrons generated in the atmosphere by cosmic rays, which can cause reliability problems in electronics. So, when you next fly, drive or surf the web, ISIS may just have had a hand in it.

Series of circuit boards attached to steel rods connected with cables

With its many discoveries and developments, ISIS has succeeded in proving Chadwick wrong over the past 40 years, and the facility is now setting its sights on the upcoming decades of neutron-scattering research. “While predicting the future of scientific research is challenging, we can anchor our activities around a couple of trends,” explains ISIS associate director Sean Langridge. “Our community will continue to pursue fundamental research for its intrinsic societal value by discovering, synthesizing and processing new materials. Furthermore, we will use the capabilities of neutrons to engineer and optimize a material’s functionality, for example, to increase operational lifetime and minimize environmental impact.”

The capability requirements will continue to become more complex and, as they do so, the amount of data produced will also increase. The extensive datasets produced at ISIS are well suited for machine-learning techniques. These can identify new phenomena that conventional methods might overlook, leading to the discovery of novel materials.

As ISIS celebrates its 40th anniversary of neutron production, the use of neutrons continues to provide huge value to the physics community. A feasibility and design study for a next-generation neutron and muon source is now under way. Despite four decades of neutrons proving their worth, there is still much to discover over the coming decades of UK neutron and muon science.

The post From buckyballs to biological membranes: ISIS celebrates 40 years of neutron science appeared first on Physics World.

]]>
Feature As ISIS – the UK’s muon and neutron source – turns 40, Rosie de Laune and colleagues from ISIS explore the past, present and future of neutron scattering https://physicsworld.com/wp-content/uploads/2024/10/2024-10-DeLaune-neutron-scattering-abstract-2499733271-Shutterstock_Tom-Korcak.jpg newsletter1
Optical technique measures intramolecular distances with angstrom precision https://physicsworld.com/a/optical-technique-measures-intramolecular-distances-with-angstrom-precision/ Mon, 28 Oct 2024 14:04:08 +0000 https://physicsworld.com/?p=117694 Modified MINFLUX approach could be used to study biological processes inside cells

The post Optical technique measures intramolecular distances with angstrom precision appeared first on Physics World.

]]>
Physicists in Germany have used visible light to measure intramolecular distances smaller than 10 nm thanks to an advanced version of an optical fluorescence microscopy technique called MINFLUX. The technique, which has a precision of just 1 angstrom (0.1 nm), could be used to study biological processes such as interactions between proteins and other biomolecules inside cells.

In conventional microscopy, when two features of an object are separated by less than half the wavelength of the light used to image them, they will appear blurry and indistinguishable due to diffraction. Super-resolution microscopy techniques can, however, overcome this so-called Rayleigh limit by exciting individual fluorescent groups (fluorophores) on molecules while leaving neighbouring fluorophores alone, meaning they remain dark.
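
To put a rough number on this limit, the short Python snippet below evaluates the approximately half-wavelength resolution bound for green light; the figure is purely illustrative.

wavelength_nm = 500                       # green light (illustrative)
rayleigh_limit_nm = wavelength_nm / 2     # features closer than this appear blurred together
print(f"Conventional resolution limit: ~{rayleigh_limit_nm:.0f} nm")
# Compare with the sub-10 nm intramolecular distances resolved by MINFLUX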

One such technique, known as nanoscopy with minimal photon fluxes, or MINFLUX, was invented by the physicist Stefan Hell. First reported in 2016 by Hell’s team at the Max Planck Institute (MPI) for Multidisciplinary Sciences in Göttingen, MINFLUX first “switches on” individual molecules, then determines their position by scanning a beam of light with a doughnut-shaped intensity profile across them.

The problem is that at distances of less than 5 to 10 nm, most fluorescent molecules start interacting with each other. This means they cannot emit fluorescence independently – a prerequisite for reliable distance measurements, explains Steffen Sahl, who works with Hell at the MPI.

Non-interacting fluorescent dye molecules

To overcome this problem, the team turned to a new type of fluorescent dye molecule developed in Hell’s research group. These molecules can be switched on in succession using UV light, but they do not interact with each other. This allows the researchers to mark the positions they want to measure with single fluorescent molecules and record their locations independently, to within as little as 0.1 nm, even when the dye molecules are close together.

“The localization process boils down to relating the unknown position of the fluorophore to the known position of the centre of the doughnut beam, where there is minimal or ideally zero excitation light intensity,” explains Hell. “The distance between the two can be inferred from the excitation (and hence the fluorescence) rate of the fluorophore.”

The advantage of MINFLUX, Hell tells Physics World, is that the closer the beam’s intensity minimum gets to the fluorescent molecule, the fewer fluorescence photons are needed to pinpoint the molecule’s location. This takes the burden of producing localizing photons – in effect, tiny lighthouses signalling “Here I am!” – away from the relatively weakly emitting molecule and shifts it onto the laser beam, which has photons to spare. The overall effect is to reduce the required number of detected photons “typically by a factor of 100”, Hell says, adding that this translates into a 10-fold increase in localization precision compared to traditional camera-based techniques.
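
The square-root scaling behind this trade-off can be seen in a one-line calculation; the Python sketch below simply illustrates that a 100-fold reduction in required photons corresponds to a 10-fold gain in precision at a fixed photon budget.

import math

photon_reduction = 100                        # factor fewer photons needed, as quoted above
precision_gain = math.sqrt(photon_reduction)  # precision scales as 1/sqrt(photon number)
print(f"Equivalent precision improvement: ~{precision_gain:.0f}x")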

“A real alternative” to existing measurement methods

The researchers demonstrated their technique by precisely determining distances of 1–10 nanometres in polypeptides and proteins. To prove that they were indeed measuring distances smaller than the size of these molecules, they used molecules of a different substance, polyproline, as “rulers” of various lengths.

Polyproline is relatively stiff and was used for a similar purpose in early demonstrations of a method called Förster resonance energy transfer (FRET) that is now widely used in biophysics and molecular biology. However, FRET suffers from fundamental limitations on its accuracy, and Sahl thinks the “arguably surprising” 0.1 nm precision of MINFLUX makes it “a real alternative” for monitoring sub-10-nm distances.

While it had long been clear that MINFLUX should, in principle, be able to resolve distances at the < 5 nm scale and measure them to sub-nm precision, Hell notes that it had not been demonstrated at this scale until now. “Showing that the technique can do this is a milestone in its development and demonstration,” he says. “It is exciting to see that we can resolve fluorescence molecules that are so close together that they literally touch.” Being able to measure these distances with angstrom precision is, Hell adds, “astounding if you bear in mind that all this is done with freely propagating visible light focused by a conventional lens”.

“I find it particularly fascinating that we have now gone to the very size scale of biological molecules and can quantify distances even within them, gaining access to details of their conformation,” Sahl adds.

The researchers say that one of the key prerequisites for this work (and indeed all super-resolution microscopy developed to date) was the sequential ON/OFF switching of the fluorophores emitting fluorescence. Because any cross-talk between the two molecules would have been problematic, one of the main challenges was to identify fluorescence molecules with truly independent behaviour – that is, ones in which the silent (OFF-state) molecule did not affect its emitting (ON-state) neighbour and vice versa.

Looking forward, Hell says he and his colleagues are now looking to develop and establish MINFLUX as a standard tool for unravelling and quantifying the mechanics of proteins.

The research is published in Science.

The post Optical technique measures intramolecular distances with angstrom precision appeared first on Physics World.

]]>
Research update Modified MINFLUX approach could be used to study biological processes inside cells https://physicsworld.com/wp-content/uploads/2024/10/adj7368_Science_Sahletal_600dpi_Press_Image-01.jpg newsletter1
Daily adaptive proton therapy employed in the clinic for the first time https://physicsworld.com/a/daily-adaptive-proton-therapy-employed-in-the-clinic-for-the-first-time/ Mon, 28 Oct 2024 08:30:47 +0000 https://physicsworld.com/?p=117676 Researchers in Switzerland have integrated online daily adaptation into the clinical proton therapy workflow

The post Daily adaptive proton therapy employed in the clinic for the first time appeared first on Physics World.

]]>
Adaptive radiotherapy – in which a patient’s treatment is regularly replanned throughout their course of therapy – can compensate for uncertainties and anatomical changes and improve the accuracy of radiation delivery. Now, a team at the Paul Scherrer Institute’s Center for Proton Therapy has performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow.

Proton therapy benefits from a well-defined Bragg peak range that enables highly targeted dose delivery to a tumour while minimizing dose to nearby healthy tissues. This precision, however, also makes proton delivery extremely sensitive to anatomical changes along the beam path – arising from variations in mucus, air, muscle or fat in the body – or changes in the tumour’s position and shape.

“For cancer patients who are irradiated with protons, even small changes can have significant effects on the optimal radiation dose,” says first author Francesca Albertini in a press statement.

Online plan adaptation, where the patient remains on the couch during the replanning process, could help address the uncertainties arising from anatomical changes. But while this technique is being introduced into photon-based radiotherapy, daily online adaptation has not yet been applied to proton treatments, where it could prove even more valuable.

To address this shortfall, Albertini and colleagues developed a three-phase DAPT workflow, describing the procedure in Physics in Medicine & Biology. In the pre-treatment phase, two independent plans are created from the patient’s planning CT: a “template plan” that acts as a reference for the online optimized plan, and a “fallback plan” that can be selected on any day as a back-up if necessary.

Next, the online phase involves acquiring a daily CT before each irradiation, while the patient is on the treatment couch. For this, the researchers use an in-room CT-on-rails with a low-dose protocol. They then perform a fully automated re-optimization of the treatment plan based on the daily CT image. If the adapted plan meets the required clinical goals and passes an automated quality assurance (QA) procedure, it is used to treat the patient. If not, the fallback plan is delivered instead.
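
A minimal sketch of this decision logic, written in Python with hypothetical placeholder functions (it is not the PSI team’s actual clinical software), might look like the following.

def reoptimize(template_plan, daily_ct):
    # Placeholder for the automated re-optimization on the daily CT
    return {"label": "adapted", "reference": template_plan, "ct": daily_ct}

def meets_clinical_goals(plan):
    # Placeholder: check target coverage and organ-at-risk dose limits
    return True

def passes_qa(plan):
    # Placeholder: automated quality-assurance checks on the new plan
    return True

def choose_plan(daily_ct, template_plan, fallback_plan):
    adapted_plan = reoptimize(template_plan, daily_ct)
    if meets_clinical_goals(adapted_plan) and passes_qa(adapted_plan):
        return adapted_plan     # deliver the daily adapted plan
    return fallback_plan        # otherwise fall back to the pre-approved plan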

Finally, in the offline phase, the delivered dose in each fraction is recalculated retrospectively from the log files using a Monte Carlo algorithm. This step enables the team to accurately assess the dose delivered to the patient each day.

First clinical implementation

The researchers employed their DAPT protocol in five adults with tumours in rigid body regions, such as the brain or skull base. As this study was designed to demonstrate proof-of-principle and ensure clinical safety, they specified some additional constraints: only the last few consecutive fractions of each patient’s treatment course were delivered using DAPT; the plans used standard field arrangements and safety margins; and the template and fallback plans were kept the same.

“It’s important to note that these criteria are not optimized to fully exploit the potential clinical benefits of our approach,” the researchers write. “As our implementation progresses and matures, we anticipate refining these criteria to maximize the clinical advantages offered by DAPT.”

Across the five patients, the team performed DAPT for 26 treatment fractions. In 22 of these, the online adapted plans were chosen for delivery. In three fractions, the fallback plan was chosen due to a marginal dose increase to a critical structure, while for one fraction, the fallback plan was utilized due to a miscommunication. The team emphasize that all of the adapted plans passed the online QA steps and all agreed well with the log file-based dose calculations.

The daily adapted plans provided target coverage to within 1.1% of the planned dose and, in 92% of fractions, exhibited improved dose metrics to the targets and/or organs-at-risk (OARs). The researchers observed that a non-DAPT delivery (using the fallback plan) could have significantly increased the maximum dose to both the target and OARs. For one patient, this would have increased the dose to their brainstem by up to 10%. In contrast, the DAPT approach ensured that the OAR doses remained within the 5% threshold for all fractions.

Albertini emphasizes, however, that the main aim of this feasibility study was not to demonstrate superior plan quality with DAPT, but rather to establish that it could be implemented safely and efficiently. “The observed decrease in maximum dose to some OARs was a bonus and reinforces the potential benefits of adaptive strategies,” she tells Physics World.

Importantly, the DAPT process took just a few minutes longer than a non-adaptive session, averaging just above 23 min per fraction (including plan adaptation and assessment of clinical goals). Keeping the adaptive treatment within the typical 30-min time slot allocated for a proton therapy fraction is essential to maintain the patient workflow.

To reduce the time requirement, the team automated key workflow components, including the independent dose calculations. “Once registration between the daily and reference images is completed, all subsequent steps are automatically processed in the background, while the users are evaluating the daily structure and plan,” Albertini explains. “Once the plan is approved, all the QA has already been performed and the plan is ready to be delivered.”

Following on from this first-in-patient demonstration, the researchers now plan to use DAPT to deliver full treatments (all fractions), as well as to enable margin reduction and potentially employ more conformal beam angles. “We are currently focused on transitioning our workflow to a commercial treatment planning system and enhancing it to incorporate deformable anatomy considerations,” says Albertini.

The post Daily adaptive proton therapy employed in the clinic for the first time appeared first on Physics World.

]]>
Research update Researchers in Switzerland have integrated online daily adaptation into the clinical proton therapy workflow https://physicsworld.com/wp-content/uploads/2024/10/28-10-24-Francesca-Albertini.jpg newsletter1
Imaging method could detect Parkinson’s disease up to 20 years before symptoms appear https://physicsworld.com/a/imaging-method-could-detect-parkinsons-disease-up-to-20-years-before-symptoms-appear/ Fri, 25 Oct 2024 12:45:18 +0000 https://physicsworld.com/?p=117671 A technique that combines super-resolution microscopy with advanced computational analysis could identify early signs of Parkinson’s disease

The post Imaging method could detect Parkinson’s disease up to 20 years before symptoms appear appeared first on Physics World.

]]>
Researchers at Tel Aviv University in Israel have developed a method to detect early signs of Parkinson’s disease at the cellular level using skin biopsies. They say that this capability could enable treatment up to 20 years before the appearance of motor symptoms characteristic of advanced Parkinson’s. Such early treatment could reduce neurotoxic protein aggregates in the brain and help prevent the irreversible loss of dopamine-producing neurons.

Parkinson’s disease is the second most common neurodegenerative disease in the world. The World Health Organization reports that its prevalence has doubled in the past 25 years, with more than 8.5 million people affected in 2019. Diagnosis is currently based on the onset of clinical motor symptoms. By the time of diagnosis, however, up to 80% of dopaminergic neurons in the brain may already be dead.

The new method combines a super-resolution microscopy technique, known as direct stochastic optical reconstruction microscopy (dSTORM), with advanced computational analysis to identify and map the aggregation of alpha-synuclein (αSyn), a synaptic protein that regulates transmission in nerve terminals. When it aggregates in brain neurons, αSyn causes neurotoxicity and impacts the central nervous system. In Parkinson’s disease, αSyn begins to aggregate about 15 years before motor symptoms appear.

Importantly, αSyn aggregates also accumulate in the skin. With this in mind, principal investigator Uri Ashery and colleagues developed a method for quantitative assessment of Parkinson’s pathology using skin biopsies from the upper back. The technique, which enables detailed characterization of nano-sized αSyn aggregates, will hopefully facilitate the development of a new molecular biomarker for Parkinson’s disease.

“We hypothesized that these αSyn aggregates are essential for understanding αSyn pathology in Parkinson’s disease,” the researchers write. “We created a novel platform that revealed a unique fingerprint of αSyn aggregates. The analysis detected a larger number of clusters, clusters with larger radii, and sparser clusters containing a smaller number of localizations in Parkinson’s disease patients relative to what was seen with healthy control subjects.”

The researchers used dSTORM to analyse skin biopsies from seven patients with Parkinson’s disease and seven healthy controls, characterizing nanoscale αSyn based on quantitative parameters such as aggregate size, shape, distribution, density and composition.
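
As a generic illustration of how such quantitative parameters can be extracted from super-resolution localization data, the Python sketch below applies density-based clustering (DBSCAN) to synthetic coordinates. This is an assumed analysis approach shown for illustration only, not necessarily the pipeline used by the Tel Aviv team; all values are synthetic.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic localizations: three tight "aggregates" plus a diffuse background, in nm
aggregates = np.concatenate([rng.normal(loc=c, scale=40, size=(300, 2))
                             for c in ([500, 500], [1500, 800], [900, 1600])])
background = rng.uniform(0, 2000, size=(200, 2))
localizations = np.concatenate([aggregates, background])

labels = DBSCAN(eps=50, min_samples=15).fit_predict(localizations)  # -1 marks noise

for cluster_id in sorted(set(labels) - {-1}):
    points = localizations[labels == cluster_id]
    centre = points.mean(axis=0)
    radius = np.linalg.norm(points - centre, axis=1).mean()   # mean distance from centre
    print(f"aggregate {cluster_id}: {len(points)} localizations, radius ~{radius:.0f} nm")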

Super-resolution imaging

Their analysis revealed a significant decrease in the ratio of neuronal marker molecules to phosphorylated αSyn molecules (the pathological form of αSyn) in biopsies from Parkinson’s disease patients, suggesting the existence of damaged nerve cells in fibres enriched with phosphorylated αSyn.

The researchers determined that phosphorylated αSyn is organized into dense aggregates of approximately 75 nm in size. They also found that patients with Parkinson’s disease had a higher number of αSyn aggregates than the healthy controls, with larger αSyn clusters (75 nm compared with 69 nm).

“Parkinson’s disease diagnosis based on quantitative parameters represents an unmet need that offers a route to revolutionize the way Parkinson’s disease and potentially other neurodegenerative diseases are diagnosed and treated,” Ashery and colleagues conclude.

In the next phase of this work, supported by the Michael J. Fox Foundation for Parkinson’s Research, the researchers will increase the number of subjects to 90 to identify differences between patients with Parkinson’s disease and healthy subjects.

“We intend to pinpoint the exact juncture at which a normal quantity of proteins turns into a pathological aggregate,” says lead author Ofir Sade in a press statement. “In addition, we will collaborate with computer science researchers to develop a machine learning algorithm that will identify correlations between results of motor and cognitive tests and our findings under the microscope. Using this algorithm, we will be able to predict future development and severity of various pathologies.”

“The machine learning algorithm is intended to spot young individuals at risk for Parkinson’s,” Ashery adds. “Our main target population are relatives of Parkinson’s patients who carry mutations that increase the risk for the disease.”

The researchers report their findings in Frontiers in Molecular Neuroscience.

The post Imaging method could detect Parkinson’s disease up to 20 years before symptoms appear appeared first on Physics World.

]]>
Research update A technique that combines super-resolution microscopy with advanced computational analysis could identify early signs of Parkinson’s disease https://physicsworld.com/wp-content/uploads/2024/10/25-10-24-Uri-Ashery-and-Ofir-Sade.jpg
Ask me anything: Raghavendra Srinivas – ‘Experimental physics is never boring’ https://physicsworld.com/a/ask-me-anything-raghavendra-srinivas-experimental-physics-is-never-boring/ Fri, 25 Oct 2024 07:59:11 +0000 https://physicsworld.com/?p=117480 Quantum scientist Raghavendra Srinivas thinks young researchers shouldn’t be afraid to ask questions

The post Ask me anything: Raghavendra Srinivas – ‘Experimental physics is never boring’ appeared first on Physics World.

]]>
What skills do you use every day in your job?

One of my favourite parts of being an atomic physicist is the variety. I get to work with lasers, vacuums, experimental control software, simulations, data analysis and physics theory.

As I’m transitioning to a more senior position, the skills I use have changed. Rather than doing most of the lab-based work myself, I now have a more supervisory role on some projects. I go to the lab when I can but it’s certainly different. I’m also teaching a second-year quantum mechanics course, which requires its own skillset. I try to use my experience to impart more of an experimental flavour. The field is now in an exciting place where we can not only think about experiments with single quantum systems, but actually do them.

I also work part-time at a trapped-ion quantum computing company, Oxford Ionics, which has grown from about 20 to over 60 people since I started in 2021. Being involved in a team with so many people has taught me a lot about the importance of project management. It’s important to have the right structures in place to deliver complex projects with many moving parts. In addition, most of my company colleagues are also not physicists; it’s important to be able to communicate with people across a range of disciplines.

What do you like best and least about your job?

Experimental physics is never boring, as experiments always find new and wonderful ways to break: 90–99% of the time something needs fixing, but when it works it’s just magical.

I’ve been incredibly lucky to work with a fantastic group of people wherever I’ve been. Experimental physics cannot be done alone and I feel very privileged to work with colleagues who are passionate about what they do and have a wide variety of skills.

I also love the opportunities for outreach activities that my position affords me. Since I started at Oxford, I have led work placements as part of In2scienceUK and more recently helped start a week-long summer school for school students with the National Quantum Computing Centre. In many ways, I think promoting the idea that a career in quantum physics is accessible to anyone as long as they are willing to work hard is the most impactful work I can do.

I do dislike that as you spend longer in a field, more and more non-lab-based tasks creep into your calendar. I also find it difficult to switch between different tasks but that’s the price to pay for being involved in multiple projects.

What do you know today, that you wish you knew when you were starting out in your career?

It’s a difficult feeling for me to shake off even now, but when I started my career, I used to feel afraid to ask questions when I didn’t know something. I think it’s easy to fall into the trap of thinking it’s your fault, or that others will think less of you. However, I believe it’s better to see these instances as opportunities to learn rather than being embarrassed.

Scientifically, I think it’s also really important to be able to take a step back from the weeds of technical work and have an idea of the big-picture physics you’re trying to solve. I would have encouraged my past self to spend more time thinking deeply about physics, even beyond the field I was in. Just a couple of hours a week adds up over time without really taking away from other work.

One last thing I’d tell my past self is to think about boundaries and find a healthy work-life balance. It’s easy to pour yourself completely into a project, but it’s important to do this sustainably and avoid burnout. Other aspects of life are important too.

The post Ask me anything: Raghavendra Srinivas – ‘Experimental physics is never boring’ appeared first on Physics World.

]]>
Interview Quantum scientist Raghavendra Srinivas thinks young researchers shouldn’t be afraid to ask questions https://physicsworld.com/wp-content/uploads/2024/10/2024-10-AMA-Srinivas-LISTING.jpg newsletter
Julia Sutcliffe: chief scientific adviser explains why policymaking must be underpinned by evidence https://physicsworld.com/a/julia-sutcliffe-chief-scientific-advisor-explains-why-policymaking-must-be-underpinned-by-evidence/ Thu, 24 Oct 2024 12:23:33 +0000 https://physicsworld.com/?p=117656 Exploring a career in physics, systems engineering and advising the UK's Department for Business and Trade

The post Julia Sutcliffe: chief scientific adviser explains why policymaking must be underpinned by evidence appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features the physicist and engineer Julia Sutcliffe, who is chief scientific adviser to the UK government’s Department for Business and Trade.

In a wide-ranging conversation with Physics World’s Matin Durrani, Sutcliffe explains how she began her career as a PhD physicist before working in systems engineering at British Aerospace – where she worked on cutting-edge technologies including robotics, artificial intelligence, and autonomous systems. They also chat about Sutcliffe’s current role advising the UK government to ensure that policymaking is underpinned by the best evidence.

The post Julia Sutcliffe: chief scientific adviser explains why policymaking must be underpinned by evidence appeared first on Physics World.

]]>
Podcasts Exploring a career in physics, systems engineering and advising the UK's Department for Business and Trade https://physicsworld.com/wp-content/uploads/2024/10/Julia-Sutcliffe-1280-player.jpg newsletter
Eco-friendly graphene composite recovers gold from e-waste https://physicsworld.com/a/eco-friendly-graphene-composite-recovers-gold-from-e-waste/ Thu, 24 Oct 2024 09:55:03 +0000 https://physicsworld.com/?p=117637 New graphene-biopolymer material extracts gold ions 10 times more efficiently than other adsorbents

The post Eco-friendly graphene composite recovers gold from e-waste appeared first on Physics World.

]]>
A new type of composite material is 10 times more efficient at extracting gold from electronic waste than previous adsorbents. Developed by researchers in Singapore, the UK and China, the environmentally-friendly composite is made from graphene oxide and a natural biopolymer called chitosan, and it filters the gold without an external power source, making it an attractive alternative to older, more energy-intensive techniques.

Getting better at extracting gold from electronic waste, or e-waste, is desirable for two reasons. As well as reducing the volume of e-waste, it would lessen our reliance on mining and refining new gold, which involves environmentally hazardous materials such as activated carbon and cyanides. Electronic waste management is a relatively new field, however, and existing techniques like electrolysis are time-consuming and require a lot of energy.

A more efficient and suitable recovery process

Led by Kostya Novoselov and Daria Andreeva of the Institute for Functional Intelligent Materials at the National University of Singapore, the researchers chose graphene and chitosan because both have desirable characteristics for gold extraction. Graphene boasts a high surface area, making it ideal for adsorbing ions, they explain, while chitosan acts as a natural reducing agent, catalytically converting ionic gold into its solid metallic form.

While neither material is efficient enough to compete with conventional methods such as activated carbon on its own, Andreeva says they work well together. “By combining both of them, we enhance both the adsorption capacity of graphene and the catalytic reduction ability of chitosan,” she explains. “The result is a more efficient and suitable gold recovery process.”

High extraction efficiency

The researchers made the composite by getting one-dimensional chitosan macromolecules to self-assemble on two-dimensional flakes of graphene oxide. This assembly process triggers the formation of sites that bind gold ions. The enhanced extracting ability of the composite comes from the fact that the ion binding is cooperative, meaning that an ion binding at one site allows other ions to bind, too. The team had previously used similar methods in studies that focused on structures such as novel membranes with artificial ionic channels, anticorrosion coatings, sensors and actuators, switchable water valves and bioelectrochemical systems.
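
One common way to parameterize this kind of cooperativity is a Hill-type isotherm, in which a Hill coefficient greater than one makes site occupancy rise more steeply with concentration than it would for independent sites. The Python sketch below is a generic illustration with arbitrary parameters, not the binding model used in the PNAS paper.

def hill_occupancy(concentration, k_half, n):
    # Fraction of binding sites occupied at a given concentration
    return concentration**n / (k_half**n + concentration**n)

for c in (0.5, 1.0, 2.0):                    # ion concentration, arbitrary units
    independent = hill_occupancy(c, 1.0, 1)  # non-cooperative binding (n = 1)
    cooperative = hill_occupancy(c, 1.0, 3)  # cooperative binding (n = 3)
    print(c, round(independent, 2), round(cooperative, 2))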

Once the gold ions are adsorbed onto the graphene surface, the chitosan catalyses the reduction of these ions, converting them from their ionic state into solid metallic gold, Andreeva explains. “This combined action of adsorption and reduction makes the process both highly efficient and environmentally friendly, as it avoids the use of harsh chemicals typically employed in gold recovery from electronic waste,” she says.

The researchers tested the material on a real waste mixture provided by SG Recycle Group SG3R, Pte, Ltd. Using this mixture, which contained gold at a residual concentration of just 3 ppm, they showed that the composite can extract nearly 17 g/g of Au3+ ions and just over 6 g/g of Au+ from solution – values that are 10 times larger than those of existing gold adsorbents. The material also has an extraction efficiency of above 99.5 percent by weight (wt%), breaking the current limit of 75 wt%. To top it off, the ion extraction process is ultrafast, taking just around 10 minutes compared with days for other graphene-based adsorbents.

No applied voltage required

The researchers, who report their work in PNAS, say that the multidimensional architecture of the composite’s structure means that no applied voltage is required to adsorb and reduce gold ions. Instead, the technique relies solely on the chemisorption kinetics of gold ions on the heterogenous graphene oxide/chitosan nanoconfinement channels and the chemical reduction at multiple binding sites. The new process therefore offers a cleaner, more efficient and environmentally-friendly method for recovering gold from electronic waste, they add.

While the present work focused on gold, the team say the technique could be adapted to recover other valuable metals such as silver, platinum or palladium from electronic waste or even mining residues. And that is not all: as well as e-waste, the technology might be applied to a wider range of environmental cleaning efforts, such as filtering out heavy metals from polluted water sources or industrial effluents. “It thus provides a solution for reducing metal contamination in ecosystems,” Andreeva says.

Other possible application areas, she adds, include sustainable decarbonization and hydrogen production, low-dimensional building blocks for embedding artificial neural networks in hardware for neuromorphic computing, and biomedical applications.

The Singapore researchers are now studying how to regenerate and reuse the composite material itself, to further reduce waste and improve the process’s sustainability. “Our ongoing research is focusing on optimizing the material’s properties, bringing us closer to a scalable, eco-friendly solution for e-waste management and beyond,” Andreeva says.

The post Eco-friendly graphene composite recovers gold from e-waste appeared first on Physics World.

]]>
Research update New graphene-biopolymer material extracts gold ions 10 times more efficiently than other adsorbents https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_24-14449-1.jpg
Cosmic antimatter could be created by annihilating WIMPs https://physicsworld.com/a/cosmic-antimatter-could-be-created-by-annihilating-wimps/ Wed, 23 Oct 2024 17:34:28 +0000 https://physicsworld.com/?p=117646 Detection of antideuterons and antihelium could help hone dark-matter models

The post Cosmic antimatter could be created by annihilating WIMPs appeared first on Physics World.

]]>
Weakly interacting massive particles (WIMPs) are prime candidates for dark matter – but the hypothetical particles have never been observed directly. Now, an international group of physicists has proposed a connection between WIMPs and the higher-than-expected flux of antimatter cosmic rays detected by NASA’s Alpha Magnetic Spectrometer (AMS-02) on the International Space Station.

Cosmic rays are high-energy charged particles that are created by a wide range of astrophysical processes including supernovae and the violent regions surrounding supermassive black holes. The origins of cosmic rays are not fully understood so they offer physicists opportunities to look for phenomena not described by the Standard Model of particle physics. This includes dark matter, a hypothetical substance that could account for about 85% of the mass in the universe.

If WIMPs exist, physicists believe that they would occasionally annihilate when they encounter one another to create matter and antimatter particles. Because WIMPs are very heavy, it is possible that these annihilations create antinuclei – the antimatter version of nuclei comprising antiprotons and antineutrons. Some of these antinuclei could make their way to Earth and be detected as cosmic rays.

Now, a trio of researchers in Spain, Sweden, and the US has done new calculations that suggest that unexpected antinuclei detections made by AMS-02 could shed light on the nature of dark matter. The trio is led by Pedro De La Torre Luque at the Autonomous University of Madrid.

Heavy antiparticles

According to the Standard Model of particle physics, antinuclei should be an extremely small component of the cosmic rays measured by AMS-02. However, excesses of antideuterons (antihydrogen-2), antihelium-3 and antihelium-4 have been glimpsed in data gathered by AMS-02.

In previous work, De La Torre Luque and colleagues explored the possibility that these antinuclei emerged through the annihilation of WIMPs. Using AMS-02 data, the team put new constraints on the hypothetical properties of WIMPs.

Now, the trio has built on this work. “With this information, we calculated the fluxes of antideuterons and antihelium that AMS-02 could detect: both from dark matter, and from cosmic ray interactions with gas in the interstellar medium,” De La Torre Luque says. “In addition, we estimated the maximum possible flux of antinuclei from WIMP dark matter.”

This allowed the researchers to test whether AMS-02’s cosmic ray measurements are really compatible with standard WIMP models. According to De La Torre Luque, their analysis had mixed implications for WIMPs.

“We found that while the antideuteron events measured by AMS-02 are well compatible with WIMP dark matter annihilating in the galaxy, only in optimistic cases can WIMPs explain the detected events of antihelium-3,” he explains. “No standard WIMP scenario can explain the detection of antihelium-4.”

Altogether, the team’s results are promising for proponents of the idea that WIMPs are a component of dark matter. However, the research also suggests that the WIMP model in its current form is incomplete. To be consistent with the AMS-02 data, the researchers believe that a new WIMP model must further push the bounds of the Standard Model.

“If these measurements are robust, we may be opening the window for something very exotic going on in the galaxy that could be related to dark matter,” says De La Torre Luque. “But it could also reveal some unexpected new phenomenon in the universe.” Ultimately, the researchers hope that the precision of their antinuclei measurements could bring us a small step closer to solving one of the deepest, most enduring mysteries in physics.

The research is described in the Journal of Cosmology and Astroparticle Physics.

The post Cosmic antimatter could be created by annihilating WIMPs appeared first on Physics World.

]]>
Research update Detection of antideuterons and antihelium could help hone dark-matter models https://physicsworld.com/wp-content/uploads/2024/10/23-10-2024-ISS-50_EVA-1_b_Alpha_Magnetic_Spectrometer.jpg newsletter
First look at prototype telescope for the LISA gravitational-wave mission https://physicsworld.com/a/first-look-at-prototype-telescope-for-the-lisa-gravitational-wave-mission/ Wed, 23 Oct 2024 10:30:09 +0000 https://physicsworld.com/?p=117632 The telescopes will be used to send and receive infrared laser beams between the three satellites in space

The post First look at prototype telescope for the LISA gravitational-wave mission appeared first on Physics World.

]]>
NASA has released the first images of a full-scale prototype for the six telescopes that will be included in the €1.5bn Laser Interferometer Space Antenna (LISA) mission.

Expected to launch in 2035 and operate for at least four years, LISA is a space-based gravitational-wave mission led by the European Space Agency.

It will comprise three identical satellites that will be placed in an equilateral triangle in space, with each side of the triangle measuring 2.5 million kilometres – more than six times the distance between the Earth and the Moon.

The three craft will send infrared laser beams to each other via twin telescopes in the satellites. The beams will be sent to free-floating golden cubes – each slightly smaller than a Rubik’s cube – that are placed inside the craft.

The system will be able to measure the separation between the cubes down to picometres, or trillionths of a metre. Subtle changes in these separations, measured using the laser beams, will indicate the presence of a gravitational wave.
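
The scale of this challenge can be appreciated with some simple arithmetic: a picometre-level change over a 2.5-million-kilometre arm corresponds to a fractional length change of order 10^-22, as the illustrative Python snippet below shows.

arm_length_m = 2.5e6 * 1e3        # 2.5 million km expressed in metres
displacement_m = 1e-12            # one picometre

strain = displacement_m / arm_length_m   # fractional change in arm length
print(f"Fractional length change: {strain:.1e}")   # ~4e-22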

The prototype telescope, dubbed the Engineering Development Unit Telescope, was manufactured and assembled by L3Harris Technologies in Rochester, New York.

It is made entirely from an amber-coloured glass-ceramic called Zerodur, which has been manufactured by Schott in Mainz, Germany. The primary mirror of the telescopes is coated in gold to better reflect the infrared lasers and reduce heat loss.

On 25 January ESA’s Science Programme Committee formally approved the start of construction of LISA.

The post First look at prototype telescope for the LISA gravitational-wave mission appeared first on Physics World.

]]>
Blog The telescopes will be used to send and receive infrared laser beams between the three satellites in space https://physicsworld.com/wp-content/uploads/2024/10/GSFC_LISA_small.jpg newsletter
Orbital angular momentum monopoles appear in a chiral crystal https://physicsworld.com/a/orbital-angular-momentum-monopoles-appear-in-a-chiral-crystal/ Wed, 23 Oct 2024 09:00:20 +0000 https://physicsworld.com/?p=117626 Experimental observations at the Swiss Light Source could advance the development of energy-efficient memory devices based on "orbitronics"

The post Orbital angular momentum monopoles appear in a chiral crystal appeared first on Physics World.

]]>
Magnets generally have two poles, north and south, so observing something that behaves like it has only one is extremely unusual. Physicists in Germany and Switzerland have become the latest to claim this rare accolade by making the first direct detection of structures known as orbital angular momentum monopoles. The monopoles, which the team identified in materials known as chiral crystals, had previously only been predicted in theory. The discovery could aid the development of more energy-efficient memory devices.

Traditional electronic devices use the charge of electrons to transfer energy and information. This transfer process is energy-intensive, however, so scientists are looking for alternatives. One possibility is spintronics, which uses the electron’s spin rather than its charge, but more recently another alternative has emerged that could be even more promising. Known as orbitronics, it exploits the orbital angular momentum (OAM) of electrons as they revolve around an atomic nucleus. By manipulating this OAM, it is in principle possible to generate large magnetizations with very small electric currents – a property that could be used to make energy-efficient memory devices.

Chiral topological semi-metals with “built-in” OAM textures

The problem is that materials that support such orbital magnetizations are hard to come by. However, Niels Schröter, a physicist at the Max Planck Institute of Microstructure Physics in Halle, Germany who co-led the new research, explains that theoretical work carried out in the 1980s suggested that certain crystalline materials with a chiral structure could generate an orbital magnetization that is isotropic, or uniform in all directions. “This means that the materials’ magnetoelectric response is also isotropic – it depends solely on the direction of the injected current and not on the crystals’ orientation,” Schröter says. “This property could be useful for device applications since it allows for a uniform performance regardless of how the crystal grains are oriented in a material.”

In 2019, three experimental groups (including the one involved in the latest work) independently discovered a type of material called a chiral topological semimetal that seemed to fit the bill. Atoms in these semimetals are arranged in a helical pattern, which produces something that behaves like a solenoid on the nanoscale, creating a magnetic field whenever an electric current passes through it.

The advantage of these materials, Schröter explains, is that they have “built-in” OAM textures. What is more, he says the specific texture discovered in the most recent work – an OAM monopole – is “special because the magnetic field response can be very large – and isotropic, too”.

Visualizing monopoles

Schröter and colleagues studied chiral topological semimetals made from either palladium and gallium or platinum and gallium (PdGa or PtGa). To understand the structure of these semimetals, they directed circularly polarized X-rays from the Swiss Light Source (SLS) onto samples of PdGa and PtGa prepared by Claudia Felser’s group at the Max Planck Institute in Dresden. In this technique, known as circular dichroism in angle-resolved photoemission spectroscopy (CD-ARPES), the synchrotron light ejects electrons from the sample, and the angles and energies of these electrons provide information about the material’s electronic structure.

“This technique essentially allows us to ‘visualize’ the orbital texture, almost like capturing an image of the OAM monopoles,” Schröter explains. “Instead of looking at the reflected light, however, we observe the emission pattern of electrons.” The new monopoles, he notes, reside in momentum (or reciprocal) space, which is the Fourier transform of our everyday three-dimensional space.

Complex data

One of the researchers’ main challenges was figuring out how to interpret the CD-ARPES data. This turned out to be anything but straightforward. Working closely with Michael Schüler’s theoretical modelling group at the Paul Scherrer Institute in Switzerland, they managed to identify the OAM textures hidden within the complexity of the measurement figures.

Contrary to what was previously thought, they found that the CD-ARPES signal was not directly proportional to the OAMs. Instead, it rotated around the monopoles as the energy of the photons in the synchrotron light source was varied. This observation, they say, proves that monopoles are indeed present.

The findings, which are detailed in Nature Physics, could have important implications for future magnetic memory devices. “Being able to switch small magnetic domains with currents passed through such chiral crystals opens the door to creating more energy-efficient data storage technologies, and possibly also logic devices,” Schröter says. “This study will likely inspire further research into how these materials can be used in practical applications, especially in the field of low-power computing.”

The researchers’ next task is to design and build prototype devices that exploit the unique properties of chiral topological semimetals. “Finding these monopoles has been a focus for us ever since I started my independent research group at the Max Planck Institute for Microstructure Physics in 2021,” Schröter tells Physics World. The team’s new goal, he adds, is to “demonstrate functionalities and create devices that can drive advancements in information technologies”.

To achieve this, he and his colleagues are collaborating with partners at the universities of Regensburg and Berlin. They aim to establish a new centre for chiral electronics that will, he says, “serve as a hub for exploring the transformative potential of chiral materials in developing next-generation technologies”.

The post Orbital angular momentum monopoles appear in a chiral crystal appeared first on Physics World.

]]>
Research update Experimental observations at the Swiss Light Source could advance the development of energy-efficient memory devices based on "orbitronics" https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_Hedgehog_Titelbild_16_9.jpg
On the proper use of a Warburg impedance https://physicsworld.com/a/on-the-proper-use-of-a-warburg-impedance/ Wed, 23 Oct 2024 08:20:06 +0000 https://physicsworld.com/?p=116636 Join the audience for a live webinar on 4 December 2024 sponsored by Gamry Instruments, Inc., BioLogic, Scribner and Metrohm Autolab, in partnership with The Electrochemical Society

The post On the proper use of a Warburg impedance appeared first on Physics World.

]]>

Recent battery papers commonly employ interpretation models for which diffusion impedances are in series with interfacial impedance. The models are fundamentally flawed because the diffusion impedance should be part of the interfacial impedance. A general approach is presented that shows how the charge-transfer resistance and diffusion resistance are functions of the concentration of reacting species at the electrode surface. The resulting impedance model incorporates diffusion impedances as part of the interfacial impedance.
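
As a minimal sketch of this point – assuming a classic Randles-type equivalent circuit, which places the Warburg (diffusion) impedance inside the interfacial branch, in series with the charge-transfer resistance and in parallel with the double-layer capacitance – the Python snippet below evaluates the total impedance at a few frequencies. All parameter values are arbitrary illustrations, not data from the presentation.

import numpy as np

def randles_impedance(omega, r_s=10.0, r_ct=50.0, c_dl=20e-6, sigma=30.0):
    z_w = sigma * (1 - 1j) / np.sqrt(omega)   # semi-infinite Warburg element
    z_faradaic = r_ct + z_w                   # diffusion impedance inside the interfacial branch
    z_interface = 1.0 / (1j * omega * c_dl + 1.0 / z_faradaic)
    return r_s + z_interface                  # solution resistance in series with the interface

frequencies_hz = np.logspace(-1, 4, 6)        # 0.1 Hz to 10 kHz
omega = 2 * np.pi * frequencies_hz
for f, z in zip(frequencies_hz, randles_impedance(omega)):
    print(f"{f:8.1f} Hz: Z = {z.real:7.1f} {z.imag:+7.1f}j ohm")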

An interactive Q&A session follows the presentation.

Mark Orazem obtained his BS and MS degrees from Kansas State University and his PhD in 1983 from the University of California, Berkeley. In 1983, he began his career as assistant professor at the University of Virginia, and in 1988 joined the faculty of the University of Florida, where he is Distinguished Professor of Chemical Engineering and Associate Chair for Graduate Studies. Mark is a fellow of The Electrochemical Society, International Society of Electrochemistry, and American Association for the Advancement of Science. He served as President of the International Society of Electrochemistry and co-authored, with Bernard Tribollet of the Centre national de la recherche scientifique (CNRS), the textbook entitled Electrochemical Impedance Spectroscopy, now in its second edition. Mark received the ECS Henry B. Linford Award, ECS Corrosion Division H. H. Uhlig Award, and with co-author Bernard Tribollet, the 2019 Claude Gabrielli Award for contributions to electrochemical impedance spectroscopy. In addition to writing books, he has taught short courses on impedance spectroscopy for The Electrochemical Society since 2000.

 

The Electrochemical Society

The post On the proper use of a Warburg impedance appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 4 December 2024 sponsored by Gamry Instruments, Inc., BioLogic, Scribner and Metrohm Autolab, in partnership with The Electrochemical Society https://physicsworld.com/wp-content/uploads/2024/09/2024-12-04-webinar-image.jpg
Multi-qubit entangled states boost atomic clock and sensor performance https://physicsworld.com/a/multi-qubit-entangled-states-boost-atomic-clock-and-sensor-performance/ Tue, 22 Oct 2024 16:52:22 +0000 https://physicsworld.com/?p=117622 Greenberger–Horne–Zeilinger states increase measurement frequency

The post Multi-qubit entangled states boost atomic clock and sensor performance appeared first on Physics World.

]]>
Frequency measurements using multi-qubit entangled states have been performed by two independent groups in the US. These entangled states have correlated errors, resulting in measurement precisions better than the standard quantum limit. One team is based in Colorado and it measured the frequency of an atomic clock with greater precision than possible using conventional methods. The other group is in California and it showed how entangled states could be used in quantum sensing.

Atomic clocks are the most accurate timekeeping devices we have. They work by locking an ultraprecise frequency-comb laser to a narrow-linewidth transition in an atom. The higher the transition’s frequency, the faster the clock ticks and the more precisely it can keep time. The clock with the best precision today is operated by Jun Ye’s group at JILA in Boulder, Colorado and colleagues. After running for the age of the universe, this clock would only be wrong by 0.01 s.

The conventional way of improving precision is to use higher-energy, narrower transitions such as those found in highly charged ions and nuclei. These pose formidable challenges, however, both in locating the transitions and in producing stable high-frequency lasers to excite them.

Standard quantum limit

An alternative is to operate existing clocks in more sophisticated ways. “In an optical atomic clock, you’re comparing the oscillations of an atomic superposition with the frequency of a laser,” explains JILA’s Adam Kaufman, “At the end of the experiment, that atom can only be in the excited state or in the ground state, so to get an estimate of the relative frequencies you need to sample that atom many times, and the precision goes like one over the square root of the number of samples.” This is the standard quantum limit, and is derived from the assumption that the atoms collapse randomly, producing random noise in the frequency estimate.

If, however, multiple atoms are placed into a Greenberger–Horne–Zeilinger (GHZ) entangled state and measured simultaneously, information can be acquired at a higher frequency without increasing the fundamental frequency of the transition. JILA’s Alec Cao explains, “Two atoms in a GHZ state are not just two independent atoms. Both the atoms are in the zero state, so the state has an energy of zero, or both the atoms are in the upper state so it has an energy of two. And as you scale the size of the system the energy difference increases.”
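
The gain can be expressed as a simple scaling comparison: for N uncorrelated atoms the frequency uncertainty falls as one over the square root of N (the standard quantum limit), whereas an ideal N-atom GHZ state accumulates phase N times faster, giving a one-over-N scaling in the noise-free limit. The short Python sketch below is purely illustrative.

import math

def standard_quantum_limit(n_atoms):
    return 1 / math.sqrt(n_atoms)   # uncorrelated atoms: 1/sqrt(N)

def heisenberg_limit(n_atoms):
    return 1 / n_atoms              # ideal N-atom GHZ state: 1/N

for n in (1, 4, 8):
    print(n, round(standard_quantum_limit(n), 3), round(heisenberg_limit(n), 3))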

Unfortunately the lifetime of a GHZ state is inversely proportional to its size. Therefore, though precision can be acquired in a shorter time, the time window for measurement also drops, cancelling out the benefit. Mark Saffman of the University of Wisconsin-Madison explains, “This idea was suggested about 20 years ago that you could get around this by creating GHZ states of different sizes, and using the smallest GHZ state to measure the least significant bit of your measurement, and as you go to larger and larger GHZ states you’re adding more significant bits to your measurement result.”

In the Colorado experiment, Kaufman, Cao and colleagues used a novel, multi-qubit entangling technique to create GHZ states of Rydberg atoms in a programmable optical tweezer lattice. A Rydberg atom is an atom with one or more electrons in a highly-excited state. They showed that, when interrogated for short times, four-atom GHZ states achieved higher precisions than could be achieved with the same number of uncorrelated atoms. They also constructed gates of up to eight qubits. However, owing to their short lifetimes, they were unable to beat the standard quantum limit with these.

Cascade of GHZ qubits

The Colorado team therefore constructed a cascade of GHZ qubits of increasing sizes, with the largest containing eight atoms. They showed that the fidelity achieved by the cascade was superior to the fidelity achieved by a single large GHZ qubit. Cao compares this to using the large GHZ state on a clock as the second hand while progressively smaller states act as the minute and hour hands. The team did not demonstrate higher phase sensitivity than could theoretically be achieved with the same number of unentangled atoms, but Cao says this is simply a technical challenge.

Meanwhile in California, Manuel Endres and colleagues at Caltech also used GHZ states to perform precision spectroscopy on the frequency of an atomic clock using Rydberg atoms in an optical tweezer array. They used a slightly different technique for preparing the GHZ states. This did not allow them to prepare GHZ states as large as those of their Colorado counterparts, although Endres argues that their technique should be more scalable. The Caltech work, however, focused on mapping the output data onto “ancilla” qubits and demonstrating a universal set of quantum logic operations.

“The question is, ‘How can a quantum computer help you for a sensor?’” says Endres. “If you had a universal quantum computer that somehow produced a GHZ state on your sensor you could improve the sensing capabilities. The other thing is to take the signal from a quantum computer and do quantum post-processing on that signal. The vision in our [work] is to have a quantum computer integrated with a sensor.”

Saffman, who was not involved with either group, praises the work of both teams. He congratulates the Coloradans for setting out to build a better clock and succeeding – and praises the Californians for going in “another direction” with their GHZ states. Saffman says he would like to see the researchers produce larger GHZ states and show that such states can not only improve a clock relative to one operated with the same limitations using uncorrelated atoms, but can also produce the world’s best clock overall.

The research is described in two papers in Nature (California paper, Colorado paper).

The post Multi-qubit entangled states boost atomic clock and sensor performance appeared first on Physics World.

]]>
Research update Greenberger–Horne–Zeilinger states increase measurement frequency https://physicsworld.com/wp-content/uploads/2024/10/22-10-2024-GHZ-atomic-clock-team.jpg newsletter
Gems from the Physics World archive: Isaac Asimov https://physicsworld.com/a/gems-from-the-physics-world-archive-isaac-asimov/ Tue, 22 Oct 2024 15:20:02 +0000 https://physicsworld.com/?p=117617 Science-fiction fans in the Physics World team have a clear favourite from 36 years of articles

The post Gems from the <em>Physics World</em> archive: Isaac Asimov appeared first on Physics World.

]]>
Cartoon illustration of Isaac Asimov

Since 1988 Physics World has boasted among its authors some of the most eminent physicists of the 20th and 21st centuries, as well as some of the best popular-science authors. But while I am, in principle, aware of this, it can still be genuinely exciting to discover who wrote for Physics World before I joined the team in 2011. And for me – a self-avowed book nerd – the most exciting discovery was an article written by Isaac Asimov in 1990.

Asimov is best remembered for his hard science fiction. His Foundation trilogy (1951–1953) and decades of robot stories first collected in I, Robot (1950) are so seminal they have contributed words and concepts to the popular imagination, far beyond actual readers of his work. If you’ve ever heard of the Laws of Robotics (the first of which is that “a robot shall not harm a human, or by inaction allow a human to come to harm”), that was Asimov’s work.

I was introduced to Asimov through what remains the most “hard physics”-heavy sci-fi I have ever tackled: The Gods Themselves (1972). In this short novel, humans make contact with a parallel universe and manage to transfer energy from a parallel world to Earth. When a human linguist attempts to communicate with the “para-men”, he discovers this transfer may be dangerous. The narrative then switches to the parallel world, which is populated by the most “alien” aliens I can remember encountering in fiction.

Underlying this whole premise, though, is the fact that in the parallel world, the strong nuclear force, which binds protons and neutrons together, is even stronger than it is in our own. And Asimov was a good enough scientist that he worked into his novel everything that would be different – subtly or significantly – were this the case. It’s a physics thought experiment; a highly entertaining one that also encompasses ethics, astrobiology, cryptanalysis and engineering.

Of course, Asimov wrote non-fiction, too. His 500+ books include such titles as Understanding Physics (1966), Atom: Journey Across the Subatomic Cosmos (1991) and the extensive Library of the Universe series (1988–1990). The last two of these even came out while Physics World was being published.

So what did this giant of sci-fi and science communication write about for Physics World?

It was, of all things, a review of a book by someone else: specifically, Think of a Number by Malcolm E Lines, a British mathematician. Lines isn’t nearly so famous as his reviewer, but he was still writing popular-science books about mathematics as recently as 2020. Was Asimov impressed? You’ll have to read his review to find out.

The post Gems from the <em>Physics World</em> archive: Isaac Asimov appeared first on Physics World.

]]>
Blog Science-fiction fans in the Physics World team have a clear favourite from 36 years of articles https://physicsworld.com/wp-content/uploads/2024/10/Isaac-Asimov-featured-2278521053-Shutterstock_Mei-Zendra-editorial-use-only-scaled.jpg newsletter
Negative triangularity tokamaks: a power plant plasma solution from the core to the edge? https://physicsworld.com/a/negative-triangularity-tokamaks-a-power-plant-plasma-solution-from-the-core-to-the-edge/ Tue, 22 Oct 2024 10:05:24 +0000 https://physicsworld.com/?p=117448 Join the audience for a live webinar on 7 November 2024 sponsored by IOP Publishing's journal, Plasma Science and Technologies

The post Negative triangularity tokamaks: a power plant plasma solution from the core to the edge? appeared first on Physics World.

]]>
The webinar is directly linked with a special issue of Plasma Physics and Controlled Fusion on Advances in the Physics Basis of Negative Triangularity Tokamaks, featuring contributions from all of the speakers and many more papers from the leading groups researching this fascinating topic.

In recent years the fusion community has begun to focus on the practical engineering of tokamak power plants. From this, it became clear that the power exhaust problem, extracting the energy produced by fusion without melting the plasma-facing components, is just as important and challenging as plasma confinement. To these ends, negative triangularity plasma shaping holds unique promise.

Conceptually, negative triangularity is simple. Take the standard positive triangularity plasma shape, ubiquitous among tokamaks, and flip it so that the triangle points inwards. By virtue of this change in shape, negative triangularity plasmas have been experimentally observed to dramatically improve energy confinement, sometimes by more than a factor of two. Simultaneously, the plasma shape is also found to robustly prevent the transition to the improved confinement regime H-mode. While this may initially seem a drawback, the confinement improvement can enable negative triangularity to still achieve similar confinement to a positive triangularity H-mode. In this way, it robustly avoids the typical difficulties of H-mode: damaging edge localized modes (ELMs) and the narrow scrape-off layer (SOL) width. This is the promise of negative triangularity, an elegant and simple path to alleviating power exhaust while preserving plasma confinement.

The biggest drawback at present is uncertainty. No tokamak in the world is designed to create negative triangularity plasmas, and the concept has received only a fraction of the theory community’s attention. In this webinar, through both theory and experiment, we will explore the knowns and unknowns of negative triangularity and evaluate its future as a power plant solution.

Justin Ball (chair) is a research scientist at the Swiss Plasma Center at EPFL in Lausanne, Switzerland. He earned his Masters from MIT in 2013 and his PhD in 2016 at Oxford University studying the effects of plasma shaping in tokamaks, for which he was awarded the European Plasma Physics PhD Award. In 2019, he and Jason Parisi published the popular science book, The Future of Fusion Energy. Currently, Justin is the principal investigator of the EUROfusion TSVV 2 project, a ten-person team evaluating the reactor prospects of negative triangularity using theory and simulation.

Alessandro Balestri is a PhD student at the Swiss Plasma Center (SPC) located within the École Polytechnique Fédérale de Lausanne (EPFL). His research focuses on using experiments and gyrokinetic simulations to achieve a deep understanding of how negative triangularity reduces turbulent transport in tokamak plasmas and how this beneficial effect can be optimized in view of a fusion power plant. He received his Bachelor and Master degrees in physics at the University of Milano-Bicocca, where he carried out a thesis on the first gyrokinetic simulations for the negative triangularity option of the novel Divertor Tokamak Test facility.

Andrew “Oak” Nelson is an associate research scientist with Columbia University where he specializes in negative triangularity (NT) experiments and reactor design. Oak received his PhD in plasma physics from Princeton University in 2021 for work on the H-mode pedestal in DIII-D and has since dedicated his career to uncovering mechanisms to mitigate the power-handling needs faced by tokamak fusion pilot plants. Oak is an expert in the edge regions of NT plasmas and one of the co-leaders of the EU-US Joint Task Force on Negative Triangularity Plasmas. In addition to NT work, Oak consults regularly on various physics topics for Commonwealth Fusion Systems and heads several fusion-outreach efforts.

Tim Happel is the head of the Plasma Dynamics Division at the Max Planck Institute for Plasma Physics in Garching near Munich. His research centres around turbulence and tokamak operational modes with enhanced energy confinement. He is particularly interested in the physics of the Improved Energy Confinement Mode (I-Mode) and plasmas with negative triangularity. During his PhD, which he received in 2010 from the University Carlos III in Madrid, he developed a Doppler backscattering system for the investigation of plasma flows and their interaction with turbulent structures. For this work, he was awarded the Itoh Prize for Plasma Turbulence.

Haley Wilson is a PhD candidate studying plasma physics at Columbia University. Her main research interest is the integrated modelling of reactor-class tokamak core scenarios, with a focus on highly radiative, negative triangularity scenarios. The core modelling of MANTA is her first published work in this area, but her most recent manuscript submission expands the MANTA study to a broader operational space. She was recently selected for an Office of Science Graduate Student Research award, to work with Oak Ridge National Laboratory on whole device modelling of negative triangularity tokamaks using the FREDA framework.

Olivier Sauter obtained his PhD at CRPP-EPFL, Lausanne, Switzerland in 1992, followed by postdocs at General Atomics (1992–93) and ITER San Diego (1995–96), work that led to his bootstrap current coefficients and experimental studies of neoclassical tearing modes. He has been a JET Task Force Leader and a EUROfusion Research Topic Coordinator, received the 2013 John Dawson Award for excellence in plasma physics research, and has been an ITER Scientist Fellow in the area of integrated modelling since 2016. He is a senior scientist at SPC-EPFL, supervising several PhD theses, and active with AUG, DIII-D, JET, TCV and WEST, focusing on real-time simulations and negative triangularity plasmas.

About this journal

Plasma Physics and Controlled Fusion is a monthly publication dedicated to the dissemination of original results on all aspects of plasma physics and associated science and technology.

Editor-in-chief: Jonathan Graves, University of York, UK and EPFL, Switzerland.

 

The post Negative triangularity tokamaks: a power plant plasma solution from the core to the edge? appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 7 November 2024 sponsored by IOP Publishing's journal, Plasma Science and Technologies https://physicsworld.com/wp-content/uploads/2024/10/diii-d_cut.png
How a next-generation particle collider could unravel the mysteries of the Higgs boson https://physicsworld.com/a/how-a-next-generation-particle-collider-could-unravel-the-mysteries-of-the-higgs-boson/ Tue, 22 Oct 2024 10:00:59 +0000 https://physicsworld.com/?p=117149 Tulika Bose, Philip Burrows and Tara Shears discuss proposals for the next big particle collider

The post How a next-generation particle collider could unravel the mysteries of the Higgs boson appeared first on Physics World.

]]>
More than a decade following the discovery of the Higgs boson at the CERN particle-physics lab near Geneva in 2012, high-energy physics stands at a crossroads. While the Large Hadron Collider (LHC) is currently undergoing a major £1.1bn upgrade towards a High-Luminosity LHC (HL-LHC), the question facing particle physicists is what machine should be built next – and where – if we are to study the Higgs boson in unprecedented detail in the hope of revealing new physics.

Several designs exist, one of which is a huge 91 km circumference collider at CERN known as the Future Circular Collider (FCC). But new technologies are also offering tantalising alternatives to such large machines, notably a muon collider. As CERN celebrates its 70th anniversary this year, Michael Banks talks to Tulika Bose from the University of Wisconsin–Madison, Philip Burrows from the University of Oxford and Tara Shears from the University of Liverpool about the latest research on the Higgs boson, what the HL-LHC might discover and the range of proposals for the next big particle collider.

Tulika Bose, Philip Burrows and Tara Shears

What have we learnt about the Higgs boson since it was discovered in 2012?

Tulika Bose (TB): The question we have been working towards in the past decade is whether it is a “Standard Model” Higgs boson or a sister, or a cousin or a brother of that Higgs. We’ve been working really hard to pin it down by measuring its properties. All we can say at this point is that it looks like the Higgs that was predicted by the Standard Model. However, there are so many questions we still don’t know. Does it decay into something more exotic? How does it interact with all of the other particles in the Standard Model? While we’ve understood some of these interactions, there are still many more particle interactions with the Higgs that we don’t quite understand. Then of course, there is a big open question about how the Higgs interacts with itself. Does it, and if so, what is its interaction strength? These are some of the exciting questions that we are currently trying to answer at the LHC.

So the Standard Model of particle physics is alive and well?

TB: The fact that we haven’t seen anything exotic that has not been predicted yet tells us that we need to be looking at a different energy scale. That’s one possibility – we just need to go much higher energies. The other alternative is that we’ve been looking in the standard places. Maybe there are particles that we haven’t yet been able to detect that couple incredibly lightly to the Higgs.

Has it been disappointing that the LHC hasn’t discovered particles beyond the Higgs?

Tara Shears (TS): Not at all. The Higgs alone is such a huge step forward in completing our picture and understanding of the Standard Model, providing, of course, it is a Standard Model Higgs. And there’s so much more that we’ve learned aside from the Higgs, such as understanding the behaviour of other particles – for example, the differences between matter and antimatter in charm quarks.

How will the HL-LHC take our understanding of the Higgs forward?

TS: One way to understand more about the Higgs is to amass enormous amounts of data to look for very rare processes, and this is where the HL-LHC is really going to come into its own. It is going to allow us to extend those investigations beyond the particles we’ve been able to study so far, making our first observations of how the Higgs interacts with lighter particles such as the muon and how the Higgs interacts with itself. We hope to see that with the HL-LHC.

What is involved with the £1.1bn HL-LHC upgrade?

Philip Burrows (PB): The LHC accelerator is 27 km long and about 90% of it is not going to be affected. One of the most critical aspects of the upgrade is to replace the magnets in the final focus systems of the two large experiments, ATLAS and CMS. These magnets will take the incoming beams and then focus them down to very small sizes of the order of 10 microns in cross section. This upgrade includes the installation of brand new state-of-the-art niobium-tin (Nb3Sn) superconducting focusing magnets.

Engineer working on the HL-LHC upgrade in the LHC tunnel

What is the current status of the project?

PB: The schedule involves shutting down the LHC for roughly three to four years to install the high-luminosity upgrade, which will then turn on towards the end of the decade. The current CERN schedule has the HL-LHC running until the end of 2041. So there’s another 10 years plus of running this upgraded collider and who knows what exciting discoveries are going to be made.

TS: One thing to think about concerning the cost is that the timescale of use is huge and so it is an investment for a considerable part of the future in terms of scientific exploitation. It’s also an investment in terms of potential spin-out technology.

In what way will the HL-LHC be better than the LHC?

PB: The measure of the performance of the accelerator is conventionally given in terms of luminosity, which is defined as the number of particles that cross at these collision points per square centimetre per second. That number is roughly 10^34 with the LHC. With the high-luminosity upgrade, however, we are talking about making roughly an order of magnitude increase in the total data sample that will be collected over the next decade or so. In other words, we’ve only got 10% or so of the total data sample so far in the bag. After the upgrade, there’ll be another factor of 10 in data collected, and that is a completely new ball game in terms of the statistical accuracy of the measurements that can be made and the sensitivity and reach for new physics.
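
To give a feel for what those numbers mean in practice, here is a rough, illustrative calculation. Only the 10^34 cm^-2 s^-1 luminosity comes from the interview; the effective running time per year and the Higgs production cross-section are ballpark values assumed purely for the sake of the example.

# Rough orders of magnitude only; the running time and cross-section below are
# illustrative assumptions, not figures quoted in the interview.
luminosity = 1e34               # cm^-2 s^-1, the LHC figure quoted above
live_seconds_per_year = 1e7     # assumed effective collision time per year
sigma_higgs = 5e-35             # cm^2 (about 50 picobarns), approximate Higgs cross-section

higgs_per_second = luminosity * sigma_higgs
integrated_per_year = luminosity * live_seconds_per_year   # cm^-2
inverse_femtobarns = integrated_per_year / 1e39            # 1 fb^-1 = 1e39 cm^-2

print(f"Higgs bosons produced per second: ~{higgs_per_second:.1f}")
print(f"Data collected per year: ~{inverse_femtobarns:.0f} fb^-1")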

Looking beyond the HL-LHC, particle physicists seem to agree that the next particle collider should be a Higgs factory – but what would that involve?

TB: Even at the end of the HL-LHC, there will be certain things we won’t be able to do at the LHC, and that’s for several reasons. One is that the LHC is a proton–proton machine, and when you’re colliding protons you end up with a rather messy environment in comparison with the clean collisions between electrons and positrons. Those clean collisions allow you to make certain measurements that will not be possible at the LHC.

So what sort of measurements could you do with a Higgs factory?

TS:  One is to find out how much the Higgs couples to the electron. There’s no way we will ever find that out with the HL-LHC, it’s just too rare a process to measure, but with a Higgs factory, it becomes a possibility. And this is important not because it’s stamp collecting, but because understanding why the mass of the electron, which the Higgs boson is responsible for, has that particular value is of huge importance to our understanding of the size of atoms, which underpins chemistry and materials science.

PB: Although we often call this future machine a Higgs factory, it has far more uses beyond making Higgs bosons. If you were to run it at higher energies, for example, you could make pairs of top quarks and anti-top quarks. And we desperately want to understand the top quark, given it is the heaviest fundamental particle that we are aware of – it’s roughly 180 times heavier than a proton. You could also run the Higgs factory at lower energies and carry out more precision measurements of the Z and W bosons. So it’s really more than a Higgs factory. Some people say it’s the “Higgs and the electroweak boson factory” but that doesn’t quite roll off the tongue in the same way.

Artist concept of the International Linear Collider

While it seems there’s a consensus on a Higgs factory, there doesn’t appear to be one regarding building a linear or circular machine?

PB: There are two main designs on the table today – circular and linear. The motivation for linear colliders comes from the problem of sending electrons and positrons round in a circle – they radiate photons. So as you go to higher energies in a circular collider, electrons and positrons radiate that energy away in the form of synchrotron radiation. It was felt back in the late 1990s that it was the end of the road for circular electron–positron colliders because of the limitations of synchrotron radiation. But the Higgs boson, when it was discovered at 125 GeV, turned out to be lighter than some had predicted. This meant that an electron–positron collider would only need a centre-of-mass energy of about 250 GeV. Circular electron–positron colliders then came back in vogue.
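
The textbook scaling behind this point is that the energy a particle radiates per turn grows steeply with its energy and falls with its mass. For beam energy E, particle mass m and bending radius ρ,

\[
\Delta E_{\mathrm{turn}} \propto \frac{1}{\rho}\left(\frac{E}{mc^{2}}\right)^{4},
\]

so at the same energy and radius an electron radiates roughly (m_p/m_e)^4 ≈ 10^13 times more than a proton – which is why synchrotron radiation limits circular electron–positron machines long before it troubles proton rings.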

TS: The drawback with a linear collider is that the beams are not recirculated in the same way as they are in a circular collider. Instead, you have “shots”, so it’s difficult to reach the same volume of data in a linear collider. Yet it turns out that both of these solutions are really competitive with each other and that’s why they are still both on the table.

PB: Yes, while a circular machine may have two, or even four, main detectors in the ring, at a linear machine the beam can be sent to only one detector at a given time. So having two detectors means you have to share the luminosity, so each would get notionally half of the data. But to take an automobile analogy, it’s kind of like arguing about the merits of a Rolls-Royce versus a Bentley. Both linear and circular are absolutely superb, amazing options and some have got bells and whistles over here and others have got bells and whistles over there, but you’re really arguing about the fine details.

CERN seems to have put its weight behind the Future Circular Collider (FCC) – a huge 91 km circumference circular collider that would cost £12bn. What’s the thinking behind that?

TS: The cost is about one-and-a-half times that of the Channel Tunnel so it is really substantial infrastructure. But bear in mind it is for a facility that’s going to be used for the remainder of the century, for future physics, so you have to keep that longevity in mind when talking about the costs.

TB: I think the circular collider has become popular because it’s seen as a stepping stone towards a proton–proton machine operating at 100 TeV that would use the same infrastructure and the same large tunnel and begin operation after the Higgs factory element in the 2070s. That would allow us to really pin down the Higgs interaction with itself and it would also be the ultimate discovery machine, allowing us to discover particles at the 30–40 TeV scale, for example.

Artist concept of the Future Circular Collider

What kind of technologies will be needed for this potential proton machine?

PB: The big issue is the magnets, because you have to build very strong bending magnets to keep the protons going round on their 91 km circumference trajectory. The magnets at the LHC are 8 T but some think the magnets you would need for the proton version of the FCC would be 16–20 T. And that is really pushing the boundaries of magnet technology. Today, nobody really knows how to build such magnets. There’s a huge R&D effort going on around the world and people are constantly making progress. But that is the big technological uncertainty. Yet if we follow the model of an electron–positron collider first, followed by a proton–proton machine, then we will have several decades in which to master the magnet technology.
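
A quick way to see where the 16–20 T figure comes from is the standard rigidity relation p ≈ 0.3Bρ, with beam momentum p in GeV/c, field B in tesla and bending radius ρ in metres. Taking the 100 TeV collision energy mentioned above – 50 TeV per beam – and assuming, purely for illustration, that about 80% of the 91 km circumference is filled with bending magnets:

\[
\rho \approx 0.8\times\frac{91\,000\ \mathrm{m}}{2\pi} \approx 1.2\times10^{4}\ \mathrm{m},
\qquad
B \approx \frac{5\times10^{4}}{0.3\times1.2\times10^{4}} \approx 14\ \mathrm{T},
\]

in the same ballpark as the fields quoted here. The filling fraction is an assumption for the sake of the estimate, not an FCC design figure.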

With regard to novel technology, the influential US Particle Physics Project Prioritization Panel, known as “P5”, called for more research into a muon collider, calling it “our muon shot”. What would that involve?

TB: Yes, I sat on the P5 panel that published a report late last year that recommended a course of action for US particle physics for the coming 20 years. One of those recommendations involves carrying out more research and development into a muon collider. As we already discussed, an electron–positron collider in a circular configuration suffers from a lot of synchrotron radiation. The question is if we can instead use a fundamental elementary particle that is more massive than the electron. In that case a muon collider could offer the best of both worlds, the advantages of an electron machine in terms of clean collisions but also reaching larger energies like a proton machine. However, the challenge is that the muon is very unstable and decays quickly. This means you are going to have to create, focus and collide them before they decay. A lot of R&D is needed in the coming decades but perhaps a decision could be taken on whether to go ahead by the 2050s.

And potentially, if built, it would need a tunnel of similar size to the existing LHC?

TB: Yes. The nice thing about the muon collider is that you don’t need a massive 90 km tunnel so it could actually fit on the existing Fermilab campus. Perhaps we need to think about this project in a global way because this has to be a big global collaborative effort. But whatever happens it is exciting times ahead.

  • Tulika Bose, Philip Burrows and Tara Shears were speaking on a Physics World Live panel discussion about the future of particle physics held on 26 September 2024. This Q&A is an edited version of the event, which you can watch online now

The post How a next-generation particle collider could unravel the mysteries of the Higgs boson appeared first on Physics World.

]]>
Feature Tulika Bose, Philip Burrows and Tara Shears discuss proposals for the next big particle collider https://physicsworld.com/wp-content/uploads/2024/10/Fractal-image-of-particle-fission-1238252527-shutterstock_sakkmesterke.jpg newsletter1
Objects with embedded spins could test whether quantum measurement affects gravity https://physicsworld.com/a/objects-with-embedded-spins-could-test-whether-quantum-measurement-affects-gravity/ Mon, 21 Oct 2024 17:26:22 +0000 https://physicsworld.com/?p=117585 Experiment could involve sending tiny diamonds through interferometers

The post Objects with embedded spins could test whether quantum measurement affects gravity appeared first on Physics World.

]]>
A new experiment to determine whether or not gravity is affected by the act of measurement has been proposed by theoretical physicists in the UK, India and the Netherlands. The experiment is similar to one outlined by the same group in 2017 to test whether or not two masses could become quantum-mechanically entangled by gravity, but the latest version could potentially be easier to perform.

An important outstanding challenge in modern theoretical physics is how to reconcile Einstein’s general theory of relativity – which describes gravity – with quantum theory, which describes just about everything else in physics.

“You can quantize gravity,” explains Daniel Carney of the Lawrence Berkeley National Laboratory in California, who was not involved in this latest research. However, he adds, “Gravitational wave detection is extremely quantum mechanical…[Gravity is] a normal quantum field theory and it works fine: it just predicts its own breakdown near black hole singularities and the Big Bang and things like that.”

Multiple experimental groups around the world seek to test whether the gravitational field can exist in non-classical states that would be fundamentally inconsistent with general relativity. If it could not, that would suggest that the reason quantum gravity breaks down at high energies is that gravity is not a quantum field. Performing these tests, however, is extraordinarily difficult because it requires objects that are both small enough to be detectably affected by the laws of quantum mechanics and yet massive enough for their gravitation to be measured.

Hypothetical analogy

Now, Sougato Bose of University College London and colleagues have proposed a test to determine whether or not the quantum state of a massive particle is affected by the detection of its mass. The measurement postulate in quantum mechanics says that it should be affected. Bose offers a hypothetical analogy: a photon passes through an interferometer, splitting its quantum wavefunction into two paths. Both paths interact equally with a mass in a delocalized superposition state. When the paths recombine, the output photon always emerges from the same port of the interferometer. If, however, the position of the mass is detected using another mass, the superposition collapses, the photon wavefunction no longer interacts equally with the mass along each arm, and the photon may consequently emerge from the other port.

However, this conceptually simple test is experimentally impracticable. For a mass to exert a gravitational field sufficient for another mass to detect it, it needs to be at least 10^-14 kg – about a micron in size. “A micron-sized mass does not go into a quantum superposition, because a beamsplitter is like a potential barrier, and a large mass doesn’t tunnel across a barrier of sufficient height,” explains Bose.
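
A rough numerical check – with a micron-scale separation assumed purely for illustration – shows how feeble the gravitational field of even a 10^-14 kg mass is:

\[
a = \frac{Gm}{r^{2}} \approx \frac{6.7\times10^{-11}\times10^{-14}}{(10^{-6})^{2}}\ \mathrm{m\,s^{-2}} \approx 7\times10^{-13}\ \mathrm{m\,s^{-2}},
\]

which is why the masses must be large enough, and the apparatus quiet enough, for such tiny accelerations to register.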

The solution to this problem, according to Bose and colleagues, is to use a small diamond crystal containing a single nitrogen vacancy centre – which contains a quantum spin. At the beginning of the experiment, a microwave pulse would initialize the vacancy into a spin superposition. The crystal would then pass through a Stern–Gerlach interferometer, where it would experience a magnetic field gradient.

Nitrogen vacancy centres are magnetic, so opposite spins would be deflected in opposite directions by the magnetic field gradient. Crystals with spins in superposition states would be deflected both ways simultaneously. The spins could then be inverted using another microwave pulse, causing the crystals to recombine with themselves without providing any information about which path they had taken. However, if a second interferometer were placed close enough to detect the gravitational field produced by the first mass, it would collapse the superposition, providing “which path” information and affecting the result measured by the first interferometer.

Stern–Gerlach interferometers

In 2017, Bose and colleagues proposed a similar setup to test whether or not the gravitational attraction between two masses could lead to quantum entanglement of spins in two Stern–Gerlach interferometers. However, Bose argues the new test could be easier to perform, as it would not require measurement of both spins simultaneously – simply for a second interferometer to perform some kind of gravitational detection of the first mass’s position. “If you see a difference, then you can immediately conclude that an update on a quantum measurement is happening.”

Moreover, Bose says that the inevitable invasiveness of a measurement is a different postulate of quantum mechanics from the formation of quantum entanglement between the two particles as a result of their interaction. In a hypothetical theory going beyond both quantum mechanics and general relativity, one of them could hold but not the other. The researchers are now investigating potential ways to implement their proposal in practice – something Bose predicts will take at least 15 years.

Carney sees some merit in the proposal. “I do like the one-sided test nature of things like this, and they are, in some sense, easier to execute,” he says; “but the reason these things are so hard is that I need to take a small system and measure its gravitational field, and this does not avoid that problem at all.”

A paper describing the research has been accepted for publication in Physical Review Letters and is available on the arXiv pre-print server.

The post Objects with embedded spins could test whether quantum measurement affects gravity appeared first on Physics World.

]]>
Research update Experiment could involve sending tiny diamonds through interferometers https://physicsworld.com/wp-content/uploads/2024/10/21-10-2024-massive-spins.jpg
Flocking together: the physics of sheep herding and pedestrian flows https://physicsworld.com/a/flocking-together-the-physics-of-sheep-herding-and-pedestrian-flows/ Mon, 21 Oct 2024 16:04:38 +0000 https://physicsworld.com/?p=117558 Learn how the science of crowd movements can help shepherds and urban designers

The post Flocking together: the physics of sheep herding and pedestrian flows appeared first on Physics World.

]]>

In this episode of Physics World Stories, host Andrew Glester shepherds you through the fascinating world of crowd dynamics. While gazing at a flock of sheep or meandering through a busy street, you may not immediately think of the physics at play – but there is far more of it than you might expect. Give the episode a listen to discover the surprising science behind how animals and people move together in large groups.

The first guest, Philip Ball, a UK-based science writer, explores the principles that underpin the movement of sheep in flocks. Insights from physics can even be used to inform herding tactics, whereby dogs are guided – usually through whistles – to control flocks of sheep and direct them towards a chosen destination. For even more detail, check out Ball’s recent Physics World feature “Field work – the physics of sheep, from phase transitions to collective motion“.

Next, Alessandro Corbetta, from Eindhoven University of Technology in the Netherlands, talks about his research on pedestrian flow that won him an Ig Nobel Prize. Corbetta explains how his research field is helping us understand – and manage – the movements of human crowds in bustling spaces such as museums, transport hubs and stadia. Plus, he shares how winning the Ig Nobel has enabled the research to reach a far broader audience than he initially imagined.

The post Flocking together: the physics of sheep herding and pedestrian flows appeared first on Physics World.

]]>
Podcasts Learn how the science of crowd movements can help shepherds and urban designers https://physicsworld.com/wp-content/uploads/2024/10/crowd-Frankfurt-472237637-iStock-Meinzahn_crop.jpg newsletter
Confused by the twin paradox? Maybe philosophy can help https://physicsworld.com/a/confused-by-the-twin-paradox-maybe-philosophy-can-help/ Mon, 21 Oct 2024 10:00:53 +0000 https://physicsworld.com/?p=117115 Robert P Crease discusses a puzzle that goes to the heart of science and philosophy

The post Confused by the twin paradox? Maybe philosophy can help appeared first on Physics World.

]]>
Once upon a time, a man took a fast rocket to a faraway planet. He soon missed his home world and took a fast rocket back. His twin sister, a physicist, was heartbroken, saying that they were no longer twins and that her sibling was now younger than she due to the phenomenon of time dilation.

But her brother, who was a philosopher, said that they had experienced time equally and so were truthfully the same age. And verily, physicists and philosophers have quarrelled ever since – physicists speaking of clocks and philosophers of time.

This scenario illustrates a famously counterintuitive implication of the special theory of relativity known as the “twin paradox”. It’s a puzzle that two physicists (Adam Frank and Marcello Gleiser) and a philosopher (Evan Thompson) have now taken up in a new book called The Blind Spot. The book shows how bound up philosophy and physics are and how its practitioners can so easily misunderstand each other.

Got time?

Albert Einstein implicitly proposed time dilation in his famous 1905 paper “On the electrodynamics of moving bodies” (Ann. Phys. 17 891), which inaugurated the special theory of relativity. If two identical clocks are synchronized and one then travels at a speed relative to the other and back, the theory implied, then when the clocks are compared one would see a difference in the time registered by the two. The clock that had travelled and returned would have run slower, registering less elapsed time, and would therefore be “younger”.

For humans to experience the world, time cannot be made of abstract instants stuck together

At around the same time that Einstein was putting together the theory of relativity, the French philosopher Henri Bergson (1859–1941) was working out a theory of time. In Time and Free Will, his doctoral thesis published in 1889, Bergson argued that time, considered most fundamentally, does not consist of dimensionless and identical instants.

For humans to experience the world, time cannot be made of abstract instants stuck together. Humans live in a temporal flow that Bergson called “duration”, and only duration makes it possible to conceive and measure a “clock-time” consisting of instants. Duration itself cannot be measured; any measurement presupposes duration.

These two accounts of time provided the perfect opportunity to display the relation of physics and philosophy. On the one hand was Einstein’s special theory of relativity, which relates measured times of objects moving with respect to each other; on the other was Bergson’s account of the dependence of measured times on duration.

Unfortunately, as the authors of The Blind Spot describe, the opportunity was squandered by off-hand comments during an impromptu exchange between Einstein and Bergson. The much-written-about encounter, which took place in Paris in 1922, saw Einstein speaking to the Paris Philosophical Society, with Bergson in the audience.

Coaxed into speaking at a slow spot in the meeting, Bergson mentioned some ideas from his upcoming book Duration and Simultaneity. While relativity may be complete as a mathematical theory, he said, it depends on duration, or the experience of time itself, which escapes measurement and indeed makes “clock-time” possible.

Einstein was dismissive, calling Bergson’s notion “psychological”. To Einstein, duration is an emotion, the response of a human being to a situation rather than part and parcel of what it means to experience a situation.

Mutual understanding was still possible, had Einstein and Bergson pursued the issue with rigorous and open minds. But the occasion came to an unnecessary standstill when Bergson slipped up in remarks about the twin paradox.

Bergson argued that duration underlies the experience of each twin and neither would experience any dilation of it; neither would experience time as “slowing down” or “speeding up”. This much was true. But Bergson went on to say that duration was therefore a continuum, and any intervals of time in it are abstractions made possible by duration.

Bergson thought that duration is single. Moreover, the reference frames of the twins are symmetric, for the twins are in reference frames moving with respect to each other, not with respect to an absolute frame or universal time. An age difference between the twins, Bergson thought, is purely mathematical and only on their clocks; it might show up when the twins are theorizing, but not in real life.

This was a mistake; Einstein’s theory does indeed entail that the twins have aged differently. One twin has switched directions, jumping from a frame moving away to one in the reverse direction. Frame-switching requires acceleration, and the twin who has undergone it has broken the symmetry. Einstein and other physicists, noting Bergson’s misunderstanding of relativity, then felt justified in dismissing Bergson’s idea of duration and of how measurement depended on it.
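
The asymmetry shows up directly in the proper times. In the standard textbook treatment – an illustrative calculation, not one from the Einstein–Bergson exchange – a twin travelling at speed v for a round trip that lasts a time T in the stay-at-home twin’s frame accumulates a proper time

\[
\tau = T\sqrt{1-\frac{v^{2}}{c^{2}}},
\]

so for v = 0.8c and T = 10 years, and ignoring the brief turnaround, the traveller returns having aged only 6 years.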

The Blind Spot uses philosophical arguments to show how specific paradoxes and problems arise in science when the role of experience is overlooked

Many philosophers, from Immanuel Kant to Alfred North Whitehead, have demonstrated that scientific activity arises from and depends on something like duration. What is innovative about The Blind Spot is that it uses such philosophical arguments to show how specific paradoxes and problems arise in science when the role of experience is overlooked.

“We must live the world before we conceptualize it,” the authors say. Their book title invokes an analogy with the optic nerve, which makes seeing possible only by creating a blind spot in the visual field. Similarly, the authors write, aspects of experience such as duration make things like measurement possible only by being invisible, even to scientific data-taking and theorizing. Duration cannot itself be measured and precedes being able to practise science – yet it is fundamental to science.

The critical point

The Blind Spot does not eliminate what’s enigmatic about the twin paradox but shows more clearly what that enigma is. An everyday assumption about time is that it’s Newtonian: time is universal and can be measured as flowing everywhere the same. Bergson found that this is wrong, for duration allows humans to interact with the world before they can measure time and develop theories about it. But it turns out that there is no one duration, and relativity theory captures the structure of the relations between durations.

The two siblings may be very different, but with help they can understand each other.

The post Confused by the twin paradox? Maybe philosophy can help appeared first on Physics World.

]]>
Opinion and reviews Robert P Crease discusses a puzzle that goes to the heart of science and philosophy https://physicsworld.com/wp-content/uploads/2024/10/2024-10-CP-clocks-bendy-time-2016598412-Shutterstock_klee048.jpg newsletter
Physics-based model helps pedestrians and cyclists avoid city pollution https://physicsworld.com/a/physics-based-model-helps-pedestrians-and-cyclists-avoid-city-pollution/ Mon, 21 Oct 2024 08:19:46 +0000 https://physicsworld.com/?p=117565 New immersive reality method could also inform policymakers and urban planners about risks, say researchers

The post Physics-based model helps pedestrians and cyclists avoid city pollution appeared first on Physics World.

]]>
Computer rendering of a neon-blue car with airflow lines passing over it and a cloud of emissions trailing behind it, labelled "brake dust ejection" near the front wheels and "tyre and road dispersion" in the middle

Scientists at the University of Birmingham, UK, have used physics-based modelling to develop a tool that lets cyclists and pedestrians visualize certain types of pollution in real time – and take steps to avoid it. The scientists say the data behind the tool could also guide policymakers and urban planners, helping them make cities cleaner and healthier.

As well as the exhaust from their tailpipes, motor vehicles produce particulates from their tyres, their brakes and their interactions with the road surface. These particulate pollutants are known health hazards, causing or contributing to chronic conditions such as lung disease and cardiovascular problems. However, it is difficult to track exactly how they pass from their sources into the environment, and the relationships between pollution levels and factors like vehicle type, speed and deceleration are hard to quantify.

Large-eddy simulations

In the new study, which is detailed in Royal Society Open Science, researchers led by Birmingham mechanical engineer Jason Stafford developed a tool that answers some of these questions in a way that helps both members of the public and policymakers to manage the associated risks. Among other findings, they showed that the risk of being exposed to non-exhaust pollutants from vehicles is greatest when the vehicles brake – for example at traffic lights, zebra crossings and bus stops.

“We used large-eddy simulations to predict turbulent air flow around road vehicles for cruising and braking conditions that are observed in urban environments,” Stafford explains. “We then coupled these to a set of pollution transport (fluid dynamics) equations, allowing us to predict how harmful particle pollutants from the different emission sources (for example, brakes, tyres and roads) are transported to the wider pedestrian/cyclist environment.”
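
In generic form – written here for illustration rather than as the exact model used in the study – the transport of a particulate concentration c by the simulated air velocity field u is governed by an advection–diffusion equation of the type

\[
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c = \nabla\cdot\bigl(D\,\nabla c\bigr) + S,
\]

where D is an effective diffusivity and S represents sources such as brake, tyre and road wear.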

A visible problem

The researchers’ next goal was to help people “see” these so-called PM2.5 pollutants (which, at 2.5 microns or less in diameter, cannot be detected with the naked eye) in their everyday world without alarming them unduly and putting them off walking and cycling in urban spaces altogether. To this end, they developed an immersive reality tool that makes the pollutants visible in space and time, allowing users to observe the safest distances for themselves. They then demonstrated this tool to members of the general public in the centre of Birmingham, which is the UK’s second most populous city and its second largest contributor to PM2.5 emissions from brake and tyre wear.

The people who tried the tool were able to visualize the pollution data and identify pollutant sources. They could also understand how to navigate urban spaces to reduce their exposure to these pollutants, Stafford says.

“It was very exciting to find that this approach was effective no matter what a person’s pre-existing knowledge of non-exhaust emissions was, or on their educational background,” he tells Physics World.

Clear guidance and a framework via which to convey complex physicochemical data

Stafford says the team’s work provides clear guidance to governments, city councils and urban planners on the interface between road transport emissions and public health. It also creates a framework for conveying complex physicochemical data in a way that members of the public and decision-makers can understand, even if they lack scientific training.

“This is a crucial component if we are to help society,” Stafford says. Longitudinal studies, he adds, would help him and his colleagues understand whether the method actually leads to behavioural change for vehicle drivers or pedestrians.

Looking forward, the Birmingham team aims to reduce the computing complexity required to build the model. At present, the numerical simulations are intensive and require high-performance facilities to solve the governing equations and produce data. “These constraints limited us to constructing a one-way virtual environment,” Stafford says.  “Techniques that would provide close to real-time computing may open up two-way interactions that allow users to quickly change their environment and observe how this affects their exposure to pollution.”

Stafford says the team’s physics-informed immersive approach could also be extended beyond non-exhaust emissions to, for example, visualize indoor air quality and how it interacts with the built environment, where computational modelling tools are regularly used to inform thermal comfort and ventilation.

The post Physics-based model helps pedestrians and cyclists avoid city pollution appeared first on Physics World.

]]>
Research update New immersive reality method could also inform policymakers and urban planners about risks, say researchers https://physicsworld.com/wp-content/uploads/2024/10/2024-10-21-brake-pollution-2.jpg newsletter
Liquid-crystal bifocal lens excels at polarization and edge imaging https://physicsworld.com/a/liquid-crystal-bifocal-lens-excels-at-polarization-and-edge-imaging/ Sat, 19 Oct 2024 14:04:19 +0000 https://physicsworld.com/?p=117562 Applied voltage adjusts intensity at twin focal points

The post Liquid-crystal bifocal lens excels at polarization and edge imaging appeared first on Physics World.

]]>
A bifocal lens that can adjust the relative intensity of its two focal points using an applied electric field has been developed by Fan Fan and colleagues at China’s Hunan University. The lens features a bilayer structure made of liquid crystal materials. Each layer responds differently to the applied electric field, splitting incoming light into oppositely polarized beams.

Bifocal lenses work by combining two distinct lens segments into one, each with a different focal length – the distance from the lens to its focal point. This gives a single lens two distinct focal points.

While bifocals are best known for their use in vision correction, recent advances in optical materials are expanding their application in new directions. In their research, Fan’s team recognized how recent progress in holography held the potential for further innovations in the field.

Inspired by holography

“Researchers have devised many methods to improve the information capacity of holographic devices based on multi-layer structures,” Fan describes. “We thought this type of structure could be useful beyond the field of holographic displays.”

To this end, the Hunan team investigated how layers within these structures could manipulate the polarization states of light beams in different ways. To achieve this, they fabricated their bifocal lens from liquid crystal materials.

Liquid crystals comprise molecules that can flow, as in a liquid, but that maintain specific orientations, like molecules in a crystal. These properties make liquid crystals ideal for modulating light.

Bilayer benefits

“Most liquid-crystal-based devices are made from single-layer structures, but this limits light-field modulation to a confined area,” Fan explains. “To realize more complex and functional modulation of incident light, we used bilayer structures composed of a liquid crystal cell and a liquid crystal polymer.”

In the cell, the liquid crystal layer is sandwiched between two transparent substrates, creating a thin, planar film. When a voltage is applied across the cell, the molecules align along the electric field. In contrast, the molecules in the liquid-crystal polymer are much larger, and their alignment is not affected by the applied voltage.

Fan’s team took advantage of these differences, finding that each layer modulates circularly polarized light in different ways. As a result, the lens could split the light into left-handed and right-handed circularly polarized components. Crucially, each of these components is focused at a different point. By adjusting the voltage across the lens, the researchers could easily control the difference in intensity at the two focal points.
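
One simplified way to picture the voltage control – assuming the voltage-controlled layer behaves like a standard tunable liquid-crystal retarder, which is an interpretation rather than a detail given in the paper – is through its retardance Γ. The fraction of the light whose circular polarization is flipped, and which is therefore directed towards one focus rather than the other, goes as

\[
\eta_{\mathrm{flip}} = \sin^{2}\!\left(\frac{\Gamma}{2}\right),
\qquad
\Gamma = \frac{2\pi\,\Delta n(V)\,d}{\lambda},
\]

so changing the applied voltage changes the effective birefringence Δn(V), and with it the balance of intensity between the two focal points.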

In the past, achieving this kind of control would have only been possible by the mechanical rotation of the lens layers with respect to each other. The new design is much simpler and makes it easier and more efficient to adjust the intensities at the two focal points.

Large separation distance

To demonstrate this advantage, Fan’s team used their bifocal lens in two types of imaging experiments. One was polarization imaging, which analyses differences in how left-handed and right-handed circularly polarized light interact with a sample. This method typically requires a large separation distance between focal points.

They also tested the lens in edge imaging, which enhances the clarity of boundaries in images. This requires a much smaller separation distance between focal points.

By adjusting the geometric configurations within the bilayer structure, Fan’s team achieved tight control over the separation between the focal points. In both polarization and edge imaging experiments the bifocal lens performed well, closely matching the theoretical performance predicted by their simulations. These promising results suggest that the lens could have a wide range of applications in optical systems.

Based on their initial success, Fan and colleagues are now working to reduce the manufacturing costs of their multi-layer bifocal lenses. If successful, this would allow the lens to be used in a wide range of research applications.

“We believe that the light control mechanism we created using the multilayer structure could also be used to design other optical devices, including holographic devices and beam generators, or for optical image processing,” Fan says.

The lens is described in Optics Letters.

The post Liquid-crystal bifocal lens excels at polarization and edge imaging appeared first on Physics World.

]]>
Research update Applied voltage adjusts intensity at twin focal points https://physicsworld.com/wp-content/uploads/2024/10/18-10-2024-LCD-bifocal-lens.jpg
Century-old photoelectric effect inspires a new search for quantum gravity https://physicsworld.com/a/century-old-photoelectric-effect-inspires-a-new-search-for-quantum-gravity/ Fri, 18 Oct 2024 14:46:05 +0000 https://physicsworld.com/?p=117554 Proposed experiment could demonstrate absorption and emission of individual gravitons

The post Century-old photoelectric effect inspires a new search for quantum gravity appeared first on Physics World.

]]>
According to quantum mechanics, our universe is like a Lego set. All matter particles, as well as particles such as light that act as messengers between them, come in discrete blocks of energy. By rearranging these blocks, it is possible to build everything we observe around us.

Well, almost everything. Gravity, a crucial piece of the universe, is missing from the quantum Lego set. But while there is still no quantum theory of gravity, the challenge of detecting its signatures now looks a little more manageable thanks to a proposed experiment that takes inspiration from the photoelectric effect, which Albert Einstein used to prove the quantum nature of light more than a century ago.

History revisited

Quantum mechanics and general relativity each, independently, provide accurate descriptions of our universe – but only at short and long distances, respectively. Bridging the two is one of the deepest problems facing physics, with tentative theories approaching it from different perspectives.

However, all efforts of describing a quantum theory of gravity agree on one thing: if gravity is quantum, then it, too, must have a particle that carries its force in discrete packages, just as other forces do.

In the latest study, which is described in Nature Communications, Germain Tobar and Sreenath K Manikandan of Sweden’s Stockholm University, working with Thomas Beitel and Igor Pikovski of the Stevens Institute of Technology, US, propose a new experiment that could show that gravity does indeed come in these discrete packages, which are known as gravitons.

The principle behind their experiment parallels that of the photoelectric effect, in which light shining on a material causes it to emit discrete packets of energy, one particle at a time, rather than in a continuous spectrum. Similarly, the Stockholm–Stevens team proposes using massive resonant bars that have been cooled and tuned to vibrate if they absorb a graviton from an incoming gravitational wave. When this happens, the bar’s quantum state would undergo a transition that can be detected by a quantum sensor.

“We’re playing the same game as photoelectric effect, except instead of photons – quanta of light – energy is exchanged between a graviton and the resonant bar in discrete steps,” Pikovski explains.
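
If a graviton carries energy according to the same Planck relation as a photon – which is precisely the kind of quantization the proposal sets out to probe – then a single graviton from a LIGO-band wave deposits only a minuscule amount of energy. Taking an illustrative frequency of 1 kHz:

\[
E = hf \approx 6.6\times10^{-34}\ \mathrm{J\,s}\times10^{3}\ \mathrm{Hz} \approx 7\times10^{-31}\ \mathrm{J} \approx 4\times10^{-12}\ \mathrm{eV},
\]

which is why the resonant bars would need to be cooled close to their quantum ground state for such a single-quantum transition to stand out.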

“Still hard, but not as hard as we thought”

While the idea of using resonant bars to detect gravitational waves dates back to the 1960s, the possibility of using it to detect quantum transitions is new. “We realized if you change perspectives and instead of measuring change in position, you measure change in energy in the quantum state, you can learn more,” Pikovski says.

A key driver of this perspective shift is the Laser Interferometer Gravitational-wave Observatory, or LIGO, which detects gravitational waves by measuring tiny deviations in the length of the interferometer’s arms as the waves pass through them. Thanks to LIGO, Pikovski says, “We not only know when gravitational waves are detected but also [their] properties such as frequency.”

[Image: aerial view of LIGO’s Hanford detector site, its two long interferometer arms stretching into a desert-like landscape]

In their study, Pikovski and colleagues used LIGO’s repository of gravitational-wave data to narrow down the frequency and energy range of typical gravitational waves. This allowed them to calculate the type of resonant bar required to detect gravitons. LIGO could also help them cross-correlate any signals they detect.

“When these three ingredients – resonant bar as a macroscopic quantum detector, detecting quantum transitions using quantum sensors and cross-correlating detection with LIGO – are taken altogether, it turns out detecting a graviton is still hard but not as hard as we thought,” Pikovski says.

Within reach, theoretically

For most known gravitational-wave events, the Stockholm-Stevens scientists say that the number of gravitons their proposed device could detect is small. However, for collisions between two neutron stars, a quantum transition in reasonably sized resonant bars could be detected for one in every three events, they say.

Carlo Rovelli, a theorist at the University of Aix-Marseille, France, who was not involved in the study, agrees that “the goal of quantum gravity observations seem within reach”. He adds that the work “shows that the arguments claiming that it should be impossible to find evidence for single-graviton exchange were wrong”.

Frank Wilczek, a theorist at the Massachusetts Institute of Technology (MIT), US who was also not involved in the study, is similarly positive. For a consistent theory that respects quantum mechanics and general relativity, he says, “it can be interpreted that this experiment would prove the existence of gravitons and that the gravitational field is quantized”.

So when are we going to start detecting?

On paper, the experiment shows promise. But actually building a massive graviton detector with measurable quantum transitions will be anything but easy.

Part of the reason for this is that a typical gravitational-wave shower can consist of an enormous number of gravitons. Just as the pattern of individual raindrops can be heard as they fall on a tin roof, carefully prepared resonant bars should, in principle, be able to detect individual incoming gravitons within these gravitational-wave showers.

But for this to happen, the bars must be protected from noise and cooled down to their least energetic state. Otherwise, such tiny energy changes may be impossible to observe.
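To get a feel for why the bars must be so quiet, consider the energy carried by a single graviton. The back-of-the-envelope estimate below is purely illustrative (it assumes, as the proposal does, that a graviton at a given frequency carries the same Planck energy E = hf as a photon would):

# Illustrative only: order-of-magnitude energy of one graviton at LIGO-band frequencies
h = 6.626e-34        # Planck constant, J s
f = 100.0            # typical LIGO-band gravitational-wave frequency, Hz
E_graviton = h * f   # energy of a single quantum at this frequency
print(f"Single-graviton energy at {f:.0f} Hz: {E_graviton:.1e} J")
# ~6.6e-32 J, an energy change this small is what the cooled resonant bar's
# quantum state would have to register, hence the need for extreme isolation.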

Vivishek Sudhir, an expert in quantum measurements at MIT who was not part of the research team, describes it as “an enormous practical challenge still, one that we do not currently have the technology for”.

Similarly, quantum sensing has been achieved in resonators, but only at much smaller masses than the tens of kilograms or more required to detect gravitons. The team is, however, working on a potential solution: Tobar, a PhD student at Stockholm and the study’s lead author, is devising a version of the experiment that would send the signal from the bars to smaller masses using transducers – in effect, meeting the quantum sensing challenge in the middle. “It’s not something you can do today, but I would guess we can achieve it within a decade or two,” Pikovski says.

Sudhir agrees that quantum measurements and experiments are rapidly progressing. “Keep in mind that only 15 years ago, nobody imagined that tangibly macroscopic systems would even be prepared in quantum states,” he says. “Now, we can do that.”

The post Century-old photoelectric effect inspires a new search for quantum gravity appeared first on Physics World.

]]>
Research update Proposed experiment could demonstrate absorption and emission of individual gravitons https://physicsworld.com/wp-content/uploads/2024/10/Pikovski_SingleGravitonDetector_web.jpg newsletter
Passing the torch: The “QuanTour” light source marks the International Year of Quantum https://physicsworld.com/a/passing-the-torch-the-quantour-light-source-marks-the-international-year-of-quantum/ Thu, 17 Oct 2024 15:25:15 +0000 https://physicsworld.com/?p=117483 Katherine Skipper visits the Cavendish laboratory in Cambridge to catch a quantum light source that’s touring Europe for the International Year of Quantum

The post Passing the torch: The “QuanTour” light source marks the International Year of Quantum appeared first on Physics World.

]]>
Earlier this year, the start of the Paris Olympics was marked by the ceremonial relay of the Olympic torch. You’ll have to wait until 2028 for the next Olympics, but in the meantime there’s the International Year of Quantum (IYQ) in 2025, which also features a torch relay. In keeping with the quantum theme, however, this light source is very, very small.

The light source is currently on tour around 12 different quantum labs around Europe as part of IYQ and last week I visited the Cavendish Laboratory at the University of Cambridge, UK, where it was on stop eight of what’s dubbed QuanTour. It’s a project of the German Physical Society (DPG), organised by Doris Reiter from the Technical University of Dortmund and Tobias Heindel from the Technical University of Berlin.

According to Mete Atatüre, who leads the Quantum Optical Materials and Systems (QOMS) group at Cambridge and in whose lab QuanTour is based, one of the project’s aims is to demystify quantum science. “I think what we need to do, especially in the year of quantum, is to have a change of style,” he says. “So that we focus not on the weirdness of quantum but on what it can actually bring us.”

Indeed, though it requires complex optical apparatus and must be cooled with helium, the QuanTour light source itself looks like an ordinary computer chip. It is in fact an array of quantum dots, each emitting single photons when illuminated by a laser. “It’s really meant to show off that you can use quantum dots as a plug-in light source,” explains Christian Schimpf, a postdoc in the Quantum Engineering Group in Cambridge, who showed me around the lab where QuanTour is spending its time in England.

The light source is right at home in the Cambridge lab, where quantum dots are a key area of research. The team is working on networking applications, where the goal is to transmit quantum information over long distances, preferably using existing fibre-optic networks. In fibre optics, the signal is amplified regularly along the route, but quantum networks can’t do this – the so-called “no-cloning” theorem means it’s impossible to create a copy of an unknown quantum state.

The solution is to create a long-distance communication link from many short-distance entanglements. The challenge for scientists in the Cambridge lab, Schimpf explains, is to build ensembles of entangled qubits that can “store quantum bits on reasonable time scales.” He’s talking about just a few milliseconds, but this is still a significant challenge, requiring cooling close to absolute zero and precise control over the fabrication process.

Elsewhere in the Cavendish Laboratory, scientists in the quantum group are investigating platforms for quantum sensing, where changes to single quantum states are used to measure tiny magnetic fields. Attractive materials for this include diamond and some 2D materials, where quantum spin states trapped at crystal defects can act as qubits. Earlier this year Physics World spoke to Hannah Stern, a former postdoc in Atatüre’s group, who won an award from the Institute of Physics for her research on quantum sensing with hexagonal boron nitride, which she began in Cambridge.

I also spoke to Dorian Gangloff, head of the quantum engineering group, who described his recent work on nonlinear quantum optics. Nonlinear optical effects are generally only observed with high-power light sources such as lasers, but Gangloff’s team is trying to engineer these effects in single photons. Nonlinear quantum optics could be used to shift the frequency of a single photon or even split it into an entangled pair.

When asked about the existing challenges of rolling out quantum technologies, Atatüre points out that when quantum mechanics was first conceived, the belief was: “Of course we’ll never be able to see this effect, but if we did, what would the experimental result look like?” Thanks to decades of work, however, it is indeed possible to see quantum science in action, as I did in Cambridge. Atatüre is confident that researchers will be able to take the next step – building useful technologies with quantum phenomena.

At the end of this week, QuanTour’s time in Cambridge will be up. If you missed it, you’ll have to head to University College Cork in Ireland, where it will be spending the next leg of its journey with the group of Emanuele Pelucchi.

 

The post Passing the torch: The “QuanTour” light source marks the International Year of Quantum appeared first on Physics World.

]]>
Blog Katherine Skipper visits the Cavendish laboratory in Cambridge to catch a quantum light source that’s touring Europe for the International Year of Quantum https://physicsworld.com/wp-content/uploads/2024/10/20241010_105442-scaled.jpg
Data-intensive PhDs at LIV.INNO prepare students for careers outside of academia https://physicsworld.com/a/data-intensive-phds-at-liv-inno-prepare-students-for-careers-outside-of-academia/ Thu, 17 Oct 2024 10:38:15 +0000 https://physicsworld.com/?p=117426 This podcast is sponsored by LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science

The post Data-intensive PhDs at LIV.INNO prepare students for careers outside of academia appeared first on Physics World.

]]>
LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science, offers students fully funded PhD studentships across a broad range of research projects, from medical physics to quantum computing. All students receive training in high-performance computing, data analysis, machine learning and artificial intelligence. Students also receive career advice and training in project management, entrepreneurship and communication skills – preparing them for careers outside of academia.

This podcast features the accelerator physicist Carsten Welsch, who is head of the Accelerator Science Cluster at the University of Liverpool and director of LIV.INNO, and the computational astrophysicist Andreea Font, who is a deputy director of LIV.INNO.

They chat with Physics World’s Katherine Skipper about how LIV.INNO provides its students with a wide range of skills and experiences – including a six-month industrial placement.

This podcast is sponsored by LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science.

[Image: LIV.INNO CDT logo]

The post Data-intensive PhDs at LIV.INNO prepare students for careers outside of academia appeared first on Physics World.

]]>
Podcasts This podcast is sponsored by LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science https://physicsworld.com/wp-content/uploads/2024/10/Andreea-and-Carsten-new.jpg newsletter
Operando NMR methods for redox flow batteries and ammonia synthesis https://physicsworld.com/a/operando-nmr-methods-for-redox-flow-batteries-and-ammonia-synthesis/ Thu, 17 Oct 2024 09:03:41 +0000 https://physicsworld.com/?p=114973 Join the audience for a live webinar on 13 November 2024 sponsored by BioLogic and Bruker, in partnership with The Electrochemical Society

The post Operando NMR methods for redox flow batteries and ammonia synthesis appeared first on Physics World.

]]>

Magnetic resonance methods, including nuclear magnetic resonance (NMR) and electron paramagnetic resonance (EPR), are non-invasive, atom-specific, quantitative, and capable of probing liquid and solid-state samples. These features make magnetic resonance methods ideal tools for operando measurements of electrochemical devices, and for establishing structure-function relationships under realistic conditions.

The first part of the talk presents how coupled inline NMR and EPR methods were developed and applied to unravel the rich electrochemistry of organic molecule-based redox flow batteries (RFBs). Case studies performed on low-cost and compact bench-top systems are reviewed, demonstrating that a bench-top NMR has sufficient spectral and temporal resolution for studying degradation reaction mechanisms, monitoring the state of charge and following crossover phenomena in a working RFB. The second part of the talk presents new in situ NMR methods for studying Li-mediated ammonia synthesis, along with the direct observation of lithium plating and its concurrent corrosion, nitrogen splitting on lithium metal, and protonolysis of lithium nitride. Based on these insights, potential strategies to optimize the efficiencies and rates of Li-mediated ammonia synthesis are discussed. The goal is to demonstrate that operando NMR and EPR methods are powerful and general, and can be applied to understand the electrochemistry underpinning various applications.

An interactive Q&A session follows the presentation.

Evan Wenbo Zhao is a tenured assistant professor at the Magnetic Resonance Research Center at Radboud Universiteit Nijmegen in the Netherlands. His core research focuses on developing operando/in situ NMR methods for studying electrochemical storage and conversion chemistries, including redox flow batteries, electrochemical ammonia synthesis, carbon-dioxide reduction, and lignin oxidation. He has led projects funded by the Dutch Research Council Open Competition Program, Bruker Collaboration, Radboud-Glasgow Collaboration Grants, the Mitacs Globalink Research Award, and others. After receiving his BS from Nanyang Technological University, he completed a PhD in chemistry with Prof. Clifford Russell Bowers at the University of Florida. Evan’s postdoc was with Prof. Dame Clare Grey at the Yusuf Hamied Department of Chemistry at the University of Cambridge.

 

The post Operando NMR methods for redox flow batteries and ammonia synthesis appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 13 November 2024 sponsored by BioLogic and Bruker, in partnership with The Electrochemical Society https://physicsworld.com/wp-content/uploads/2024/06/2024-11-13-ECS-image.jpg
US Department of Energy announces new Fermilab contractor https://physicsworld.com/a/us-department-of-energy-announces-new-fermilab-contractor/ Thu, 17 Oct 2024 08:00:30 +0000 https://physicsworld.com/?p=117431 Yet some see little change in the selection of Fermi Forward Discovery Group

The post US Department of Energy announces new Fermilab contractor appeared first on Physics World.

]]>
A consortium of universities and companies has been awarded the contract to manage and operate Fermilab, the US’s premier particle-physics facility. The US Department of Energy (DOE) announced on 1 October that the new contractor, Fermi Forward Discovery Group, LLC (FFDV), will take over operation of the lab from 1 January 2025.

FFDV consists of Fermilab’s current contractor – the University of Chicago and Universities Research Association (URA), a consortium of research universities – as well as the industrial firms Amentum Environment & Energy, Inc. and Longenecker & Associates. The conglomerate’s initial contract will last for five years but “exemplary performance” running the lab could extend that by a further decade.

“We are honoured that the Department of Energy has selected FermiForward to manage Fermilab after a rigorous contract process,” University of Chicago president Paul Alivisatos told Physics World. “FermiForward represents a new approach that brings together the best parts of Fermilab with two new industry partners, who bring broad expertise from a deep bench from across the DOE complex.”

Alivisatos notes that the inclusion of Amentum and Longenecker will strengthen the management capability of the consortium given the companies’ “exemplary record of accomplishment in project management, operations, and safety.” Longenecker, a female-led company based in Las Vegas, is part of the managerial teams currently running Sandia, Los Alamos, and Savannah River national laboratories. Virginia-based Amentum, meanwhile, has a connection to Fermilab through Greg Stephens, its former vice president, who is now Fermilab’s chief operating officer.

The choice of the new contractor comes after Fermilab has faced a series of operating and budget challenges. In 2021, the institution scored low marks on a DOE assessment of its operations. A year later, complaints emerged that the lab’s leadership was restricting access to its campus despite reduced concern about the spread of COVID-19. In July, a group of Fermilab staff whistleblowers claimed that a series of problems indicated that the lab was “doomed” without a change of management. And in late August, the lab underwent a period of limited operations to reduce a budgetary shortfall.

The Fermilab staff whistleblowers, however, see little change in the DOE’s selection of FFDV. Indeed, the key members of FFDV – the University of Chicago and URA – made up Fermi Research Alliance, the previous contractor that has overseen Fermilab’s operations since 2007.

“We understand that the only reaction by DOE to our investigative report is that of coaching the University of Chicago’s teams that steward the university’s relationships with the national labs,” the group wrote in a letter to Geraldine Richmond, DOE’s Undersecretary for Science and Innovation, which has been seen by Physics World. “By doing so, the DOE is once again showing that it is for the status-quo.”

The DOE hasn’t revealed how many bids it received or other details about the contract award. In a statement to Physics World it noted that it “cannot discuss the contract at the current time because of business sensitive information”. Fermilab declined to comment for the story.

The post US Department of Energy announces new Fermilab contractor appeared first on Physics World.

]]>
News Yet some see little change in the selection of Fermi Forward Discovery Group https://physicsworld.com/wp-content/uploads/2024/10/24-0129-05.jpg newsletter1
Mountaintop observations of gamma-ray glow could shed light on origins of lightning https://physicsworld.com/a/mountaintop-observations-of-gamma-ray-glow-could-shed-light-on-origins-of-lightning/ Wed, 16 Oct 2024 17:10:28 +0000 https://physicsworld.com/?p=117476 Electric fields near Earth’s surface are stronger than expected

The post Mountaintop observations of gamma-ray glow could shed light on origins of lightning appeared first on Physics World.

]]>
Research done at a mountaintop cosmic-ray observatory in Armenia has shed new light on how thunderstorms can create flashes of gamma rays by accelerating electrons. Further study of the phenomenon could answer important questions about the origins of lightning.

This acceleration process is called thunderstorm ground enhancement (TGE), whereby thunderstorms create strong electric fields that accelerate atmospheric free electrons to high energies. These electrons then collide with air molecules, creating a cascade of secondary charged particles. When charged particles are deflected in these collisions, they emit gamma rays in a process called bremsstrahlung.

The flashes of gamma rays are called “gamma-ray glows” and are some of the strongest natural sources of high-energy radiation on Earth.

Physicist Joseph Dwyer at the University of New Hampshire, who was not involved in the Armenian study, says, “When you think of gamma rays, you usually think of black holes or solar flares. You don’t think of inside the Earth’s troposphere as being a source of gamma rays, and we’re still trying to understand this.”

Century-old mystery

Indeed, the effect was first predicted a century ago by Nobel laureate Charles Wilson, who is best known for his invention of the cloud chamber radiation detector. However, despite numerous attempts over the decades, early researchers were unable to detect this acceleration.

This latest research was led by Ashot Chilingarian, who is director of the Cosmic Ray Division of Armenia’s Yerevan Physics Institute. The measurements were made at a research station located 3200 m above sea level on Armenia’s Mount Aragats.

Chilingarian says, “There were some people that were convinced that there was no such effect. But now, on Aragats, we can measure electrons and gamma rays directly from thunderclouds.”

In the summer of 2023, Chilingarian and colleagues detected gamma rays, electrons, neutrons and other particles from intense TGE events. By analysing 56 of those events, the team has now concluded that the electric fields involved were close to Earth’s surface.

Though Aragats is not the first facility to confirm the existence of these gamma-ray glows, it is uniquely well-situated, sitting at a high altitude in an active storm region. This allows measurements to be made very close to thunderclouds.

Energy spectra

Instead of measuring the electric field directly, the team inferred its strength by analysing the energy spectra of electrons and gamma rays detected during TGE events.

By comparing the detected radiation to well-understood simulations of electron acceleration, the team deduced the strength of the electric field responsible for the particle showers as 2.1 kV/cm.

This field strength is substantially higher than what has been observed in most previous studies of thunderstorms, which typically use weather balloons to take direct field measurements.

The fact that such a high field can exist near the ground during a thunderstorm challenges previous assumptions about the limits of electric fields in the atmosphere.
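To put the 2.1 kV/cm figure in perspective, here is a rough, purely illustrative estimate (not a calculation from the Aragats study) of how much energy such a field could give an electron accelerated over a few hundred metres of its path:

# Illustrative estimate only: the path length is an assumed value, and real
# TGE electrons also lose energy in collisions with air molecules along the way.
E_field = 2.1e5          # inferred electric field, V/m (2.1 kV/cm)
path_length = 300.0      # assumed acceleration path, m
energy_eV = E_field * path_length   # energy gained per electron charge, in eV
print(f"Energy gained over {path_length:.0f} m: {energy_eV / 1e6:.0f} MeV")
# ~63 MeV, far more than enough for the electron to emit gamma rays via
# bremsstrahlung when it is deflected by air molecules.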

Moreover, this discovery could help solve one of the biggest mysteries in atmospheric science: how lightning is initiated. Despite decades of research, scientists have been unable to measure electric fields strong enough to break down the air and create the initial spark of lightning.

“These are nice measurements and they’re one piece of the puzzle,” says Dwyer. “What these are telling us is that these gamma ray glows are so powerful and they’re producing so much ionizing radiation that they’re partially discharging the thunderstorm.”

“As the thunderstorms try to charge up, these gamma rays turn on and cause the field to kind of collapse,” Dwyer explains, comparing it to stepping on a bump in a carpet. “You collapse it in one place but it pops up in another, so this enhancement may be enough to help the lightning get started.”

The research is described in Physical Review D.

The post Mountaintop observations of gamma-ray glow could shed light on origins of lightning appeared first on Physics World.

]]>
Research update Electric fields near Earth’s surface are stronger than expected https://physicsworld.com/wp-content/uploads/2024/10/16-10-2024-Aragats_Cosmic_Ray_Research_Station.jpg newsletter
Spiders use physics, not chemistry, to cut silk in their webs https://physicsworld.com/a/spiders-use-physics-not-chemistry-to-cut-silk-in-their-webs/ Wed, 16 Oct 2024 14:02:56 +0000 https://physicsworld.com/?p=117444 New work resolves a longstanding debate and could aid the development of new cutting tools

The post Spiders use physics, not chemistry, to cut silk in their webs appeared first on Physics World.

]]>
Spider silk is among the toughest of all biological materials, and scientists have long been puzzled by how spiders manage to cut it. Do they break it down by chemical means, using enzymes? Or do they do it mechanically, using their fangs? Researchers at the University of Trento in Italy have now come down firmly on the side of fangs, resolving a longstanding debate and perhaps also advancing the development of spider-fang-inspired cutting tools.

For spiders – especially those that spin webs – the ability to cut silk lines quickly and efficiently is a crucial skill. Previously, the main theory of how they do it involved enzymes that they produce in their mouths, and that can break silk down. This mechanism, however, cannot explain how spiders cut silk so quickly. Mechanical cutting is faster, but spiders’ fangs are not shaped like scissors or other common cutting tools, so this was considered less likely.

In the new work, researchers led by Nicola Pugno and Gabriele Greco studied two species of spiders (Nuctenea umbratica and Steatoda triangulosa) collected from around the campus in Trento. In one set of experiments, they allowed the spiders to interact with artificial webs made from Kevlar, a tough synthetic aramid fibre. To weave their own webs, the spiders needed to remove the Kevlar threads and replace them with silk ones. They did this by first cutting the key structural threads in the artificial webs, then spinning a silken framework in between to build up the web structure. Any discarded fibres became support for the web.

Pugno, Greco and colleagues also allowed the spiders to build webs naturally (that is, without any artificial materials present). They then removed some of the silken threads and substituted them with carbon fibre ones so they could study how the spiders cut them.

Revealing images

One of the researchers’ first observations was that the spiders found it harder to cut fibres made from Kevlar than those made from silk. While cutting silk took them just a fraction of a second, they needed more than 10 s to cut Kevlar. This implies that much more effort was required.

A further clue came from scanning electron microscope (SEM) images of the spider-cut silk and carbon fibres. These images showed that the fracture surfaces of both were similar to those of samples that were broken with scissors or during tensile tests.

Meanwhile, images of the spider fangs themselves revealed micro-structured serrations similar to those found in animals such as crocodiles and sharks. The advantage of serrated edges is that they minimize the force required to cut a material at the point of contact – something humans have long exploited by making serrated blades that quickly cut through tough materials like wood and steel (not to mention foods like bread and steak).

In spider fangs, however, the serrations are not evenly spaced. Instead, Pugno and Greco found that the gap between them is narrowest at the tip of a fang and widest nearest the base. This, they say, suggests that when spiders want to cut a fibre, their fangs slide inwards across it until it becomes trapped in a serration of the same size. At the contact point between fibre and serration, the required cutting force is at a minimum, thereby maximizing the efficiency of cutting.

“We conducted specific experiments to prove that the fang of a spider is a ‘smart’ tool with graded serrations for cutting fibres of different dimensions naturally placed in the best place for maximizing cutting efficiency,” Pugno explains. “This makes it more efficient than a razor blade to cut these fibres,” Greco adds.

The researchers, who report their work in Advanced Science, also conducted analytical and finite-element numerical analyses to back up their observations. These revealed that when a fibre presses onto a fang, the stress on the fibre becomes concentrated thanks to the two bulges at the top of the serration. This concentration initiates the propagation of cracks through the fibre, leading to its failure, they say.

The researchers note that serration had previously been observed in 48 families of modern spiders (araneomorphs) as well as at least three families of older species (mygalomorphs). They speculate that it may have been important for functions other than cutting silk, such as chewing and mashing prey, with araneomorphs possibly later evolving it to cut silk. But their findings are also relevant in fields other than evolutionary biology, they say.

“By explaining how spiders cut, we reveal a basic engineering principle that could inspire the design of highly efficient, sharper and more performing cutting tools that could be of interest for high-tech applications,” Pugno tells Physics World. “For example, for cutting wood, metal, stone, food or hair.”

The post Spiders use physics, not chemistry, to cut silk in their webs appeared first on Physics World.

]]>
Research update New work resolves a longstanding debate and could aid the development of new cutting tools https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_1.jpg newsletter
Around the world in 16 orbits: a day in the life of the International Space Station https://physicsworld.com/a/around-the-world-in-16-orbits-a-day-in-the-life-of-the-international-space-station/ Wed, 16 Oct 2024 10:00:27 +0000 https://physicsworld.com/?p=117110 Kate Gardner reviews the novel Orbital by Samantha Harvey

The post Around the world in 16 orbits: a day in the life of the International Space Station appeared first on Physics World.

]]>
Every day the International Space Station (ISS) orbits the Earth 16 times. Every day its occupants could (if they aren’t otherwise occupied) observe each one of our planet’s terrains and seasons. For almost a quarter of a century the ISS has been continuously inhabited by humans, a few at a time, hailing from – at the latest count – 21 countries. This impressive feat of science, engineering and international co-operation may no longer be noteworthy or news fodder, yet it still has the power to astonish and inspire.

This makes it an excellent setting for a novel that’s quietly philosophical, tackling some of the biggest questions humanity has ever asked. Orbital by British author Samantha Harvey follows four astronauts and two cosmonauts through one day on the ISS. It is an ordinary, unremarkable day and yet their location makes every moment remarkable.

We meet our characters – four men and two women, from five countries – as they are waking up during orbit 1 and leave them fast asleep in orbit 16. Harvey has clearly read astronaut accounts and studied information available from NASA and the European Space Agency. She includes as much detail about life on the ISS as a typical popular-science book on the subject.

These minutiae of astronaut tasks are interspersed with descriptions of Earth during each of the 16 orbits, as well as long passages deliberating everything from whether there is a God and climate catastrophe to global politics and the futility of trying to understand another human being.

The characters going about their tightly scheduled day in Orbital are individual people, each with their own preoccupations, past and present. While they exercise and perform maintenance tasks, science experiments and self-assessments, their thoughts roam to give us an insight that feels as true as any astronaut memoir. One character muses on the difficulty of sending messages to her loved ones, feeling that everything she has to say is either hopelessly mundane or so grandiose as to be ridiculous. I don’t know if an astronaut on the ISS has ever thought that, but for me, it perfectly encapsulates their situation.

The ISS’s orbit 400 km above Earth is close enough to see the topography and colours that pass beneath, but far enough that signs of humanity can only be inferred – at least in daylight. This doesn’t stop the characters from learning to see the traces of humans: algal blooms in oceans warmer than they once were; retreated glaciers; mountains bare of snow that were once renowned for their white caps; absent rainforest; reclaimed land covered by acres of greenhouses.

It’s a curious choice to set a book on the ISS that isn’t science fiction. It’s fiction, yes, and certainly based in the world of science, but the science it depicts isn’t futuristic or even particularly cutting-edge. The ISS is now quite old technology, nearing the end of its remarkable life, as Harvey points out in an insightful essay for LitHub. Its occupants still do experiments to further our scientific knowledge, but even there what Harvey describes is sci-fact, not sci-fi.

In her LitHub essay, Harvey says it was precisely this “slow death” of the ISS that appealed to her. The ISS is almost a time capsule, hearkening back to the end of the Cold War. It now looks likely that Russia will pull out of – or be ejected from – the mission before its projected end date of 2030.

Viewed from the ISS, no borders are visible, and the crew joke comfortably about their national differences. However, their lives are nevertheless dictated by strict and sometimes petty rules governing, for example, which toilet and which exercise equipment to use. These regulations are just one more banal reality of life on the ISS, like muscle atrophy, blocked sinuses or packing up waste to go in the next resupply craft.

Just consider the real-life NASA astronauts Suni Williams and Butch Wilmore, whose stay on the ISS has been extended following problems with the Boeing craft that was supposed to bring them home in August. Having two extra people living on the space station for several months longer than planned is an intensely practical matter, made easier by such measures as recycling their urine and sweat into drinking water, or having astronauts swallow toothpaste rather than spit it out.

Harvey manages to convey that these details are quotidian. But she also imbues them with beauty. During one conversation in Orbital, a character sheds four tears. He and a crew mate then chase down each floating water droplet because loose liquids must be avoided. It’s a small moment that says so much with few words.

Orbital has been shortlisted for both the 2024 Booker Prize (winner to be announced on 12 November) and the 2024 Ursula K Le Guin Prize for Fiction (the winner of which will be announced on 21 October). The recognition reflects the book’s combination of literary prose and unusual globe-spanning (indeed, beyond global) perspective. Harvey’s writing has been compared to Virginia Woolf – a comparison that is well warranted. And yet Orbital is as accessible and educational as the best of popular science. It’s a feat almost as astonishing as the existence of the ISS.

  • 2024 Vintage 144pp £9.99pb

The post Around the world in 16 orbits: a day in the life of the International Space Station appeared first on Physics World.

]]>
Opinion and reviews Kate Gardner reviews the novel Orbital by Samantha Harvey https://physicsworld.com/wp-content/uploads/2024/10/2024-10-Gardner-Virts.jpg newsletter
Semiconductor pioneer Richard Friend bags 2024 Isaac Newton Medal and Prize https://physicsworld.com/a/semiconductor-pioneer-richard-friend-bags-2024-isaac-newton-medal-and-prize/ Tue, 15 Oct 2024 14:44:55 +0000 https://physicsworld.com/?p=117420 Friend won for his work on the fundamental electronic properties of molecular semiconductors and in their engineering development

The post Semiconductor pioneer Richard Friend bags 2024 Isaac Newton Medal and Prize appeared first on Physics World.

]]>
The semiconductor physicist Richard Friend from the University of Cambridge has won the 2024 Isaac Newton Medal and Prize “for pioneering and enduring work on the fundamental electronic properties of molecular semiconductors and in their engineering development”. Presented by the Institute of Physics (IOP), which publishes Physics World, the international award is given annually for “world-leading contributions to physics”.

Friend was born in 1953 in London, UK. He completed a PhD at the University of Cambridge in 1979 under the supervision of Abe Yoffe and remained at Cambridge, becoming a full professor in 1995. Friend’s research has led to a deeper understanding of the electronic properties of molecular semiconductors: in the 1980s he pioneered the fabrication of thin-film molecular semiconductor devices that were later developed to support field-effect transistor circuits.

When it was discovered that semiconducting polymers could be used for light-emitting diodes (LEDs), Friend founded Cambridge Display Technology in 1992 to develop polymer LED displays. In 2000 he also co-founded Plastic Logic to advance polymer transistor circuits for e-paper displays.

As well as the 2024 Newton Medal and Prize, Friend’s other honours include the IOP’s Katherine Burr Blodgett Medal and Prize in 2009 and in 2010 he shared the Millennium Technology Prize for the development of plastic electronics. He was also knighted for services to physics in the 2003 Queen’s Birthday Honours list.

“I am immensely proud of this award and the recognition of our work,” notes Friend. “Our Cambridge group helped set the framework for the field of molecular semiconductors, showing new ways to improve how these materials can separate charges and emit light.”

Friend notes that he is “not done just yet” and is currently working on molecular semiconductors to improve the efficiency of LEDs.

Innovating and inspiring

Friend’s honour formed part of the IOP’s wider 2024 awards, which recognize everyone from early-career scientists and teachers to technicians and subject specialists.

Other winners include Laura Herz from the University of Oxford, who receives the Faraday Prize “for pioneering advances in the photophysics of next-generation semiconductors, accomplished through innovative spectroscopic experiments”. Rebecca Dewey from the University of Nottingham, meanwhile, receives the Phillips Award “for contributions to equality, diversity and inclusion in Institute of Physics activities, including promoting, updating and improving the accessibility of the I am a Physicist Girlguiding Badge, and engaging with British Sign Language users”.

In a statement, IOP president Keith Burnett congratulated all the winners, adding that they represent “some of the most innovative and inspiring” work that is happening in physics.

“Today’s world faces many challenges which physics will play an absolutely fundamental part in addressing, whether it’s securing the future of our economy or the transition to sustainable energy production and net zero,” adds Burnett. “Our award winners are in the vanguard of that work and each one has made a significant and positive impact in their profession. Whether as a researcher, teacher, industrialist, technician or apprentice, I hope they are incredibly proud of their achievements.”

The post Semiconductor pioneer Richard Friend bags 2024 Isaac Newton Medal and Prize appeared first on Physics World.

]]>
News Friend won for his work on the fundamental electronic properties of molecular semiconductors and in their engineering development https://physicsworld.com/wp-content/uploads/2024/10/richard_friend_iop_web.jpg newsletter1
‘Mock asteroids’ deflected by X-rays in study that could help us protect Earth https://physicsworld.com/a/mock-asteroids-deflected-by-x-rays-in-study-that-could-help-us-protect-earth/ Tue, 15 Oct 2024 14:14:26 +0000 https://physicsworld.com/?p=117412 Lab-based experiment shows how centimetre-sized objects are accelerated

The post ‘Mock asteroids’ deflected by X-rays in study that could help us protect Earth appeared first on Physics World.

]]>
For the first time, physicists in the US have done lab-based experiments that show how an asteroid could be deflected by powerful bursts of X-rays. With the help of the world’s largest high-frequency electromagnetic wave generator, Nathan Moore and colleagues at Sandia National Laboratories showed how an asteroid-mimicking target could be briefly left in free flight while being accelerated by ultra-short X-ray bursts.

While most asteroid impacts occur far from populated areas, they still hold the potential to cause devastation. In 2013, for example, over 1600 people were injured when a meteor exploded above the Russian city of Chelyabinsk. To better defend ourselves against these threats, planetary scientists have investigated how the paths of asteroids could be deflected before they reach Earth.

In 2022, NASA successfully demonstrated a small deflection with the DART mission, which sent a spacecraft to collide with the rocky asteroid Dimorphos at a speed of 24,000 km/h. After the impact, the period of Dimorphos’ orbit around the larger asteroid, Didymos, shortened by some 33 min.

However, this approach would not be sufficient to deflect larger objects such as the famous Chicxulub asteroid. This was roughly 10 km in diameter and triggered a mass extinction event when it impacted Earth about 66 million years ago.

Powerful X-ray burst

Fortunately, as Moore explains, there is an alternative approach to a DART-like impact. “It’s been known for decades that the only way to prevent the largest asteroids from hitting the earth is to use a powerful X-ray burst from a nuclear device,” he says. “But there has never been a safe way to test that idea. Nor would testing in space be practical.”

So far, X-ray deflection techniques have only been explored in computer simulations. But now, Moore’s team has tested a much smaller scale version of a deflection in the lab.

To generate energetic bursts of X-rays, the team used a powerful facility at Sandia National Laboratories called the Z Pulsed Power Facility – or Z Machine. Currently the largest pulsed power facility in the world, the Z Machine is essentially a giant battery that releases vast amounts of stored electrical energy in powerful, ultra-short pulses, funnelled down to a centimetre-sized target.

Few millionths of a second

In this case, the researchers used the Z Machine to compress a cylinder of argon gas into a hot, dense plasma. Afterwards, the plasma radiated X-rays in nanosecond pulses, which were fired at mock asteroid targets made from discs of fused silica. Using an optical setup behind the target, the team could measure the deflection of the targets.

“These ‘practice missions’ are miniaturized – our mock asteroids are only roughly a centimetre in size – and the flight is short-lived – just a few millionths of a second,” Moore explains. “But that’s just enough to let us test the deflection models accurately.”

Because the experiment was done here on Earth, rather than in space, the team also had to ensure that the targets were in freefall when struck by the X-rays. This was done by detaching the mock asteroid from a holder about a nanosecond before it was struck.

X-ray scissors

They achieved this by suspending the sample from a support made from thin metal foil, itself attached to a cylindrical fixture. To detach the sample, they used a technique Moore calls “X-ray scissors”, which almost instantly cut the sample away from the cylindrical fixture.

When illuminated by the X-ray burst, the supporting foil rapidly heated up and vaporized, well before the motion of the deflecting target could be affected by the fixture. For a brief moment, this left the target in freefall.

In the team’s initial experiments, the X-ray scissors worked just as they intended. Simultaneously, the X-ray pulse vaporized the target surface and deflected what remained at velocities close to 70 m/s.
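The deflection itself follows from momentum conservation: the vaporized surface material flies off in one direction and the rest of the target recoils in the other. The sketch below uses entirely assumed numbers (not values reported by the Sandia team) simply to show how a small ablated mass fraction can produce recoil speeds of this order:

# Illustrative momentum-conservation sketch of ablation-driven deflection.
# The ablated mass fraction and ejecta speed are assumptions for illustration.
m_total = 1.0          # normalized target mass
f_ablated = 0.01       # assume 1% of the mass is vaporized by the X-ray burst
v_ejecta = 7000.0      # assumed mean speed of the ablated vapour, m/s
m_ejected = f_ablated * m_total
m_remaining = m_total - m_ejected
v_recoil = m_ejected * v_ejecta / m_remaining   # momentum balance
print(f"Recoil speed of the remaining target: {v_recoil:.0f} m/s")
# ~71 m/s, comparable to the deflection speeds measured in the experiment.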

The team hopes that its success will be a first step towards measuring how real asteroid materials are vaporized and deflected by more powerful X-ray bursts. This could lead to the development of a vital new line of defence against devastating asteroid impacts.

“Developing a scientific understanding of how different asteroid materials will respond is critically important for designing an intercept mission and being confident that mission would work,” Moore says. “You don’t want to take chances on the next big impact.”

The research is described in Nature Physics.

The post ‘Mock asteroids’ deflected by X-rays in study that could help us protect Earth appeared first on Physics World.

]]>
Research update Lab-based experiment shows how centimetre-sized objects are accelerated https://physicsworld.com/wp-content/uploads/2024/10/15-10-2024-Z-Machine.jpg newsletter
Patient-specific quality assurance (PSQA) based on independent 3D dose calculation https://physicsworld.com/a/patient-specific-quality-assurance-psqa-based-on-independent-3d-dose-calculation/ Tue, 15 Oct 2024 08:28:12 +0000 https://physicsworld.com/?p=117273 Join the audience for a live webinar on 16 December 2024 sponsored by LAP GmbH Laser Applikationen

The post Patient-specific quality assurance (PSQA) based on independent 3D dose calculation appeared first on Physics World.

]]>

In this webinar, we will discuss how patient-specific quality assurance (PSQA) is an essential component of the radiation treatment process. This control allows us to ensure that the planned dose will be delivered to the patient. The growing number of patients with indications for modulated treatments, which require PSQA, has significantly increased the workload of medical physics departments, creating a need for more efficient ways to perform it.

Measurement systems have evolved considerably in recent years, but the experimental process they involve places a limit on the possible time savings. Independent 3D dose calculation systems are presented as a solution to this problem, reducing the time needed to start treatments.

The use of 3D dose calculation systems, as stated in international recommendations (TG219), requires a process of commissioning and adjustment of dose calculation parameters.

This presentation will show the implementation of PSQA based on independent 3D dose calculation for VMAT treatments in breast cancer using DICOM information from the plan and LOG files. Comparative results with measurement-based PSQA systems will also be presented.

An interactive Q&A session follows the presentation.

Dr Daniel Venencia is the chief of the medical physics department at Instituto Zunino – Fundación Marie Curie in Cordoba, Argentina. He holds a BSc in physics and a PhD from the Universidad Nacional de Córdoba (UNC), and has completed postgraduate studies in radiotherapy and nuclear medicine. With extensive experience in the field, Daniel has directed more than 20 MSc and BSc theses and three doctoral theses, and has delivered more than 400 presentations at national and international congresses. He has published in prestigious journals, including the Journal of Applied Clinical Medical Physics and the International Journal of Radiation Oncology, Biology and Physics. His work continues to make significant contributions to the advancement of medical physics.

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and medical device industry, Carlos’ passion for clinical quality assurance is demonstrated in the research and development of RadCalc into the future.

The post Patient-specific quality assurance (PSQA) based on independent 3D dose calculation appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 16 December 2024 sponsored by LAP GmbH Laser Applikationen https://physicsworld.com/wp-content/uploads/2024/10/Webinar_PSQA_RadCalc_Dec_2024.jpg
Quantum material detects tiny mechanical strains https://physicsworld.com/a/quantum-material-detects-tiny-mechanical-strains/ Tue, 15 Oct 2024 08:00:14 +0000 https://physicsworld.com/?p=117392 Sensitivity of vanadium-oxide-based device breaks previous record by more than an order of magnitude

The post Quantum material detects tiny mechanical strains appeared first on Physics World.

]]>
A new sensor can detect mechanical strains that are more than an order of magnitude weaker than was possible with previously reported devices. Developed at Nanjing University, China, the sensor works by detecting changes that take place in single-crystal vanadium oxide materials as they undergo a transition from a conducting to an insulating phase. The new device could have applications in electronics engineering as well as materials science.

To detect tiny deformations in materials, you ideally want a sensor that undergoes a seamless and easily measurable transition whenever a strain – even a very weak one – is applied to it. Phase transitions, such as the shift from a metal to an insulator, fit the bill because they produce a significant change in the material’s resistance, making it possible to generate large electrical signals. These signals can then be measured and used to quantify the strain that triggered them.

Traditional strain sensors, however, are based on metal and semiconductor compounds, which have resistances that don’t change much under strain. This makes it hard to detect weak strains caused by, for example, the movement of microscopic water droplets around a surface.

A research team co-led by Feng Miao and Shi-Jun Liang has now got around this problem by developing a sensor based on the bronze phase of vanadium oxide, VO2(B). The team initially chose to study this material purely to understand the mechanisms behind its temperature-induced phase transitions. Along the way, though, they noticed something unusual. “As our research progressed, we discovered that this material exhibits a unique response to strain,” Liang recalls. “This prompted us to shift the project’s focus.”

A fabrication challenge

Because the structure of vanadium oxide is not simple, fabricating a sensor from this quantum material was among the team’s biggest challenges. To make their device, the Nanjing researchers used a specially adapted hydrogen-assisted chemical vapour deposition micro-nano fabrication process. This enabled them to produce high-quality, smooth single crystals of the material, which they characterized using a combination of electrical and spectroscopic techniques, including high-resolution transmission electron microscopy (HRTEM). They then needed to transfer this crystal from the SiO2/Si wafer on which it was grown to a flexible substrate (a smooth and insulating polyimide), which posed further experimental challenges, Liang says.

Once they had accomplished this, the researchers loaded the polyimide substrate/VO2(B) into a customized strain setup. They bonded the device to a homemade socket and induced uniaxial tensile strain in the material by vertically pushing a nanopositioner-controlled needle through it. This bends the flexible substrate and curves the upper surface of the sample.

They then measured how the current-voltage characteristics of the mechanical sensor changed as they applied strain to it. Under no strain, the channel current of the device registers 165 μA at a bias of 0.5 V, indicating that it is conducting. When the strain increases to 0.95%, however, the current drops to just 0.50 μA, suggesting a shift into an insulating state.

A strikingly large variation

The researchers also measured the response of the device to intermediate strains. As they increased the applied strain, they found that at first, the device’s resistance increased only slightly. When the uniaxial tensile strain hit a value of 0.33%, though, the resistance jumped, and afterwards it increased exponentially with applied strain. By the time they reached 0.78% strain, the resistance was more than 2600 times greater than it was in the strain-free state.

This strikingly large variation is due to a strain-induced metal-insulator transition in the single-crystal VO2(B) flake, Miao explains. “As the strain increases, the entire material transitions to an insulator, resulting in a significant increase in its resistance that we can measure,” he says. This resistance change is durable, he adds, and can be measured with the same precision even after 700 cycles, proving that the technique is reliable.
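One way to put that sensitivity in context is to convert the reported figures into an effective gauge factor, the fractional resistance change per unit strain. The quick estimate below is illustrative only, using the numbers quoted above rather than any value given in the paper itself:

# Illustrative estimate of the effective gauge factor implied by the quoted figures.
delta_R_over_R = 2600    # relative resistance change at 0.78% strain
strain = 0.0078          # applied uniaxial tensile strain
gauge_factor = delta_R_over_R / strain
print(f"Effective gauge factor: {gauge_factor:.1e}")
# ~3.3e5, compared with a typical value of about 2 for a conventional
# metal-foil strain gauge, hence the ability to sense very weak deformations.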

Detecting airflows and vibrations

To test their device, the Nanjing University team used it to sense the slight mechanical deformation caused by placing a micron-sized piece of plastic on it. As well as detecting the slight mechanical pressure of small objects like this, they found that the device can also monitor gentle airflows and sense tiny vibrations such as those produced when tiny water droplets (about 9 μL in volume) move on flexible substrates.

“Our work shows that quantum materials like vanadium oxide show much potential for strain detection applications,” Miao tells Physics World. “This may motivate researchers in materials science and electronic engineering to study such compounds in this context.”

This work, which is detailed in Chinese Physics Letters, was a proof-of-concept validation, Liang adds. Future studies will involve growing large-area samples and exploring how to integrate them into flexible devices. “These will allow us to make ultra-sensitive quantum material sensing chips,” he says.

The post Quantum material detects tiny mechanical strains appeared first on Physics World.

]]>
Research update Sensitivity of vanadium-oxide-based device breaks previous record by more than an order of magnitude https://physicsworld.com/wp-content/uploads/2024/10/flexible-mechanical-sensor.jpg newsletter
Electrical sutures accelerate wound healing https://physicsworld.com/a/electrical-sutures-accelerate-wound-healing/ Mon, 14 Oct 2024 08:45:01 +0000 https://physicsworld.com/?p=117383 Surgical stitches that generate electrical charge speed up the healing of muscle wounds in rats

The post Electrical sutures accelerate wound healing appeared first on Physics World.

]]>
Surgical sutures are strong, flexible fibres used to close wounds caused by trauma or surgery. But could these stitches do more than just hold wounds closed? Could they, for example, be designed to accelerate the healing process?

A research team headed up at Donghua University in Shanghai has now developed sutures that can generate electricity at the wound site. They demonstrated that the electrical stimulation produced by these sutures can speed the healing of muscle wounds in rats and reduce the risk of infection.

“Our research group has been working on fibre electronics for almost 10 years, and has developed a series of new fibre materials with electrical powering, sensing and interaction functions,” says co-project leader Chengyi Hou. “But this is our first attempt to apply fibre electronics in the biomedical field, as we believe the electricity produced by these fibres might have an effect on living organisms and influence their bioelectricity.”

The idea is that the suture will generate electricity via a triboelectric mechanism, in which movement caused by muscles contracting and relaxing generates an electric field at the wound site. The resulting electrical stimulation should accelerate wound repair by encouraging cell proliferation and migration to the affected area. It’s also essential that the suture material is biocompatible and biodegradable, eliminating the need for surgical stitch removal.

To meet these requirements, Hou and colleagues created a bioabsorbable electrical stimulation suture (BioES-suture). The BioES-suture is made from a resorbable magnesium (Mg) filament electrode, wrapped with a layer of bioabsorbable PLGA (poly(lactic-co-glycolic acid)) nanofibres, and coated with a sheath made of the biodegradable thermoplastic polycaprolactone (PCL).

[Image: structure of the BioES-suture]

After the BioES-suture is used to stitch a wound, any subsequent tissue movement results in repeated contact and separation between the PLGA and PCL layers. This generates an electric field at the wound site; the Mg electrode then harvests this electrical energy to provide stimulation and enhance wound healing.

Clinical compatibility

The researchers measured the strength of the BioES-suture, finding that it had comparable sewing strength to commercial sutures. They also tested its biocompatibility by culturing fibroblasts (cells that play a crucial role in wound healing) on Mg filaments, PLGA-coated Mg and BioES-sutures. After a week, the viability of these cells was similar to that of control cells grown in standard petri dishes.

To examine the biodegradability, the researchers immersed the BioES-suture in saline. The core (Mg electrode and nanofibre assembly) completely degraded within 14 days (the muscle recovery period). The PCL layer remained intact for up to 24 weeks, after which no obvious trace of the BioES-suture could be seen.

Next, the researchers investigated the suture’s ability to generate electricity. They wound the BioES-suture onto an artificial muscle fibre and stretched it underwater to simulate muscle deformation. The BioES-suture’s electrical output was 7.32 V in air and 8.71 V in water, enough to light up an LCD screen.

They also monitored the BioES-suture’s power generation capacity in vivo, by stitching it into the leg muscle of rats. During normal exercise, the output voltage was about 2.3 V, showing that the BioES-suture can effectively convert natural body movements into stable electrical impulses.

Healing ability

To assess the BioES-suture’s ability to promote wound healing, the researchers first examined an in vitro wound model. Wounds receiving electrical stimulation from the BioES-suture exhibited faster migration of fibroblasts than a non-stimulated control group, as well as increased cell proliferation and expression of growth factors. The wound area was reduced from approximately 69% to 10.8% after 24 h of exposure to the BioES-sutures, compared with 32.6% for traditional sutures.

The team also assessed the material’s antibacterial capabilities by immersing a standard suture, BioES-suture and electricity-producing BioES-suture in S. aureus and E. coli cultures for 24 h. The electricity-producing BioES-suture significantly inhibited bacterial growth compared with the other two, suggesting that this electrical stimulation could provide an antimicrobial effect during wound healing.

Finally, the researchers evaluated the therapeutic effect in vivo, by using BioES-sutures to treat bleeding muscle incisions in rats. Two other groups of rats were treated with standard surgical sutures and no stitches. Electromyographic (EMG) measurements showed that the BioES-suture significantly increased EMG signal intensity, confirming its ability to generate electricity from mechanical movements.

After 10 days, they examined extracted muscle tissue from the three groups of rats. Compared with the other groups, the BioES-suture improved tissue migration from the wound bed and accelerated wound regeneration, achieving near-complete (96.5%) wound healing. Tissue staining indicated significantly enhanced secretion of key growth factors in the BioES-suture group compared with the other groups.

The researchers suggest that electrical stimulation from the BioES-suture promotes wound healing via a two-fold mechanism: the stimulation enhances the secretion of growth factors at the wound; these growth factors then promote cell migration, proliferation and deposition of extracellular matrix to accelerate wound healing.

In an infected rat wound, stitching with BioES-suture led to better healing and significantly lower bacterial count than wounds stitched with ordinary surgical sutures. The bacterial count remained low even without daily wound disinfection, indicating that the BioES-suture could potentially reduce post-operative infections.

The next step will be to test the potential of the BioES-suture in humans. The team has now started clinical trials, Hou tells Physics World.

The BioES-suture is described in Nature Communications.

The post Electrical sutures accelerate wound healing appeared first on Physics World.

]]>
Research update Surgical stitches that generate electrical charge speed up the healing of muscle wounds in rats https://physicsworld.com/wp-content/uploads/2024/10/14-10-24-electrical-suture.jpg newsletter
Top-cited authors from China discuss the importance of citation metrics https://physicsworld.com/a/top-cited-authors-from-china-discuss-the-importance-of-citation-metrics/ Fri, 11 Oct 2024 09:26:06 +0000 https://physicsworld.com/?p=117193 More than 90 papers from China have been recognized with a top-cited paper award for 2024 from IOP Publishing

The post Top-cited authors from China discuss the importance of citation metrics appeared first on Physics World.

]]>
More than 90 papers from China have been recognized with a top-cited paper award for 2024 from IOP Publishing, which publishes Physics World. The prize is given to corresponding authors who have papers published in both IOP Publishing and its partners’ journals from 2021 to 2023 that are in the top 1% of the most cited papers.

Among them are quantum physicist Xin Wang from Xi’an Jiaotong University and environmental scientist Huijuan Cui from the Institute of Geographic Sciences and Natural Resources Research.

Cui, who carries out research into climate change, says that China’s carbon neutrality goal has attracted attention all over the world, which may be a reason why the paper, published in Environmental Research Letters, garnered so many citations. “As the Chinese government pays more attention on sustainability issues like climate change…we see growing activities and influence from Chinese researchers,” she says.

A similar impact can be seen in Wang’s work on “chiral quantum networks”, published in Quantum Science and Technology – likewise an area that is quickly gaining traction.

Citations have an important role in Chinese research, and they can also highlight a research topic’s growing impact. “They indicate that what we are studying is a mainstream research field,” Wang says. “Our peers agree with our results and judgement of the field’s future.” Cui, meanwhile, says that citations reflect “a positive acceptance and recognition of the quality of the research”.

Wang, however, notes that citations and impact don’t necessarily come overnight, and that researchers should not judge their work’s impact by whether it instantly generates citations.

He adds that some pioneering papers are not well cited initially, with researchers only beginning to realize their value after several years. “If we are confident that our findings are important, we should not be upset with its bad citation but keep on working,” he says. “It is the role of the researcher to stick with their gut to uncover their key research questions. Citations will come afterwards.”

Language barriers

When it comes to Chinese researchers getting their research cited internationally, Wang says that the language barrier is one of the greatest challenges. “The readability of a paper has a close relation with its citation,” adds Wang. “Most highly cited papers not only have an insight into scientific problems, but also are well-written.”

He adds that non-native speakers tend to avoid using “snappy” expressions, which often leads to a conservative and uninspiring tone. “These expressions are grammatically correct but awkward to native speakers,” Wang states.

Despite the potential difficulties with slow citations and language barriers, Cui says that success can be achieved through determination and focussing on important research questions. “Constant effort yields success,” adds Cui. “Keep digging into interesting questions and keep writing high-quality papers.”

That view is backed by Wang. “If your research is well-cited, congratulations,” adds Wang. “However, please do not be upset with a paper with few citations – it still might be pioneering work in its field.”

  • For the full list of top-cited papers from China for 2024, see here. Xin Wang’s and Huijuan Cui’s award-winning research can be read here and here, respectively

The post Top-cited authors from China discuss the importance of citation metrics appeared first on Physics World.

]]>
Blog More than 90 papers from China have been recognized with a top-cited paper award for 2024 from IOP Publishing https://physicsworld.com/wp-content/uploads/2024/10/2024-10-sponsored-headshots.jpg newsletter
MRI-linac keeps track of brain tumour changes during radiotherapy https://physicsworld.com/a/mri-linac-keeps-track-of-brain-tumour-changes-during-radiotherapy/ Thu, 10 Oct 2024 15:00:32 +0000 https://physicsworld.com/?p=117347 Daily MR imaging could enable treatment adaptation to glioblastoma growth or shrinkage during radiotherapy

The post MRI-linac keeps track of brain tumour changes during radiotherapy appeared first on Physics World.

]]>
Glioblastoma, the most common primary brain cancer, is treated with surgical resection where possible followed by chemoradiotherapy. Researchers at the University of Miami’s Sylvester Comprehensive Cancer Center have now demonstrated that delivering the radiotherapy on an MRI-linac could provide an early warning of tumour growth, potentially enabling rapid adaptation during the course of treatment.

The Sylvester Comprehensive Cancer Center has been treating glioblastoma patients with MRI-guided radiotherapy since 2017. While standard clinical practice employs MRI scans before and after treatment (roughly three months apart) to monitor a patient’s response, the MRI-linac enables daily imaging. The research team, led by radiation oncologist Eric Mellon, proposed that such daily scans could reveal any changes in the tumour volume or resection cavity far earlier than the standard approach.

To investigate this idea, Mellon and colleagues studied 36 patients with glioblastoma undergoing chemoradiotherapy on a 0.35 T MRI-linac. During 30 radiotherapy fractions, delivered over six weeks, they imaged patients daily on the MRI-linac to assess the volumes of lesions and surgical resection cavities (the site where the tumour was removed).

The researchers then compared the non-contrast MRI-linac images to images recorded pre- (one week before) and post- (one month after) treatment using a standalone 3T MRI with gadolinium contrast. Detailing their findings in the International Journal of Radiation Oncology – Biology – Physics, they report that in general, lesion and cavity volumes seen on non-contrast MRI-linac scans correlated strongly with volumes measured using standalone contrast MRI.

Of the patients in this study, eight had a cavity in the brain, 12 had a lesion and 16 had both cavity and lesion. From pre- to post-radiotherapy, 18 patients exhibited lesion growth, while 11 had cavity shrinkage. In 74% of the cases, changes in lesion volume (growth, shrinkage or no change) assessed on the MR-linac matched those seen on contrast MRI.

“If MRI-linac lesion growth did occur, which was in 60% of our patients [with lesions], there is a 57% chance that it will correspond with tumour growth on standalone post-contrast imaging,” said first author Kaylie Cullison, who shared the study findings at the recent ASTRO Annual Meeting.

In the other 26% of cases, contrast MRI suggested lesion shrinkage while the MRI-linac scans showed lesion growth. Cullison suggested that this may be partly due to radiation-induced oedema, which is difficult to distinguish from tumour on the non-contrast MRI-linac images.

The significant anatomic changes seen during daily imaging of glioblastoma patients suggest that adaptation could play an important role in improving their treatment. In cases where lesions or surgical resection cavities shrink, for example, treatment margins could be reduced to spare normal brain tissue from irradiation. Conversely, for patients with growing lesions, radiotherapy margins could be expanded to ensure complete tumour coverage.

Importantly, there were no cases in this study where patients showed a decrease in their MRI-linac lesion volumes and an increase in their standalone MRI volumes from pre- to post-treatment. In other words, the MR-linac did not miss any cases of true tumour growth. “You can use the MRI-linac non-contrast imaging as an early warning system for potential tumour growth,” said Cullison.

Based on their findings, the researchers propose an adaptive workflow for glioblastoma radiotherapy. For resection cavities, which are clearly visible on non-contrast MRI-linac images, adaptation to shrinkage seen on weekly (standalone or MRI-linac) non-contrast MR images is feasible. In parallel, if an MRI-linac scan shows lesion progression during treatment, gadolinium contrast could be administered (for standalone MRI or MRI-linac scans) to confirm the growth and define adaptive target volumes.

An additional advantage of this workflow is that it reduces the use of contrast agent. Glioblastoma evolution is typically evaluated using contrast-enhanced MRI. However, potential gadolinium deposition with repeated contrast scans is a concern among patients, and the US Food & Drug Administration advises that gadolinium contrast studies should be minimized where possible. The new adaptive approach meets this requirement by administering contrast only when non-contrast MRI shows an increase in lesion size.
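The proposed workflow amounts to a simple decision rule driven by the daily non-contrast scans. The Python sketch below is a schematic restatement of that logic as described here – it is not clinical software, and thresholds, margin sizes and confirmation criteria are deliberately left out.

```python
from typing import Optional

def adaptive_action(cavity_shrinking: bool,
                    lesion_growing_on_mri_linac: bool,
                    growth_confirmed_with_contrast: Optional[bool] = None) -> str:
    """Illustrative summary of the adaptive glioblastoma workflow described above."""
    if cavity_shrinking:
        return "reduce treatment margins to spare normal brain tissue"
    if lesion_growing_on_mri_linac:
        if growth_confirmed_with_contrast is None:
            return "administer gadolinium contrast to confirm growth"
        if growth_confirmed_with_contrast:
            return "expand margins and define new adaptive target volumes"
        return "growth not confirmed (possible oedema) - continue current plan"
    return "no significant change - continue current plan"

# Example: a daily MRI-linac scan suggests lesion growth, contrast not yet given
print(adaptive_action(cavity_shrinking=False, lesion_growing_on_mri_linac=True))
```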

Cullison tells Physics World that the team will next conduct an adaptive radiation therapy trial using the proposed workflow, to determine whether it improves patient outcomes. “We also plan further exploration and analysis of our data, including multiparametric MRI from the MRI-linac, in a larger patient cohort to try to predict patient outcomes (tumour growth; true progression versus pseudo-progression; survival times, etc) earlier than current methods allow,” she explains.

The post MRI-linac keeps track of brain tumour changes during radiotherapy appeared first on Physics World.

]]>
Research update Daily MR imaging could enable treatment adaptation to glioblastoma growth or shrinkage during radiotherapy https://physicsworld.com/wp-content/uploads/2024/10/10-10-24-Cullison-Mellon.jpg
Unlocking the future of materials science with magnetic microscopy https://physicsworld.com/a/unlocking-the-future-of-materials-science-with-magnetic-microscopy/ Thu, 10 Oct 2024 14:00:12 +0000 https://physicsworld.com/?p=117198 JPhys Materials explores some of the key magnetic imaging technologies for the upcoming decade

The post Unlocking the future of materials science with magnetic microscopy appeared first on Physics World.

]]>

With rapidly growing interest in magnetic materials for unconventional computing, data storage and sensor applications, active research is needed not only on material synthesis but also on the characterization of material properties. In addition to structural and integral magnetic characterization, imaging of magnetization patterns, current distributions and magnetic fields at the nano- and microscale is of major importance for understanding material responses and qualifying materials for specific applications.

In this webinar, four experts will present on some of the key magnetic imaging technologies for the upcoming decade:

  • Scanning SQUID microscopy
  • Nanoscale magnetic resonance imaging
  • Coherent X-ray magnetic imaging
  • Scanning electron microscopy with polarization analysis

The webinar will run for two hours, with time for audience Q&A after each speaker.

Those interested in exploring this topic further are encouraged to read the 2024 roadmap on magnetic microscopy techniques and their applications in materials science – a single access point of information for experts in the field as well as for a new generation of students – available open access in Journal of Physics: Materials.

Katja Nowack received her PhD in physics at Delft University of Technology in 2009, focussing on controlling and reading out the spin of single electrons in electrostatically defined quantum dots for spin-based quantum information processing. During her postdoc at Stanford University, she shifted to low-temperature magnetic imaging using scanning superconducting quantum interference devices (SQUIDs). In 2015, she joined the Department of Physics at Cornell University, where her lab develops magnetic imaging techniques to study quantum materials and devices, including topological materials, unconventional superconductors and superconducting circuits.

Christian Degen joined ETH Zurich in 2011 after positions at MIT, Leiden University and IBM Research, Almaden. His background includes a PhD in magnetic resonance (Beat Meier) and postdoctoral training in scanning force microscopy (Dan Rugar). Since 2009, he has led a research group on quantum sensing and nanomechanics. He is a co-founder of the microscopy start-up QZabre.

Claire Donnelly. Following her MPhys at the University of Oxford, Claire went to Switzerland to carry out her PhD studies at the Paul Scherrer Institute and ETH Zurich. She was awarded her PhD in 2017 for her work on 3D systems, in which she developed X-ray magnetic tomography, work that was recognized by a number of awards. After a postdoc at the ETH Zurich, she moved to the University of Cambridge and the Cavendish Laboratory as a Leverhulme Early Career Research Fellow, where she focused on the behaviour of three-dimensional magnetic nanostructures. Since September 2021 she has been a Lise Meitner Group Leader of Spin3D at the Max Planck Institute for Chemical Physics of Solids in Dresden, Germany. Her group focuses on the physics of three-dimensional magnetic and superconducting systems, and on developing synchrotron X-ray-based methods to resolve their structure in 3D.

Mathias Kläui is professor of physics at Johannes Gutenberg-University Mainz and adjunct professor at the Norwegian University of Science and Technology. He received his PhD at the University of Cambridge, after which he joined the IBM Research Labs in Zürich. He was a junior group leader at the University of Konstanz and then became associate professor in a joint appointment between the EPFL and the PSI in Switzerland before moving to Mainz. He has published more than 400 articles and given more than 250 invited talks, is a Fellow of the IEEE, IOP and APS and has been awarded a number of prizes and scholarships.

About this journal

JPhys Materials is a new open access journal highlighting the most significant and exciting advances in materials science.

Editor-in-chief: Stephan Roche is ICREA professor at the Catalan Institute of Nanosciences and Nanotechnology (ICN2) and the Barcelona Institute of Science and Technology.

 

The post Unlocking the future of materials science with magnetic microscopy appeared first on Physics World.

]]>
Webinar JPhys Materials explores some of the key magnetic imaging technologies for the upcoming decade https://physicsworld.com/wp-content/uploads/2024/10/iStock-873546438_800.jpg
Deep connections: why two AI pioneers won the Nobel Prize for Physics https://physicsworld.com/a/deep-connections-why-two-ai-pioneers-won-the-nobel-prize-for-physics/ Thu, 10 Oct 2024 13:00:22 +0000 https://physicsworld.com/?p=116910 Our podcast guest is Anil Ananthaswamy, author of Why Machines Learn

The post Deep connections: why two AI pioneers won the Nobel Prize for Physics appeared first on Physics World.

]]>
It came as a bolt from the blue for many Nobel watchers. This year’s Nobel Prize for Physics went to John Hopfield and Geoffrey Hinton for their “foundational discoveries and inventions that enable machine learning and artificial neural networks”.

In this podcast I explore the connections between artificial intelligence (AI) and physics with the author Anil Ananthaswamy – who has written the book Why Machines Learn: The Elegant Maths Behind Modern AI. We delve into the careers of Hinton and Hopfield and explain how they laid much of the groundwork for today’s AI systems.

We also look at why Hinton has spoken out about the dangers of AI and chat about the connection between this year’s physics and chemistry Nobel prizes.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Deep connections: why two AI pioneers won the Nobel Prize for Physics appeared first on Physics World.

]]>
Podcasts Our podcast guest is Anil Ananthaswamy, author of Why Machines Learn https://physicsworld.com/wp-content/uploads/2024/10/brain-computer-intelligence-concept-landscape-1027941874-Shutterstock_Jackie-Niam.jpg newsletter1
Aluminium oxide reveals its surface secrets https://physicsworld.com/a/aluminium-oxide-reveals-its-surface-secrets/ Thu, 10 Oct 2024 08:30:55 +0000 https://physicsworld.com/?p=117262 New non-contact atomic force microscopy images shed more light on the "enigmatic insulator" aluminium oxide

The post Aluminium oxide reveals its surface secrets appeared first on Physics World.

]]>
Determining the surface structure of an insulating material is a difficult task, but it is important for understanding its chemical and physical properties. A team of researchers in Austria has now succeeded in doing just this for the technologically important insulator aluminium oxide (Al2O3). The team’s new images – obtained using non-contact atomic force microscopy (AFM) – not only reveal the material’s surface structure but also explain why a simple cut through a crystal is not energetically favourable for the material and leads to a complex rearrangement of the surface.

Al2O3 is an excellent insulator and is routinely employed in many applications, for example as a support material for catalysts, as a chemically resistant ceramic and in electronic components. Characterizing how the surface atoms arrange themselves in this material is important for understanding, among other things, how chemical reactions occur on it.

A technique that works for all materials

Atoms in the bulk of a material arrange themselves in an ordered crystal lattice, but the situation is very different on the surface. The more insulating a material is, the more difficult it is to analyse its surface structure using conventional experimental techniques, which typically require conductivity.

Researchers led by Jan Balajka and Johanna Hütner at TU Wien have now used non-contact AFM to study the basal (0001) plane of Al2O3. This technique works – even for completely insulating materials – by scanning a sharp tip mounted on a quartz tuning fork at a distance of just 0.1 nm above a sample’s surface. The frequency of the fork varies as the tip interacts with the surface atoms and by measuring these changes, an image of the surface structure can be generated.

The problem is that while non-contact AFM can identify where the atoms are located, it cannot distinguish between the different elements making up a compound. Balajka, Hütner and colleagues overcame this problem by modifying the tip and attaching a single oxygen atom to it. The oxygen atoms on the surface of the sample being studied repel this oxygen atom, while its aluminium atoms attract it.

“Mapping the local repulsion or attraction enabled us to visualize the chemical identity of each surface atom directly,” explains Hütner. “The complex three-dimensional structure of the subsurface layers was then determined computationally with novel machine learning algorithms using the experimental images as input,” adds Balajka.
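The chemical mapping step can be pictured with a toy example: with an oxygen-terminated tip, sites that repel the tip shift the tuning-fork frequency one way and sites that attract it shift it the other way, so the sign of the measured shift labels each atom. The Python sketch below applies that idea to a synthetic grid of frequency shifts; it is purely illustrative and is not the TU Wien team’s analysis code or sign convention.

```python
import numpy as np

# Synthetic grid of frequency shifts (Hz) at atomic sites - made-up numbers
rng = np.random.default_rng(0)
freq_shift = rng.normal(loc=0.0, scale=2.0, size=(6, 6))

# Toy rule: net repulsion (taken here as a positive shift) -> surface oxygen,
# net attraction (negative shift) -> aluminium. Real data need careful calibration.
identity = np.where(freq_shift > 0, "O", "Al")

for row in identity:
    print(" ".join(f"{atom:>2}" for atom in row))
```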

Surface restructuring

According to their analyses, which are detailed in Science, when a cut is made on the Al2O3 surface, it restructures so that the aluminium in the topmost layer is able to penetrate deeper into the material and chemically bond with the oxygen atoms therein. This reconstruction energetically stabilizes the structure, but it remains stoichiometrically the same.

“The atomic structure is a foundational attribute of any material and is reflected in its macroscopic properties,” says Balajka. “The surface structure governs any surface chemistry, such as chemical reactions in catalytic processes.”

Balajka says that the challenges the team had to overcome in this work were threefold: “The first was the strongly insulating character of the material; the second, the lack of chemical sensitivity in (conventional) scanning probe microscopy; and the third, the structural complexity of the alumina surface, which leads to a large configuration space of possible structures.”

As an enigmatic insulator, alumina has posed significant challenges for experimental studies and its surface structure has evaded precise determination since the 1960s, Balajka tells Physics World. Indeed, it was listed as one of the “three mysteries in surface science” in the late 1990s.

The new findings provide a fundamental piece of knowledge – the detailed surface structure of an important material – and pave the way for advances in catalysis, materials science and many other fields, he adds. “The experimental and computational approaches we employed in this study can be applied to study other materials that have been too complex or inaccessible to conventional techniques.”

The post Aluminium oxide reveals its surface secrets appeared first on Physics World.

]]>
Research update New non-contact atomic force microscopy images shed more light on the "enigmatic insulator" aluminium oxide https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_Gruppenfoto_Aluminiumoxid_2048.jpg newsletter1
Enigmatic particle might be a molecular pentaquark https://physicsworld.com/a/enigmatic-particle-might-be-a-molecular-pentaquark/ Wed, 09 Oct 2024 13:57:04 +0000 https://physicsworld.com/?p=117332 Decay rate of exotic hadron suggests it comprises five quarks

The post Enigmatic particle might be a molecular pentaquark appeared first on Physics World.

]]>
The enigmatic Ξ(2030) particle, once thought to consist of three quarks, may actually be a molecular pentaquark – an exotic hadron comprising five quarks. That is the conclusion of Chinese physicists Cai Cheng and Jing-wen Feng at Sichuan Normal University and Yin Huang at Southwest Jiaotong University. They employed a simplified strong interaction theory to calculate the decay rate of the exotic hadron, concluding that it comprises five quarks.

This composition aligns more closely with experimental data than does the traditional three-quark model for Ξ(2030). While other pentaquarks have been identified in accelerator experiments to date, these particles are still considered exotic and are poorly understood compared to two-quark mesons and three-quark baryons. As a result, this latest work is a significant step towards understanding pentaquarks.

The Ξ(2030) is named for its mass in megaelectronvolts and was first discovered at Fermilab in 1977. At that time, the idea of exotic hadrons that did not fit into the conventional meson–baryon classification was not widely accepted. Conventionally, a meson comprises a quark and an antiquark and a baryon contains three quarks.

Deviation from three-quark model

Consequently, based on its properties, the scientific community classified the particle as a baryon, similar to protons and neutrons. However, further investigations at CERN, SLAC, and Fermilab revealed that the particle’s interaction properties deviated significantly from what the three-quark model predicted, leading scientists to question its three-quark nature.

To address this issue, Yin Huang and colleague Hao Hei proposed earlier this year that the Ξ(2030) could be a molecular pentaquark, suggesting that it consists of a meson and a baryon loosely bound together by the strong nuclear force. In the present study, Cheng, Feng and Huang elaborated on this idea, analysing a model in which the particle is composed of a K meson, which contains a strange antiquark and a light quark (either up or down), alongside a Σ baryon that comprises a strange quark and two light quarks.

To do the study, the team had to use a simplified approach to calculating strong interactions. This is because quantum chromodynamics, the comprehensive theory describing such interactions, is too complex for detailed calculations of hadronic properties. Their approach focuses on hadrons rather than the fundamental quarks and gluons that make up hadrons. They calculated the probabilities of the Ξ(2030) decaying into various strongly interacting particles, including π and K mesons, as well as Σ and Λ baryons.

“It is confirmed that this particle is a hadron molecular state, and its core is primarily composed of K and Σ components,” explains Feng. “The main decay channels are K+Σ and K+Λ, which are consistent with the experimental results. This conclusion not only deepens our understanding of the internal structure of the Ξ(2030), but also further supports the applicability of the concept of hadronic molecular state in particle physics.”

Extremely short lifetime

The Ξ(2030) particle has an extremely short lifetime of about 10⁻²³ s, making it challenging to study experimentally. As a result, measuring its properties can be imprecise. The uncertainty surrounding these measurements means that comparisons with theoretical results are not always conclusive, indicating that further experimental work is essential to validate the team’s claims regarding the interaction between the meson and baryon that make up the Ξ(2030).

“However, experimental verification still needs time, involving multi-party cooperation and detailed planning, and may also require technological innovation or experimental equipment improvement,” said Huang.

Despite the challenges, the researchers are not pausing their theoretical investigations. They plan to delve deeper into the structure of the Ξ(2030) because the particle’s complex nature could provide valuable insights into the subatomic strong interaction, which remains poorly understood due to the intricacies of quantum chromodynamics.

“Current studies have shown that although the theoretically calculated total decay rate of Ξ(2030) is basically consistent with the experimental data, the slight difference reveals the complexity of the particle’s internal structure,” concluded Feng. “This important discovery not only reinforces the hypothesis of Ξ(2030) as a meson–baryon molecular state, but also suggests that the particle may contain additional components, such as a possible triquark configuration.”

Moreover, the very conclusion regarding the molecular pentaquark structure of Ξ(2030) warrants further scrutiny. The effective theory employed by the authors draws on data from other experiments with strongly interacting particles and includes a fitting parameter not derived from the foundational principles of quantum chromodynamics. This raises the possibility of alternative structures for Ξ(2030).

“Maybe Ξ(2030) is a molecular state, but that means explaining why K and Σ should stick together – [Cheng and colleagues] do provide an explanation but their mechanism is not validated against other observations so it is impossible to evaluate its plausibility,” said Eric Swanson at the University of Pittsburgh, who was not involved in the study.

The research is described in Physical Review D.

The post Enigmatic particle might be a molecular pentaquark appeared first on Physics World.

]]>
Research update Decay rate of exotic hadron suggests it comprises five quarks https://physicsworld.com/wp-content/uploads/2024/10/CERN-pentaquark-.jpg newsletter1
Pioneers of AI-based protein-structure prediction share 2024 chemistry Nobel prize https://physicsworld.com/a/pioneers-of-ai-based-protein-structure-prediction-share-2024-chemistry-nobel-prize/ Wed, 09 Oct 2024 09:45:31 +0000 https://physicsworld.com/?p=116909 Protein designer is also honoured in this year’s award

The post Pioneers of AI-based protein-structure prediction share 2024 chemistry Nobel prize appeared first on Physics World.

]]>
The 2024 Nobel Prize for Chemistry has been awarded to David Baker, Demis Hassabis and John Jumper for their work on proteins.

Baker bagged half the prize “for computational protein design”, while Hassabis and Jumper share the other half “for protein structure prediction”.

Baker is a biochemist based at the University of Washington in Seattle. Hassabis did a PhD in cognitive neuroscience at University College London and is CEO and co-founder of UK-based Google DeepMind. Also based at Google DeepMind, Jumper studied physics at Vanderbilt University and the University of Cambridge before doing a PhD in chemistry at the University of Chicago.

Entirely new protein

In 2003 Baker was the first to create an entirely new protein from its constituent amino acids – and his research group has since created many more new proteins. Some of these molecules have found use in sensors, nanomaterials, vaccines and pharmaceuticals.

In 2020 Jumper and Hassabis created AlphaFold2, which is an artificial-intelligence model that can predict the structure of a protein based on its amino-acid sequence. A protein begins as a linear chain of amino acids that folds itself to create a complicated 3D structure.

These structures can be determined experimentally using techniques including X-ray crystallography, electron microscopy and nuclear magnetic resonance. However, this is time-consuming and expensive.

Used by millions

AlphaFold2 was trained using many different protein structures and went on to successfully predict the structures of nearly all of the 200,000 known proteins. It has been used by millions of people around the world and could boost our understanding of a wide range of biological and chemical processes including bacterial resistance to antibiotics and the decomposition of plastics.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Pioneers of AI-based protein-structure prediction share 2024 chemistry Nobel prize appeared first on Physics World.

]]>
News Protein designer is also honoured in this year’s award https://physicsworld.com/wp-content/uploads/2024/10/9-10-2024-Chemistry-laureates.jpg
Pele’s hair-raising physics: glassy gifts from a volcano goddess https://physicsworld.com/a/peles-hair-raising-physics-glassy-gifts-from-a-volcano-goddess/ Tue, 08 Oct 2024 13:00:30 +0000 https://physicsworld.com/?p=116989 Volcanic hairs and tears reveal a wealth of information about what lies within lava

The post Pele’s hair-raising physics: glassy gifts from a volcano goddess appeared first on Physics World.

]]>
A sensible crew cut, a chic bob, an outrageous mullet. You can infer a lot about a person by how they choose to style their hair. But it might surprise you to know that it is possible to learn more about some objects in the natural world from their “hair” – be it the “quantum hair” that can reveal the deepest darkest secrets of what happens within a black hole, or glassy hair that emerges from the depths of our planet, via a volcano.

In December 2017 University of Oxford volcanologist Tamsin Mather travelled to Nicaragua to visit an “old friend”: the Masaya volcano, some 20 km south of the country’s capital of Managua. Recent activity had created a small, churning lava lake in the centre of the volcano’s active crater, one whose “mesmerising” glow at night attracted a stream of enchanted tourists.

For those who could draw their eyes away from the roiling lava, however, another treat awaited: a gossamer carpet of yellow fibres strung across the downwind crater’s edge. Known to geologists as “Pele’s hair”, Mather describes these beautiful deposits as like “glistening spiders’ webs”, shiny and glass-like, looking like “fresh cut grass after some dew”.

These glassy strands, often blown along by the wind, have been found in the vicinity of volcanoes across the globe – not only Masaya, but also Mount Etna in Italy, Erta Ale in Ethiopia, and across Iceland, where they are instead dubbed nornahár, or “witches’ hair”. They have even been found produced by underwater volcanoes at depths of up to 4.5 km below sea level. However, Pele’s hair is arguably most associated with Hawaii, from whose religion (not the footballer) the deposits take their name (see box “The legend of Pele”).

Lava fountains and candy floss

Although you might hardly guess it from its fine nature, Pele’s hair has quite the violent birth. It forms when droplets of molten rock are flung into the air from lava fountains, cascades, particularly vigorous flows or even bursting gas bubbles. This material is then stretched out into long threads as the air (or, in some cases, water) quenches them into a volcanic glass. Pele’s hair can be both thicker and finer than its human counterpart, ranging from around 1 to 300 µm thick (Jour. Research US Geol. Survey 5 93). While the strands are typically around 5–15 cm in length, some have been recorded to reach a whopping 2 m long.

Microscope image of Pele's hair

Katryn Wiese – an earth scientist at the College of San Mateo in California – explains that the hairs form in the same way that glass blowers craft their wares. “Melt a silica-rich material like beach sand and as it cools down, blow air through it to elongate it and stretch it out,” she says. Key to the formation of Pele’s hair, Wiese notes, is that the molten lava does not have time to crystallize as it cools. “Pele’s hair is really no different than ash. Ash is basically small beads of microscopic glass, whereas Pele’s hair is a strung-out thin line of glass.”

Go to a funfair and you’ll see this same process at play at the candy floss stall. “Sugar is melted by a heat coil in the centre of a cotton candy machine and then the liquid melted sugar is blown outwards while the device spins,” Wiese explains, to produce “thin threads of liquid that freeze into non-crystalline sugar or glass”.

Just as there is a fine art to spinning cotton candy, so too does the formation of Pele’s hair require very specific conditions to be met. First, the lava has to cool slowly enough so it can stretch out into thin strands. Second, the lava must be sufficiently fluid, rather than being more viscous. That’s why Pele’s hair is only formed by so-called basaltic eruptions, where the magma has a relatively low silica content of around 45–52%.

The composition of the initial lava is also a factor in the colour of the hairs, which can range from a golden yellow to a dark brown. “Hawaiian glasses are classically amber coloured,” notes Wiese. She explains that basalts from Hawaii are primarily made up of silica and aluminium oxides, along with a mix of iron, magnesium and calcium oxides, as well as trace amounts of other elements and gases. “The gases often contribute to oxidation of the elements and can also lead to different colours in the glass – the same process as blown glass in the art world.”

The legend of Pele

the Halema‘uma‘u pit crater of the volcano Kīlauea

Both Pele’s hair and Pele’s tears take their name from the Hawaiian goddess of volcanoes and fire: Pelehonuamea, “She who shapes the sacred land”, who is believed to reside beneath the summit of the volcano Kīlauea on the Big Island – the current eruptive centre of the Hawaiian hotspot.

Many ancient legends of Pele depict the deity as having a fiery personality. According to one account, it was this temperament that brought her to Hawaii in the first place, having been born on the island of Tahiti. As the story goes, Pele seduced the husband of her sister Nāmaka, the water goddess. This led to a fight between the siblings that proved the final straw for their father, who sent Pele into exile.

Accepting a great canoe from her brother, the king of the sharks, Pele voyaged across the seas – trying to light her fires on every island she reached – pursued by the vengeful Nāmaka. Mirroring how the Hawaiian islands were erupted in sequence as the Earth’s crust moved relative to the underlying hotspot, Pele moved along the chain repeatedly trying to dig a fiery crater in which to live, only for each to be extinguished by Nāmaka.

The pair had their final confrontation on Maui, with Nāmaka defeating Pele and tearing her apart at the hill known today as Ka Iwi o Pele – “the bones of Pele”. Her spirit, meanwhile, flew to Kīlauea, finding its eternal home in the Halema‘uma‘u pit crater.

Tears and hairs – volcanic insights

Another important factor in the formation of Pele’s hair is the velocity at which magma is “spurted” out during an eruption, according to Japanese volcanologist Daisuke Shimozuru, who was studying Pele’s hair and tears in the 1990s.

Based on experiments involving jets of ink released from a nozzle at different speeds, Shimozuru concluded that thread-like expulsions like Pele’s hair are only formed when the eruption velocity is sufficiently high (Bulletin of Volcanology 56 217). At lower speeds, the molten material is instead quenched without being stretched, forming glassy droplets, referred to as Pele’s tears, sometimes with a hair or two attached.

Two black glass beads on a person's hand

According to Kenna Rubin – a volcanologist at the University of Rhode Island – studying the shape of these black globules can shine a light on the properties of the lava that formed them. They can provide information not only about the ejection speed, but also related parameters such as the temperature, viscosity and the distance they travelled in the atmosphere before solidifying.

Furthermore, the tears can preserve tiny bubbles of volcanic gases within themselves, trapped in cavities known as “vesicles”. Analysing these gases can reveal many details of the chemical composition of the magma that released them. These can be a useful tool to shine a light on the exact nature of the hazard posed by such eruptions.

In a similar fashion, Pele’s hair can also offer valuable insights to volcanologists about the nature of the eruptions that formed them – thereby helping to inform models of the hazards that future volcanoes may pose to nearby life and property.

Window within, and to the past

“Pele’s hair and tears are a subset of the pantheon of particles ejected by a volcano when they erupt,” notes Rubin. By examining the particles that come out over time, as well as studying the geophysical activity at a volcano, such as seismicity and gas ejection, researchers “can then make inferences about the conditions that were extant in past eruptions”. In turn, she adds, “This allows us to look at old eruption deposits that we didn’t witness erupting, and infer the same kinds of conditions.”

While Pele’s hair and tears are both relatively rare volcanic products, when they do exist they can help to constrain the eruption conditions – offering a window into not only recent but also past eruptions when so-called “fossil” samples have been preserved.

A lava lake on Volcan Masaya

Alongside the composition of the glasses (and any trapped gases within such), the shape of hairs and tears can shine a light on the various forces that affected them as they were flying through the air cooling. In fact, the presence of the hair around a volcano is itself a sign that the lava is of the least viscous type, and is undergoing some form of fountaining or bubbling.

There are, of course, many other types of material or fragments of rock that get ejected into the air when volcanoes erupt. But the great thing about Pele’s hair is that, having cooled from lava to a glass, it represents the lava’s bulk composition. As Wiese notes, “We can quickly determine the composition of the lavas that are erupting from just a single sample.”

For example, Mather collected samples of Pele’s hair from Masaya during a 2001 return visit to her cherished Nicaraguan haunt, enabling Mather and her colleagues to determine the composition of the lava erupting from Masaya’s vent in terms of both major elements and lead isotopes (Journal of Atmospheric Chemistry 46 207; Atmospheric Environment 37 4453). As Mather says, “With other measurements we can think about how this composition changes with time and also compare it with the gas and particles that are dispersed in the plume.”

Pele’s curse

Drift of Pele's hair on a rock

There is an urban legend on the islands that anything native to Hawaii – whether it be sand, rock or even volcanic glass – cannot be removed without being cursed by Pele herself. Despite invoking Hawaii’s ancient volcano goddess, the myth is believed to actually be quite recent in origin. According to one account, it was dreamt up by a park ranger frustrated by tourists taking rocks from the island as souvenirs. Another attributes it to tour drivers who tired of tourists bringing said rocks onto their buses and leaving dirt behind.

Either way, the story has taken hold as if it were an ancient Hawaiian taboo, one that some take extremely seriously. Volcanologist Kenna Rubin, for one, often receives returned rocks at her office at the University of Hawaii. “Tourists and visitors find my contact details online and return the lava rocks, or Pele’s hair,” she explains. “They apologise for taking the items as they feel they have been cursed by the goddess.”

The legend of Pele’s curse may be fictitious, but the hazards presented by Pele’s hair are very real, both to the unwitting visitor to Hawaii, and also the state’s permanent residents. Like fibreglass – which the hairs closely resemble – broken slivers of the hair can gain sharp ends that easily puncture the skin (or, worse, the eye) and break into smaller pieces as people try to remove them.

Not only can an active lava lake produce enough of the hair to carpet the surrounding area, but strands are easily picked up by the wind. From Kīlauea Volcano, for example, the US Geological Survey notes that prevailing winds tend to blow much of the Pele’s hair that is produced south to the Ka‘ū Desert, where it builds up in drifts against gully walls (see photo). In fact, hairs have been known to be carried up to tens of kilometres from the originating volcanic vent – and it is not uncommon on Hawaii to find Pele’s hair snagged on trees, utility poles and the like.

Hair in the catchment

Wind-blown Pele’s hair also poses a threat to the many locals who collect rainwater for drinking. “As ash, laze [“lava haze” – a mix of glass shards and acid released when basaltic lava enters the ocean] and Pele’s hair have been found to contain various metals and are hazardous to ingest, catchment users should avoid accumulating it in their water tanks,” the Hawaii State Department of Health advises in the event of volcanic activity.

However, even though Pele’s hair has the potential to harm humans, there are some residents of Hawaii who do benefit from it – birds. Collecting the strands like the bits of straw they resemble, our avian friends have been known to use the volcanic deposits to feather their nests; in fact, one made entirely from Pele’s hair has been preserved for posterity in the collections of the Hawaii Volcanoes National Park.

Pele’s tears can also serve as a proxy for the severity of eruptions. In a study published this March, geologist Scott Moyer and environmental scientist Dork Sahagian showed that the diameter of vesicles preserved in Pele’s tears from Hawaii is related to the height of the lava fountains that formed them (Frontiers in Earth Science 12 10.3389/feart.2024.1379985). Fountain height, in turn, is constrained by the separated gas content of the source magma, which controls eruption intensity.

It’s clear that Pele’s hair and tears are far more than a beautiful natural curiosity. Thanks to the tools and techniques of geoscience, we can use them to unravel the mysteries of Earth’s hidden interior.

The post Pele’s hair-raising physics: glassy gifts from a volcano goddess appeared first on Physics World.

]]>
Feature Volcanic hairs and tears reveal a wealth of information about what lies within lava https://physicsworld.com/wp-content/uploads/2024/10/2024-10-Randall-Peles-hair-66303979-Shutterstock_MarcelClemens.jpg newsletter
John Hopfield and Geoffrey Hinton share the 2024 Nobel Prize for Physics https://physicsworld.com/a/john-hopfield-and-geoffrey-hinton-share-the-2024-nobel-prize-for-physics/ Tue, 08 Oct 2024 09:45:44 +0000 https://physicsworld.com/?p=116908 Duo win for their work on machine learning

The post John Hopfield and Geoffrey Hinton share the 2024 Nobel Prize for Physics appeared first on Physics World.

]]>
John Hopfield and Geoffrey Hinton share the 2024 Nobel Prize for Physics for their “foundational discoveries and inventions that enable machine learning and artificial neural networks”. Known to some as the “godfather of artificial intelligence (AI)”, Hinton, 76, is currently based at the University of Toronto in Canada. Hopfield, 91, is at Princeton University in the US.

Ellen Moons from Karlstad University, who chairs the Nobel Committee for Physics, said at today’s announcement in Stockholm: “This year’s laureates used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets. These artificial neural networks have been used to advance research across physics topics as diverse as particle physics, materials science and astrophysics.”

Speaking on the telephone after the prize was announced, Hinton said, “I’m flabbergasted. I had no idea this would happen. I’m very surprised”. He added that machine learning and artificial intelligence will have a huge influence on society that will be comparable to the industrial revolution. However, he pointed out that there could be danger ahead because “we have no experience dealing with things that are smarter than us.”

“Two kinds of regret”

Hinton admitted that he does have some regrets about his work in the field. “There’s two kinds of regret. There’s regrets where you feel guilty because you did something you knew you shouldn’t have done. And then there are regrets where you did something that you would do again in the same circumstance but it may in the end not turn out well. That second kind of regret I have. I am worried the overall consequence of this might be systems more intelligent than us that eventually take control.”

Hinton spoke to the Nobel press conference from the West Coast of the US, where it was about 3 a.m. “I’m in a cheap hotel in California that doesn’t have a very good Internet connection. I was going to get an MRI scan today but I think I’ll have to cancel it.”

Hopfield began his career as a condensed-matter physicist before making the shift to neuroscience. In a 2014 perspective article for the journal Physical Biology called “Two cultures? Experiences at the physics–biology interface”, Hopfield wrote, “Mathematical theory had great predictive power in physics, but very little in biology. As a result, mathematics is considered the language of the physics paradigm, a language in which most biologists could remain illiterate.” Hopfield saw this as an opportunity because the physics paradigm “brings refreshing attitudes and a different choice of problems to the interface”. However, he was not without his critics in the biology community and wrote that one must “have a thick skin”.

In the early 1980s, Hopfield developed his eponymous network, which can be used to store patterns and then retrieve them using incomplete information. This is called associative memory and an analogue in human cognition would be recalling a word when you only know the context and maybe the first letter or two.

Different types of network

A Hopfield network is a layer of neurons (or nodes) that are all connected together such that the state, 0 or 1, of each node is affected by the states of its neighbours (see above). This is similar to how magnetic materials are modelled by physicists – and a Hopfield network is reminiscent of a spin glass.

When an image is fed into the network, the strengths of the connections between nodes are adjusted and the image is stored in a low-energy state. This minimization process is essentially learning. When an imperfect version of the same image is input, it is subject to an energy-minimization process that will flip the values of some of the nodes until the two images resemble each other. What is more, several images can be stored in a Hopfield network, which can usually differentiate between all of them. Later networks used nodes that could take on more than two values, allowing more complex images to be stored and retrieved. As the networks improved, ever more subtle differences between images could be detected.
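For readers who like to see the idea in code, here is a minimal Hopfield-network sketch in Python. It uses the common ±1 convention for node states rather than the 0/1 values mentioned above: patterns are stored with a simple Hebbian rule, and recall proceeds by flipping one node at a time so that the network energy never increases.

```python
import numpy as np

def train(patterns):
    """Build a symmetric Hebbian weight matrix from rows of +/-1 patterns."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0)          # no self-connections
    return w

def recall(w, state, steps=200):
    """Asynchronous updates: each accepted flip lowers (or keeps) the energy."""
    state = state.copy()
    rng = np.random.default_rng(1)
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if w[i] @ state >= 0 else -1
    return state

stored = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                   [ 1,  1,  1,  1, -1, -1, -1, -1]])
w = train(stored)

noisy = stored[0].copy()
noisy[:2] *= -1                     # corrupt two of the eight nodes
print("recovered first pattern:", np.array_equal(recall(w, noisy), stored[0]))
```

With two stored patterns of eight nodes each, corrupting a couple of nodes and letting the network relax almost always recovers the original pattern – the associative memory described above.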

A little later on in the 1980s, Hinton was exploring how algorithms could be used to process patterns in the same way as the human brain. Using a simple Hopfield network as a starting point, he and a colleague borrowed ideas from statistical physics to develop a Boltzmann machine. It is so named because it works in analogy with the Boltzmann distribution, which says that some states are more probable than others based on the energy of a system.

A Boltzmann machine typically has two connected layers of nodes – a visible layer that is the interface for inputting and outputting information, and a hidden layer. A Boltzmann machine can be generative – if it is trained on a set of similar images, it can produce a new and original image that is similar. The machine can also learn to categorise images. It was realized that the performance of a Boltzmann machine could be enhanced by eliminating connections between some nodes, creating “restricted Boltzmann machines”.
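The restricted architecture has a practical payoff: because connections run only between the visible and hidden layers, each whole layer can be sampled in one vectorized step given the other. The Python sketch below performs a single Gibbs-sampling sweep in a toy restricted Boltzmann machine with random, untrained weights; training (for example by contrastive divergence) is not shown, and the parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 3

# Random, untrained parameters - placeholders only
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_vis = np.zeros(n_visible)
b_hid = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    p = sigmoid(v @ W + b_hid)        # P(h_j = 1 | v)
    return (rng.random(n_hidden) < p).astype(float)

def sample_visible(h):
    p = sigmoid(h @ W.T + b_vis)      # P(v_i = 1 | h)
    return (rng.random(n_visible) < p).astype(float)

v0 = rng.integers(0, 2, n_visible).astype(float)   # a binary "data" vector
h0 = sample_hidden(v0)                             # infer hidden units
v1 = sample_visible(h0)                            # reconstruct visible layer
print("input:         ", v0)
print("reconstruction:", v1)
```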

Hopfield networks and Boltzmann machines laid the foundations for the development of later machine learning and artificial-intelligence technologies – some of which we use today.

A life in science

Diagram showing the brain’s neural network and an artificial neural network

Born on 6 December 1947 in London, UK, Hinton graduated with a degree in experimental psychology in 1970 from Cambridge University before doing a PhD on AI at the University of Edinburgh, which he completed in 1975. After a spell at the University of Sussex, Hinton moved to the University of California, San Diego, in 1978, before going to Carnegie Mellon University in 1982 and Toronto in 1987.

After becoming a founding director of the Gatsby Computational Neuroscience Unit at University College London in 1998, Hinton returned to Toronto in 2001 where he has remained since. From 2014, Hinton divided his time between Toronto and Google but then resigned from Google in 2023 “to freely speak out about the risks of AI.”

Elected as a Fellow of the Royal Society in 1998, Hinton has won many other awards, including the inaugural David E Rumelhart Prize in 2001 for the application of the backpropagation algorithm and Boltzmann machines. He also won the IEEE/Royal Society of Edinburgh James Clerk Maxwell Medal in 2016 and the Turing Award from the Association for Computing Machinery in 2018.

Hopfield was born on 15 July 1933 in Chicago, Illinois. After receiving a degree from Swarthmore College in 1954, he completed a PhD in physics at Cornell University in 1958. Hopfield then spent two years at Bell Labs before moving to the University of California, Berkeley, in 1961.

In 1964 Hopfield went to Princeton University and then in 1980 moved to the California Institute of Technology. He returned to Princeton in 1997 where he remained for the rest of his career.

As well as the Nobel prize, Hopfield won the 2001 Dirac Medal from the International Centre for Theoretical Physics as well as the 2005 Albert Einstein World Award of Science. He also served as president of the American Physical Society in 2006.

  • Two papers written by this year’s physics laureates in journals published by IOP Publishing, which publishes Physics World, can be read here.
  • The Institute of Physics, which publishes Physics World, is running a survey gauging the views of the physics community on AI and physics till the end of this month. Click here to take part.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post John Hopfield and Geoffrey Hinton share the 2024 Nobel Prize for Physics appeared first on Physics World.

]]>
News Duo win for their work on machine learning https://physicsworld.com/wp-content/uploads/2024/10/00001-NOBEL-Physics-2024-new.jpg
Roger Penrose: the Nobel laureate with a preference for transparencies over slideshows https://physicsworld.com/a/roger-penrose-the-nobel-laureate-with-a-preference-for-transparencies-over-slideshows/ Tue, 08 Oct 2024 09:34:26 +0000 https://physicsworld.com/?p=117289 Tushna Commissariat recounts a fascinating chat with Roger Penrose

The post Roger Penrose: the Nobel laureate with a preference for transparencies over slideshows appeared first on Physics World.

]]>

 

As a young physics student, I spent the summer of 2004 toting around Roger Penrose’s The Road to Reality: A Complete Guide to the Laws of the Universe. It was one of the most challenging popular-science books I had ever come across, and I, like many others, was intrigued by Penrose’s treatise and his particular ideas about our cosmos. So I must admit that nearly a decade later, when I had the opportunity to meet the man himself at a 2015 conference hosted by Queen Mary University London, I was still somewhat starstruck.

The conference in question was one celebrating “Einstein’s Legacy: Celebrating 100 years of General Relativity”, and included scientists, writers and journalists who gave talks on everything from the “physiology of GR” to light cones and black holes. Penrose was one of the plenary speakers on a Saturday evening and I was promptly amused when he began his talk on “Light cones, black holes, infinity and beyond”, with a rather beautiful if extremely old-school transparency. Those who had attended his talks before (and indeed even to this day) already knew of this particular habit, as Penrose famously dislikes slides and prefers to give his talks with his own hand-drawn colourful sketches – in fact, I’ve never seen quite such a colourful black hole! In my blog from 2015, I described the talk as “equal parts complex, intriguing and amusing”, and I recall thoroughly enjoying it.

As any good science journalist would, I attempted to speak with him after the talk, but he was absolutely mobbed by the many students and other enthusiastic scientists at the event. So I decided to bide my time and attempt to catch him at the dinner afterwards, where he again held court with all the QMUL students who hung on his every word. It was only after 10 p.m. that I managed to get him alone to interview him. My colleague and I set up a camera in a quiet classroom and as we asked Penrose our first question on cosmology, a deep rumbling sound took over the room – the District and Hammersmith & City tube lines run past most of the classrooms on the campus.

We spent most of the interview stopping and starting and attempting to perfectly time when the next tube would rumble past. Penrose was extremely patient despite how late it was, and the fact that he had been talking for hours already. The many interruptions to filming did mean that we had the chance to chat casually with him, and though I cannot recall the exact details, the conversation was equal parts fascinating and rambling, as we went off on many tangents.

You can watch the final version of my interview with Penrose above, to learn more about who inspired him, his views on the future of cosmology, and how his career-long interest in black holes – which won him the 2020 Nobel prize – first began.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Roger Penrose: the Nobel laureate with a preference for transparencies over slideshows appeared first on Physics World.

]]>
Blog Tushna Commissariat recounts a fascinating chat with Roger Penrose https://physicsworld.com/wp-content/uploads/2020/10/Penrose-home-pic.jpg
Laureates on film: Nobel winners who have graced our silver screen https://physicsworld.com/a/laureates-on-film-nobel-winners-who-have-graced-our-silver-screen/ Tue, 08 Oct 2024 09:00:38 +0000 https://physicsworld.com/?p=116907 Chatting with Frank Wilczek and Albert Fert

The post Laureates on film: Nobel winners who have graced our silver screen appeared first on Physics World.

]]>

One of the benefits of working at Physics World is that you get to meet some of the world’s best and brightest physicists – some of whom are Nobel laureates and some who could very well be among this year’s winners.

For years I attended the March Meeting of the American Physical Society – a gathering of upwards of 10,000 physicists where you are sure to bump into a Nobel laureate or two. At the 2011 meeting in Dallas I had the pleasure of interviewing MIT’s Frank Wilczek, who shared the 2004 prize with David Gross and David Politzer for “the discovery of asymptotic freedom in the theory of the strong interaction”.

But instead of looking back on his work on quarks and gluons, Wilczek was keen to chat about the physics of superconductivity and its wide-reaching influence on theoretical physics. You can watch a video of that interview above or here: “Superconductivity: a far-reaching theory”.

Amusing innuendo

Wilczek was a lovely guy and I was really pleased four years later when he recognized me at the Royal Society in London. We were both admiring portraits of fellows, and the amusing innuendo found in one of the picture captions. On a more serious note, we were both there for a celebration of Maxwell’s equations and you can read more about the event here: “A great day out in celebration of Maxwell’s equations”.

Also at that event in London were John Pendry of nearby Imperial College London and Harvard University’s Federico Capasso – who are both on our list of people who could win this year’s Nobel prize. Pendry is a pioneer in the mathematics that describes how metamaterials can be used to manipulate light in weird and wonderful ways – and Capasso has spent much of his career making such metamaterials in the lab, and commercially.

The Royal Society was also where I recorded a video interview with Albert Fert, who shared the 2007 prize with Peter Grünberg for their work on giant magnetoresistance (watch below). A decade or so earlier, I had completed a PhD on ultrathin magnetic materials, so I was very happy to hear that two pioneers of the field had been honoured.

In the interview, Fert looks to the future of spintronics. This is an emerging field in which the magnetic spin of materials is used to store and transport information – potentially using much less energy than conventional electronics.

I recorded a second video interview that day with David Awschalom, now at the University of Chicago. He is a pioneer in spintronics and much of his work is now focused on using spins for quantum computing. Another potential Nobel laureate perhaps?

We don’t do video interviews anymore – instead we chat with people on our podcasts. As you can see from our videos, I really struggled with the medium. The laureates, however, were real pros!

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Laureates on film: Nobel winners who have graced our silver screen appeared first on Physics World.

]]>
Blog Chatting with Frank Wilczek and Albert Fert https://physicsworld.com/wp-content/uploads/2024/10/Albert-Fert.jpg
How to rotate your mattress like a physics Nobel prizewinner https://physicsworld.com/a/how-to-rotate-your-mattress-like-a-physics-nobel-prizewinner/ Mon, 07 Oct 2024 17:00:37 +0000 https://physicsworld.com/?p=117281 A tongue-in-cheek e-mail exchange with 1973 Nobel Prize winner Brian Josephson shows that for some laureates, scientific rigour extends to ordinary life, too

The post How to rotate your mattress like a physics Nobel prizewinner appeared first on Physics World.

]]>
Amid the hype of Nobel Prize week, it’s important to remember that in many respects, Nobel laureates are just like the rest of us. They wake up and get dressed. They eat. They go about their daily lives. And when it’s time for bed, they lie down on mattresses that have been rotated scientifically, according to the principles of symmetry group theory.

Well, Brian Josephson does, anyway.

In the early 1960s, Josephson – then a PhD student in theoretical condensed matter physics at the University of Cambridge, UK – predicted that a superconducting current should be able to tunnel through an insulating junction even when there is no voltage across it. He also predicted that if a voltage is applied, the current will oscillate at a well-defined frequency. These predictions were soon verified experimentally, and in 1973 he received a half share of the Nobel Prize for Physics (Ivar Giaever and Leo Esaki, who did experimental research on quantum tunnelling in superconductors and semiconductors respectively, got the other half).

Subsequent work has borne out the importance of Josephson’s discovery. “Josephson junctions” are integral to instruments called SQUIDs (superconducting quantum interference devices) that measure magnetic fields with exquisite sensitivity. More recently, they’ve become the foundation for superconducting qubits, which drive many of today’s quantum computers.

Josephson himself, however, lost interest in theoretical condensed-matter physics. Instead, he has devoted most of his post-PhD career to the physics of consciousness, researching topics such as telepathy and psychokinesis under the auspices of the Mind-Matter Unification Project he founded.

An unusual scientific paper

Josephson’s later work hasn’t attracted much support from his fellow physicists. Still, he remains an active member of the community and, incidentally, a semi-regular contributor to Physics World’s postbag. It was in this context that I learned of his work on the pressing domestic dilemma of mattress rotation.

In December 2014, Josephson responded to a call for submissions to Physics World’s Lateral Thoughts column of humorous essays with a brief but tantalizing message. “What a pity my ‘Group Theory and the Art of Mattress Turning’ is too short for this,” he wrote. This document, Josephson explained, describes “the order-4 symmetry group of a mattress, and how an alternating sequence of the two easiest non-trivial group operations…takes you in sequence through all four mattress orientations, thereby preserving as much as possible the symmetry of the mattress under perturbations by sleepers [and] enhancing its lifetime.”

At the time, I had only recently purchased my first mattress, and I was keen to avoid shelling out for another any time soon. I therefore asked for more details. Within days, Josephson kindly replied with a copy of his mock paper, in the form of a scanned “cribsheet” which, he explained, lives under the mattress in the home he shares with his wife.

An argument from symmetry

Like all good scientific papers, Josephson’s “Group Theory and the Art of Mattress Turning” begins with a summary of the problem. “A mattress may be laid down on a bed in four different orientations,” it states. “For maximum life it should be cycled regularly through these four orientations. How may a mattress user ensure that this be done?”

The paper goes on to observe that the symmetry group of a mattress (that is, the collection of all transformations under which it is mathematically invariant) contains four elements. The first element is the identity transformation, which leaves the mattress’ orientation unchanged. The other three elements are rotations about the mattress’ axes of symmetry. Listed “in order of increasing physical effort required to perform”, these rotations are:

  • V, rotation by π (180 degrees) about a vertical axis (that is, keeping the mattress flat and spinning it around so that the erstwhile head area is at the feet)
  • L, rotation by π about the longer axis of symmetry (that is, flipping the mattress from the side of the bed, such that the head and foot ends remain in the same position relative to the bed, but the mattress is now upside down)
  • S, rotation by π about the shorter axis of symmetry (that is, flipping the mattress from the end of the bed, such that the head and foot ends swap places while the mattress is simultaneously turned upside down)

“Ideally, S should be avoided in order to minimize effort”, the paper continues. Fortunately, there is a solution: “It is easily seen that alternate applications of V and L will cause the mattress to go through all ‘proper’ orientations relative to the bed, in a cycle of order 4. The following algorithm will achieve this in practice: Odd months, rotate about the lOng axis. eVen months, rotate about the Vertical axis.” In case this isn’t memorable enough, the paper advises that “potential users of this algorithm may find it helpful to write it down on a piece of paper which should be slipped under the mattress for retrieval later when it may have been forgotten”.
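
For readers who want to check the claim for themselves, the cycle is easy to verify computationally. The short Python sketch below is our illustration, not part of Josephson's cribsheet: it encodes each orientation as a pair of booleans (which end of the bed the original head area is at, and which face is up) and confirms that alternating L and V rotations visit all four orientations before repeating.

```python
# A minimal sketch of the mattress symmetry group (the Klein four-group).
# An orientation is (head_area_at_head_of_bed, label_side_up).

def V(o):  # rotate 180 degrees about the vertical axis: head and foot swap, same face up
    return (not o[0], o[1])

def L(o):  # rotate 180 degrees about the long axis: same head end, mattress turned over
    return (o[0], not o[1])

orientation = (True, True)        # starting orientation
visited = [orientation]
for move in [L, V, L, V]:         # odd months: long axis; even months: vertical axis
    orientation = move(orientation)
    visited.append(orientation)

print(visited)
# [(True, True), (True, False), (False, False), (False, True), (True, True)]
# All four orientations appear before the sequence repeats, as the paper claims;
# composing L and V gives S, so the strenuous short-axis flip is never needed.
```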

A challenging problem

The paper concludes, as per convention, with an acknowledgement section and a list of references. In the former, Josephson thanks “cj” – presumably his wife, Carol – “for bringing this challenging problem to my attention”. The latter contains a single citation, to a “Theory of Compliant Mattress Group lecture notes on applications of group theory” supposedly produced by Josephson’s office-mate Volker Heine.

The most endearing part of the paper, though, is the area below the references in the scanned cribsheet. This contains extensive handwritten notes on months and rotations, strongly suggesting that Josephson does, in fact, rotate his mattress according to the above-outlined principles. Indeed, in a postscript to his e-mail, Josephson noted that he and Carol recently had to modify the algorithm in response to a change in experimental conditions, namely the purchase of “a very flexible foam mattress”. This, he observed, “makes S rotations easier than L rotations, so we use that instead”.

I wish I could say that I adopted this method of mattress rotation in my own domestic life. Alas, my housekeeping is not up to Nobel laureate standards: I rotate my mattress approximately once a season, not once a month as the algorithm requires. However, whenever I do get round to it, I always think of Brian Josephson, the unconventional Nobel laureate whose tongue-in-cheek determination to apply physics to his daily life never fails to make me smile.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

 

The post How to rotate your mattress like a physics Nobel prizewinner appeared first on Physics World.

]]>
Blog A tongue-in-cheek e-mail exchange with 1973 Nobel Prize winner Brian Josephson shows that for some laureates, scientific rigour extends to ordinary life, too https://physicsworld.com/wp-content/uploads/2024/10/Mattress.jpg
European Space Agency launches Hera mission to investigate asteroid ‘crash-scene’ https://physicsworld.com/a/european-space-agency-launches-hera-mission-to-investigate-asteroid-crash-scene/ Mon, 07 Oct 2024 14:53:21 +0000 https://physicsworld.com/?p=117278 Hera will perform a close-up examination of a 2022 impact on Dimorphos by NASA's DART mission

The post European Space Agency launches Hera mission to investigate asteroid ‘crash-scene’ appeared first on Physics World.

]]>
The European Space Agency (ESA) has launched a €360m mission to perform a close-up “crash-scene” investigation of the 150 m-diameter asteroid Dimorphos, which was purposely hit by a NASA probe in 2022. Hera took off aboard a SpaceX Falcon 9 rocket from Cape Canaveral at 10:52 local time. The mission should reach the asteroid in December 2026.

On 26 September 2022, NASA confirmed that its $330m Double Asteroid Redirection Test (DART) mission successfully demonstrated “kinetic impact” by hitting Dimorphos at a speed of 6.1 km/s. This resulted in the asteroid being put on a slightly different orbit around its companion body – a 780 m-diameter asteroid called “Didymos”.

A month later in October, NASA confirmed that DART had shortened Dimorphos’ orbit by about 32 minutes, from 11 hours and 55 minutes to 11 hours and 23 minutes. This was some 25 times greater than the 73 seconds NASA had defined as a minimum successful orbit period change. Much of the momentum change came from the ejecta liberated by the impact, including a plume of debris that extended more than 10 000 km into space.

Mars flyby

The Hera mission, which has 12 instruments including cameras and thermal-infrared imagers, will perform a detailed post-impact survey of Dimorphos. This will involve measuring its size, shape, mass and orbit more precisely than has been possible to date with follow-up measurements from ground- and space-based observatories, including the Hubble Space Telescope.

It is hoped that Hera will be able to approach to within 200 m of the surface of Dimorphos, delivering an imaging resolution of 2 cm over certain sections.

Part of the Hera mission involves releasing two cubesats – each the size of a shoebox – that will also have imagers and radar on board. They will examine Dimorphos’ internal structure to determine whether it is a rubble pile or has a solid core surrounded by layers of boulders.

The cubesats will also attempt to land on the asteroid, with one measuring its gravitational field. They are also technology demonstrators, testing deep-space communication between themselves and Hera.

Once Hera’s mission is complete about six months after arrival at Dimorphos, it may also attempt to land on the asteroid, although a decision to do so has not yet been made.

On its way to Dimorphos, Hera will next year carry out a “swingby” of Mars and a flyby of the Martian moon Deimos.

The post European Space Agency launches Hera mission to investigate asteroid ‘crash-scene’ appeared first on Physics World.

]]>
News Hera will perform a close-up examination of a 2022 impact on Dimorphos by NASA's DART mission https://physicsworld.com/wp-content/uploads/2024/10/Last_view_of_Hera_spacecraft-small.jpg
Use our infographic to predict this year’s Nobel prize winners https://physicsworld.com/a/use-our-infographic-to-predict-this-years-nobel-prize-winners/ Mon, 07 Oct 2024 14:00:18 +0000 https://physicsworld.com/?p=116905 We are expecting a prize in condensed-matter physics in 2024

The post Use our infographic to predict this year’s Nobel prize winners appeared first on Physics World.

]]>
PW Nobel Infographic

Part of the fun of the run-up to the announcement of the Nobel Prize for Physics is the speculation – serious, silly or otherwise – of who will be this year’s winner(s). Here at Physics World, we don’t shy away from making predictions but our track record is not particularly good.

That’s not surprising, because the process of choosing Nobel winners is highly secretive and we know nothing about who has been nominated for this year’s prize. That’s thanks to the 50-year embargo on all information related to the decision.

The 2024 prize will be announced tomorrow and if you would like to know more about how the Nobel Committee for Physics operates, check out this article that’s based on an interview with a former committee chair: “Inside the Nobels: Lars Brink reveals how the world’s top physics prize is awarded”.

Charting history

Several years ago we created an infographic that charts the history of the Nobel Prize for Physics in terms of the discipline of the winning work (see figure). For example, last year the prize was shared by Pierre Agostini, Ferenc Krausz and Anne L’Huillier for their pioneering work using attosecond laser pulses to study the behaviour of electrons. We categorized this prize as “atomic, molecular and optical” and you can see that prize at the top of the infographic, connected to its category by a darkish blue line.

As well as revealing which disciplines of physics have received the most attention from successive Nobel committees, the infographic also shows that some disciplines fall in and out of favour, while others have produced a steady stream of winners over the past 12 decades. The infographic shows, for example, the return of quantum physics to the Nobel realm. The discipline was popular with the Nobel committee in the 1910s–1950s and then fell completely out of favour until 2012.

Another thing that is apparent from the infographic is that after about 1990 there tend to be well-defined gaps between disciplines. And for no good scientific reason, we have decided that we can analyse these gaps and use the results to make predictions!

Partially correct

Last year, we noticed that atomic, molecular and optical physics was due a prize. That observation, in part, led us to predict that Paul Corkum, Ferenc Krausz and Anne L’Huillier would win in 2023. This partially correct prediction has emboldened our faith in the mystical ability of our infographic to help predict winners.

So what does that mean for our predictions for this year?

The infographic makes it clear that we are overdue a prize in condensed-matter physics. Some possibilities that we have identified include magic-angle graphene and metamaterials.

So tune into Physics World tomorrow and find out if we are right.

 

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

 

The post Use our infographic to predict this year’s Nobel prize winners appeared first on Physics World.

]]>
Blog We are expecting a prize in condensed-matter physics in 2024 https://physicsworld.com/wp-content/uploads/2024/10/PW-Nobel-Infographic-list.jpg
To boost battery recycling, keep a close eye on the data https://physicsworld.com/a/to-boost-battery-recycling-keep-a-close-eye-on-the-data/ Mon, 07 Oct 2024 10:20:09 +0000 https://physicsworld.com/?p=117140 Real-time analysis can drive improvements that benefit manufacturers as well as the environment, says Kalle Blomberg

The post To boost battery recycling, keep a close eye on the data appeared first on Physics World.

]]>
How did Sensmet get started?

The initial idea to build an online system that uses a micro-plasma to analyse metals in liquids came from Toni Laurila, who is now Sensmet’s co-founder and CEO. He got the idea during his post-doctoral studies at the University of Cambridge, UK, and after he returned to Finland, we started to develop an online instrument for industrial process and environmental applications.

Typically, if you need to measure metals in liquid – whether it’s wastewater, industrial process water or natural bodies like rivers and lakes – you collect a sample and send it to a laboratory for analysis. Depending on the lab, it might take up to several days to get the results. If you need to control a process based on such outdated data, it’s like trying to drive your car while solely relying on a rearview mirror where the image is 4‒10 hours old. By the time you see what’s happening, you’ve already veered off the road.

We saw that we can do for liquid monitoring what other companies did for online gas monitoring around 30 years ago, when the regulations started changing in a way that meant practically all gaseous emissions needed to be monitored in real time. We believe this will also be the future for liquids.

What kinds of liquids are you analysing?

Regulations on real-time monitoring of liquids are going to come at some point, and we believe that our technology will make that possible, but it has not happened yet. This means that for now, we are focusing on analysing liquids involved in industrial processes, because that is an area where we can give customers a return on their investment.

A good example is the battery industry, which is growing rapidly due to the popularity of electric cars. This is driving huge demand for lithium and other metals. If we want to produce enough electric cars to reduce emissions from petrol and diesel vehicles, we can’t do it just by mining new metals. The recycling rate for old batteries also needs to rise.

How does battery recycling work, and how do Sensmet’s analysers help?

Typically, you take the end-of-life battery from an electric car and shred it into very fine particles to create what’s known as the black mass. Separating the valuable metals from the black mass then involves a hydrometallurgical process, where the metals are converted into a liquid form, typically by dissolving or leaching them in acids. Once the valuable metals are dissolved, they are extracted from the resulting solution one by one through processes such as solvent extraction or ion exchange.

What makes our analyser particularly well-suited for monitoring this battery recycling process is that we can measure multiple metals simultaneously. This includes light elements such as lithium and sodium that cannot be measured using X-ray fluorescence, a commonly used technique for metals analysis.

Real-time measurement is essential for optimizing the battery metal recycling process. By continuously monitoring the concentrations of key metals such as lithium, manganese, cobalt, nickel, copper, aluminium and calcium, process operators can quickly detect anomalies, enhancing both quality and efficiency. The speed of the processes used to separate elements from the black mass is another critical factor. If you’re having to wait around for a laboratory analysis, you cannot optimize them very well. You’re not getting the rapid, real-time measurements you need to improve your yield, and that can mean increased waste.

Sensmet installation

A clear example is ion exchange columns, which require periodic regeneration as they become saturated. Our analyser monitors the solution from these columns, and when it detects a rise in, say, nickel concentration, the customer knows it’s time to regenerate the column. In these situations, the speed of analysis is crucial for optimizing the production efficiency.
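
In software terms, the regeneration trigger described here can be as simple as watching the outflow concentration against a threshold. The snippet below is purely illustrative – the readings, timing and threshold are invented, and it is not Sensmet's actual control logic:

```python
# Hypothetical breakthrough detection for an ion-exchange column: flag the moment
# the nickel concentration in the column outflow starts to climb.
nickel_mg_per_l = [0.2, 0.3, 0.2, 0.4, 0.3, 0.5, 1.8, 4.6, 9.1]  # made-up readings, one per minute
THRESHOLD = 1.0                                                   # made-up trigger level, mg/l

for minute, reading in enumerate(nickel_mg_per_l):
    if reading > THRESHOLD:
        print(f"t = {minute} min: Ni = {reading} mg/l - column saturated, regenerate now")
        break
else:
    print("Column still capturing nickel - no action needed")
```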

What challenges did you encounter in developing your analyser?

While proving a technology’s effectiveness in the lab is relatively straightforward, developing a product that performs reliably in real-world conditions is much more challenging. Our customers require an analyser that is both robust and reliable in demanding industrial environments, consistently delivering accurate results day after day, year after year.

We also conduct environmental online monitoring of industrial wastewater, which is challenging in Finland, where winter temperatures can drop to –35 °C. To address this, we can house our analyser in a container and use heated measurement lines to transfer the liquid samples, for example from a settling pond.

These harsh conditions and customer requirements are some of the reasons we chose to use a spectrometer from Avantes in our analyser. The way Avantes builds their spectrometers, they are quite robust. If you accidentally hit them a little bit, they maintain their calibration.

What are some other advantages of Avantes spectrometers?

We bought our first spectrometer from them before we spun out of the university in 2017. It was a high-resolution system for plasma research, and it allowed us to do very fast measurements and collect multiple spectra at high speeds. After that, it was easy to choose the next spectrometer from the same manufacturer because we’d already built the programs and controls for our prototype analyser based on it. And we’ve always had very good service from Avantes. When we have faced a problem, they’ve always helped us quickly. That’s very important, especially at the university stage when we were using the spectrometer beyond its regular scope.

What do you know now that you wish you’d known when Sensmet got started?

When we started building our analyser and realized what it could do, we felt like kids in a candy shop surrounded by a million treats.  There is water everywhere, so we believed our technology had universal appeal and expected everyone to adopt it immediately.

As a start-up, focus is everything. You need to concentrate on a specific market and convince those customers that your product is the right fit for them. Only then can you expand to the next market. However, we were young with limited experience, so it took us some time to realize this.

What are you working on now?

Our first product is ready, so our focus is on pushing it to the market. We are working with multiple battery manufacturing companies and mining companies to make ourselves known as a reliable provider of analysers that can really bring significant added value to customer processes.

Kalle Blomberg is the chief technology officer at Sensmet

The post To boost battery recycling, keep a close eye on the data appeared first on Physics World.

]]>
Interview Real-time analysis can drive improvements that benefit manufacturers as well as the environment, says Kalle Blomberg https://physicsworld.com/wp-content/uploads/2024/10/web-Sensmet-environmental-monitoring.jpg newsletter
Fusion, the Web and electric planes: how spin-offs from big science are transforming the world https://physicsworld.com/a/fusion-the-web-and-electric-planes-how-spin-offs-from-big-science-are-transforming-the-world/ Mon, 07 Oct 2024 10:00:41 +0000 https://physicsworld.com/?p=116924 James McKenzie looks at some of the unexpected spin-offs from big science

The post Fusion, the Web and electric planes: how spin-offs from big science are transforming the world appeared first on Physics World.

]]>
With the CERN particle-physics lab turning 70 this year, I’ve been thinking about the impact of big science on business. There are hundreds – if not thousands – of examples I could cite, the most famous being, of course, the World Wide Web. It was devised at CERN in 1989 by the British computer scientist Tim Berners-Lee, who was seeking a way to organize and share the huge amounts of data produced by the lab’s fundamental science experiments.

While the Web wasn’t a spin-off technology as such, it’s hard to think of anything developed with one purpose in mind that’s had such far-reaching applications across the whole of business and society. Indeed, CERN can lay claim to lots of spin-off firms that have pushed the boundaries of technology. Many of those firms specialize in detectors, imaging and sensors, but quite a few are involved in materials, coatings, healthcare and environmental applications.

It would be impossible for me to discuss them all in a short article, but there are lots – and CERN is rather good these days at knowledge transfer. So too are large national labs, such as Harwell and Daresbury in the UK, which have co-ordinated spin-out and knowledge transfer activities supported by UK Research and Innovation. A recent report from the UK government claims that firms spun out from the country’s public sector had raised a total of £5.1bn of investment and created more than 7000 new jobs over the last four decades.

One particularly exciting spin-off from big science is from the burgeoning fusion industry. There are currently about 40 different companies around the world trying to develop commercial fusion-power plants that can serve as a sustainable source of electricity in our quest for net zero. Whilst the sector is making steady progress towards that goal, the associated technology could have some other rather interesting applications too.

Fusion tech

Consider Tokamak Energy, which was founded in 2009 by a group of scientists and researchers at the UK Atomic Energy Authority, making it a spin-out of sorts. The company’s main aim is to build a tokamak fusion plant that could one day deliver electricity to the grid. But over the years it’s also become rather good at making high-temperature superconducting (HTS) magnets, with more than 200 patents to its name.

The company is, for example, working with the US Defense Advanced Research Projects Agency (DARPA) to build a magnetohydrodynamic (MHD) drive. Such a device, which provides propulsion without any moving parts, conjures up visions of the great 1990s movie The Hunt for Red October, where Sean Connery played a Soviet sailor captaining a submarine that can’t be detected by sonar.

In terms of physics, an MHD drive uses electric fields to drive a current through an electrically conducting fluid. A magnetic field applied perpendicular to that current then produces a thrust – the Lorentz force – at 90° to both the electric and magnetic fields, in accordance with the right-hand rule. Back in the 1990s, the Japanese firm Mitsubishi did build a ship – Yamato 1 – powered by a prototype MHD thruster, but with the technology available at the time limiting magnetic fields to just 4 T, the boat only had a top speed of 15 km/h.
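
To see why field strength matters so much, note that the force per unit volume on the fluid is the cross product of the current density and the magnetic field, f = J × B. The rough calculation below is ours, with an assumed round-number current density (not a figure from Mitsubishi or Tokamak Energy), and simply illustrates that the available thrust density scales in direct proportion to the field:

```python
import numpy as np

# Illustrative only: Lorentz force density f = J x B for an MHD thruster,
# comparing the 1990s-era 4 T field with a modern 24 T HTS field.
J = np.array([1.0e5, 0.0, 0.0])        # assumed current density, A/m^2 (along x)

for B_mag in (4.0, 24.0):
    B = np.array([0.0, B_mag, 0.0])    # field applied at right angles to the current (along y)
    f = np.cross(J, B)                 # force density, N/m^3 (along z, by the right-hand rule)
    print(f"B = {B_mag:>4.1f} T  ->  thrust density = {np.linalg.norm(f):.1e} N/m^3")

# The thrust density rises from 4e5 to 2.4e6 N/m^3 - a factor of six,
# in direct proportion to the magnetic field strength.
```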

Since then, however, HTS magnet technology has markedly improved. In 2019, for example, Tokamak Energy announced it had built a magnet that produced a record-breaking 24 T field at 20 K. Based on superconducting barium-copper-oxide tape technology, the magnet is designed to be used in the poloidal field coils of a tokamak fusion device. The copper magnets at the Joint European Torus (JET) fusion facility in the UK, in contrast, produced fields of only around 4 T.

For Tokamak Energy to create such a powerful magnet was quite an achievement, and you can imagine that it could improve MHD performance and open the door to many other applications too. In fact, the company has just launched a new business division called TE Magnetics, focusing on HTS magnet technology. It wants to tap into a market that a recent report from Future Market Insights reckons was worth an astonishing $3.3bn in 2023.

Aircraft advances

David Kingham, co-founder and executive vice-chair of Tokamak Energy, points to applications of HTS magnets in everything from space thrusters and proton-beam therapy to motors and generators for wind turbines and planes. That final application is perhaps the most intriguing as it’s very difficult for non-superconducting motors to achieve the huge power density needed for large aircraft to fly.

If you’re thinking an HTS-powered plane sounds far-fetched, it turns out that Airbus is already on the case, as are many other firms too. Over the last few years, Airbus has been developing prototype motors using this kind of technology that, to me, are a serious contender in the quest for low-carbon air travel. Through its ASCEND programme, the company has already built a 500 kW powertrain featuring an electric motor powered by the current from HTS tape.

Airbus thinks the cryogenics needed to cool the tape could be driven by the liquid hydrogen fuel that would generate the power in a fuel cell. The beauty of superconducting systems is that they’re much more efficient than conventional technology and can deliver huge power densities – pointing the way to lighter and more efficient planes.

There’s obviously a little more work to do before such technology can reach commercial reality. After all, getting today’s city-hopping turboprop planes off the ground using electric power alone would require around 8 MW of power. But what Airbus has done is a promising start – and reliable HTS magnets will be vital for this work to really succeed.

Another company working on the electrification of air transport is Evolito, which was spun out in 2021 by the UK firm YASA. Now owned by Mercedes-Benz, YASA is a pioneer of “axial-flux” electric motors, which have very high power densities yet don’t need to be cooled to cryogenic temperatures. YASA has already worked with Rolls-Royce to develop Spirit of Innovation, which in 2021 claimed the record for the world’s fastest electric plane, clocking a top speed of 623 km/h.

My message is simple: spin-offs and spin-outs are everywhere. So next time you have your head down and are working on something very specific, keep an open mind as to what else it could be used for – it may be more commercially relevant than you think. The applications could be even more than you ever imagined – and if you don’t believe me, just go and ask Tim Berners-Lee.

The post Fusion, the Web and electric planes: how spin-offs from big science are transforming the world appeared first on Physics World.

]]>
Opinion and reviews James McKenzie looks at some of the unexpected spin-offs from big science https://physicsworld.com/wp-content/uploads/2024/09/2024-10-Transactions-Tokamak.jpg newsletter
Heart-on-a-chip reveals impact of spaceflight on cardiac health https://physicsworld.com/a/heart-on-a-chip-reveals-impact-of-spaceflight-on-cardiac-health/ Mon, 07 Oct 2024 08:00:22 +0000 https://physicsworld.com/?p=117225 A heart-on-a-chip platform sent to the International Space Station reveals how 30 days in space alters heart muscle cells

The post Heart-on-a-chip reveals impact of spaceflight on cardiac health appeared first on Physics World.

]]>
Astronauts spending time in the harsh environment of space often experience damaging effects on their health, including a deterioration in heart function. Indeed, the landmark NASA Twins Study found that an astronaut who spent a year on the International Space Station (ISS) had significantly increased cardiac output and reduced arterial pressure compared with his identical twin who remained on Earth. And with missions planned to Mars and beyond, there’s an urgent need to understand how long-duration spaceflight affects the cardiovascular system.

With this aim, a research team headed up at Johns Hopkins University has sent a heart-on-a-chip platform to the International Space Station and investigated the impact of 30 days in space on the cardiac cells within. The findings, reported in the Proceedings of the National Academy of Sciences, could also shed light on the changes in heart structure and function that occur naturally due to ageing.

“I began cardiac research after my own father died of heart disease when I was a senior college student, and my main motivation for studying the effects of spaceflight on cardiac cells stemmed from the striking resemblance between cardiac deterioration in microgravity and the ageing process on Earth,” project leader Deok-Ho Kim tells Physics World. “The ability to counteract the impacts of microgravity on cardiac function will be essential for prolonged duration human spaceflights, and may lead to therapies for ageing hearts on Earth.”

Engineered heart tissues

The heart-on-a-chip platform is based on engineered heart tissues (EHTs), in which heart muscle cells (cardiomyocytes) derived from human-induced pluripotent stem cells are cultured within a hydrogel scaffold. The key advantage of this design over previous studies using 2D cultured cells is its ability to more accurately replicate human cardiac muscle tissue.

“Cells cultured on traditional 2D petri dishes do not behave as they would in the body, whereas our platform provides a physiologically relevant 3D environment that mimics in vivo conditions,” Kim explains.

Inside the platform, the EHTs are mounted between two posts, one of which is flexible and contains a small magnet that moves as the tissue contracts. Small magnetic sensors measure the changes in magnetic flux to determine tissue contraction in real time.

Designed for space

To allow culture of the cardiac cells in microgravity, Kim’s team – primarily postdoctoral fellow Jonathan Tsui – developed custom sealed tissue chambers containing six EHTs. These chambers, along with the magnetic sensors and associated electronics, were housed within a compact plate habitat that required minimal handling to maintain cell viability. “The platform was designed to be easily maintained by astronauts aboard the ISS, an important consideration as crew time is a precious resource,” says Kim.

The tissue chambers were carefully transported by Tsui to the Kennedy Space Center, then launched to the ISS aboard the SpaceX CRS-20 mission in March 2020. The researchers then monitored the function of the cardiac tissues for 30 days in microgravity, using the sensors to automatically detect magnet motion as the cells beat. The raw data were transmitted down from the ISS and converted into force and frequency measurements that provided insight into the contraction strength and beating patterns, respectively.
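
The conversion step is not spelled out here, but conceptually it amounts to treating the flexible post as a calibrated spring and counting contraction peaks. The sketch below makes that idea concrete with invented numbers – the post stiffness, sampling rate and displacement trace are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import find_peaks

# Illustrative conversion of a post-deflection trace into contraction force and beat rate.
fs = 100.0                                  # sampling rate, Hz (assumed)
k = 1.0                                     # effective post stiffness, N/m (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)          # 10 s of data
deflection = 40e-6 * np.clip(np.sin(2 * np.pi * 1.2 * t), 0.0, None)  # fake 1.2 Hz twitches, metres

force = k * deflection                      # Hooke's law: contraction force, newtons
peaks, _ = find_peaks(force, height=0.5 * force.max())
beat_rate = len(peaks) / t[-1]              # beats per second

print(f"peak contraction force ~ {force.max() * 1e6:.0f} uN, beating at ~ {beat_rate:.1f} Hz")
```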

Once the samples were back on Earth, the researchers examined the cardiac tissues during a nine-day recovery period. They compared their findings with results from an identical set of EHTs cultured on Earth for the same duration.

Cardiac impact

After 12 days on the ISS, the EHTs exhibited a significant decrease in contraction strength compared with both baseline values and the control EHTs on Earth. This reduction persisted throughout the experiment and during the recovery period on Earth. The cardiac tissues also exhibited increased incidences of arrhythmia (irregular heart rhythm) whilst on the ISS, although this resolved once back on Earth.

At the end of the experiment (day 39), Kim and colleagues examined the cardiac tissue using transmission electron microscopy. They found that spaceflight caused sarcomeres (protein bundles that help muscle cells contract) to become shorter and more disordered – a marker of human heart disease. The changes did not resolve after return to Earth and may be why the cardiac tissues did not regain contraction strength in the recovery period. The team also observed mitochondrial damage in the cells, including fragmentation, swelling and abnormal structural changes.

To further assess the impact of prolonged microgravity, the researchers performed RNA sequencing on the returned tissue samples. They observed up-regulation of genes associated with metabolic disorders, heart failure, oxidative stress and inflammation, as well as down-regulation of genes related to contractility and calcium signalling. Finally, they used in silico modelling to determine that spaceflight-induced oxidative stress and mitochondrial dysfunction were key to the tissue damage and cardiac dysfunction seen in space-flown EHTs.

“By conducting a detailed investigation into cellular changes under real microgravity conditions, we aimed to uncover the mechanisms behind these alterations, potentially leading to therapies that could benefit both astronauts and the ageing population,” says Kim.

Last year, the researchers sent a second batch of EHTs to the ISS to screen drugs that may protect against the effects of low gravity. They are currently analysing the data from these studies. “These results will help us refine the effectiveness of promising drug therapies for our upcoming third mission,” says Kim.

The post Heart-on-a-chip reveals impact of spaceflight on cardiac health appeared first on Physics World.

]]>
Research update A heart-on-a-chip platform sent to the International Space Station reveals how 30 days in space alters heart muscle cells https://physicsworld.com/wp-content/uploads/2024/10/7-10-24-Heart-on-a-chip-Tsui-Countryman.jpg newsletter1
Study finds preschool children form ‘social droplets’ when moving around the classroom https://physicsworld.com/a/study-finds-preschool-children-form-social-droplets-when-moving-around-the-classroom/ Sat, 05 Oct 2024 09:00:18 +0000 https://physicsworld.com/?p=117253 The movement of preschool children results in two distinct phases, finds study

The post Study finds preschool children form ‘social droplets’ when moving around the classroom appeared first on Physics World.

]]>
If you have ever experienced a preschool environment you will know how seemingly chaotic it can be. Now physicists in the US and Germany have examined the movement of preschool children in classroom and playground settings to determine if any rules can be gleaned from their dawdling.

To do so they put radio-frequency tags on the vests of more than 200 children aged between two and four and then monitored their position and trajectories via receivers placed around the environment.

The researchers found that the children’s dynamics resemble two distinct phases of matter. The first is a gas-like phase in which the children are moving freely while exploring their surroundings.

This was mostly seen in the playground where children could roam without restriction, with the researchers finding that toddlers’ movement is similar to that of pedestrian flow.

The second phase is a “liquid-vapour-like state”, in which the children act like molecules to form “droplets” of social groups. In other words, they coalesce into smaller, more clustered groups with some “free-moving” individuals entering and exiting these groups.

The team found that this phase was most evident in classrooms, in which the children are more constrained and social communication plays a bigger role. Indeed, this type of behaviour has not been observed in human movement before, with the findings offering new insights about the dynamics of low-speed movement.

The post Study finds preschool children form ‘social droplets’ when moving around the classroom appeared first on Physics World.

]]>
Blog The movement of preschool children results in two distinct phases, finds study https://physicsworld.com/wp-content/uploads/2024/10/young-children-play-outside-794064370-Shutterstock_Monkey-Business-Images.jpg
Silk-on-graphene films line up for next-generation bioelectronics https://physicsworld.com/a/silk-on-graphene-films-line-up-for-next-generation-bioelectronics/ Fri, 04 Oct 2024 12:30:26 +0000 https://physicsworld.com/?p=117203 Researchers have grown a uniform two-dimensional layer of silk protein fragments on a van der Waals substrate for the first time

The post Silk-on-graphene films line up for next-generation bioelectronics appeared first on Physics World.

]]>
Researchers have succeeded in growing a uniform 2D layer of silk protein fragments on a van der Waals substrate – in this case, graphene – for the first time. The feat should prove important for developing silk-based electronics, which have been limited until now because of the difficulty in controlling the inherent disorder of the fibrillar silk architecture.

Silk is a protein-based material that humans have been using for over 5000 years. In recent years, researchers have been looking to exploit one of its two main components, silk fibroin (which is made up of protein fragments), in electronic and bioelectronic applications. This is because it can self-assemble into a range of fibril-based architectures that boast excellent mechanical and optical properties. Indeed, devices in which silk fibroin films are interfaced with van der Waals solids, metals or oxides appear to be particularly promising for making next-generation thin-film transistors, memory transistors (or memristors), human–machine interfaces and sensors.

There is a problem, however, in that silk cannot be used in its natural form for such devices because its fibres are arranged in a disordered, tangled fashion. This means it cannot uniformly or accurately modulate electronic signals.

Controlling natural disorder

A team of researchers, led by materials scientist and engineer James De Yoreo of Pacific Northwest National Laboratory (PNNL) and the University of Washington, has now found a way to control this disorder. In their work, the researchers grew highly organized 2D films of silk fibroins on graphene, a highly conducting sheet of carbon just one atom thick.

Using atomic force microscopy, nano-Fourier transform infrared spectroscopy and molecular dynamics calculations, the researchers observed that the films consist of stable lamellae of silk fibroin molecules that have the same structure as the nano-crystallites of natural silk. The fibroins pack in precise parallel beta-sheets – a common protein shape found in nature – on this substrate.

Thanks to scanning Kelvin probe measurements, De Yoreo and colleagues also found that the films modulate the electric potential of the graphene substrate’s surface.

The researchers say that they took advantage of the inherent interactions of the silk molecules with the substrate and its crystallinity to force the silk molecules to assemble into a crystalline layer at the interface between the two materials. They then regulated the concentration of the aqueous solution in which the silk proteins had been dissolved to limit the number of silk layers that form. In this way, they were able to assemble single monolayers, bilayers or much thicker multilayers.

Uniform properties

Since the material is highly ordered, its properties are uniform, says De Yoreo. What’s more, because of the strong intermolecular interactions in the beta-sheet arrangement and the strong interactions with the substrate, it is highly stable. “In its pure state, it can regulate the surface potential of the underlying conductive substrate, but there are techniques for doping silk to introduce both optical and electronic properties that can greatly expand its useful properties,” he explains.

The researchers hope their results will help in the development of 2D bioelectronic devices that exploit natural silk-based layers chemically modified to provide different electronic functions. They also plan to use their starting material to create purely synthetic silk-like layers assembled out of artificial, sequence-defined polymers that mimic the amino acid sequence of the silk molecule. “In particular, we see potential for using these materials in memristors, for computing based on neural networks,” De Yoreo tells Physics World. “These are networks that could allow computers to mimic how the brain functions.”

It is important to note that the system developed in this work is nontoxic and water-based, which is crucial for biocompatibility, adds the study’s lead author Chenyang Shi.

The research is detailed in Science Advances.

The post Silk-on-graphene films line up for next-generation bioelectronics appeared first on Physics World.

]]>
Research update Researchers have grown a uniform two-dimensional layer of silk protein fragments on a van der Waals substrate for the first time https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_2D-silk-hero-image.jpg newsletter1
‘Sometimes nature will surprise us.’ Juan Pedro Ochoa-Ricoux on eureka moments and the future of neutrino physics https://physicsworld.com/a/sometimes-nature-will-surprise-us-juan-pedro-ochoa-ricoux-on-eureka-moments-and-the-future-of-neutrino-physics/ Fri, 04 Oct 2024 10:00:10 +0000 https://physicsworld.com/?p=116939 Particle physicist Juan Pedro Ochoa-Ricoux talks about how the next generation of neutrino experiments will test the boundaries of the Standard Model

The post ‘Sometimes nature will surprise us.’ Juan Pedro Ochoa-Ricoux on eureka moments and the future of neutrino physics appeared first on Physics World.

]]>
It was a once-in-a-lifetime moment during a meeting in 2011 when Juan Pedro Ochoa-Ricoux realized that new physics was emerging in front of his eyes. He was a postdoc at the Lawrence Berkeley National Laboratory in the US, working on the Daya Bay Reactor Neutrino Experiment in China. The team was looking at their first results when they realized that some of their antineutrinos were missing.

Ochoa-Ricoux has been searching for the secrets of neutrinos since he began his master’s degree at the California Institute of Technology (Caltech) in the US in 2003. He then completed his PhD, also at Caltech, in 2009, and is now a professor at the University of California Irvine, where neutrinos are still the focus of his research.

The neutrino’s non-zero mass directly conflicts with the Standard Model of particle physics, which is exciting news for particle physicists like Ochoa-Ricoux. “We actually like it when the theory doesn’t match the experiment,” he jokes, adding that his motivation for studying these elusive particles is for the new physics they could reveal. “We need to know how to extend [the Standard Model] and neutrinos are one area where we know it has to be extended.”

Because they rarely interact with matter, neutrinos are notoriously hard to study. Electron antineutrinos are, however, produced in measurable quantities by nuclear reactors, and this is what Daya Bay was measuring. The experiment consisted of eight detectors measuring the electron antineutrino flux at different distances from six nuclear reactors. As the antineutrinos disperse, the detectors further away are expected to measure a smaller signal than those close by.

However, when Ochoa-Ricoux and his team analysed their results, they found “a deficit in the far location that could not only be explained by the fact that those detectors were farther away”. Neutrinos come in three types, or “flavours”, and it seemed that some of the electron antineutrinos produced in the power plants were changing into tau and muon antineutrinos, meaning the detector didn’t pick them up.

This transformation of neutrino type, also known as “oscillation”, occurs for both neutrinos and antineutrinos. It was first observed in 1998, with the discovery leading to the award of the 2015 Nobel Prize for Physics. However, physicists are still not sure if antineutrinos and neutrinos oscillate in the same way. If they don’t, that could explain why there is more matter than antimatter in the universe.

The mathematics of neutrino oscillation is complex. Among many parameters, physicists define an angle called θ13, which plays a role in determining the probability of certain flavour oscillations. For differences in oscillation probabilities between neutrinos and antineutrinos to be possible, this quantity must be non-zero. When Ochoa-Ricoux was working on the Main Injector Neutrino Oscillation Search (MINOS) at Fermilab in the US for his PhD, he had found tantalizing but inconclusive evidence that θ13 is different from zero.
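
For reactor experiments like Daya Bay, the relevant quantity takes a simple two-flavour form: the electron-antineutrino survival probability is P ≈ 1 − sin²(2θ13) sin²(1.267 Δm² L/E), with Δm² in eV², the baseline L in metres and the antineutrino energy E in MeV. The back-of-envelope sketch below uses round numbers close to the published Daya Bay values (the baselines and energy are illustrative choices, not the experiment's exact figures) and reproduces the few-per-cent deficit seen at the far detectors:

```python
import numpy as np

# Two-flavour reactor antineutrino survival probability:
# P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm2 * L / E)
sin2_2theta13 = 0.09   # corresponds to theta13 of roughly nine degrees
dm2 = 2.5e-3           # effective mass-squared splitting, eV^2
E = 4.0                # typical reactor antineutrino energy, MeV

for L in (360.0, 1650.0):   # rough near-hall and far-hall baselines, metres
    P = 1.0 - sin2_2theta13 * np.sin(1.267 * dm2 * L / E) ** 2
    print(f"L = {L:>6.0f} m: survival probability = {P:.3f}")

# Roughly 0.99 at the near halls but only about 0.92 at the far hall -
# several per cent of the electron antineutrinos appear to have gone "missing".
```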

Juan Pedro Ochoa–Ricoux at the JUNO Observatory

The memorable meeting Ochoa-Ricoux recalled at the start of this article was, however, the first moment he realized “Oh, this is real”. Their antineutrino deficit data eventually proved that the angle is about nine degrees. This discovery set the stage for Ochoa-Ricoux’s career, which, a little like the oscillating neutrino, he describes as a “mixture of everything”.

The asymmetry between antimatter and matter is one of the biggest mysteries in physics and in the next four years, two experiments – Hyper-Kamiokande in Japan and the Deep Underground Neutrino Experiment (DUNE) in the US – will start looking for evidence of matter–antimatter asymmetry in neutrino oscillation (Ochoa-Ricoux is a member of DUNE). “Had θ13 been zero,” he says, “my job and my life would have been very very different”.

All hands on deck

Ochoa-Ricoux wasn’t just analysing the results from Daya Bay, he was also assembling and testing the experiment. This was sometimes frustrating work – he remembers having to painstakingly remeasure detector components because they wouldn’t fit inside the machine. But he emphasizes that this was an important part of the Daya Bay discovery. “On the one hand you analyse the data, but before you can do that, you actually have to build the apparatus,” he says.

While Ochoa-Ricoux now spends much less time climbing inside detector equipment, he is actively involved in designing the next generation of neutrino experiments. As well as DUNE, he works on Daya Bay’s successor, the Jiangmen Underground Neutrino Observatory (JUNO) in China, a nuclear reactor experiment that is projected to start taking data at the end of the year.

The first neutrino oscillation measurement was made in 1998 by the Japanese researcher Takaaki Kajita, who would later share the 2015 Nobel Prize for Physics for his work. However, the experiment where Kajita made this observation, called Super-Kamiokande, was originally designed to search for proton decay.

Ochoa-Ricoux thinks that DUNE and JUNO need to be open to finding something equally unexpected. JUNO’s main aim is to determine the neutrino mass ordering – that is, which of the neutrino mass states is heaviest – by measuring oscillating antineutrinos from nuclear power plants. It will also detect neutrinos coming from the Sun or the atmosphere, and Ochoa-Ricoux thinks this flexibility is vital.

“Sometimes nature will surprise us and we need to be ready for that,” he says, “I think we need to design our experiments in such a way that we can be sensitive to those surprises.”

Exploring the unknown

Experiments like DUNE and JUNO could change our understanding of the universe, but there is no guarantee that neutrinos hold the key to mysteries like matter–antimatter asymmetry. There’s therefore pressure to deliver results, but Ochoa-Ricoux is excited that the field is taking leaps into the unknown.

He also argues that as well as advancing fundamental science, these projects could lead to new technologies. Medical imaging devices like MRI and PET scanners are offshoots of particle physics and he believes that “When you understand your world better, sometimes it’s impossible to predict what applications will come.”

However, at the heart of Ochoa-Ricoux’s mindset is the same fascination with the mysteries of the universe that motivated him to pursue neutrino physics as a student. For him, projects like JUNO and DUNE can justify themselves on those grounds alone. “We’re humans. We need to understand the world we live in. I think that’s highly valuable.”

The post ‘Sometimes nature will surprise us.’ Juan Pedro Ochoa-Ricoux on eureka moments and the future of neutrino physics appeared first on Physics World.

]]>
Feature Particle physicist Juan Pedro Ochoa-Ricoux talks about how the next generation of neutrino experiments will test the boundaries of the Standard Model https://physicsworld.com/wp-content/uploads/2024/10/2024-09-Careers-Ochoa-Ricoux-dome.jpg newsletter
Gender gap in physics entrenched by biased collaboration networks, study finds https://physicsworld.com/a/gender-gap-in-physics-entrenched-by-biased-collaboration-networks-study-finds/ Fri, 04 Oct 2024 08:00:00 +0000 https://physicsworld.com/?p=117163 Interventions to integrate young female physicists into established networks could help tackle under-representation

The post Gender gap in physics entrenched by biased collaboration networks, study finds appeared first on Physics World.

]]>
Biased collaboration and citation patterns are responsible for driving the gender gap in physics. That is according to a new study, which finds that poor female representation persists due to established male physicists preferring to work with early-career male researchers. The study’s authors say that integrating young female physicists into established networks could help to tackle the under-representation of women (Communications Physics 7 309).

The gender gap in physics is one of the largest in science and recent research suggests that it could take a couple of centuries until there are equal numbers of senior male and female physicists.

Keen to understand the network dynamics behind the gap, Fariba Karimi at the Complexity Science Hub in Austria and colleagues analysed 668,028 papers published in American Physical Society journals between 1893 and 2020, together with 8.5 million citations.

They deduced with “high confidence” the genders of 136,598 first authors in the APS dataset and used this data to construct citation and co-authorship networks.

Despite rising overall numbers of female physicists and female-led papers, the authors find that the ratio of male to female first authors and researchers has remained stable for decades. In fact, the gender gap in absolute numbers appears to be growing.

The researchers then developed a model of the citation and co-authorship networks to explore how the “adoption” of new members by established members impacts network growth.

Small changes

The model focused on two mechanisms. One is “asymmetric mixing” – the inclination of people to adopt people like themselves. The other is general preferential attachment, or the idea that established network members attract more connections.

The model mirrors real-world dynamics and shows that these mechanisms and adoption behaviours cause group ratio inequalities to persist. In the case of physicists, the gender imbalance continues because male physicists are more likely to collaborate with and cite their male counterparts.

Compared with women, men entering the network are more likely to be adopted by those who are already well established in the network, which tends to be men. This trend has been shown elsewhere with research in 2022 finding that male-led papers are more likely to cite male-led work.

The team then used their model to show how small changes to a two-group system can alter the group balance. They find that if the simulation’s mixing values – such as adoption behaviours – are altered slightly in favour of a smaller, less dominant group, that group’s size quickly catches up with that of the dominant group.
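
For readers who want to see how the two mechanisms interact, the sketch below is a minimal toy simulation of a two-group growing network with degree-based preferential attachment and a tunable homophily (mixing) weight. The group fractions, homophily values and attachment rule are illustrative assumptions, not the published model or its parameters.

```python
import random

def grow_network(n_nodes=5000, minority_frac=0.2, homophily=0.7, seed=1):
    """Toy two-group growth model: preferential attachment plus asymmetric mixing.

    homophily is the attachment weight for one's own group; (1 - homophily)
    is the weight for the other group. All values are illustrative and are
    not those of the published model.
    """
    rng = random.Random(seed)
    groups = [0, 1]          # seed network: one majority (0) and one minority (1) node
    degree = [1, 1]

    for _ in range(n_nodes):
        g_new = 1 if rng.random() < minority_frac else 0
        # attachment probability ~ degree of target x mixing preference
        weights = [degree[i] * (homophily if groups[i] == g_new else 1.0 - homophily)
                   for i in range(len(groups))]
        target = rng.choices(range(len(groups)), weights=weights)[0]
        groups.append(g_new)
        degree.append(1)
        degree[target] += 1

    total = sum(degree)
    return sum(d for d, g in zip(degree, groups) if g == 1) / total

# Nudging the mixing slightly towards neutrality lifts the minority's share of links
for h in (0.7, 0.6, 0.5):
    print(f"homophily = {h:.1f}: minority share of connections = {grow_network(homophily=h):.2f}")
```

In this toy version, strong in-group mixing leaves the minority with a share of connections below its share of members, and easing the mixing back towards neutral closes the gap – the same qualitative behaviour the study reports.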

Karimi says that it is “not just about having more women” but also about how they are integrated into networks. “In real systems, it’s not as simple as someone coming and connecting to others in a network,” adds Karimi. “It is also a matter of who takes in the newcomer and adopts him or her into their personal network.”

To alter the network dynamics, the study authors suggest interventions such as creating opportunities for junior women to collaborate with senior men and giving female researchers more opportunities for funding and promotion. “If we don’t take these interventions soon, this gap will not close very easily,” says Karimi.

The post Gender gap in physics entrenched by biased collaboration networks, study finds appeared first on Physics World.

]]>
News Interventions to integrate young female physicists into established networks could help tackle under-representation https://physicsworld.com/wp-content/uploads/2024/10/people-connections-management-1264228218-iStock_cagkansayin.jpg newsletter1
Nobel predictions and humorous encounters with physics laureates https://physicsworld.com/a/nobel-predictions-and-humorous-encounters-with-physics-laureates/ Thu, 03 Oct 2024 14:01:12 +0000 https://physicsworld.com/?p=116906 Physics World editors gaze into their crystal ball and reminisce about past Nobel winners

The post Nobel predictions and humorous encounters with physics laureates appeared first on Physics World.

]]>
In this episode of the Physics World Weekly podcast, our very own Matin Durrani and Hamish Johnston explain why they think that this year’s Nobel Prize for Physics could be awarded for work in condensed-matter physics – and who could be in the running. They also reminisce about some of the many Nobel laureates that they have met over the years and the excitement that comes every October when the winners are announced.

 

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Nobel predictions and humorous encounters with physics laureates appeared first on Physics World.

]]>
Podcasts Physics World editors gaze into their crystal ball and reminisce about past Nobel winners https://physicsworld.com/wp-content/uploads/2024/10/Matin-and-Hamish.jpg newsletter
Celebrating with a new Nobel laureate in Canada’s ‘Steeltown’ https://physicsworld.com/a/celebrating-with-a-new-nobel-laureate-in-canadas-steeltown/ Thu, 03 Oct 2024 14:00:23 +0000 https://physicsworld.com/?p=116904 The magical day Bertram Brockhouse won his prize

The post Celebrating with a new Nobel laureate in Canada’s ‘Steeltown’ appeared first on Physics World.

]]>
For nearly two decades I have been covering the Nobel prize for Physics World and every October I tune in to the announcement that’s made live from Stockholm. But the frisson that I feel with each announcement brings me straight back to a day 30 years ago when Bertram Brockhouse bagged the award.

Three decades ago I was living in Hamilton, an industrial city at the western end of Lake Ontario. About 70 km from downtown Toronto and staunchly blue collar, Hamilton was famous for its smoke-belching steel mills and its beloved Tiger-Cats of the Canadian Football League. In addition to steel, the city has been home to myriad manufacturing companies and in the days of Empire it had been dubbed the “Birmingham of Canada”.

So it’s safe to say that Hamilton in the 1990s was not the sort of place where you would expect to run into a Nobel laureate.

But that changed one day in October 1994. I began that day listening to a news bulletin on CBC radio – and the lead item was that the Canadian physicist Bertram Brockhouse had won half of the 1994 Nobel Prize for Physics for his pioneering work on inelastic neutron scattering.

In 1994 Brockhouse was an emeritus professor of physics at McMaster University in Hamilton – where I was doing a PhD. What’s more, I had been an undergraduate intern at Chalk River Laboratories, where I worked at the Neutron Physics Branch – which was founded by Brockhouse in 1960 before he left for McMaster.

“Son of a gun”

Needless to say, I was very excited to get to the physics department and join in the celebrations that morning. And I was not disappointed. As I arrived, the normally mild-mannered theorist Jules Carbotte was skipping along the corridor shouting “Bert Brockhouse, son of a gun” as he punched the air.

I don’t remember seeing Brockhouse that day, but everyone else was in very good spirits. Indeed, it was the start of celebrations at the university that seemed very inclusive to me – with faculty, students and members of the wider community invited to what seemed like endless parties and receptions. This was understandable because Brockhouse was McMaster’s first Nobel prize winner. There have been three more since – including another in physics, with the 2018 laureate Donna Strickland having done her degree in engineering physics at McMaster.

At one of those receptions I was introduced to Brockhouse and discovered that he lived in one of my favourite parts of Hamilton – a semi-rural and heavily-wooded portion of the Niagara Escarpment nestled between the former towns of Ancaster and Dundas. Instead of talking about neutrons, I believe we chatted about the growing number of deer in the area and how they were wreaking havoc in people’s gardens.

Coffee lounge gang

Brockhouse had retired a decade earlier, but he was often at the university where he shared a small office with other emeriti professors – a gang that I would often see in the coffee lounge. As I recall, he was very quickly given an office of his own (and perhaps a personal assistant) to help him cope with his new fame.

While writing this piece, I was surprised to discover that Brockhouse was just 76 when he bagged his Nobel for work he had done 40 years previously. Perhaps because 30 years have passed, 76 no longer seems old to me – but I don’t think this is just my perception. Today, as mandatory retirement fades into the past and people are encouraged to remain physically and mentally active, 76 is not that old for a working physicist. Many people that age and older continue to make important contributions to physics.

Indeed, one of Brockhouse’s colleagues at McMaster – Tom Timusk – remains active in research into his 90s. In 2003 Timusk published an obituary of Brockhouse in Nature and it reminded me of what Brockhouse said to a gathering of students after he won the prize: “I used to think that my work was not important, but recently I have had to change my mind.”

How nice to be able to look back on one’s work and find value. I suspect that, like Brockhouse, many people underestimate their contributions to the greater good. But unlike Brockhouse, some will never stand corrected.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Celebrating with a new Nobel laureate in Canada’s ‘Steeltown’ appeared first on Physics World.

]]>
Blog The magical day Bertram Brockhouse won his prize https://physicsworld.com/wp-content/uploads/2024/10/3-10-24-Brockhouse.jpg
Camera takes inspiration from cats’ eyes to improve imaging performance https://physicsworld.com/a/camera-takes-inspiration-from-cats-eyes-to-improve-imaging-performance/ Thu, 03 Oct 2024 12:00:23 +0000 https://physicsworld.com/?p=117174 Device might be employed in applications such as autonomous vehicles, drones and surveillance systems

The post Camera takes inspiration from cats’ eyes to improve imaging performance appeared first on Physics World.

]]>
Features of feline eyes

A novel camera inspired by structures within cats’ eyes could be employed in autonomous vehicles, drones and surveillance systems – applications where precise object detection in varied light conditions and complex backgrounds is critical.

One key feature of the new device is the use of a vertically elongated slit, like the pupils of cats’ eyes, which are different from those of other mammals, explains Minseok Kim of the Gwangju Institute of Science and Technology in Korea. As in a cat’s eye, this pupil creates an asymmetric depth of focus when it dilates and contracts, allowing the camera to blur out backgrounds and focus sharply on objects. Another feature is a metal reflector that enables more efficient light absorption in low-light settings. This mimics the tapetum lucidum, a mirror-like structure that gives cats’ eyes their characteristic glow. It reflects incident light back into the retina, allowing it to amplify light.

“The result is a camera that works well in both bright and low-light environments, allowing it to capture high-sensitivity images without the need for complex software post-processing,” Kim says.

Mimicking animal eyes

Kim and colleagues have been working on mimicking the eyes of various animals for several years. Some of their recent studies include structures inspired by fish eyes, fiddler crab eyes, cuttlefish eyes and avian eyes. They decided to work on this latest project with the aim of overcoming the limitations of current camera systems – in particular, their difficulty in handling very low or very bright lighting conditions. They also wanted to do away with the image post-processing software required to better distinguish objects from their backgrounds.

One of the main difficulties that the researchers had to overcome in this study was to simplify the intricate structure of the tapetum lucidum. Instead of replicating it exactly, they used a metal reflector placed beneath a hemispherical silicon photodiode array, which reduces excessive light and enhances photosensitivity. This design allows for clear focusing under bright light and improved sensitivity in dim conditions.

“Another challenge was to create a vertical pupil that could mimic the cat’s ability to focus sharply on an object while blurring the background,” says Kim. “We were able to construct the vertical aperture using a 3D printer, but our future work will focus on making this pupil dynamic so it can automatically adjust its size in response to changing light conditions.”

Many application areas

The research could significantly improve technologies that rely on high-performance imaging in difficult lighting conditions, Kim tells Physics World. The team expects the system to be highly useful in autonomous vehicles, where precise object detection is critical for safe navigation.

“It could also be applied to drones and surveillance systems that operate in various lighting environments, as well as in military applications where camouflage-breaking capabilities are essential,” Kim adds. “The system could also find use in medical imaging, where the ability to capture high-sensitivity, real-time images without extensive software processing is crucial.”

The researchers now plan to further optimize their camera’s pixel density – which they admit is quite low at the moment – and its resolution to improve image quality. “We also aim to conduct more real-world tests, particularly in applications such as autonomous driving and robotic surveillance, to evaluate how the system performs in practical settings,” says Kim. “Lastly, we are looking into binocular object recognition systems so that the camera can handle more complex visual tasks.”

The study is detailed in Science Advances.

The post Camera takes inspiration from cats’ eyes to improve imaging performance appeared first on Physics World.

]]>
Research update Device might be employed in applications such as autonomous vehicles, drones and surveillance systems https://physicsworld.com/wp-content/uploads/2024/10/03-10-24-cats-eyes-camera-featured.jpg
Robert Laughlin: the Nobel interview that became an impromptu press conference https://physicsworld.com/a/robert-laughlin-the-nobel-interview-that-became-an-impromptu-press-conference/ Thu, 03 Oct 2024 09:13:31 +0000 https://physicsworld.com/?p=116903 Matin Durrani winces at the time he met Nobel laureate Robert Laughlin

The post Robert Laughlin: the Nobel interview that became an impromptu press conference appeared first on Physics World.

]]>
As a science journalist, some interviews you do go well, some don’t, but at least they usually have a distinct start and end. That wasn’t the case with Robert Laughlin, whom I once met at the annual Lindau conference for Nobel-prize-winners in Germany.

Most of the conference involves Nobel laureates giving lectures to a select band of PhD students from around the world. But Laughlin, who’d shared the 1998 Nobel Prize for Physics for his work on quantum fluids with fractional charges, had agreed to speak to me in a private room at the conference venue on the shores of Lake Constance.

Things started sensibly enough (he was ostensibly talking about a new book he was writing) but after about 20 minutes, a conference official barged in.

There’d been an over-booking and no, we weren’t allowed to stay. We were two people in the wrong place at the wrong time – and the fact that one of us was a Nobel-prize-winning physicist didn’t cut any mustard. Out we went.

Laughlin and I packed up our stuff and reconvened at an outside terrace in the summer sun, where we tried to pick up the thread of our conversation.

Now, laureates like Laughlin are the big draw of the Lindau conference – in fact, they’re the whole reason the meeting takes place. If Lindau were a music festival, they’d be the artists everyone’s come to see.

Before I knew what was going on, first one then two then three students had sidled up to our table. Like electrons around a nucleus, they’d been attracted by the presence of a Nobel laureate and weren’t going to miss out.

Laughlin didn’t appear fazed by the unexpected turn of events; in fact, I’m sure Nobel laureates love nothing better than being the centre of attention. Within minutes, the entire table had been surrounded by a phalanx of hangers-on.

Our one-to-one interview had become an impromptu one-man press conference with me seemingly serving as Laughlin’s minder. As he held court to his gaggle of fawning students, apparently oblivious that I was still there, Laughlin was in his element.

Laughlin probably doesn’t remember the encounter: Nobel laureates, who are the only real celebrities in physics, meet hundreds of people all the time. The students, however, appeared to be enjoying themselves, so the conference organizers must have been happy.

But I just ended up squirming in my seat. I put my notebook back in my bag and let Laughlin take over.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Robert Laughlin: the Nobel interview that became an impromptu press conference appeared first on Physics World.

]]>
Blog Matin Durrani winces at the time he met Nobel laureate Robert Laughlin https://physicsworld.com/wp-content/uploads/2024/10/DURRANI-Laughlin.jpg
Steven Weinberg: the Nobel laureate who liked nuts https://physicsworld.com/a/steven-weinberg-the-nobel-laureate-who-liked-nuts/ Wed, 02 Oct 2024 14:00:31 +0000 https://physicsworld.com/?p=116902 Matin Durrani recounts a one-sided interview with Steven Weinberg

The post Steven Weinberg: the Nobel laureate who liked nuts appeared first on Physics World.

]]>
Steven Weinberg

It was 2003 and Steven Weinberg was sitting with me in the lobby of a hotel in Geneva, explaining his research into fundamental physics, when he paused to grab a handful of peanuts from a bowl on the table in front of us.

I had been speaking to Weinberg as he’d come to Switzerland to give a lecture at CERN on the development of the Standard Model of particle physics, in which he’d played a key part, and had agreed to an interview with Physics World during a break in his schedule.

The old-fashioned Dictaphone on which I recorded our interview has gone missing, so I’ve only got a hazy recollection of what he said. I do remember that Weinberg was charming, friendly and witty, but it was pretty clear he felt he was in the company of some kind of intellectual buffoon.

Turning round, he asked me: “Do you like nuts?”

You see, the only time Weinberg properly interacted with me was to reveal how he enjoyed those little bags of nuts you get on plane journeys (he was obviously used to flying business class); it was then that he wanted my view of them too. It was as if Weinberg doubted I could handle anything deeper than airline snacks and was just trying to be kind.

That’s what happens when you interview a Nobel laureate. Apart from them enjoying the sound of their own voice, they are obviously aware that they know several orders of magnitude more than you do about their specialist subject.

You’re left squirming and feeling ever so slightly inadequate, trying to absorb a whirlwind of high-level information while at the same time desperately wondering what your next question should be.

His opinion of me certainly must have dipped further a few weeks later. Despite some misgivings, I decided to write up our interview and e-mail Weinberg my draft, which covered his life, research and career.

Stupidly, I’d made a few schoolboy errors near the start, prompting Weinberg to write back, explaining he didn’t have the time or energy to check my nonsense any further (I paraphrase slightly) and, no, he wasn’t going to spend time pointing out my mistakes either.

At least Weinberg was polite, which is more than you could say for the late Subrahmanyan Chandrasekhar, who shared the 1983 Nobel Prize for Physics for his theoretical work on the structure and evolution of stars. Robert P Crease takes up the story in this memorable article.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Steven Weinberg: the Nobel laureate who liked nuts appeared first on Physics World.

]]>
Blog Matin Durrani recounts a one-sided interview with Steven Weinberg https://physicsworld.com/wp-content/uploads/2021/08/Steven-Weinberg.jpg
CERN celebrates 70 years at the helm of particle physics in lavish ceremony https://physicsworld.com/a/cern-celebrates-70-years-at-the-helm-of-particle-physics-in-lavish-ceremony/ Wed, 02 Oct 2024 12:02:53 +0000 https://physicsworld.com/?p=117142 The event was attended by 38 national delegations as well as Her Royal Highness Princess Astrid of Belgium

The post CERN celebrates 70 years at the helm of particle physics in lavish ceremony appeared first on Physics World.

]]>
Officials gathered yesterday for an official ceremony to celebrate 70 years of the CERN particle-physics lab, which was founded in 1954 in Geneva less than a decade after the end of the Second World War.

The ceremony was attended by 38 national delegations including the heads of state and government from Bulgaria, Italy, Latvia, Serbia, Slovakia and Switzerland as well as Her Royal Highness Princess Astrid of Belgium and the president of the European Commission. It marked the culmination of a year of events that showcased the lab’s history and plans for the future as it looks beyond the Large Hadron Collider.

CERN was created to foster peace between nations and bring scientists together, and its origins can be traced back to 1949, when the French Nobel-prize-winning physicist Louis de Broglie first proposed the idea of a European laboratory. A resolution to create the European Council for Nuclear Research (CERN) was adopted at a UNESCO conference in Paris in 1951, with 11 countries signing an agreement to establish the CERN council the year after.

CERN Council met for the first time in May 1952 and in October of that year chose Geneva as the site for a 25–30 GeV proton synchrotron. The formal convention establishing CERN was signed at a meeting in Paris in 1953 by the lab’s 12 founding member states: Belgium, Denmark, France, West Germany, Greece, Italy, the Netherlands, Norway, Sweden, Switzerland, the UK and Yugoslavia.

On 29 September 1954 CERN was formed and the provisional CERN council was dissolved. That year also saw the start of construction of the lab, where the 628 m-circumference proton synchrotron accelerated protons for the first time on 24 November 1959, reaching an energy of 24 GeV and becoming the world’s highest-energy particle accelerator.

A proud moment

Today CERN has 23 member states and 10 associate member states. Some 17,000 people of 100 nationalities work at CERN, mostly on the LHC, but the lab also carries out research into antimatter and theory. CERN now plans to build on that success with the Future Circular Collider, which, if funded, would include a 91 km-circumference collider to study the Higgs boson in unprecedented detail.

As part of the celebrations, this year has seen over 100 events organized in 63 cities in 28 countries. The first public event at CERN, held on 30 January, combined science, art and culture, and featured scientists discussing the evolution of particle physics and CERN’s significant contributions in advancing this field.

Other events over the past months have focused on open questions in physics and future directions; the link between fundamental science and technology; CERN’s role as a model for international collaboration; and training, education and accessibility.

The meeting yesterday, the culmination of this year-long celebration, was held in the auditorium of CERN’s Science Gateway, which was inaugurated in October 2023.

“CERN is a great success for Europe and its global partners, and our founders would be very proud to see what CERN has accomplished over the seven decades of its life,” noted CERN director general Fabiola Gianotti. “The aspirations and values that motivated those founders remain firmly anchored in our organization today: the pursuit of scientific knowledge and technological developments for the benefit of humanity; training and education; collaboration across borders, diversity and inclusion; knowledge, technology and education accessible to society at no cost; and a great dose of boldness and determination to pursue paths that border on the impossible.”

The post CERN celebrates 70 years at the helm of particle physics in lavish ceremony appeared first on Physics World.

]]>
News The event was attended by 38 national delegations as well as Her Royal Highness Princess Astrid of Belgium https://physicsworld.com/wp-content/uploads/2024/10/10-Family-photo-CERN-small.jpg newsletter1
Rambling tour of Europe explores the backstory of the Scientific Revolution https://physicsworld.com/a/rambling-tour-of-europe-explores-the-backstory-of-the-scientific-revolution/ Wed, 02 Oct 2024 10:00:28 +0000 https://physicsworld.com/?p=116825 Victoria Atkinson reviews Inside the Stargazer’s Palace by Violet Moller

The post Rambling tour of Europe explores the backstory of the Scientific Revolution appeared first on Physics World.

]]>
Sixteenth-century Europe was a place of great change. Religious upheaval swept the continent, empires expanded and the mystic practices of the medieval world slowly began shifting toward modern science.

Copernicus’s heliocentric model of the universe, introduced in 1543, is often considered the origin of this so-called “Scientific Revolution”. However, with her latest book Inside the Stargazer’s Palace: The Transformation of Science in 16th-Century Northern Europe, historian and writer Violet Moller gives the story behind this transformation, putting lesser-known figures at the fore. She looks at the effect of religious and geopolitical events in northern Europe, starting from the late 15th century, and shows how the scholars of this period drew together strands of scientific thought that had been developing for decades.

Beginning in the German town of Nuremberg in 1471, the book is a sweeping tour of the continent, visiting the ancient university city of Louvain in what is now Belgium, the London suburb of Mortlake, Kassel in Germany and the formerly Danish island of Hven. She concludes this journey in Prague with the deposition of the Holy Roman Emperor and scientific patron Rudolf II in 1611, an event that broke apart Europe’s flourishing community of scientific minds.

As a scientist, I was disappointed to find the book fairly light on scientific detail. Inside the Stargazer’s Palace is first and foremost a history book, but I felt that some more scientific context would help most readers grasp the significance of the events Moller describes.

Nonetheless, it was fascinating to see how politics and economics across the continent shaped scientific study. In the 15th century, the scientific community in northern Europe was exceedingly small, with scholarly knowledge restricted to those who could travel to the great knowledge centres in Italy, Greece and beyond. However, the development of the printing press in 1440 and the founding of the first scientific print house in Nuremberg changed the way information was shared forever. As scientific knowledge became more accessible, interest in understanding the natural world began to grow.

Through the closely connected tales of a number of individuals – from cartographer and instrument maker Gemma Frisius to the renowned alchemist Tycho Brahe – we see the beginnings of a scientific community. As Moller says, “Everyone, it seems, knew everyone,” with theories, techniques and instruments shared across a growing network of enthusiastic practitioners.

The development of the printing press mid-century and the founding of the first scientific print house in Nuremberg changed the way information was shared forever

This complexity did not come without its challenges. Moller introduces so many significant figures, each with their own niche, that by chapter four it’s difficult to keep track of who everyone is. The emphasis on personal stories also creates a slightly muddled narrative. In the introduction, Moller tells us “This narrative is based around places,” but at times the location seems incidental at best, if not entirely irrelevant. For example, chapter five ostensibly focuses on the Danish (now Swedish) island of Hven, home to Tycho Brahe. However, over the first 20 pages, we instead follow Brahe on his travels around Europe, and the description of his famous castle-cum-laboratory Uraniborg at the end of the chapter feels rather compressed. Other locations, notably Kassel and Prague, are only relevant during the lifetime of a single enthusiastic patron, raising the question of whether it was the place or the person that really mattered.

Despite this sometimes rambling focus, Moller expertly guides the reader through the significant cultural and political events of the century. Beginning in the 1510s, the spread of Lutheranism across Europe brought with it an intellectual revolution, with its fiercest proponents encouraging followers to “think in innovative ways … and focus on praising God through studying his creation”. The conflict between the new Protestant denominations and the traditional Catholic faith drove the migration of great minds, who converged on the places most supportive of their scientific endeavours.

During this period, new observations also directly challenged long-held beliefs. In the early 16th century, astronomy and astrology were one and the same, and astrological predictions underpinned everything from medicine to political decisions. However, a series of astronomical phenomena towards the end of the century – the appearance of a new star in 1572 (later confirmed as a supernova), a comet in 1577, and the conjunction of Saturn and Jupiter in 1583 – triggered a shift away from divinatory thinking in the following decades. Measurements made from these observations conflicted with accepted theories about the universe, showing that the stars and planets were much further away than previously thought.

The discussion of these phenomena is a welcome one, introducing one of surprisingly few scientific details in the book. We are still left to guess many of the basic particulars of this scientific study: what was being measured and how, and why the results were significant. Moller instead provides a list of instruments – astrolabes, quadrants, sextants, torquetums and astronomy rings – with little or no explanation of what they are or how they work.

Moller is a historian, specializing in 16th-century England, so perhaps these subjects are beyond the scope of her expertise. However, a further frustration is the almost exclusive focus on astronomy; there is scant mention of other topics such as alchemy or botany, although this was promised by the book’s synopsis. Occasionally it also seems that Moller indulges her personal enthusiasm over the needs of the reader, placing an undue emphasis on inconsequential details and characters – John Dee, for example, continues to crop up long after his relevant contributions have passed.

The lack of scientific detail and loose focus made this a sometimes frustrating read. However, I can see that for non-scientists and those who prefer a more fluid approach, the book presents an intriguing alternative view of the Scientific Revolution. By the end of Inside the Stargazer’s Palace and, correspondingly, the 16th century, the stage has been set for the discoveries to come, but it feels like we’ve taken a circuitous route to get there.

  • 2024 Oneworld 304pp £25.00hb

The post Rambling tour of Europe explores the backstory of the Scientific Revolution appeared first on Physics World.

]]>
Opinion and reviews Victoria Atkinson reviews Inside the Stargazer’s Palace by Violet Moller https://physicsworld.com/wp-content/uploads/2024/09/2024-09-Atkinson_stargazer_iStock_BlackAperture.jpg newsletter
Nuclear clock ticks ever closer https://physicsworld.com/a/nuclear-clock-ticks-ever-closer/ Wed, 02 Oct 2024 08:30:49 +0000 https://physicsworld.com/?p=117098 New device could not only be the best time-keeper ever, it could also revolutionize fundamental physics studies

The post Nuclear clock ticks ever closer appeared first on Physics World.

]]>
Could a new type of clock potentially be more accurate than today’s best optical atomic clocks? Such a device is now nearing reality, thanks to new work by researchers at JILA and their collaborators who have successfully built all the elements necessary for a fully functioning nuclear clock. The clock might not only outperform the best time-keepers today, it could also revolutionize fundamental physics studies.

Today’s most accurate clocks rely on optically trapped ensembles of atoms or ions, such as strontium or ytterbium. They measure time by locking laser light into resonance with the frequencies of specific electronic transitions. The oscillations of the laser then behave like (very high-frequency) pendulum swings. Such clocks can be stable to within one part in 10²⁰, which means after nearly 14 billion years (or the age of the universe), they will be out by just 10 ms.
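
As a quick back-of-the-envelope check on that figure – a sketch, taking the age of the universe to be roughly 13.8 billion years:

```python
# Back-of-the-envelope: accumulated timing error = fractional instability x elapsed time
SECONDS_PER_YEAR = 365.25 * 24 * 3600
age_of_universe = 13.8e9 * SECONDS_PER_YEAR     # ~4.4e17 s (approximate)
fractional_stability = 1e-20                    # one part in 10^20, as quoted above

drift = fractional_stability * age_of_universe  # in seconds
print(f"accumulated error over the age of the universe: {drift * 1e3:.1f} ms")
# prints roughly 4 ms, i.e. of order 10 ms
```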

As well as accurately keeping time, atomic clocks can be used to study fundamental physics phenomena. Nuclear clocks should be even more accurate than their atomic counterparts since they work by probing nuclear energy levels rather than electronic energy levels. They are also less sensitive to external electromagnetic fluctuations that could affect clock accuracy.

Detecting tiny temporal variations

A nucleus measures between 10⁻¹⁴ and 10⁻¹⁵ m across, while an atom is 10⁻¹⁰ m. Shifts between nuclear energy levels are thus higher in energy and would be resonant with a higher-frequency laser. This translates into more wave cycles per second — and can be thought of as a greater number of pendulum swings per second.

Such a nuclear transition probes fundamental particles and interactions differently to existing atomic clocks. Comparing a nuclear clock with a precise atomic clock could therefore help to unearth new discoveries related to very tiny temporal variations, such as those in the values of the fundamental constants of nature. Any detected changes would point to physics beyond the Standard Model.

The problem is that the high-frequency lasers needed to excite the nuclear transitions in most elements are not easy to come by. To excite nuclear transitions, most atomic nuclei need to be hit by high-energy X-rays. In the late 1970s, however, physicists identified thorium-229 as having the smallest nuclear energy gap of any atom and found that it could thus be excited by lower-energy, ultraviolet light. In 2003, Ekkehard Peik and Christian Tamm at the Physikalisch-Technische Bundesanstalt (Germany’s National Metrology Institute) proposed that this transition could be used to make a nuclear clock. But it was only in 2016 that this transition was directly observed for the first time.
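
To see why ultraviolet light suffices, one can convert the thorium-229 transition energy – roughly 8.4 eV, quoted here only as an approximate figure for illustration – into a photon frequency and wavelength via E = hν. A minimal sketch:

```python
# Convert an approximate nuclear transition energy into photon frequency and wavelength
h = 6.62607015e-34     # Planck constant, J s
c = 299792458.0        # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt

E = 8.4 * eV           # thorium-229 isomer energy, approximate figure for illustration only
nu = E / h             # photon frequency, ~2e15 Hz
lam = c / nu           # wavelength, ~1.5e-7 m

print(f"frequency ~ {nu:.2e} Hz, wavelength ~ {lam * 1e9:.0f} nm (vacuum ultraviolet)")
```

A wavelength of around 150 nm sits in the vacuum ultraviolet, within reach of laser and frequency-comb technology, whereas typical nuclear transitions of keV to MeV would demand X-rays or gamma rays.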

In the new study, an international team led by Jun Ye at JILA, a joint institute of NIST and the University of Colorado Boulder, have fabricated all of the components needed to create a nuclear clock made from thorium-229. These are: a coherent laser for resolving different nuclear states; a “high concentration” thorium-229 sample embedded in a solid-state calcium fluoride host crystal; and a “frequency comb” referenced to an established atomic standard for precisely measuring the frequency of these transitions.

A frequency comb is a special type of laser that acts like a measuring stick for light. It works using laser light that comprises up to 10⁶ equidistant, phase-stable frequencies (which look like the teeth of a comb) to measure other unknown frequencies with high precision and absolute traceability when compared with a radiofrequency standard. The researchers used a frequency comb operating in the infrared part of the spectrum, which they upconverted (through a cavity-enhanced high harmonic generation process) to produce a vacuum-ultraviolet frequency comb whose frequency is linked to the infrared comb. They then used one line in the comb laser to drive the thorium nuclear transition.
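
Underlying the “measuring stick” picture is the standard comb relation f_n = n × f_rep + f_0, in which the repetition rate f_rep and offset frequency f_0 are both referenced to a radio-frequency standard. The sketch below uses entirely illustrative numbers (they are not the JILA parameters) to show how an unknown optical frequency is reconstructed from a tooth index and a measured beat note.

```python
# Comb bookkeeping: the frequency of tooth n is f_n = n * f_rep + f_0,
# with f_rep and f_0 both locked to a radio-frequency standard.
f_rep = 100e6          # repetition rate in Hz (illustrative value)
f_0 = 20e6             # carrier-envelope offset in Hz (illustrative value)
n = 20_000_000         # tooth index (illustrative value)

f_tooth = n * f_rep + f_0        # optical frequency of that tooth
f_beat = 35e6                    # measured beat note against the unknown laser (illustrative)
f_unknown = f_tooth + f_beat     # the sign of the beat must be established separately

print(f"comb tooth: {f_tooth:.6e} Hz, inferred unknown frequency: {f_unknown:.6e} Hz")
```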

Comparisons for fundamental physics studies

And that is not all: the team also succeeded in directly comparing the ultraviolet frequency to the optical frequency employed in one of today’s best atomic clocks made from strontium-87. This last feat will be the starting point for future nuclear–atomic clock comparisons for fundamental physics studies. “For example, we’ll be able to precisely test if some fundamental constants (like the fine-structure constant alpha) are constant or slowly varying over time,” says Chuankun Zhang, a graduate student in Ye’s group.

Looking forward, the researchers say that they eventually hope to use their technology to make portable solid-state nuclear clocks that can be deployed outside the laboratory for practical applications. They also want to investigate how the clock transitions shift depending on temperature and different crystal environments.

“We also plan to develop faster readout schemes of the excited nuclear states for actual clock operation,” Zhang tells Physics World.

The study is detailed in Nature.

The post Nuclear clock ticks ever closer appeared first on Physics World.

]]>
Research update New device could not only be the best time-keeper ever, it could also revolutionize fundamental physics studies https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_XUV_comb.jpg newsletter1
Fluctuations suppress condensation in 1D photon gas  https://physicsworld.com/a/fluctuations-suppress-condensation-in-1d-photon-gas/ Tue, 01 Oct 2024 15:28:54 +0000 https://physicsworld.com/?p=117127 New result backs up an important theory prediction concerning this exotic state of matter

The post Fluctuations suppress condensation in 1D photon gas  appeared first on Physics World.

]]>
The narrower the parabola shape, the more one-dimensionally the gas behaves

By tuning the spatial dimension of an optical quantum gas from 2D to 1D, physicists at Germany’s University of Bonn and University of Kaiserslautern-Landau (RPTU) have discovered that it does not condense suddenly, but instead undergoes a smooth transition. The result backs up an important theory prediction concerning this exotic state of matter, allowing it to be studied in detail for the first time in an optical quantum gas.

Decreasing the number of dimensions from three to two to one dramatically influences the physical behaviour of a system, causing different states of matter to emerge. In recent years, physicists have been using optical quantum gases to study this phenomenon.

In the new study, conducted in the framework of the collaborative research centre OSCAR, a team led by Frank Vewinger of the Institute of Applied Physics (IAP) at the University of Bonn looked at how the behaviour of a photon gas changed as it went from being 2D to 1D. The researchers prepared the 2D gas in an optical microcavity, which is a structure in which light is reflected back and forth between two mirrors. The cavity was filled with dye molecules. As the photons repeatedly interact with the dye, they cool down and the gas eventually condenses into an extended quantum state called a Bose–Einstein condensate.

Parabolic-shaped protrusions

To make the gas 1D, they modified the reflective surface of the optical cavity by laser-printing a transparent polymer nanostructure on top of one of the mirrors. This patterning created parabolic-shaped protrusions that could be elongated and made narrower – and in which the photons could be trapped.

As the gas transitioned between the 2D and 1D structures, Vewinger and colleagues measured its thermal properties as it was allowed to come back to room temperature – by coupling it to a heat bath. Usually, there is a precise temperature at which condensation occurs – think of water freezing at precisely 0°C. The situation is different when a 1D gas instead of a 2D one is created, however, explains Vewinger. “So-called thermal fluctuations take place in photon gases but they are so small in 2D that they have no real impact. However, in 1D these fluctuations can – figuratively speaking – make big waves.”

These fluctuations destroy the order in 1D systems, meaning that different regions within the gas begin to behave differently, he adds. The phase transition therefore becomes more diffuse.

A difficult experiment

The experiment was not an easy one to set up, he says. The main challenge was to adapt the direct laser writing method to create small and steep structures in which to confine the photons so that it worked for the dye-filled microcavity. “We then had to analyse the photons emitted from the microcavity.”

“Our colleagues in Kaiserslautern eventually succeeded in fabricating new tiny polymer structures with high resolution, sticking to our ultra-smooth dielectric cavity mirrors (with a roughness of around 0.5 Å) that were robust to both the chemical solvent in our dye solution and the laser irradiation employed to inject photons into the cavity,” he tells Physics World.

It is often the case in physics that theories and predictions are based on simple toy models, and these models are powerful in building robust theoretical framework, he explains. “But nature is far from simple; it is extremely difficult to build these ideal platforms to test these foundational concepts since real-world systems are usually interacting, driven-dissipative or coupled to some other system. For photon condensates, it is known that they very closely resemble an ideal Bose gas coupled to a heat bath, so we were interested in using this platform to study the effect of the dimension on the phase transition to a Bose–Einstein condensate.”

Looking forward, the researchers say they will now use their novel technique to study more elaborate forms of photon confinement – such as logarithmic or Coulomb-like confinement. They also plan to study photons confined in large lattice structures in which stable vortices can form without particle–particle interactions. “For example, in one-dimensional chains, there are predictions of an exotic zig-zag phase, induced by incoherent hopping between lattice sites,” says Vewinger. “In essence, the structuring opens up a large playground for us in which to study interesting physics.”

The present study is detailed in Nature Physics.

The post Fluctuations suppress condensation in 1D photon gas  appeared first on Physics World.

]]>
Research update New result backs up an important theory prediction concerning this exotic state of matter https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_parabola-art-2dand1d-v11.jpg
Enabling the future: printable sensors for a sustainable, intelligent world https://physicsworld.com/a/enabling-the-future-printable-sensors-for-a-sustainable-intelligent-world/ Tue, 01 Oct 2024 13:25:47 +0000 https://physicsworld.com/?p=116706 Nano Futures explores the cutting-edge science and technology driving the development of next-generation printable sensors

The post Enabling the future: printable sensors for a sustainable, intelligent world appeared first on Physics World.

]]>

Join us for an exciting webinar exploring the cutting-edge science and technology driving the development of next-generation printable sensors. These sensors, made from printable materials using simple and cost-effective methods such as printing and coating, are set to revolutionize a wealth of intelligent and sustainability-focused applications, such as smart cities, e-health, precision agriculture, Industry 4.0, and much more. Their distinct advantages – flexibility, minimal environmental impact, and suitability for high-throughput production– make them a transformative technology across various fields.

Building on the success of the Roadmap on printable electronic materials for next-generation sensors published in Nano Futures, our expert panel will offer a comprehensive overview of advancements in printable materials and devices for next-generation sensors. The webinar will explore how innovations in devices based on various printable materials, including 2D semiconductors, organic semiconductors, perovskites, and carbon nanotubes, are transforming sensor technologies for detecting light, ionizing radiation, pressure, gases, and biological substances.

Join us as we explore the status and recent breakthroughs in printable sensing materials, identify key remaining challenges, and discuss promising solutions, offering valuable insights into the potential of printable materials to enable smarter, more sustainable development.

Meet the esteemed panel of experts:

Vincenzo Pecunia is an associate professor and head of the Sustainable Optoelectronics Research Group at Simon Fraser University. He earned a BSc and MSc in electronics engineering from Politecnico di Milano and a PhD in physics from the University of Cambridge. His research focuses on printable semiconductors for electronics, sensing, and photovoltaics. In recognition of his achievements, he has been awarded the Fellowship of the Institute of Physics, the Fellowship of the Institute of Materials, Minerals & Mining, and the Fellowship of the Institution of Engineering and Technology.

Mark C Hersam is the Walter P Murphy Professor of Materials Science and Engineering, director of the Materials Research Center, and chair of the Materials Science and Engineering Department at Northwestern University (USA). His research interests include nanomaterials, additive manufacturing, nanoelectronics, scanning probe microscopy, renewable energy, and quantum information science. Mark has been repeatedly named a Clarivate Analytics Highly Cited Researcher with more than 700 peer-reviewed publications that have been cited more than 75,000 times.

Oana D Jurchescu is a Baker Professor of physics at Wake Forest University (USA) and a fellow of the Royal Society of Chemistry. She received her PhD in 2006 from University of Groningen (the Netherlands) and was a postdoctoral researcher at the National Institute of Standards and Technology (USA). Her expertise is in charge transport in organic and organic/inorganic hybrid semiconductors, device physics, and semiconductor processing. She has received numerous awards for her research and teaching excellence, including the NSF CAREER Award.

Robert Young is an emeritus professor at the University of Manchester (UK), renowned for his pioneering research on the relationship between the structure and mechanical properties of polymers and composites. His work explores the molecular-level deformation of materials such as carbon fibres, spider silk, carbon-fibre composites, carbon nanotubes, and graphene. Robert has received many prestigious awards. He was elected a fellow of the Royal Society in 2013 and a fellow of the Royal Academy of Engineering in 2006. He has written more than 330 research papers and several textbooks on polymers.

Luisa Petti received her MSc in electronic engineering from Politecnico di Milano (Italy) in 2011. She obtained her PhD in electrical engineering from ETH Zurich (Switzerland) in 2016 with a thesis entitled “Metal oxide semiconductor thin-film transistors for flexible electronics”, for which she won the ETH medal. After a short postdoc at ETH Zurich, she joined first Cambridge Display Technology Ltd in October 2016 and then FlexEnable Ltd in December 2017 in Cambridge, UK, as a scientist. In 2018, she joined the Free University of Bozen-Bolzano, where she has been an associate professor in electronics since March 2021. Luisa’s current research includes the design, fabrication and characterization of flexible and printable sensors, energy harvesters, and thin-film devices and circuits, with a focus on sustainable and low-cost materials and manufacturing processes.

Aaron D Franklin is the Addy Professor of electrical and computer engineering and associate dean for faculty affairs in the Pratt School of Engineering at Duke University. His research group explores the use of 1D and 2D nanomaterials for high-performance nanoscale devices, low-cost printed and recyclable electronics, and biomedical sensing systems. Aaron is an IEEE Fellow and has published more than 100 scientific papers in the field of nanomaterial-based electronics. He holds more than 50 issued patents and has been engaged in two funded start-ups, one of which was acquired by a Fortune 500 company.

With support from:

The School of Sustainable Energy Engineering (SEE) sits within Simon Fraser University’s Faculty of Applied Sciences. Its research and academic domain involves the development of solutions for the harvesting, storage, transmission and use of energy, with careful consideration of economic, environmental, societal and cultural implications.

About this journal

Nano Futures is a multidisciplinary, high-impact journal publishing fundamental and applied research at the forefront of nanoscience and technological innovation.

Editor-in-chief: Amanda Barnard, senior professor of computational science and the deputy director of the School of Computing at the Australian National University

 

The post Enabling the future: printable sensors for a sustainable, intelligent world appeared first on Physics World.

]]>
Webinar Nano Futures explores the cutting-edge science and technology driving the development of next-generation printable sensors https://physicsworld.com/wp-content/uploads/2024/09/Printed-sensors-scaled.jpg
Rotating cylinder amplifies electromagnetic fields https://physicsworld.com/a/rotating-cylinder-amplifies-electromagnetic-fields/ Tue, 01 Oct 2024 08:30:07 +0000 https://physicsworld.com/?p=117078 The Zel'dovich effect is observed in an electromagnetic system for the first time

The post Rotating cylinder amplifies electromagnetic fields appeared first on Physics World.

]]>
Physicists have observed the Zel’dovich effect in an electromagnetic system – something that was thought to be incredibly difficult to do until now. This observation, in a simplified induction generator, suggests that the effect could in fact be quite fundamental in nature.

In 1971, the Russian physicist Yakov Zel’dovich predicted that electromagnetic waves scattered by a rotating metallic cylinder should be amplified by gaining mechanical rotational energy from the cylinder. The effect, explains Marion Cromb of the University of Southampton, works as follows: waves with angular momentum – or twist – that would usually be absorbed by an object, instead become amplified by that object. However, this amplification only occurs if a specific condition is met: namely, that the object is rotating at an angular velocity that’s higher than the frequency of the incoming waves divided by the wave angular momentum number. In this specific electromagnetic experiment, this number was 1, due to spin angular momentum, but it can be larger.
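
Written out, the Zel’dovich condition says a wave component of angular frequency ω and angular momentum number m is amplified when the body’s rotation rate Ω exceeds ω/m. The helper below is only an illustrative sketch – the frequencies are made up and are not those of the experiment described here.

```python
def zeldovich_amplifies(wave_freq, angular_momentum_m, rotation_freq):
    """True if a rotating body amplifies (rather than absorbs) the incident wave.

    Condition: rotation rate > wave frequency / angular momentum number.
    The 2*pi factors cancel, so ordinary frequencies in Hz can be used.
    Illustrative helper only; the numbers below are not the experiment's.
    """
    return rotation_freq > wave_freq / angular_momentum_m

# A 50 Hz field component with spin angular momentum m = 1 is amplified only
# if the cylinder rotates faster than 50 Hz
print(zeldovich_amplifies(wave_freq=50.0, angular_momentum_m=1, rotation_freq=60.0))  # True
print(zeldovich_amplifies(wave_freq=50.0, angular_momentum_m=1, rotation_freq=40.0))  # False
```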

In previous work, Cromb and colleagues tested this theory in sound waves, but until now, it had never been proven with electromagnetic waves.

Spin component is amplified

In their new experiments, which are detailed in Nature Communications, the researchers used a gapped inductor to induce a magnetic field that oscillates at an AC frequency around a smooth cylinder made of aluminium. The gapped inductor comprises an AC current-carrying wire coiled around an iron ring with a gap in it. “This oscillating field is an easy way to create the sum of two spinning fields in opposite directions,” explains Cromb. “When the cylinder rotates faster than the field frequency, it thus amplifies the spin component rotating in the same direction.”

The cylinder acts as a resistor in the circuit when it is not moving, but as it rotates, its resistance decreases. As the rotation speed increases, after the Zel’dovich condition has been met, the resistance becomes negative. “We measured the power in the circuit at different rotation speeds and observed that it was indeed amplified once the cylinder span fast enough,” says Cromb.

Until now, it was thought that observing the Zel’dovich effect in an electromagnetic system would not be possible. This was because, in Zel’dovich’s predictions, the condition for amplification (while simple in description), would only be possible if the cylinder was rotating at speeds close to the speed of light. “Any slower, and the effect would be too small to be seen,” Cromb adds.

Once they had demonstrated the Zel’dovich effect with sound waves, the Southampton University scientists – together with their theory colleagues at the University of Glasgow and IFN Trento – realized that they could overcome some of the limitations of Zel’dovich’s example while still testing the amplification condition. “The actual experimental set-up is surprisingly simple,” Cromb tells Physics World.

Observing the effect on a quantum level?

Knowing that this effect is present in different physical systems, both in acoustics and now in electromagnetic circuits, suggests that it is quite fundamental in nature, Cromb says. And seeing it in an electromagnetic system means that the team might now be able to observe the effect on a quantum level. “This would be a fascinating test of how quantum mechanics, thermodynamics and (rotational) motion all work together.”

Looking forward, the researchers will now attempt to improve their experimental set-up. At present, it relies on an oscillating magnetic field that contains equal co-rotating and counter-rotating spin components. Only one of these should be Zel’dovich-amplified by the rotating cylinder (the co-rotating component) while the other is only ever absorbed, explains Cromb. “Ideally, we want to switch to a rotating magnetic field so we can confirm that it is only when the field and cylinder rotate in the same direction that the amplification occurs. This would mean that the whole field can be amplified and not just part of it.”

The team has already made some progress in this direction by switching to a cylindrical stator (the stationary part), not just because it can create such a rotating magnetic field, but also because it fits snugly around the cylinder and thus interacts more strongly with it. This should increase the size of the Zel’dovich effect so it can be more easily measured.

“We hope that these improvements will help us also show a situation akin to a ‘black hole bomb’ where the Zel’dovich amplification gets reflected back efficiently enough to create a positive feedback loop, and the power in the circuit skyrockets exponentially,” says Cromb.

The post Rotating cylinder amplifies electromagnetic fields appeared first on Physics World.

]]>
Research update The Zel'dovich effect is observed in an electromagnetic system for the first time https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_Zeldovich-experiment-equipment.jpeg newsletter1
Structural battery is world’s strongest, say researchers https://physicsworld.com/a/structural-battery-is-worlds-strongest-say-researchers/ Mon, 30 Sep 2024 15:34:56 +0000 https://physicsworld.com/?p=117105 Carbon fibre-based electrodes are key to success

The post Structural battery is world’s strongest, say researchers appeared first on Physics World.

]]>
A prototype described as the world’s strongest functional structural battery has been unveiled by researchers in Sweden. The device has an elastic modulus that is much higher than any previous design and was developed by Leif Asp and his colleagues at Chalmers University of Technology. The battery could be an important step towards lighter and more space-efficient electric vehicles (EVs).

Structural batteries are an emerging technology that store electrical energy while also bearing mechanical loads. They could be especially useful in EVs, where the extra weight and volume associated with batteries could be minimized by incorporating the batteries into a vehicle’s structural components.

In 2018, Asp’s team made a promising step towards practical structural batteries – and was rewarded with a mention in Physics World‘s Top ten breakthroughs of 2018. That year, the team showed how a trade-off could be reached between the mechanical strength of highly ordered carbon fibres and the desired electrochemical properties of less-ordered structures.

Building on this, Asp and colleagues unveiled their first-generation structural battery in 2021. “Here, we used carbon fibres as the negative electrode but a commercial lithium iron phosphate (LFP) on an aluminium foil as a positive electrode, and impregnated it with the resin by hand,” Asp recalls.

Solid–liquid electrolyte

This involved using a biphasic solid–liquid electrolyte, with the liquid phase transporting ions between the electrodes and the solid phase providing mechanical structure through its stiffness. The battery offered a gravimetric energy density of 24 Wh/kg. This is much lower than that of the conventional batteries currently used in EVs – which deliver about 250 Wh/kg.

By 2023, Asp’s team had improved on this approach with a second-generation structural battery that used the same constituents, but employed an improved manufacturing method. This time, the team used an infusion technique to ensure the resin was distributed more evenly throughout the carbon fibre network.

In this incarnation, the team enhanced the battery’s negative electrode by using ultra-thin spread tow carbon fibre, where the fibres are spread into thin sheets. This approach improved both the mechanical strength and the electrical conductivity of the battery. At that stage, however, the mechanical strength of the battery was still limited by the LFP positive electrode.

Now, the team has addressed this challenge by using a carbon fibre-based positive electrode. Asp says, “This is the third generation, and is the first all-fibre structural battery, as has always been desired. Using carbon fibres in both electrodes, we could boost the battery’s elastic modulus, without suffering from reduced energy density.”

To achieve this, the researchers coated the surface of the carbon fibres with a layer of LFP using electrophoretic deposition. This is a technique whereby charged particles suspended in a liquid are deposited onto substrates using electric fields. Additionally, the team used a thin cellulose separator to further enhance the battery’s energy density.

All of these components were then embedded in the battery’s structural electrolyte and cured in resin, using the same infusion technique developed for the second-generation battery.

Stronger and denser

The latest improvements delivered a battery with an energy density of 30 Wh/kg and an elastic modulus greater than 76 GPa when tested in a direction parallel to the carbon fibres. This makes it by far the strongest structural battery reported to date, exceeding the team’s previous record of 25 GPa and making the battery stiffer than aluminium. Alongside its good mechanical performance, the battery also demonstrated nearly 100% efficiency in storing and releasing charge, even after 1000 cycles of charging and discharging.
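Put side by side, the quoted figures give a sense of the trade-off: roughly a threefold jump in stiffness over the team’s previous record, a modulus just above that of aluminium (taking a typical handbook value of about 69 GPa, which is not quoted in the article), and an energy density around one-eighth that of a conventional EV cell.

```python
# Quick comparison of the figures quoted in the article; the aluminium modulus
# is a typical handbook value added for context, not a number from the study.
new_modulus_GPa, previous_record_GPa = 76, 25
new_energy_Wh_kg, ev_cell_Wh_kg = 30, 250
aluminium_modulus_GPa = 69

print(f"stiffness gain over previous record:      {new_modulus_GPa / previous_record_GPa:.1f}x")
print(f"stiffness relative to aluminium:          {new_modulus_GPa / aluminium_modulus_GPa:.2f}x")
print(f"energy density vs a conventional EV cell: {new_energy_Wh_kg / ev_cell_Wh_kg:.0%}")
```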

Building on this success, the team now aims to further enhance the battery’s performance. “We are now working on small modifications to the current design,” Asp says. “We expect to be able to make structural battery cells with an elastic modulus exceeding 100 GPa and an energy density exceeding 50 Wh/kg.”

This ongoing work could pave the way for even stronger and more efficient structural batteries, which could have a transformative impact on the design and performance of EVs in the not-too-distant future. It could also help reduce the weight of laptop computers, aeroplanes and ships.

The research is described in Advanced Materials.

The post Structural battery is world’s strongest, say researchers appeared first on Physics World.

]]>
Research update Carbon fibre-based electrodes are key to success https://physicsworld.com/wp-content/uploads/2024/09/30-9-2024-strong-battery.jpg
Nickel langbeinite might be a new quantum spin liquid candidate https://physicsworld.com/a/nickel-langbeinite-might-be-a-new-quantum-spin-liquid-candidate/ Mon, 30 Sep 2024 13:00:58 +0000 https://physicsworld.com/?p=117000 The phase diagram of this new material contains a "centre of liquidity"

The post Nickel langbeinite might be a new quantum spin liquid candidate appeared first on Physics World.

]]>
A nickel-based material belonging to the langbeinite family could be a new three-dimensional quantum spin liquid candidate, according to new experiments at the ISIS Neutron and Muon Source in the UK. The work, performed by researchers from the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, the Helmholtz-Zentrum Berlin (HZB) in Germany and Okayama University in Japan, is at the fundamental research stage for the moment.

Quantum spin liquids (QSLs) are magnetic materials that cannot arrange their magnetic moments (or spins) into a regular and stable pattern. This “frustrated” behaviour is very different from that of ordinary ferromagnets or antiferromagnets, which have spins that point in the same or alternating directions, respectively. Instead, the spins in QSLs constantly change direction as if they were in a fluid, producing an entangled ensemble of spin-ups and spin-downs even at ultracold temperatures, where the spins of most materials freeze solid.

So far, only a few real-world QSL materials have been observed, mostly in quasi-one-dimensional chain-like magnets and a handful of two-dimensional materials. The new candidate material – K2Ni2(SO4)3 – belongs to the langbeinite family of sulphate minerals, which are rarely found in nature and whose chemical compositions can be changed by replacing one or two of the elements in the compound. K2Ni2(SO4)3 is composed of a three-dimensional network of corner-sharing triangles forming two trillium lattices made from the nickel ions. The magnetic network of langbeinite shares some similarities with the QSL pyrochlore lattice, which researchers have been studying for the last 30 years, but is also quite different in many ways.

A strongly correlated ground state at up to 20 K

The researchers, led by Ivica Živković at the EPFL, fabricated the new material especially for their study. In their previous work, which was practically the first investigation of the magnetic properties of langbeinites, they showed that the compound has a strongly correlated ground state at temperatures of up to at least 20 K.

In their latest work, they used a technique called inelastic neutron scattering, which can measure magnetic excitations, at the ISIS Neutron and Muon Source of the STFC Rutherford Appleton Laboratory to directly observe this correlation.

Theoretical calculations by Okayama University’s Harald Jeschke, which included density functional theory-based energy mappings, and classical Monte Carlo and pseudo-fermion functional renormalization group (PFFRG) calculations, performed by Johannes Reuther at the HZB to model the behaviour of K2Ni2(SO4)3, agreed exceptionally well with the experimental measurements. In particular, the phase diagram of the material revealed a “centre of liquidity” that corresponds to the trillium lattice in which each triangle is turned into a tetrahedron.

Particular set of interactions supports spin-liquid behaviour

The researchers say that they undertook the new study to better understand why the ground state of this material was so dynamic. Once they had performed their theoretical calculations and could model the material’s behaviour, the challenge was to identify the type of geometric frustration that was at play. “K2Ni2(SO4)3 is described by five magnetic interactions (J1, J2, J3, J4 and J5), but the highly frustrated tetra-trillium lattice has only one non-zero J,” explains Živković. “It took us some time to first find this particular set of interactions and then to prove that it supports spin-liquid behaviour.”
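The kind of geometric frustration at stake can be illustrated with a toy version of the classical Monte Carlo calculations mentioned above: antiferromagnetically coupled Heisenberg spins on a single triangle cannot anti-align along all three bonds at once. The sketch below – a generic Metropolis annealing with a single coupling J rather than the five couplings J1–J5, and not the researchers’ own code – settles near an energy of −1.5 J per triangle instead of the −3 J an unfrustrated arrangement would reach.

```python
import numpy as np

rng = np.random.default_rng(0)
J = 1.0  # single antiferromagnetic coupling (illustrative, not J1..J5)
bonds = [(0, 1), (1, 2), (2, 0)]  # the three bonds of one triangle

def energy(spins):
    return J * sum(spins[i] @ spins[j] for i, j in bonds)

def unit(v):
    return v / np.linalg.norm(v)

spins = np.array([unit(rng.normal(size=3)) for _ in range(3)])
n_steps = 20000
for step in range(n_steps):
    T = max(0.01, 1.0 * (1 - step / n_steps))              # crude annealing schedule
    i = rng.integers(3)
    trial = spins.copy()
    trial[i] = unit(spins[i] + 0.3 * rng.normal(size=3))   # small random tilt of one spin
    dE = energy(trial) - energy(spins)
    if dE < 0 or rng.random() < np.exp(-dE / T):           # Metropolis acceptance
        spins = trial

print(f"energy per triangle after annealing: {energy(spins):.2f} J "
      "(frustrated minimum -1.5 J; full anti-alignment would give -3 J)")
```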

“Now that we know where the highly frustrated behaviour comes from, the question is whether some exotic quasiparticles can be associated with this new spin arrangement,” he tells Physics World.

Živković says the research, which is detailed in Nature Communications, remains in the realm of fundamental research for the moment and that it is too early to talk about any real-world applications.

The post Nickel langbeinite might be a new quantum spin liquid candidate appeared first on Physics World.

]]>
Research update The phase diagram of this new material contains a "centre of liquidity" https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_16_9.jpg
Metasurface-enhanced camera performs hyperspectral and polarimetric imaging https://physicsworld.com/a/metasurface-enhanced-camera-performs-hyperspectral-and-polarimetric-imaging/ Mon, 30 Sep 2024 08:30:37 +0000 https://physicsworld.com/?p=117061 Inexpensive metasurface could revolutionize the capabilities of conventional imaging systems

The post Metasurface-enhanced camera performs hyperspectral and polarimetric imaging appeared first on Physics World.

]]>
A team of US-based researchers has developed an inexpensive and ultrathin metasurface that, when paired with a neural network, enables a conventional camera to capture detailed hyperspectral and polarization data from a single snapshot. The innovation could pave the way for significant advances in medical diagnostics, environmental monitoring, remote sensing and even consumer electronics.

The research team, based at Pennsylvania State University, designed a large set of silicon-based meta-atoms with unique spectral and polarization responses. When spatially arranged within small “superpixels”, these meta-atoms encode both spectral and polarization information into distinct patterns that traditional cameras cannot detect. To translate this information back into a format understandable by humans, the team uses machine learning algorithms to recognize these patterns and map them back to the corresponding encoded information.

“A normal camera typically captures only the intensity distribution of light and is insensitive to its spectral and polarization properties. Our metasurface consists of numerous distinct meta-atoms, each designed to exhibit different transmission characteristics for various incoming spectra and polarization states,” explains lead corresponding author Xingjie Ni.

“The metasurface consists of many such superpixels; the patterns generated by these superpixels are then captured by a conventional camera sensor,” he adds. “Essentially, the metasurface translates information that is normally invisible to the camera into a format it can detect. Each superpixel corresponds to one pixel in the final image, allowing us to obtain not only intensity information but also the spectrum and polarization data for each pixel.”
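The decoding step can be thought of as inverting a linear measurement: each meta-atom in a superpixel transmits a different, known mixture of the incoming spectral and polarization channels, so the sensor records a pattern of intensities from which those channels can be recovered. The team uses a trained neural network for this inversion; the sketch below substitutes a least-squares solve on made-up numbers, purely to illustrate the encode-then-decode idea.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 8     # spectral/polarization channels per pixel (illustrative)
n_meta_atoms = 16  # meta-atoms per superpixel (illustrative)

# Each row: one meta-atom's (assumed known) transmission for each channel.
A = rng.uniform(0.0, 1.0, size=(n_meta_atoms, n_channels))

# The unknown scene: how much light arrives in each channel at this pixel.
s_true = rng.uniform(0.0, 1.0, size=n_channels)

# What the ordinary sensor records behind the superpixel (plus a little noise).
intensities = A @ s_true + rng.normal(0.0, 0.01, size=n_meta_atoms)

# Decode: a least-squares inversion stands in for the trained neural network.
s_est, *_ = np.linalg.lstsq(A, intensities, rcond=None)

print("true channels:     ", np.round(s_true, 2))
print("recovered channels:", np.round(s_est, 2))
```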

Widespread applications

In terms of potential applications, Ni pictures the technology enabling the development of miniaturized and portable hyperspectro-polarimetry imaging systems, which he believes could revolutionize the abilities of existing imaging systems. “For instance, we might develop a small add-on for smartphone cameras to enhance their capabilities, allowing users to capture rich spectral and polarization information that was previously inaccessible in such a compact form,” he says.

According to Ni, traditional hyperspectral and polarimetric cameras, which often are bulky and expensive to produce, capture either spectral or polarization data, but not both simultaneously. Such systems are also limited in resolution, not easily integrated into compact devices, and typically require complex alignment and calibration.

In contrast, the team’s metasurface encoder is ultracompact, lightweight and cost-effective. “By integrating it directly onto a conventional camera sensor, we eliminate the need for additional bulky components, reducing the overall size and complexity of the system,” says Ni.

Ni also observes that the metasurface’s ability to encode spectral and polarization information into intensity patterns enables simultaneous hyperspectral and polarization imaging without significant modifications to existing imaging systems. Moreover, the flexibility in designing the meta-atoms enables the team to achieve high-resolution and high-sensitivity detection of spectral and polarization variations.

“This level of customization and integration is difficult to attain with traditional optical systems. Our approach also reduces data redundancy and improves imaging speed, which is crucial for applications in dynamic, high-speed environments,” he says.

Moving forward, Ni confirms that he and his team have applied for a patent to protect the technology and facilitate its commercialization. They are now working on robust integration techniques and exploring ways to further reduce manufacturing costs by utilizing photolithography for large-scale production of the metasurfaces, which should make the technology more accessible for widespread applications.

“In addition, the concept of a light ‘encoder’ is versatile and can be extended to other aspects of light beyond spectral and polarization information,” says Ni.

“Our group is actively developing different metasurface encoders designed to capture the phase and temporal information of the light field,” he tells Physics World. “This could open up new possibilities in fields like optical computing, telecommunications and advanced imaging systems. We are excited about the potential impact of this technology and are committed to advancing it further.”

The results of the research are presented in Science Advances.

The post Metasurface-enhanced camera performs hyperspectral and polarimetric imaging appeared first on Physics World.

]]>
Research update Inexpensive metasurface could revolutionize the capabilities of conventional imaging systems https://physicsworld.com/wp-content/uploads/2024/09/30-09-24-xingjie-ni-Zhiwen-Liu.jpg newsletter1
Physicists reveal the mechanics of tea scum https://physicsworld.com/a/physicists-reveal-the-mechanics-of-tea-scum/ Sat, 28 Sep 2024 09:00:42 +0000 https://physicsworld.com/?p=117065 Researchers have looked at how tea scum breaks apart when stirred

The post Physicists reveal the mechanics of tea scum appeared first on Physics World.

]]>
If you have ever brewed a cup of black tea with hard water you will be familiar with the oily film that can form on the surface of the tea after just a few minutes.

Known as “tea scum” the film consists of calcium carbonate crystals within an organic matrix. Yet it can be easily broken apart with a quick stir of a teaspoon.

Physicists in France and the UK have now examined how this film forms and also what happens when it breaks apart through stirring.

They did so by first sprinkling graphite powder into a water tank. Thanks to capillary forces, the particles gradually clump together to form rafts. The researchers then generated waves in the tank that broke apart the rafts and filmed the process with a camera.

Through these experiments and theoretical modelling, they found that the rafts break up when diagonal cracks form at the raft’s centre. This causes them to fracture first into large chunks, which the waves then eventually erode away.

They found that the polygonal shapes created when the rafts split up are the same as those seen in ice floes.

Despite the visual similarities, however, sea ice and tea scum break up through different physical mechanisms. While ice is brittle, bending and snapping under the weight of crushing waves, the graphite rafts come apart when the viscous stress exerted by the waves overcomes the capillary forces that hold the individual particles together.

Buoyed by their findings, the researchers now plan to use their model to explain the behaviour of other thin biofilms, such as pond scum.

The post Physicists reveal the mechanics of tea scum appeared first on Physics World.

]]>
Blog Researchers have looked at how tea scum breaks apart when stirred https://physicsworld.com/wp-content/uploads/2024/09/tea-scum-27-09-2014.jpg
Positronium gas is laser-cooled to one degree above absolute zero https://physicsworld.com/a/positronium-gas-is-laser-cooled-to-one-degree-above-absolute-zero/ Fri, 27 Sep 2024 13:42:03 +0000 https://physicsworld.com/?p=117071 New cooling technique could help reveal physics beyond the Standard Model

The post Positronium gas is laser-cooled to one degree above absolute zero appeared first on Physics World.

]]>

Researchers at the University of Tokyo have published a paper in the journal Nature that describes a new laser technique that is capable of cooling a gas of positronium atoms to temperatures as low as 1 K. Written by Kosuke Yoshioka and colleagues at the University of Tokyo, the paper follows on from a publication earlier this year from the AEgIS team at CERN, who described how a different laser technique was used to cool positronium to 170 K.

Positronium comprises a single electron bound to its antimatter counterpart, the positron. Although electrons and positrons will ultimately annihilate each other, they can briefly bind together to form an exotic atom. Electrons and positrons are fundamental particles that are nearly point-like, so positronium provides a very simple atomic system for experimental study. Indeed, this simplicity means that precision studies of positronium could reveal new physics beyond the Standard Model.

Quantum electrodynamics

One area of interest is the precise measurement of the energy required to excite positronium from its ground state to its first excited state. Such measurements could enable more rigorous experimental tests of quantum electrodynamics (QED). While QED has been confirmed to extraordinary precision, any tiny deviations could reveal new physics.

An important barrier to making precision measurements is the inherent motion of positronium atoms. “This large randomness of motion in positronium is caused by its short lifetime of 142 ns, combined with its small mass − 1000 times lighter than a hydrogen atom,” Yoshioka explains. “This makes precise studies challenging.”

In 1988, two researchers at Lawrence Livermore National Laboratory in the US published a theoretical exploration of how the challenge could be overcome by using laser cooling to slow positronium atoms to very low speeds. Laser cooling is routinely used to cool conventional atoms and involves having the atoms absorb photons and then re-emitting the photons in random directions.

Chirped pulse train

Building on this early work, Yoshioka’s team has developed a new laser system that is ideal for cooling positronium. Yoshioka explains that in the Tokyo setup, “the laser emits a chirped pulse train, with the frequency increasing at 500 GHz/μs, and lasting 100 ns. Unlike previous demonstrations, our approach is optimized to cool positronium to ultralow velocities.”

In a chirped pulse, the frequency of the laser light increases over the duration of the pulse. This allows the cooling system to respond to the slowing of the atoms by keeping the photon absorption on resonance.
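The numbers give a sense of scale: a chirp of 500 GHz/μs sustained over the 100 ns pulse sweeps the frequency by 50 GHz. Measured against the ultraviolet light used to address positronium – taking the standard 243 nm 1S–2P transition wavelength, a figure from the wider positronium literature rather than this article – that sweep tracks a change in atomic velocity of roughly 12 km/s, as the rough arithmetic below shows.

```python
# Scale of the chirp quoted in the article: 500 GHz/us maintained over 100 ns.
c = 2.998e8                         # speed of light, m/s
chirp_rate = 500e9 / 1e-6           # Hz per second
pulse_duration = 100e-9             # s
total_sweep = chirp_rate * pulse_duration        # total frequency sweep, Hz

# Assumed cooling wavelength: 243 nm, the standard positronium 1S-2P transition
# (not stated in this article).
laser_freq = c / 243e-9                          # Hz

# A Doppler shift equal to the sweep corresponds to this change in atom velocity.
delta_v = c * total_sweep / laser_freq

print(f"total frequency sweep: {total_sweep / 1e9:.0f} GHz")
print(f"velocity change tracked by the chirp: {delta_v / 1e3:.1f} km/s")
```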

Using this technique, Yoshioka’s team successfully cooled positronium atoms to temperatures around 1 K, all within just 100 ns. “This temperature is significantly lower than previously achieved, and simulations suggested that an even lower temperature in the 10 mK regime could be realized via a coherent mechanism,” Yoshioka says. Although the team’s current approach is still some distance from achieving this “recoil limit” temperature, the success of their initial demonstration has given them confidence that further improvements could bring them closer to this goal.

“This breakthrough could potentially lead to stringent tests of particle physics theories and investigations into matter-antimatter asymmetry,” Yoshioka predicts. “That might allow us to uncover major mysteries in physics, such as the reason why antimatter is almost absent in our universe.”

The post Positronium gas is laser-cooled to one degree above absolute zero appeared first on Physics World.

]]>
Research update New cooling technique could help reveal physics beyond the Standard Model https://physicsworld.com/wp-content/uploads/2024/09/29-09-2024-positron-cooling-cropped.jpg newsletter1
Ask me anything: Fatima Gunning – ‘Thinking outside the box is a winner when it comes to problem solving’ https://physicsworld.com/a/ask-me-anything-fatima-gunning-thinking-outside-the-box-is-a-winner-when-it-comes-to-problem-solving/ Fri, 27 Sep 2024 13:00:18 +0000 https://physicsworld.com/?p=116926 Physicist Fatima Gunning explains how mentorship has helped her grow as a researcher and teacher

The post Ask me anything: Fatima Gunning – ‘Thinking outside the box is a winner when it comes to problem solving’ appeared first on Physics World.

]]>
What skills do you use every day in your job?

I am fortunate to have several different roles, and problem-solving is a skill I use in each. As physicists, we’re constantly solving problems in different ways, and, as researchers, we are always trying to question the unknown. To understand the physical world more, we need to be curious and willing to reformulate our questions when they are challenged.

In everyday work such as administration, research, teaching and mentoring, I also find that thinking outside the box is a winner when it comes to problem solving. I try not to just go along with whatever the team or the group is thinking. Instead, I try to consider different points of view. Researchers need to keep asking ‘Why?’ Trying to understand a problem or challenge – listening and considering other views – is essential.

Another critical skill I use is communication. In my work, I need to be able to listen, speak and write a lot. It could be to convey why our research is important and why it should be funded. It could be to craft new policies, mediate conflict or share research findings clearly with colleagues, students, managers and members of the public. So communication is definitely key.

What do you like best and least about your job?

I graduated about 30 years ago and, during that time, the things I like best or least have never stayed the same. At the moment, the best part of my job is working with research students – not just at master’s and PhD level, but final-year undergraduates who might be getting hands-on experience in a lab for the first time. There’s great satisfaction and a sense of “job well done” whenever I demonstrate a concept they’ve known for several years but have never “seen” in action. When they shout “Ah, I get it!”, it’s a great feeling. It’s also really rewarding to receive similar reactions from my education and public engagement work, such as when I visit primary and secondary schools.

At the moment, my least favourite part of my job is the lack of time. I’m not very good at time management, and I find it hard to say “no” to people in need, especially if I know how to help them. It’s difficult to juggle work, mentoring, volunteering activities and home life. During the COVID-19 pandemic, I realized that taking time off to pursue a hobby is vital – not only for my wellbeing but also to give me clarity in decision making.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had realized the importance of mentorship sooner. Throughout my career, I’ve had people who’ve supported me along the way. It might just have been a brief conversation in the corridor, help with a grant application or a serendipitous chat at a conference, although at other times it might have been through in-depth discussion of my work. I only started to regard the help as “mentorship” when I did a leadership course that included mentor/mentee training. Looking back, those encounters really boosted my confidence and helped me make rational choices.

Once you realize what mentors can do, you can plan to speak to people strategically. These conversations can help you make decisions and introduce you to new contacts. They can also help you understand what career paths are available – it’s okay to take your time to explore career options or even to change direction. Students and young professionals should also engage with professional societies, such as the Institute of Physics. There are so many opportunities to meet people in your field and people are always happy to share their experiences. We need to come out of our “shy” shells and talk to people, no matter how senior and famous they are. That’s certainly the message I’d have given myself 30 years ago.

The post Ask me anything: Fatima Gunning – ‘Thinking outside the box is a winner when it comes to problem solving’ appeared first on Physics World.

]]>
Interview Physicist Fatima Gunning explains how mentorship has helped her grow as a researcher and teacher https://physicsworld.com/wp-content/uploads/2024/09/2024-09-20-AMA-Fatima-Gunning.jpg newsletter
Knowledge grows step-by-step despite the exponential growth of papers, finds study https://physicsworld.com/a/knowledge-grows-step-by-step-despite-the-exponential-growth-of-papers-finds-study/ Fri, 27 Sep 2024 12:02:16 +0000 https://physicsworld.com/?p=117039 The authors believe the finding indicates a decline in scientific productivity

The post Knowledge grows step-by-step despite the exponential growth of papers, finds study appeared first on Physics World.

]]>
Scientific knowledge is growing at a linear rate despite an exponential increase in publications. That’s according to a study by physicists in China and the US, who say their finding points to a decline in overall scientific productivity. The study therefore contradicts the notion that productivity and knowledge grow hand in hand – but adds weight to the view that the rate of scientific discovery may be slowing or that “information fatigue” and the vast number of papers can drown out new discoveries.

Defining knowledge is complex, but it can be thought of as a network of interconnected beliefs and information. To measure it, the authors previously created a knowledge quantification index (KQI). This tool uses various scientific impact metrics to examine the network structures created by publications and their citations and quantifies how well publications reduce the uncertainty of the network, and thus knowledge.

The researchers claim the tool’s effectiveness has been validated through multiple approaches, including analysing the impact of work by Nobel laureates.

In the latest study, published on arXiv, the team analysed 213 million scientific papers, published between 1800 and 2020, as well as 7.6 million patents filed between 1976 and 2020. Using the data, they built annual snapshots of citation networks, which they then scrutinised with the KQI to observe changes in knowledge over time.

The researchers – based at Shanghai Jiao Tong University in Shanghai, the University of Minnesota in the US and the Institute of Geographic Sciences and Natural Resources Research in Beijing – found that while the number of publications has been increasing exponentially, knowledge has not.

Instead, their KQI suggests that knowledge has been growing in a linear fashion. Different scientific disciplines do display varying rates of knowledge growth, but they all have the same linear growth pattern. Patent growth was found to be much slower than publication growth, but patents also show the same linear growth in the KQI.

According to the authors, the analysis indicates “no significant change in the rate of human knowledge acquisition”, suggesting that our understanding of the world has been progressing at a steady pace.

If scientific productivity is defined as the number of papers required to grow knowledge, this signals a significant decline in productivity, the authors claim.

The analysis also revealed inflection points associated with new discoveries, major breakthroughs and other important developments, with knowledge growing at different linear rates before and after.

Such inflection points create the illusion of exponential knowledge growth due to the sudden alteration in growth rates, which may, according to the study authors, have led previous studies to conclude that knowledge is growing exponentially.

Research focus

“Research has shown that the disruptiveness of individual publications – a rough indicator of knowledge growth – has been declining over recent decades,” says Xiangyi Meng, a physicist at Northwestern University in the US, who works in network science but was not involved in the research. “This suggests that the rate of knowledge growth must be slower than the exponential rise in the number of publications.”

Meng adds, however, that the linear growth finding is “surprising” and “somewhat pessimistic” – and that further analysis is needed to confirm if knowledge growth is indeed linear or whether it “more likely, follows a near-linear polynomial pattern, considering that human civilization is accelerating on a much larger scale”.

Due to the significant variation in the quality of scientific publications, Meng says that article growth may “not be a reliable denominator for measuring scientific efficiency”. Instead, he suggests that analysing research funding and how it is allocated and evolves over time might be a better focus.

The post Knowledge grows step-by-step despite the exponential growth of papers, finds study appeared first on Physics World.

]]>
News The authors believe the finding indicates a decline in scientific productivity https://physicsworld.com/wp-content/uploads/2024/09/formulae-web-131120717-Shutterstock_agsandrew.jpg newsletter
Genetically engineered bacteria solve computational problems https://physicsworld.com/a/genetically-engineered-bacteria-solve-computational-problems/ Fri, 27 Sep 2024 08:00:38 +0000 https://physicsworld.com/?p=116977 A cell-based biocomputer can identify prime numbers, recognize vowels and answer mathematical questions

The post Genetically engineered bacteria solve computational problems appeared first on Physics World.

]]>
Cell-based biocomputing is a novel technique that uses cellular processes to perform computations. Such micron-scale biocomputers could overcome many of the energy, cost and technological limitations of conventional microprocessor-based computers, but the technology is still very much in its infancy. One of the key challenges is the creation of cell-based systems that can solve complex computational problems.

Now a research team from the Saha Institute of Nuclear Physics in India has used genetically modified bacteria to create a cell-based biocomputer with problem-solving capabilities. The researchers created 14 engineered bacterial cells, each of which functioned as a modular and configurable system. They demonstrated that by mixing and matching appropriate modules, the resulting multicellular system could solve nine yes/no computational decision problems and one optimization problem.

The cellular system, described in Nature Chemical Biology, can identify prime numbers, check whether a given letter is a vowel, and even determine the maximum number of pizza or pie slices obtained from a specific number of straight cuts. Here, senior author Sangram Bagh explains the study’s aims and findings.

How does cell-based computing work?

Living cells use computation to carry out biological tasks. For instance, our brain’s neurons communicate and compute to make decisions; and in the event of an external attack, our immune cells collaborate, compute and make judgements. The development of synthetic biology opens up new avenues for engineering live cells to carry out human-designed computation.

The fusion of biology and computer science has resulted in the development of living cell-based biocomputers to solve computational problems. Here, living cells are engineered for use as circuits and components to build biocomputers. Lately, researchers have been manipulating living cells to find solutions for maze and graph-colouring puzzles.

Why did you employ bacteria to perform the computations?

Bacteria are single-cell organisms, 2–5 µm in size, with fast replication times (about 30 min). They can survive in many conditions and require minimal energy, so they provide an ideal chassis for building micron-scale computer technology. We chose to use Escherichia coli, as it has been studied in detail and is easy to manipulate, making it a logical choice to build a biocomputer.

How did you engineer the bacteria to solve problems?

We built synthetic gene regulatory networks in bacteria in such a way that each bacterium worked as an artificial neuro-synapse. In this way, 14 genetically engineered bacteria were created, each acting like an artificial neuron, which we named “bactoneurons”. When these bactoneurons are mixed in a liquid culture in a test tube, they create an artificial neural network that can solve computational problems. The “LEGO-like” system incorporates 14 engineered cells (the “LEGO blocks”) that you can mix and match to build one of 12 specific problem solvers on demand.

How do the bacteria report their answers?

We pose problems to the bacteria in a chemical space using a binary system. The bacteria were questioned by adding (“one”) or not adding (“zero”) four specific chemicals. The bacterial artificial neural network analysed the data and responded by producing different fluorescent proteins. For example, when we asked if three is a prime number, the bacteria glowed green to print “yes”. Similarly, when we asked if four was a prime number, the bacteria glowed red and said “no”.
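That division of labour – binary chemical inputs, engineered cells acting as artificial neurons, fluorescent colours as outputs – maps neatly onto a small piece of conventional code. The sketch below is a purely software analogue of a 4-bit “is it prime?” classifier, with the input patterns each unit recognizes chosen by hand; it illustrates the scheme, not the genetic circuit itself.

```python
# Software caricature of the bactoneuron scheme: four binary "chemical" inputs,
# cells that respond to particular input patterns, and a colour as the answer.
PRIMES_UP_TO_15 = {2, 3, 5, 7, 11, 13}

def bactoneuron(responds_to, colour):
    """One 'cell': glows with the given colour for the input patterns it recognizes."""
    def respond(inputs):
        return colour if inputs in responds_to else None
    return respond

def as_chemicals(n):
    """Encode a number 0-15 as four add/omit (1/0) chemical inputs."""
    return tuple(int(b) for b in f"{n:04b}")

green = bactoneuron({as_chemicals(n) for n in PRIMES_UP_TO_15}, "green")  # "yes"
red = bactoneuron({as_chemicals(n) for n in range(16)
                   if n not in PRIMES_UP_TO_15}, "red")                    # "no"

def ask_is_prime(n):
    answers = {cell(as_chemicals(n)) for cell in (green, red)} - {None}
    return answers.pop()

print(3, "->", ask_is_prime(3))   # glows green: "yes"
print(4, "->", ask_is_prime(4))   # glows red: "no"
```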

How could such a biocomputer be used in real-world applications?

Bacteria are tiny organisms, about one-twentieth the diameter of a human hair. It is not possible to make a silicon computer so small. Making such a small computer with bacteria will open a new horizon in microscale computer technology. Its use will extend from new medical technology and material technology to space technology.

For example, one may imagine a set of engineered bacteria or other cells within the human body taking decisions and acting upon a particular disease state, based on multiple biochemical and physiological cues.

Scientists have proposed using synthetically engineered organisms to help in situ resource utilization to build a human research base on Mars. However, it may not be possible to instruct each of the organisms remotely to perform a specific task based on local conditions. Now, one can imagine the tiny engineered organisms working as a biocomputer, interacting with each other, and taking autonomous decisions on action without any human intervention.

The importance of this work in basic science is also immense. We know that recognizing prime numbers or vowels can only be done by humans or computers – but now genetically engineered bacteria are doing the same. Such observations raise new questions about the meaning of “intelligence” and offer some insight on the biochemical nature and the origin of intelligence.

What are you planning to do next?

We would like to build more complex biocomputers to perform more complex computation tasks with multitasking capability. The ultimate goal is to build artificially intelligent bacteria.

The post Genetically engineered bacteria solve computational problems appeared first on Physics World.

]]>
Research update A cell-based biocomputer can identify prime numbers, recognize vowels and answer mathematical questions https://physicsworld.com/wp-content/uploads/2024/09/27-09-24-Bacterial-computation-SB-Graphical_Abstract.jpg newsletter1
Field work – the physics of sheep, from phase transitions to collective motion https://physicsworld.com/a/field-work-the-physics-of-sheep-from-phase-transitions-to-collective-motion/ Thu, 26 Sep 2024 12:23:48 +0000 https://physicsworld.com/?p=116797 Physics sheds a new insight on the behaviour of sheep flocks, helping with new tips on shepherding

The post Field work – the physics of sheep, from phase transitions to collective motion appeared first on Physics World.

]]>
You’re probably familiar with the old joke about a physicist who, when asked to use science to help a dairy farmer, begins by approximating a spherical cow in a vacuum. But maybe it’s time to challenge this satire on how physics-based models can absurdly over-simplify systems as complex as farm animals. Sure, if you want to understand how a cow or a sheep works, approximating those creatures as spheres might not be such a good idea. But if you want to understand a herd or a flock, you can learn a lot by reducing individual animals to mere particles – if not spheres, then at least ovoids (or bovoids; see what I did there?).

By taking that approach, researchers over the past few years have not only shed new insight on the behaviour of sheep flocks but also begun to explain how shepherds do what they do – and might even be able to offer them new tips about controlling their flocks. Welcome to the emerging science of sheep physics.

“Boids” of a feather

Physics-based models of the group dynamics of living organisms go back a long way. In 1987 Craig Reynolds, a software engineer with the California-based computer company Symbolics, wrote an algorithm to try to mimic the flocking of birds. By watching blackbirds flock in a local cemetery, Reynolds intuited that each bird responds to the motions of its immediate neighbours according to some simple rules.

His simulated birds, which he called “boids” (a fusion of bird and droid), would each match their speed and orientation to those of others nearby, and would avoid collisions as if there was a repulsive force between them. Those rules alone were enough to generate group movements resembling the striking flocks or “murmurations” of real-life blackbirds and starlings, that swoop and fly together in seemingly perfect unison. Reynolds’ algorithms were adapted for film animations such as the herd of wildebeest in The Lion King.
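Reynolds’ three rules translate almost line for line into code. The sketch below is a bare-bones two-dimensional version – alignment, cohesion and separation only, with invented weighting constants and simple sequential updates – rather than his original implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 60
pos = rng.uniform(0, 50, (N, 2))      # starting positions
vel = rng.normal(0, 1, (N, 2))        # starting velocities

for _ in range(200):
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < 8)                        # nearby flockmates
        if nbr.any():
            align = vel[nbr].mean(axis=0) - vel[i]     # match neighbours' velocity
            cohere = pos[nbr].mean(axis=0) - pos[i]    # drift towards their centre
            too_close = nbr & (d < 2)
            separate = pos[i] - pos[too_close].mean(axis=0) if too_close.any() else 0.0
            vel[i] += 0.05 * align + 0.01 * cohere + 0.05 * separate
    speed = np.linalg.norm(vel, axis=1, keepdims=True) + 1e-12
    vel = vel / speed * np.clip(speed, 0.5, 2.0)       # keep speeds in a sane band
    pos += vel

headings = np.arctan2(vel[:, 1], vel[:, 0])
order = np.hypot(np.cos(headings).mean(), np.sin(headings).mean())
print(f"alignment order parameter after 200 steps: {order:.2f} (1 = fully aligned)")
```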

Murmuration of starlings

Over the next two or three decades, these models were modified and extended by other researchers, including the future Nobel-prize-winning physicist Giorgio Parisi, to study collective motions of organisms ranging from birds to schooling fish and swarming bacteria. Those studies fed into the emerging science of active matter, in which particles – which could be simple colloids – move under their own propulsion. In the late 1990s physicist Tamás Vicsek and his student Andras Czirók, at Eötvös University in Budapest, revealed analogies between the collective movements of such self-propelled particles and the reorientation of magnetic spins in regular arrays, which also “feel” and respond to what their neighbours are doing (Phys. Rev. Lett. 82 209; J. Phys. A: Math. Gen. 30 1375).

In particular, the group motion can undergo abrupt phase transitions – global shifts in the pattern of behaviour, analogous to how matter can switch to a bulk magnetized state – as the factors governing individual motion, such as average velocity and strength of interactions, are varied. In this way, the collective movements can be summarized in phase diagrams, like those depicting the gaseous, liquid and solid states of matter as variables such as temperature and density are changed.
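A minimal Vicsek-style simulation makes the magnetic analogy concrete: each particle adopts the average heading of its neighbours plus some noise, and turning the noise up takes the group from collective alignment to disorder, much as temperature does for spins. The parameter values below are arbitrary illustrative choices.

```python
import numpy as np

def vicsek_order(noise, N=300, L=10.0, r=1.0, v0=0.3, steps=300, seed=0):
    """Return the polar order parameter (0 = disordered, 1 = fully aligned)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, L, (N, 2))
    theta = rng.uniform(-np.pi, np.pi, N)
    for _ in range(steps):
        new_theta = np.empty(N)
        for i in range(N):
            d = pos - pos[i]
            d -= L * np.round(d / L)                 # periodic boundary conditions
            nbr = (d ** 2).sum(axis=1) < r ** 2      # includes the particle itself
            mean_dir = np.arctan2(np.sin(theta[nbr]).mean(),
                                  np.cos(theta[nbr]).mean())
            new_theta[i] = mean_dir + rng.uniform(-noise / 2, noise / 2)
        theta = new_theta
        pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % L
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

for eta in (0.5, 2.0, 5.0):
    print(f"noise {eta}: order parameter {vicsek_order(eta):.2f}")
```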

Models like these have now been used to explore the dynamics not just of animals and bacteria, but also of road traffic and human pedestrians. They can predict the kinds of complex behaviours seen in the real world, such as stop-and-start waves in traffic congestion or the switch to a crowd panic state. And yet the way they represent the individual agents seems – for humans anyway – almost insultingly simple, as if we are nothing but featureless particles propelled by blind forces.

Follow the leader

If these models work for humans, you might imagine they’d be fine for sheep too – which, let’s face it, seem behaviourally and psychologically rather unsophisticated compared with us. But if that’s how you think of sheep, you’ve probably never had to shepherd them. Sheep are decidedly idiosyncratic particles.

“Why should birds, fish or sheep behave like magnetic spins?” asks Fernando Peruani of the University of Cergy Paris. “As physicists we may want that, but animals may have a different opinion.” To understand how flocks of sheep actually behave, Peruani and his colleagues first looked at the available data, and then tried to work out how to describe and explain the behaviours that they saw.

1 Are sheep like magnetic spins?

Sheep walking in a line

In a magnetic material, magnetic spins interact to promote their mutual alignment (or anti-alignment, depending on the material). In the model of collective sheep motion devised by Fernando Peruani from the University of Cergy Paris, and colleagues, each sheep is similarly assumed to move in a direction determined by interactions with all the others that depend on their distance apart and their relative angles of orientation. The model predicts the sheep will fall into loose alignment and move in a line, following a leader, that takes a more or less sinuous path over the terrain.

For one thing, says Peruani, “real flocks are not continuously on the move. Animals have to eat, rest, find new feeding areas and so on”. No existing model of collective animal motion could accommodate such intermittent switching between stationary and mobile phases. What’s more, bird murmurations don’t seem to involve any specific individual guiding the collective behaviour, but some animal groups do exhibit a hierarchy of roles.

Elephants, zebras and forest ponies, for example, tend to move in lines such that the animal at the front has a special status. An advantage of such hierarchies is that the groups can respond quickly to decisions made by the leaders, rather than having to come to some consensus within the whole group. On the other hand, it means the group is acting on less information than would be available by pooling that of everyone.

To develop their model of collective sheep behaviour, Peruani and colleagues took a minimalistic approach of watching tiny groups of Merino Arles sheep that consisted of “flocks” of just two to four individuals who were free to move around a large field. They found that the groups spend most of their time grazing but would every so often wander off collectively in a line, following the individual at the front (Nat. Phys. 18 1494).

They also saw that any member of the group is equally likely to take the lead in each of these excursions, selected seemingly at random. In other words, as George Orwell famously suggested for certain pigs, all sheep are equal but some are (temporarily) more equal than others. Peruani and colleagues suspected that this switching of leaders allows some information pooling without forcing the group to be constantly negotiating a decision.

The researchers then devised a simple model of the process in which each individual has some probability of switching from the grazing to the moving state and vice versa – rather like the transition probability for emission of a photon from an excited atom. The empirical data suggested that this probability depends on the group size, with the likelihood getting smaller as the group gets bigger. Once an individual sheep has triggered the onset of the “walking phase”, the others follow to maintain group cohesion.

In their model, each individual feels an attractive, cohesive force towards the others and, when moving, tends to align its orientation and velocity with those of its neighbour(s). Peruani and colleagues showed that the model produces episodic switching between a clustered “grazing mode” and collective motion in a line (figure 1). They could also quantify information exchange between the simulated sheep, and found that probabilistic swapping of the leader role does indeed enable the information available to each individual to be pooled efficiently between all.
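The intermittent, leader-swapping behaviour can be caricatured in a few lines of code: a group switches stochastically between a grazing state and a walking state, larger groups set off less often, and the sheep that triggers each walk is chosen at random. The rate constants below are invented for illustration, and the attraction and alignment forces of the full model are left out.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(group_size, steps=200_000, dt=0.01):
    """Two-state (grazing/walking) cartoon with a randomly chosen leader per walk."""
    trigger_rate = 0.5 / group_size   # assumed: larger groups start walks less often
    stop_rate = 1.0                   # walking bouts end at this rate
    walking = False
    walks_led = np.zeros(group_size, dtype=int)
    for _ in range(steps):
        if not walking and rng.random() < trigger_rate * dt:
            walking = True
            walks_led[rng.integers(group_size)] += 1   # any sheep may take the lead
        elif walking and rng.random() < stop_rate * dt:
            walking = False
    return walks_led

for n in (2, 4):
    print(f"group of {n}: walks led by each sheep -> {simulate(n).tolist()}")
```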

Although the group size here was tiny, the team has video footage of large flocks of sheep adopting the same follow-my-leader formation, albeit in multiple lines at once. They are now conducting a range of experiments to get a better understanding of the behavioural rules – for example, using sirens to look at how sheep respond to external stimuli and studying herds composed of sheep of different ages (and thus proclivities) to probe the effects of variability.

The team is also investigating whether individual sheep trained to move between two points can “seed” that behaviour in an entire flock. But such experiments aren’t easy, Peruani says, because it’s hard to recruit shepherds. In Europe, they tend to live in isolation on low wages, and so aren’t the most forthcoming of scientific collaborators.

The good shepherd

Of course, shepherds don’t traditionally rely on trained sheep to move their flocks. Instead, they use sheepdogs that are trained for many months before being put to work in the field. If you’ve ever watched a sheepdog in action, it’s obvious they do an amazingly complex job – and surely one that physics can’t say much about? Yet mechanical engineer Lakshminarayanan Mahadevan at Harvard University in the US says that the sheepdog’s task is basically an exercise in control theory: finding a trajectory that will guide the flock to a particular destination efficiently and accurately.

Mahadevan and colleagues found that even this phenomenon can be described using a relatively simple model (arXiv:2211.04352). From watching YouTube videos of sheepdogs in action, he figured there were two key factors governing the response of the sheep. “Sheep like to stay together,” he says – the flock has cohesion. And second, sheep don’t like sheepdogs – there is repulsion between sheep and dog. “Is that enough – cohesion plus repulsion?” Mahadevan wondered.

Sheepdogs and a flock of sheep

The researchers wrote down differential equations to describe the animals’ trajectories and then applied standard optimization techniques to minimize a quantity that captures the desired outcome: moving the flock to a specific location without losing any sheep. Despite the apparent complexity of the dynamical problem, they found it all boiled down to a simple picture. It turns out there are two key parameters that determine the best herding strategy: the size of the flock and the speed with which it moves between initial and final positions.
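Those two ingredients – sheep–sheep cohesion and sheep–dog repulsion – are already enough to build a toy herding run in which the dog sweeps from side to side behind the flock (the “droving” strategy described below) while the flock drifts towards a target. The sketch here is a generic agent-based cartoon with invented force constants, not the optimal-control calculation in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 25
sheep = rng.normal(0, 2, (N, 2)) + np.array([0.0, 20.0])   # flock starts to the north
target = np.array([0.0, -20.0])                            # pen lies to the south
dog = np.array([0.0, 30.0])

step = 0
while np.linalg.norm(sheep.mean(axis=0) - target) > 2 and step < 5000:
    centre = sheep.mean(axis=0)
    cohesion = 0.02 * (centre - sheep)                  # sheep like to stay together
    away = sheep - dog
    dist = np.linalg.norm(away, axis=1, keepdims=True)
    repulsion = 3.0 * away / dist ** 2                  # sheep do not like the dog
    sheep += cohesion + repulsion + rng.normal(0, 0.02, (N, 2))

    # droving: the dog holds a position behind the flock (relative to the target)
    # and sweeps from side to side to keep the group channelled forward
    to_flock = centre - target
    behind = centre + 6 * to_flock / np.linalg.norm(to_flock)
    sweep = np.array([8 * np.sin(0.05 * step), 0.0])
    dog += 0.1 * (behind + sweep - dog)
    step += 1

final = np.linalg.norm(sheep.mean(axis=0) - target)
print(f"after {step} steps the flock centre is {final:.1f} units from the target")
```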

Four possible outcomes emerged naturally from their model. One is simply that the herding fails: nothing a dog can do will get the flock coherently from point A to point B. This might be the case, for example, if the flock is just too big, or the dog too slow. But there are three shepherding strategies that do work.

One involves the dog continually running from one side of the flock to the other, channelling the sheep in the desired direction. This is the method known to shepherds as “droving”. If, however, the herd is relatively small and the dog is fast, there can be a better technique that the team called “mustering”. Here the dog propels the flock forward by running in corkscrews around it. In this case, the flock keeps changing its overall shape like a wobbly ellipse, first elongating and then contracting around the two orthogonal axes, as if breathing. Both strategies are observed in the field (figure 2).

But the final strategy the model generated, dubbed “driving”, is not a tactic that sheepdogs have been observed to use. In this case, if the flock is large enough, the dog can run into the middle of it and the sheep retreat but don’t scatter. Then the dog can push the flock forward from within, like a driver in a car. This approach will only work if the flock is very strongly cohesive, and it’s not clear that real flocks ever have such pronounced “stickiness”.

2 Shepherding strategies: the three types of herding

Diagram of herding patterns

In the model of interactions between a sheepdog and its flock developed by Lakshminarayanan Mahadevan at Harvard University and coworkers, optimizing a mathematical function that describes how well the dog transports the flock results in three possible shepherding strategies, depending on the precise parameters in the model. In “droving”, the dog runs from side to side to steer the flock towards the target location. In “mustering”, the dog takes a helix-like trajectory, repeatedly encircling the flock. And in “driving”, the dog steers the flock from “inside” by the aversion – modelled as a repulsive force – of the sheep for the dog.

These three regimes, derived from agent-based models (ABM) and models based on ordinary differential equations (ODE), are plotted above. In the left column, the mean path of the flock (blue) over time is shown as it is driven by a shepherd on a separate path (red) towards a target (green square). Columns 2-4 show snapshots from column 1, with trajectories indicated in black, where fading indicates history. From left to right, snapshots represent the flock at later time points.

These herding scenarios can be plotted on a phase diagram, like the temperature–density diagram for states of matter, but with flock size and speed as the two axes. But do sheepdogs, or their trainers, have an implicit awareness of this phase diagram, even if they did not think of it in those terms? Mahadevan suspects that herding techniques are in fact developed by trial and error – if one strategy doesn’t work, they will try another.

Mahadevan admits that he and his colleagues have neglected some potentially important aspects of the problem. In particular, they assumed that the animals can see in every direction around them. Sheep do have a wide field of vision because, like most prey-type animals, they have eyes on the sides of their heads. But dogs, like most predators, have eyes at the front and therefore a more limited field of view. Mahadevan suspects that incorporating these features of the agents’ vision will shift the phase boundaries, but not alter the phase diagram qualitatively.

Another confounding factor is that sheep might alter their behaviour in different circumstances. Chemical engineer Tuhin Chakrabortty of the Georgia Institute of Technology in Atlanta, together with biomolecular engineer Saad Bhamla, have also used physics-based modelling to look at the shepherding problem. They say that sheep behave differently on their own from how they do in a flock. A lone sheep flees from a dog, but in a flock they employ a more “selfish” strategy, with those on the periphery trying to shove their way inside to be sheltered by the others.

3 Heavy and light: how flocks interact with sheepdogs

How flocks interact with sheepdogs

In the agent-based model of the interaction between sheep and a sheepdog devised by Tuhin Chakrabortty and Saad Bhamla, sheep may respond to a nearby dog by reorienting themselves to face away from or at right angles to it. Different sheep might have different tendencies for this – “heavy” sheep ignore the dog unless they are facing towards it. The task of the dog could be to align the flock facing away from it (herding) or to divide the flock into differently aligned subgroups (shedding).

What’s more, says Chakrabortty, contrary to the stereotype, sheep can show considerable individual variation in how they respond to a dog. Essentially, the sheep have personalities. Some seem terrified and easily panicked by a dog while others might ignore – or even confront – it. Shepherds traditionally call the former sort of sheep “light”, and the latter “heavy” (figure 3).

In the agent-based model used by Chakrabortty and Bhamla, the outcomes differ depending on whether a herd is predominantly light or heavy (arXiv:2406.06912). When a simulated herd is subjected to the “pressure” of a shepherding dog, it might do one of three things: flee in a disorganized way, shedding panicked individuals; flock in a cohesive group; or just carry on grazing while reorienting to face at right angles to the dog, as if turning away from the threat.

Again these behaviours can be summarized in a 2D phase diagram, with axes representing the size of the herd and what the two researchers call the “specificity of the sheepdog stimulus” (figure 4). This factor depends on the ratio of the controlling stimulus (the strength of sheep–dog repulsion) and random noisiness in the sheep’s response. Chakrabortty and Bhamla say that sheepdog trials are conducted for herd sizes where all three possible outcomes are well represented, creating an exacting test of the dog’s ability to get the herd to do its bidding.

4 Fleeing, flocking and grazing: types of sheep movement

Graph showing types of sheep movement

The outcomes of the shepherding model of Chakrabortty and Bhamla can be summarized in a phase diagram showing the different behavioural options – uncoordinated fleeing, controlled flocking, or indifferent grazing – as a function of two model parameters: the size of the flock Ns and the “specificity of stimulus”, which measures how strongly the sheep respond to the dog relative to their inherent randomness of action. Sheepdog trials are typically conducted for a flock size that allows for all three phases.

Into the wild

One of the key differences between the movements of sheep and those of fish or birds is that sheep are constrained to two dimensions. As condensed-matter physicists have come to recognize, the dimensionality of a problem can make a big difference to phase behaviour. Mahadevan says that dolphins make use of dimensionality when they are trying to shepherd schools of fish to feed on. To make them easier to catch, dolphins will often push the fish into shallow water first, converting a 3D problem to a 2D problem. Herders like sheepdogs might also exploit confinement effects to their benefit, for example using fences or topographic features to help contain the flock and simplify the control problem. Researchers haven’t yet explored these issues in their models.

Dolphins using herding tactics to drive a school of fish

As the case of dolphins shows, herding is a challenge faced by many predators. Mahadevan says he has witnessed such behaviour himself in the wild while observing a pack of wild dogs trying to corral wildebeest. The problem is made more complicated if the prey themselves can deploy group strategies to confound their predator – for example, by breaking the group apart to create confusion or indecision in the attacker, a behaviour seemingly adopted by fish. Then the situation becomes game-theoretic, each side trying to second-guess and outwit the other.

Sheep seem capable of such smart and adaptive responses. Bhamla says they sometimes appear to identify the strategy that a farmer has signalled to the dog and adopt the appropriate behaviour even without much input from the dog itself. And sometimes splitting a flock can be part of the shepherding plan: this is actually a task dogs are set in some sheepdog competitions, and demands considerable skill. Because sheepdogs seem to have an instinct to keep the flock together, they can struggle to overcome that urge and have to be highly trained to split the group intentionally.

Iain Couzin of the Max Planck Institute of Animal Behavior in Konstanz, Germany, who has worked extensively on agent-based models of collective animal movement, cautions that even if physical models like these seem to reproduce some of the phenomena seen in real life, that doesn’t mean the model’s rules reflect what truly governs the animals’ behaviour. It’s tempting, he says, to get “allured by the beauty of statistical physics” at the expense of the biology. All the same, he adds that whether or not such models truly capture what is going on in the field, they might offer valuable lessons for how to control and guide collectives of agent-like entities.

In particular, the studies of shepherding might reveal strategies that one could program into artificial shepherding agents such as robots or drones. Bhamla and Chakrabortty have in fact suggested how one such swarm control algorithm might be implemented. But it could be harder than it sounds. “Dogs are extremely good at inferring and predicting the idiosyncrasies of individual sheep and of sheep–sheep interactions,” says Chakrabortty. This allows them to adapt their strategy on the fly. “Farmers laugh at the idea of drones or robots,” says Bhamla. “They don’t think the technology is ready yet. The dogs benefit from centuries of directed evolution and training.”

Perhaps the findings could be valuable for another kind of animal herding too. “Maybe this work could be applied to herding kids at a daycare,” Bhamla jokes. “One of us has small kids and recognizes the challenges of herding small toddlers from one room to another, especially at a party. Perhaps there is a lesson here.” As anyone who has ever tried to organize groups of small children might say: good luck with that.

The post Field work – the physics of sheep, from phase transitions to collective motion appeared first on Physics World.

]]>
Feature Physics sheds new light on the behaviour of sheep flocks, offering new tips on shepherding https://physicsworld.com/wp-content/uploads/2024/10/2024-09-Ball-sheep-flock-aerial-FRONTIS-colourKR.jpg newsletter
New on-chip laser fills long sought-after green gap https://physicsworld.com/a/new-on-chip-laser-fills-long-sought-after-green-gap/ Thu, 26 Sep 2024 08:30:13 +0000 https://physicsworld.com/?p=116980 Devices will be important for applications in quantum sensing and computing, biology, underwater communications and display technologies

The post New on-chip laser fills long sought-after green gap appeared first on Physics World.

]]>
A series of visible-light colours generated by a microring resonator

On-chip lasers that emit green light are notoriously difficult to make. But researchers at the National Institute of Standards and Technology (NIST) and the NIST/University of Maryland Joint Quantum Institute may now have found a way to do just this, using a modified optical component known as a ring-shaped microresonator. Green lasers are important for applications including quantum sensing and computing, medicine and underwater communications.

In the new work, a research team led by Kartik Srinivasan modified a silicon nitride microresonator such that it was able to convert infrared laser light into yellow and green light. The researchers had already succeeded in using this structure to convert infrared laser light into red, orange and yellow wavelengths, as well as a wavelength of 560 nm, which lies at the edge between yellow and green light. Previously, however, they were not able to produce the full range of yellow and green colours to fill the much sought-after “green gap”.

More than 150 distinct green-gap wavelengths

To overcome this problem, the researchers made two modifications to their resonator. The first was to thicken it by 100 nm so that it could more easily generate green light with wavelengths down to 532 nm. Being able to produce such a short wavelength means that the entire green wavelength range is now covered, they say. In parallel, they modified the cladding surrounding the microresonator by etching away part of the silicon dioxide layer that it was fabricated on. This alteration made the output colours less sensitive to the dimension of the microring.

These changes meant that the team could produce more than 150 distinct green-gap wavelengths and could fine tune these too. “Previously, we could make big changes – red to orange to yellow to green – in the laser colours we could generate with OPO [optical parametric oscillation], but it was hard to make small adjustments within each of these colour bands,” says Srinivasan.

Like the previous microresonator, the new device works thanks to a process known as nonlinear wave mixing. Here, infrared light that is pumped into the ring-shaped structure is confined and guided within it. “This infrared light circulates around the ring hundreds of times due to its low loss, resulting in a build-up of intensity,” explains Srinivasan. “This high intensity enables the conversion of pump light to other wavelengths.”

Third-order optical parametric oscillation

“The purpose of the microring is to enable relatively modest, input continuous-wave laser light to build up in intensity to the point that nonlinear optical effects, which are often thought of as weak, become very significant,” says team member Xiyuan Lu.

The specific nonlinear optical process the researchers use is third-order optical parametric oscillation. “This works by taking light at a pump frequency νp and creating one beam of light that’s higher in frequency (called the signal, at a frequency νs) and one beam that’s lower in frequency (called the idler, at a frequency νi),” explains first author Yi Sun. “There is a basic energy conservation requirement that 2νp = νs + νi.”

Simply put, this means that for every two pump photons that are used to excite the system, one signal photon and one idler photon are created, he tells Physics World.
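As a quick numerical illustration of that bookkeeping, the same relation can be written in terms of wavelengths as 2/λp = 1/λs + 1/λi. The short sketch below uses assumed example values (a near-infrared pump at 780 nm and a green signal at 532 nm; these are not figures quoted by the NIST team) to show where the matching idler ends up:

# Third-order OPO energy conservation: 2/lambda_pump = 1/lambda_signal + 1/lambda_idler
lambda_pump = 780e-9     # assumed near-infrared pump wavelength (m)
lambda_signal = 532e-9   # green signal wavelength (m), the short end of the gap

inv_idler = 2.0 / lambda_pump - 1.0 / lambda_signal
lambda_idler = 1.0 / inv_idler

print(f"Idler wavelength: {lambda_idler * 1e9:.0f} nm")  # roughly 1460 nm, deeper in the infrared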

Towards higher power and a broader range of colours

The NIST/University of Maryland team has been working on optical parametric oscillation as a way to convert near-infrared laser light to visible laser light for several years now. One of their main objectives was to fill the green gap in laser technology and fabricate frequency-converted lasers for quantum, biology and display applications.

“Some of the major applications we are ultimately targeting are high-end lasers, continuous-wave single-mode lasers covering the green gap or even a wider range of frequencies,” reveals team member Jordan Stone. “Applications include lasers for quantum optics, biology and spectroscopy, and perhaps laser/hologram display technologies.”

For now, the researchers are focusing on achieving higher power and a broader range of colours (perhaps even down to blue wavelengths). They also hope to make devices that can be better controlled and tuned. “We are also interested in laser injection locking with frequency-converted lasers, or using other techniques to further enhance the coherence of our lasers,” says Stone.

The work is detailed in Light: Science & Applications.

The post New on-chip laser fills long sought-after green gap appeared first on Physics World.

]]>
Research update Devices will be important for applications in quantum sensing and computing, biology, underwater communications and display technologies https://physicsworld.com/wp-content/uploads/2024/09/27-09-24-Color-Series-NIST.jpg
Researchers exploit quantum entanglement to create hidden images https://physicsworld.com/a/researchers-exploit-quantum-entanglement-to-create-hidden-images/ Wed, 25 Sep 2024 13:00:43 +0000 https://physicsworld.com/?p=116973 Encoding an image into the quantum correlations of photon pairs makes it invisible to conventional imaging techniques

The post Researchers exploit quantum entanglement to create hidden images appeared first on Physics World.

]]>
Encoding images in photon correlations

Ever since the double-slit experiment was performed, physicists have known that light can be observed as either a wave or a stream of particles. For everyday imaging applications, it is the wave-like aspect of light that manifests, with receptors (natural or artificial) capturing the information contained within the light waves to “see” the scene being observed.

Now, Chloé Vernière and Hugo Defienne from the Paris Institute of Nanoscience at Sorbonne University have used quantum correlations to encode an image into light such that it only becomes visible when particles of light (photons) are observed by a single-photon sensitive camera – otherwise the image is hidden from view.

Encoding information in quantum correlations

In a study described in Physical Review Letters, Vernière and Defienne managed to hide an image of a cat from conventional light measurement devices by encoding the information in the quantum correlations of entangled photon pairs. To achieve this, they shaped spatial correlations between entangled photons – in the form of arbitrary amplitude and phase objects – to encode image information within the pair correlation. Once the information is encoded into the photon pairs, it is undetectable by conventional measurements. Instead, a single-photon detector known as an electron-multiplying charge-coupled device (EMCCD) camera is needed to “show” the hidden image.

“Quantum entanglement is a fascinating phenomenon, central to many quantum applications and a driving concept behind our research,” says Defienne. “In our previous work, we demonstrated that, in certain cases, quantum correlations between photons are more resistant to external disturbances, such as noise or optical scattering, than classical light. Inspired by this, we wondered how this resilience could be leveraged for imaging. We needed to use these correlations as a support – a ‘canvas’ – to imprint our image, which is exactly what we’ve achieved in this work.”

How to hide an image

The researchers used a technique known as spontaneous parametric down-conversion (SPDC), which is used in many quantum optics experiments, to generate the entangled photons. SPDC is a nonlinear process that uses a nonlinear crystal (NLC) to split a single high-energy photon from a pump beam into two lower energy entangled photons. The properties of the lower energy photons are governed by the geometry and type of the NLC and the characteristics of the pump beam.

In this study, the researchers used a continuous-wave laser that produced a collimated beam of horizontally polarized 405 nm light to illuminate a standing cat-shaped mask, which was then Fourier imaged onto an NLC using a lens. The spatially entangled near-infrared (810 nm) photons, produced after passing through the NLC, were then detected using another lens and the EMCCD.
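As a quick consistency check on those numbers, degenerate SPDC – in which the pump photon’s energy is shared equally between the two entangled photons, the situation implied by the wavelengths above – satisfies:

\omega_{\text{pump}} = \omega_{\text{signal}} + \omega_{\text{idler}}
\quad\Longrightarrow\quad
\frac{1}{405\ \text{nm}} = \frac{1}{810\ \text{nm}} + \frac{1}{810\ \text{nm}}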

This SPDC process produces an encoded image of a cat. This image does not appear on regular camera film and only becomes visible when the photons are counted one by one using the EMCCD. This allowed the image of the cat to be “hidden” in light and unobservable by traditional cameras.

“It is incredibly intriguing that an object’s image can be completely hidden when observed classically with a conventional camera, but then when you observe it ‘quantumly’ by counting the photons one by one and examining their correlations, you can actually see it,” says Vernière, a PhD student on the project. “For me, it is a completely new way of doing optical imaging, and I am hopeful that many powerful applications will emerge from it.”

What’s next?

This research extends previous work, and Defienne says that the team’s next goal is to show that this new method of imaging has practical applications and is not just a scientific curiosity. “We know that images encoded in quantum correlations are more resistant to external disturbances – such as noise or scattering – than classical light. We aim to leverage this resilience to improve imaging depth in scattering media.”

When asked about the applications that this development could impact, Defienne tells Physics World: “We hope to reduce sensitivity to scattering and achieve deeper imaging in biological tissues or longer-range communication through the atmosphere than traditional technologies allow. Even though we are still far from it, this could potentially improve medical diagnostics or long-range optical communications in the future.”

The post Researchers exploit quantum entanglement to create hidden images appeared first on Physics World.

]]>
Research update Encoding an image into the quantum correlations of photon pairs makes it invisible to conventional imaging techniques https://physicsworld.com/wp-content/uploads/2024/09/25-09-24-hidden-image-featured.jpg newsletter1
Ambipolar electric field helps shape Earth’s ionosphere https://physicsworld.com/a/ambipolar-electric-field-helps-shape-earths-ionosphere/ Wed, 25 Sep 2024 07:53:39 +0000 https://physicsworld.com/?p=116952 Scientists make first ever measurements of a planet-wide field that could be as fundamental as gravity and magnetic fields

The post Ambipolar electric field helps shape Earth’s ionosphere appeared first on Physics World.

]]>
A drop in electric potential of just 0.55 V measured at altitudes between 250 and 768 km in the Earth’s atmosphere above the North and South poles could be the first direct measurement of our planet’s long-sought-after electrostatic field. The measurements, from NASA’s Endurance mission, reveal that this field is important for driving how ions escape into space and shaping the upper layer of the atmosphere, known as the ionosphere.

Researchers first predicted the existence of the ambipolar electric field in the 1960s, when the first spacecraft flying over the Earth’s poles detected charged particles (including positively charged hydrogen and oxygen ions) flowing out from the atmosphere. The theory of a planet-wide electric field was developed to explain this “polar wind” directly, but the effects of this field were thought to be too weak to be detectable. Indeed, if the ambipolar field were the only mechanism driving the electrostatic field of Earth, then the resulting electric potential drop across the exobase transition region (which lies at altitudes between 200 and 780 km) could be as low as about 0.4 V.

A team of researchers led by Glyn Collinson at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, has now succeeded in measuring this field for the first time thanks to a new instrument called a photoelectron spectrometer, which they developed. The device was mounted on the Endurance rocket, which was launched from Svalbard in the  Norwegian Arctic in May 2022. “Svalbard is the only rocket range in the world where you can fly through the polar wind and make the measurements we needed,” says team member Suzie Imber, who is a space physicist at the University of Leicester, UK.

Just the “right amount”

The spacecraft reached an altitude of 768.03 km, where it remained for 19 min while the onboard spectrometer measured the energies of electrons there every 10 seconds. It measured a drop in electric potential of (0.55 ± 0.09) V over an altitude range of 258–769 km. While tiny, this is just the “right amount” to explain the polar wind without any other atmospheric effects, says Collinson.
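A back-of-the-envelope comparison hints at why roughly half a volt is the “right amount” for light ions. The sketch below is purely illustrative – the altitude-averaged gravity and the simple mgh estimate are assumptions, not numbers from the Endurance team – but it compares the energy an ion gains from the measured potential drop with the gravitational cost of climbing through the same altitude range:

# Electrostatic energy gain vs gravitational cost across the ~250-768 km range (illustrative)
e = 1.602e-19                  # elementary charge (C)
m_H = 1.67e-27                 # hydrogen ion mass (kg)
m_O = 16 * m_H                 # approximate oxygen ion mass (kg)
delta_V = 0.55                 # measured potential drop (V)
g_avg = 8.5                    # assumed average gravitational acceleration over this range (m/s^2)
delta_h = (768 - 250) * 1e3    # altitude range (m)

E_field = e * delta_V                  # energy gained from the ambipolar field
E_grav_H = m_H * g_avg * delta_h       # gravitational cost for H+
E_grav_O = m_O * g_avg * delta_h       # gravitational cost for O+

print(f"Field energy gain: {E_field / e:.2f} eV")
print(f"Gravitational cost, H+: {E_grav_H / e:.3f} eV")   # ~0.05 eV: the field easily lifts H+
print(f"Gravitational cost, O+: {E_grav_O / e:.2f} eV")   # ~0.7 eV: O+ is lofted rather than expelled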

The researchers showed that the ambipolar field, which is generated exclusively by the outward pressure of ionospheric electrons, increases the “scale height” of the ionosphere by as much as 271% (from 77.0 km to 208.9 km). This part of the atmosphere therefore remains denser to greater heights than it would if the field did not exist. This is because the field increases the supply of cold oxygen ions (O+) to the magnetosphere (that is, near the peak at 768 km) by more than 3.8%, so counteracting the effects of other mechanisms (such as wave-particle interactions) that can heat and accelerate these particles to velocities high enough for them to escape into space. The field also probably explains why the magnetosphere is made up primarily of cold hydrogen ions (H+).

The ambipolar field could be as fundamental for our planet as its gravity and magnetic fields, says Collinson, and it may even have helped shape how the atmosphere evolved. Similar fields might also exist on other planets in the solar system with an atmosphere, including Venus and Mars. “Understanding the forces that cause Earth’s atmosphere to slowly leak to space may be important for revealing what makes Earth habitable and why we’re all here,” he tells Physics World. “It’s also crucial to accurately forecast the impact of geomagnetic storms and ‘space weather’.”

Looking forward, the scientists say they would like to make further measurements of the Earth’s ambipolar field in the future. Happily, they recently received endorsement for a follow-up rocket – called Resolute – to do just this.

The post Ambipolar electric field helps shape Earth’s ionosphere appeared first on Physics World.

]]>
Research update Scientists make first ever measurements of a planet-wide field that could be as fundamental as gravity and magnetic fields https://physicsworld.com/wp-content/uploads/2024/09/endurance-launch-photo.jpg newsletter1
Light-absorbing dye turns skin of a live mouse transparent https://physicsworld.com/a/light-absorbing-dye-turns-skin-of-a-live-mouse-transparent/ Tue, 24 Sep 2024 15:00:54 +0000 https://physicsworld.com/?p=116964 The technique could be used to observe a wide range of deep-seated biological structures and activity

The post Light-absorbing dye turns skin of a live mouse transparent appeared first on Physics World.

]]>
One of the difficulties when trying to image biological tissue using optical techniques is that tissue scatters light, which makes it opaque. This scattering occurs because the different components of tissue, such as water and lipids, have different refractive indices, and it limits the depth at which light can penetrate.

A team of researchers at Stanford University in the US has now found that a common water-soluble yellow dye (among several other dye molecules) that strongly absorbs near-ultraviolet and blue light can help make biological tissue transparent in just a few minutes, thus allowing light to penetrate more deeply. In tests on mice skin, muscle and connective tissue, the team used the technique to observe a wide range of deep-seated structures and biological activity.

In their work, the research team – led by Zihao Ou (now at The University of Texas at Dallas), Mark Brongersma and Guosong Hong – rubbed the common food dye tartrazine, which is yellow/red in colour, onto the abdomen, scalp and hindlimbs of live mice. By absorbing light in the blue part of the spectrum, the dye altered the refractive index of the water in the treated areas at red-light wavelengths, such that it more closely matched that of lipids in this part of the spectrum. This effectively reduced the refractive-index contrast between the water and the lipids and allowed the biological tissue to appear more transparent at this wavelength, albeit tinged with red.
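The link between a strong absorption band and a raised refractive index at longer wavelengths is captured by the Kramers-Kronig relations, which the researchers invoke below. In one common textbook form (shown here for orientation only; it is not the specific formulation used in the Stanford analysis), the refractive index n at angular frequency ω follows from the absorption coefficient α via a principal-value integral:

n(\omega) = 1 + \frac{c}{\pi}\,\mathcal{P}\!\int_{0}^{\infty} \frac{\alpha(\omega')}{\omega'^{2} - \omega^{2}}\,\mathrm{d}\omega'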

In this way, the researchers were able to visualize internal organs, such as the liver, small intestine and bladder, through the skin without requiring any surgery. They were even able to observe fluorescent protein-labelled enteric neurons in the abdomen and monitor the movements of these nerve cells. This enabled them to generate maps showing different movement patterns in the gut during digestion. They were also able to visualize blood flow in the rodents’ brains and the fine structure of muscle sarcomere fibres in their hind limbs.

Reversible effect

The skin becomes transparent in just a few minutes and the effect can be reversed by simply rinsing off the dye.

So far, this “optical clearing” study has only been conducted on animals. But if extended to humans, it could offer a variety of benefits in biology, diagnostics and even cosmetics, says Hong. Indeed, the technique could help make some types of invasive biopsies a thing of the past.

“For example, doctors might be able to diagnose deep-seated tumours by simply examining a person’s tissue without the need for invasive surgical removal. It could potentially make blood draws less painful by helping phlebotomists easily locate veins under the skin and could also enhance procedures like laser tattoo removal by allowing more precise targeting of the pigment beneath the skin,” Hong explains. “If we could just look at what’s going on under the skin instead of cutting into it, or using radiation to get a less than clear look, we could change the way we see the human body.”

Hong tells Physics World that the collaboration originated from a casual conversation he had with Brongersma, at a café on Stanford’s campus during the summer of 2021. “Mark’s lab specializes in nanophotonics while my lab focuses on new strategies for enhancing deep-tissue imaging of neural activity and light delivery for optogenetics. At the time, one of my graduate students, Nick Rommelfanger (third author of the current paper), was working on applying the ‘Kramers-Kronig’ relations to investigate microwave–brain interactions. Meanwhile, my postdoc Zihao Ou (first author of this paper) had been systematically screening a variety of dye molecules to understand their interactions with light.”

Tartrazine emerged as the leading candidate, says Hong. “This dye showed intense absorption in the near-ultraviolet/blue spectrum (and thus strong enhancement of refractive index in the red spectrum), minimal absorption beyond 600 nm, high water solubility and excellent biocompatibility, as it is an FD&C-approved food dye.”

“We realized that the Kramers-Kronig relations could be applied to the resonance absorption of dye molecules, which led me to ask Mark about the feasibility of matching the refractive index in biological tissues, with the aim of reducing light scattering,” Hong explains. “Over the past three years, both our labs have had numerous productive discussions, with exciting results far exceeding our initial expectations.”

The researchers say they are now focusing on identifying other dye molecules with greater efficiency in achieving tissue transparency. “Additionally, we are exploring methods for cells to express intensely absorbing molecules endogenously, enabling genetically encoded tissue transparency in live animals,” reveals Hong.

The study is detailed in Science.

The post Light-absorbing dye turns skin of a live mouse transparent appeared first on Physics World.

]]>
Research update The technique could be used to observe a wide range of deep-seated biological structures and activity https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_Zihao-Ou-Lab-1a.jpg newsletter1
Science thrives on constructive and respectful peer review https://physicsworld.com/a/science-thrives-on-constructive-and-respectful-peer-review/ Tue, 24 Sep 2024 12:42:06 +0000 https://physicsworld.com/?p=116969 Unhelpful or rude feedback can shake the confidence of early career researchers

The post Science thrives on constructive and respectful peer review appeared first on Physics World.

]]>
It is Peer Review Week and celebrations are well under way at IOP Publishing (IOPP), which brings you the Physics World Weekly podcast.

Reviewer feedback to authors plays a crucial role in the peer-review process, boosting the quality of published papers to the benefit of authors and the wider scientific community. But sometimes authors receive very unhelpful or outright rude feedback about their work. These inappropriate comments can shake the confidence of early career researchers, and even dissuade them from pursuing careers in science.

Our guest in this episode is Laura Feetham-Walker, who is reviewer engagement manager at IOPP. She explains how the publisher is raising awareness of the importance of constructive and respectful peer review feedback and how innovations can help to create a positive peer review culture.

As part of the campaign, IOPP asked some leading physicists to recount the worst reviewer comments that they have received – and Feetham-Walker shares some real shockers in the podcast.

IOPP has created a video called “Unprofessional peer reviews can harm science” in which leading scientists share inappropriate reviews that they have received.

The publisher also offers a Peer Review Excellence training and certification programme, which equips early-career researchers in the physical sciences with the skills to provide constructive feedback.

The post Science thrives on constructive and respectful peer review appeared first on Physics World.

]]>
Podcasts Unhelpful or rude feedback can shake the confidence of early career researchers https://physicsworld.com/wp-content/uploads/2024/09/Laura-Feetham-Walker.jpg newsletter1
Convection enhances heat transport in sea ice https://physicsworld.com/a/convection-enhances-heat-transport-in-sea-ice/ Tue, 24 Sep 2024 08:42:25 +0000 https://physicsworld.com/?p=116946 New mathematical framework could allow for more accurate climate models

The post Convection enhances heat transport in sea ice appeared first on Physics World.

]]>
The thermal conductivity of sea ice can significantly increase when convective flow is present within the ice. This new result, from researchers at Macquarie University, Australia, and the University of Utah and Dartmouth College, both in the US, could allow for more accurate climate models – especially since current global models only account for temperature and salinity and not convective flow.

Around 15% of the ocean’s surface is covered with sea ice at some point during the year. Sea ice is a thin layer that separates the atmosphere and the ocean, and it is responsible for regulating heat exchange between the two in the polar regions of our planet. The thermal conductivity of sea ice is a key parameter in climate models. It has proved difficult to measure, however, because of its complex structure, made up of ice, air bubbles and brine inclusions, which form as the ice freezes from the surface of the ocean to deeper down. Indeed, sea ice can be thought of as a porous composite material and is therefore very sensitive to changes in temperature and salinity.

The salty liquid within the brine inclusions is heavier than fresh ocean water. This results in convective flow within the ice, creating channels through which liquid can flow out, explains applied mathematician Noa Kraitzman at Macquarie, who led this new research effort. “Our new framework characterizes enhanced thermal transport in porous sea ice by combining advection-diffusion processes with homogenization theory, which simplifies complex physical properties into an effective bulk coefficient.”

Thermal conductivity of sea ice can increase by a factor of two to three

The new work builds on a 2001 study in which researchers observed an increase in thermal conductivity in sea ice at warmer temperatures. “In our calculations, we had to derive new bounds on the effective thermal conductivity, while also accounting for complex, two-dimensional convective fluid flow and developing a theoretical model that could be directly compared with experimental measurements in the field,” explains Kraitzman. “We employed Padé approximations to obtain the required bounds and parametrized the Péclet number specifically for sea ice, considering it as a saturated rock.”

Padé approximations are routinely used to approximate a function by a rational function of a given order, and the Péclet number is a dimensionless parameter defined as the ratio of the rate of advection to the rate of diffusion.
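For orientation, the generic thermal form of the Péclet number (an illustration of the quantity being described, not the sea-ice-specific parametrization derived in the paper) reads:

\mathrm{Pe} = \frac{v\,L}{\alpha}, \qquad \alpha = \frac{k}{\rho\,c_{p}},

where v is a characteristic flow speed, L a characteristic length, and α the thermal diffusivity built from the conductivity k, density ρ and specific heat capacity c_p; Pe ≫ 1 means advection dominates diffusion.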

The results suggest that the effective thermal conductivity of sea ice can increase by a factor of two to three because of convective flow, especially in the lower, warmer sections of the ice, where temperature and the ice’s permeability favour convection, Kraitzman tells Physics World. “This enhancement is mainly confined to the bottom 10 cm during the freezing season, when convective flows are present within the sea ice. Incorporating these bounds into global climate models could improve their ability to predict thermal transport through sea ice, resulting in more accurate predictions of sea ice melt rates.”

Looking forward, Kraitzman and colleagues say they now hope to acquire additional field measurements to refine and validate their model. They also want to extend their mathematical framework to include more general 3D flows and incorporate the complex fluid exchange processes that exist between ocean and sea ice. “By addressing these different areas, we aim to improve the accuracy and applicability of our model, particularly in ocean-sea ice interaction models, aiming for a better understanding of polar heat exchange processes and their global impacts,” says Kraitzman.

The present work is detailed in Proceedings of the Royal Society A.

The post Convection enhances heat transport in sea ice appeared first on Physics World.

]]>
Research update New mathematical framework could allow for more accurate climate models https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_ProfKG_Homogenization_2024-1600x1000-1.jpg
Short-range order always appears in new type of alloy https://physicsworld.com/a/short-range-order-always-appears-in-new-type-of-alloy/ Mon, 23 Sep 2024 13:00:48 +0000 https://physicsworld.com/?p=116932 New insights into hidden atomic ordering could help in the development of more robust alloys

The post Short-range order always appears in new type of alloy appeared first on Physics World.

]]>
Short-range order plays an important role in defining the properties and performance of “multi-principal element alloys” (MPEAs), but the way in which this order develops is little understood, making it difficult to control. In a surprising new discovery, a US-based research collaboration has found that this order exists regardless of how MPEAs are processed. The finding will help scientists develop more effective ways to improve the properties of these materials and even tune them for specific applications, especially those with demanding conditions.

MPEAs are a relatively new type of alloy and consist of three or more components in nearly equal proportions. This makes them very different to conventional alloys, which are made from just one or two principal elements with trace elements added to improve their performance.

In recent years, MPEAs have spurred a flurry of interest thanks to their high strength, hardness and toughness over temperature ranges at which traditional alloys, such as steel, can fail. They could also be more resistant to corrosion, making them promising for use in extreme conditions, such as in power plants, or aerospace and automotive technologies, to name but three.

Ubiquitous short-range order

MPEAs were originally thought of as random solid solutions, with their constituent elements haphazardly dispersed, but recent experiments have shown that this is not the case.

The researchers – from Penn State University, the University of California, Irvine, the University of Massachusetts, Amherst, and Brookhaven National Laboratory – studied the cobalt/chromium/nickel (CoCrNi) alloy, one of the best-known examples of an MPEA. This face-centred cubic (FCC) alloy boasts the highest fracture toughness for an alloy at liquid helium temperatures ever recorded.

Using an improved transmission electron microscopy characterization technique combined with advanced three-dimensional printing and atomistic modelling, the team found that short-range order, which occurs when atoms are arranged in a non-random way over short distances, appears in three CoCrNi-based FCC MPEAs under a variety of processing and thermal treatment conditions.

Their computational modelling calculations also revealed that local chemical order forms in the liquid–solid interface when the alloys are rapidly cooled, even at a rate of 100 billion °C/s. This effect comes from the rapid atomic diffusion in the supercooled liquid, at rates equal to or even greater than the rate of solidification. Short-range order is therefore an inherent characteristic of FCC MPEAs, the researchers say.

The new findings are in contrast to the previous notion that the elements in MPEAs arrange themselves randomly in the crystal lattice if they cool rapidly during solidification. It also refutes the idea that short-range order develops mainly during annealing (a process in which heating and slow cooling are used to improve material properties such as strength, hardness and ductility).

Short-range order can affect MPEA properties, such as strength or resistance to radiation damage. The researchers, who report their work in Nature Communications, say they now plan to explore how corrosion and radiation damage affect the short-range order in MPEAs.

“MPEAs hold promise for structural applications in extreme environments. However, to facilitate their eventual use in industry, we need to have a more fundamental understanding of the structural origins that give rise to their superior properties,” says team co-lead Yang Yang, who works in the engineering science and mechanics department at Penn State.

The post Short-range order always appears in new type of alloy appeared first on Physics World.

]]>
Research update New insights into hidden atomic ordering could help in the development of more robust alloys https://physicsworld.com/wp-content/uploads/2024/09/SRO-photo-CFN-image-contest.jpg
We should treat our students the same way we would want our own children to be treated https://physicsworld.com/a/we-should-treat-our-students-the-same-way-we-would-want-our-own-children-to-be-treated/ Mon, 23 Sep 2024 10:00:38 +0000 https://physicsworld.com/?p=116687 Pete Vukusic says that students' positive experiences matter profoundly

The post We should treat our students the same way we would want our own children to be treated appeared first on Physics World.

]]>
“Thank goodness I don’t have to teach anymore.” These were the words spoken by a senior colleague and former mentor upon hearing about the success of their grant application. They had been someone I had respected. Such comments, however, reflect an attitude that persists across many UK higher-education (HE) science departments. Our departments’ students, our own children even, studying across the UK at HE institutes deserve far better.

It is no secret in university science departments that lecturing, tutoring and lab supervision are perceived by some colleagues to be mere distractions from what they consider their “real” work and purpose to be. These colleagues may evasively try to limit their exposure to teaching, and their commitment to its high-quality delivery. This may involve focusing time and attention solely on research activities or being named on as many research grant applications as possible.

University workload models set time aside for funded research projects, as they should. Research grants provide universities with funding that contributes to their finances and are an undeniably important revenue stream. However, an aversion to – or flagrant avoidance of – teaching by some colleagues is encountered by many who have oversight and responsibility for the organization and provision of education within university science departments.

It is also a behaviour and mindset that is recognized by students, and which negatively impacts their university experience. Avoidance of teaching displayed, and sometimes privately endorsed, by senior or influential colleagues in a department can also shape its culture and compromise the quality of education that is delivered. Such attitudes have been known to diffuse into a department’s environment, negatively impacting students’ experiences and further learning. Students certainly notice and are affected by this.

The quality of physics students’ experiences depends on many factors. One is the likelihood of graduating with skills that make them employable and have successful careers. Others include: the structure, organization and content of their programme; the quality of their modules and the enthusiasm and energy with which they are delivered; the quality of the resources to which they have access; and the extent to which their individual learning needs are supported.


In the UK, the quality of departments’ and institutions’ delivery of these and other components has been assessed since 2005 by the National Student Survey (NSS). Although imperfect and continuing to evolve, it is commissioned every year by the Office for Students on behalf of UK funding and regulatory bodies and is delivered independently by Ipsos.

The NSS can be a helpful tool to gather final-year students’ opinions and experiences about their institutions and degree programmes. Publication of the NSS datasets in July each year should, in principle, provide departments and institutions with the information they need to recognize their weaknesses and improve their subsequent students’ experiences. They would normally be motivated to do this because of the direct impact NSS outcomes have on institutions’ league table positions. These league tables can tangibly impact student recruitment and, therefore, an institution’s finances.

My sincerely held contention, however, communicated some years ago to a red-faced finger-wagging senior manager during a fraught meeting, is this. We should ignore NSS outcomes. They don’t, and shouldn’t, matter. This is a bold statement; career-ending, even. I articulated that we and all our colleagues should instead wholeheartedly strive to treat our students as we would want our own children, or our younger selves, to be treated, across every academic aspect and learning-related component of their journey while they are with us. This would be the right and virtuous thing to do.  In fact, if we do this, the positive NSS outcomes would take care of themselves.

Academic guardians

I have been on the frontline of university teaching, research, external examining and education leadership for close to 30 years. My heartfelt counsel, formed during this journey, is that our students’ positive experiences matter profoundly. They matter because, in joining our departments and committing three or more years and many tens of thousands of pounds to us, our students have placed their fragile and uncertain futures and aspirations into our hands.

We should feel privileged to hold this position and should respond to and collaborate with them positively, always supportively and with compassion, kindness and empathy. We should never be the traditionally tough and inflexible guardians of a discipline that is academically demanding, and which can, in a professional physics academic career, be competitively unyielding. That is not our job. Our roles, instead, should be as our students’ academic guardians, enthusiastically taking them with us across this astonishing scientific and mathematical world; teaching, supporting and enabling wherever we possibly can.

A narrative such as this sounds fantastical. It seems far removed from the rigours and tensions of day-in, day-out delivery of lecture modules, teaching labs and multiple research targets. But the metaphor it represents has been the beating heart of the most successfully effective, positive and inclusive learning environments I have encountered in UK and international HE departments during my long academic and professional journey.

I urge physics and science colleagues working in my own and other UK HE departments to remember and consider what it can be like to be an anxious or confused student, whose cognitive processes are still developing, whose self-confidence may be low and who may, separately, be facing other challenges to their circumstances. We should then behave appropriately. We should always be present and dispense empathy, compassion and a committed enthusiasm to support and enthral our students with our teaching. Ego has no place. We should show kindness, patience, and a willingness to engage them in a community of learning, framed by supportive and inclusive encouragement. We should treat our students the way we would want our own children to be treated.

The post We should treat our students the same way we would want our own children to be treated appeared first on Physics World.

]]>
Opinion and reviews Pete Vukusic says that students' positive experiences matter profoundly https://physicsworld.com/wp-content/uploads/2024/09/2024-09-Forum-Vukusic-teacher-and-students-in-3D-printing-lab-875671948-iStock_monkeybusinessimages.jpg newsletter
Working in quantum tech: where are the opportunities for success? https://physicsworld.com/a/working-in-quantum-tech-where-are-the-opportunities-for-success/ Mon, 23 Sep 2024 09:53:55 +0000 https://physicsworld.com/?p=116928 Quantum professionals describe the emerging industry, and the skills required to thrive

The post Working in quantum tech: where are the opportunities for success? appeared first on Physics World.

]]>

The quantum industry is booming. An estimated $42bn was invested in the sector in 2023, a figure projected to rise to $106bn by 2040. In this episode of Physics World Stories, two experts from the quantum industry share their experiences, and give advice on how to enter this blossoming sector. Quantum technologies – including computing, communications and sensing – could vastly outperform today’s technology for certain applications, such as efficient and scalable artificial intelligence.

Our first guest is Matthew Hutchings, chief product officer and co-founder of SEEQC. Based in New York and with facilities in Europe, SEEQC is developing a digital quantum computing platform with a broad industrial market due to its combination of classical and quantum technologies. Hutchings speaks about the increasing need for engineering positions in a sector that to date has been dominated by workers with a PhD in quantum information science.

The second guest is Araceli Venegas-Gomez, founder and CEO of QURECA, which helps to train and recruit individuals, while also providing business development services. Venegas-Gomez’s journey into the sector began with her reading about quantum mechanics as a hobby while working in aerospace engineering. In launching QURECA, she realized there was an important gap to be filled between quantum information science and business – two communities that have tended to speak entirely different languages.

Get even more tips and advice in the recent feature article ‘Taking the leap – how to prepare for your future in the quantum workforce’.

The post Working in quantum tech: where are the opportunities for success? appeared first on Physics World.

]]>
Quantum professionals describe the emerging industry, and the skills required to thrive Quantum professionals describe the emerging industry, and the skills required to thrive Physics World Working in quantum tech: where are the opportunities for success? full false 45:53 Podcasts Quantum professionals describe the emerging industry, and the skills required to thrive https://physicsworld.com/wp-content/uploads/2024/09/Quantum-globe-1169711469-iStock_metamorworks-scaled.jpg newsletter
Thermal dissipation decoheres qubits https://physicsworld.com/a/thermal-dissipation-decoheres-qubits/ Mon, 23 Sep 2024 08:04:21 +0000 https://physicsworld.com/?p=116942 Superconducting quantum bits release their energy into their environment as photons

The post Thermal dissipation decoheres qubits appeared first on Physics World.

]]>
How does a Josephson junction, which is the basic component of a superconducting quantum bit (or qubit), release its energy into the environment? It is radiated as photons, according to new experiments by researchers at Aalto University in Finland, working in collaboration with colleagues from Spain and the US, who used a thermal radiation detector known as a bolometer to measure this radiation directly in the electrical circuits holding the qubits. The work will allow for a better understanding of the loss and decoherence mechanisms in qubits that can disrupt and destroy quantum information, they say.

Quantum computers make use of qubits to store and process information. The most advanced quantum computers to date – including those being developed by IT giants Google and IBM – use qubits made from superconducting electronic circuits operating at very low temperatures. To further improve qubits, researchers need to better understand how they dissipate heat, says Bayan Karimi, who is the first author of a paper describing the new study. This heat transfer is a form of decoherence – a phenomenon by which the quantum states in qubits revert to behaving like classical 0s and 1s and lose the precious quantum information they contain.

“An understanding of dissipation in a single Josephson junction coupled to an environment remains strikingly incomplete, however,” she explains. “Today, a junction can be modelled and characterized without a detailed knowledge of, for instance, where energy is dissipated in a circuit. But improving design and performance will require a more complete picture.”

Physical environment is important

In the new work, Karimi and colleagues used a nano-bolometer to measure the very weak radiation emitted from a Josephson junction over a broad range of frequencies up to 100 GHz. The researchers identified several operation regimes depending on the junction bias, each with a dominant dissipation mechanism. “The whole frequency-dependent power and shape of the current-voltage characteristics can be attributed to the physical environment of the junction,” says Jukka Pekola, who led this new research effort.

The thermal detector works by converting radiation into heat and is composed of an absorber (made of copper), the temperature of which changes when it detects the radiation. The researchers measure this variation using a sensitive thermometer, comprising a tunnel junction between the copper absorber and a superconductor.

“Our work will help us better understand the nature of heat dissipation of qubits that can disrupt and destroy quantum information and how these coherence losses can be directly measured as thermal losses in the electrical circuit holding the qubits,” Karimi tells Physics World.

In the current study, which is detailed in Nature Nanotechnology, the researchers say they measured continuous energy release from a Josephson junction when it was biased by a voltage. They now aim to find out how their detector can sense single heat loss events when the Josephson junction or qubit releases energy. “At best, we will be able to count single photons,” says Pekola.

The post Thermal dissipation decoheres qubits appeared first on Physics World.

]]>
Research update Superconducting quantum bits release their energy into their environment as photons https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_Picture2.jpg
The physics of cycling’s ‘Everesting’ challenge revealed https://physicsworld.com/a/the-physics-of-cyclings-everesting-challenge-revealed/ Fri, 20 Sep 2024 15:00:04 +0000 https://physicsworld.com/?p=116931 Everesting involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m

The post The physics of cycling’s ‘Everesting’ challenge revealed appeared first on Physics World.

]]>
“Everesting” involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m.

The challenge became popular during the COVID-19 lockdowns and in 2021 the Irish cyclist Ronan McLaughlin was reported to have set a new “Everesting” record of 6:40:54. This was almost 20 minutes faster than the previous world record of 6:59:38 set by the US’s Sean Gardner in 2020.

Yet a debate soon ensued on social media concerning the significant tailwind of 5.5 metres per second that day, which critics claimed would have helped McLaughlin as he climbed the hill multiple times.

But did it? To investigate, Martin Bier, a physicist at East Carolina University in North Carolina, has now analysed what effect air resistance might have when cycling up and down a hill.

“Cycling uses ‘rolling’, which is much smoother and faster, and more efficient [than running],” notes Bier. “All of the work is purely against gravity and friction.”

Bier calculated that a tailwind does help slightly when going uphill, but most of a rider’s effort when climbing goes into overcoming gravity rather than air resistance.

When coming downhill, however, any headwind becomes significant given that the force of air resistance increases with the square of the cyclist’s speed. The headwind can then have a huge effect, causing a significant reduction in speed.
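A rough estimate shows where this asymmetry comes from. All the numbers in the sketch below are illustrative assumptions (rider mass, drag area, gradient and speeds are not taken from Bier’s analysis), but they make the point: on the climb the gravity term dwarfs the drag term, while the coasting descent speed is set by the balance of gravity against drag on the air speed, so a headwind subtracts almost directly from the achievable ground speed.

import math

rho, CdA = 1.2, 0.35     # air density (kg/m^3) and assumed rider drag area (m^2)
m, g = 70.0, 9.81        # assumed rider-plus-bike mass (kg), gravitational acceleration (m/s^2)
grade = 0.10             # assumed 10% gradient
wind = 5.5               # wind speed along the slope (m/s), the value debated online

# Climbing at 5 m/s: gravity term vs drag term, with and without the tailwind
v_up = 5.0
p_gravity = m * g * grade * v_up
p_drag_still = 0.5 * rho * CdA * v_up**3
p_drag_tailwind = 0.5 * rho * CdA * max(v_up - wind, 0.0)**2 * v_up
print(f"Climb: {p_gravity:.0f} W against gravity; drag {p_drag_still:.0f} W (still air), "
      f"{p_drag_tailwind:.1f} W (with tailwind)")

# Coasting descent: terminal air speed set by the gravity-drag balance
v_air = math.sqrt(2 * m * g * grade / (rho * CdA))
print(f"Descent ground speed: {v_air:.1f} m/s (still air) vs {v_air - wind:.1f} m/s (into the headwind)")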

So, while a tailwind going up is negligible, the headwind coming down certainly won’t be. “There are no easy tricks,” Bier adds. “If you want to be a better Everester, you need to lose weight and generate more [power]. This is what matters — there’s no way around it.”

The post The physics of cycling’s ‘Everesting’ challenge revealed appeared first on Physics World.

]]>
Blog Everesting involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m https://physicsworld.com/wp-content/uploads/2024/09/cyclists-silhouette-286024589-Shutterstock_LittlePerfectStock.jpg newsletter
Air-powered computers make a comeback https://physicsworld.com/a/air-powered-computers-make-a-comeback/ Fri, 20 Sep 2024 11:00:44 +0000 https://physicsworld.com/?p=116911 Novel device contains a pneumatic logic circuit made from 21 microfluidic valves

The post Air-powered computers make a comeback appeared first on Physics World.

]]>
A device containing a pneumatic logic circuit made from 21 microfluidic valves could be used as a new type of air-powered computer that does not require any electronic components. The device could help make a wide range of important air-powered systems safer and less expensive, according to its developers at the University of California at Riverside.

Electronic computers rely on transistors to control the flow of electricity. But in the new air-powered computer, the researchers use tiny valves instead of transistors to control the flow of air rather than electricity. “These air-powered computers are an example of microfluidics, a decades-old field that studies the flow of fluids (usually liquids but sometimes gases) through tiny networks of channels and valves,” explains team leader William Grover, a bioengineer at UC Riverside.

By combining multiple microfluidic valves, the researchers were able to make air-powered versions of standard logic gates. For example, they combined two valves in a row to make a Boolean AND gate. This gate works because air will flow through the two valves only if both are open. Similarly, two valves connected in parallel make a Boolean OR gate. Here, air will flow if either one or the other of the valves is open.
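As a toy illustration of that mapping (a conceptual truth-table model only, not the UC Riverside team’s actual valve design), series and parallel valve arrangements behave exactly like Boolean gates:

# Air flows through a valve only when that valve is open.
def series(valve_a: bool, valve_b: bool) -> bool:
    """Two valves in a row: air passes only if both are open (AND)."""
    return valve_a and valve_b

def parallel(valve_a: bool, valve_b: bool) -> bool:
    """Two valves side by side: air passes if either is open (OR)."""
    return valve_a or valve_b

for a in (False, True):
    for b in (False, True):
        print(a, b, "AND:", series(a, b), "OR:", parallel(a, b))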

Complex logic circuits

Combining an increasing number of microfluidic valves enables the creation of complex air-powered logic circuits. In the new study, detailed in Device, Grover and colleagues made a device that uses 21 microfluidic valves to perform a parity bit calculation – an important calculation employed by many electronic computers to detect errors and other problems.
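For comparison, the parity check the pneumatic circuit performs is the same calculation an electronic computer would carry out; a minimal software version (illustrating the calculation itself, not the 21-valve layout) might look like this:

def parity_bit(bits):
    """Return the even-parity bit for a sequence of 0s and 1s."""
    return sum(bits) % 2

def has_error(word):
    """True if a received word (data bits plus parity bit) fails the even-parity check."""
    return sum(word) % 2 != 0

data = [1, 0, 1, 1]
word = data + [parity_bit(data)]   # append the parity bit before 'transmission'
print(has_error(word))             # False: the word is self-consistent
word[1] ^= 1                       # flip one bit to simulate a fault
print(has_error(word))             # True: the error is detected (the whistle blows)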

The novel air-powered computer detects differences in air pressure flowing through the valves to count the number of bits. If there is an error, it outputs an error signal by blowing a whistle. As a proof-of-concept, the researchers used their device to detect anomalies in an intermittent pneumatic compression (IPC) device – a leg sleeve that fills with air and regularly squeezes a patient’s legs to increase blood flow, with the aim of preventing blood clots that could lead to strokes. Normally, these machines are monitored using electronic equipment.

“IPC devices can save lives, but they aren’t as widely employed as they could be,” says Grover. “In part, this is because they’re so expensive. We wanted to see if we could reduce their cost by replacing some of their electronic hardware with pneumatic logic.”

Air’s viscosity is important

Air-powered computers behave very similarly, but not quite identically to electronic computers, Grover adds. “For example, we can often take an existing electronic circuit and make an air-powered version of it and it’ll work just fine, but at other times the air-powered device will behave completely differently and we have to tweak the design to make it function.”

The variations between the two types of computers come down to one important physical difference between electricity and air, he explains: electricity does not have viscosity, but air does. “There are also lots of little design details that are of little consequence in electronic circuits but which become important in pneumatic circuits because of air’s viscosity. This makes our job a bit harder, but it also means we can do things with pneumatic logic that aren’t possible – or are much harder to do – with electronic logic.”

In this work, the researchers focused on biomedical applications for their air-powered computer, but they say that this is just the “tip of the iceberg” for this technology. Air-powered systems are ubiquitous, from the brakes on a train, to assembly-line robots and medical ventilators, to name but three. “By using air-powered computers to operate and monitor these systems, we could make these important systems more affordable, more reliable and safer,” says Grover.

“I have been developing air-powered logic for around 20 years now, and we’re always looking for new applications,” he tells Physics World. “What is more, there are areas in which they have advantages over conventional electronic computers.”

One specific application of interest is moving grain inside silos, he says. These enormous structures hold grain and other agricultural products and people often have to climb inside to spread out the grain – an extremely dangerous task because they can become trapped and suffocate.

“Robots could take the place of humans here, but conventional electronic robots could generate electronic sparks that could create flammable dust inside the silo,” Grover explains. “An air-powered robot, on the other hand, would work inside the silo without this risk. We are thus working on an air-powered ‘brain’ for such a robot to keep people out of harm’s way.”

Air-powered computers aren’t a new idea, he adds. Decades ago, there was a multitude of devices being designed that ran on water or air to perform calculations. Air-powered computers fell out of favour, however, when transistors and integrated circuits made electronic computers feasible. “We’ve therefore largely forgotten the history of computers that ran on things other than electricity. Hopefully, our new work will encourage more researchers to explore new applications for these devices.”

The post Air-powered computers make a comeback appeared first on Physics World.

]]>
Research update Novel device contains a pneumatic logic circuit made from 21 microfluidic valves https://physicsworld.com/wp-content/uploads/2024/09/20-09-24-air-powered-circuit.jpg newsletter1
Quantum hackathon makes new connections https://physicsworld.com/a/quantum-hackathon-makes-new-connections/ Fri, 20 Sep 2024 08:40:32 +0000 https://physicsworld.com/?p=116848 The 2024 UK Quantum Hackathon set new standards for engagement and collaboration

The post Quantum hackathon makes new connections appeared first on Physics World.

]]>
It is said that success breeds success, and that’s certainly true of the UK’s Quantum Hackathon – an annual event organized by the National Quantum Computing Centre (NQCC) that was held in July at the University of Warwick. Now in its third year, the 2024 hackathon attracted 50% more participants from across the quantum ecosystem, who tackled 13 use cases set by industry mentors from the private and public sectors. Compared to last year’s event, participants were given access to a greater range of technology platforms, including software control systems as well as quantum annealers and physical processors, and had an additional day to perfect and present their solutions.

The variety of industry-relevant problems and the ingenuity of the quantum-enabled solutions were clearly evident in the presentations on the final day of the event. An open competition for organizations to submit their problems yielded use cases from across the public and private spectrum, including car manufacturing, healthcare and energy supply. While some industry partners were returning enthusiasts, such as BT and Rolls Royce, newcomers to the hackathon included chemicals firm Johnson Matthey, Aioi R&D Lab (a joint venture between Oxford University spin-out Mind Foundry and the global insurance brand Aioi Nissay Dowa) and the North Wales Police.

“We have a number of problems that are beyond the scope of standard artificial intelligence (AI) or neural networks, and we wanted to see whether a quantum approach might offer a solution,” says Alastair Hughes, lead for analytics and AI at North Wales Police. “The results we have achieved within just two days have proved the feasibility of the approach, and we will now be looking at ways to further develop the model by taking account of some additional constraints.”

The specific use case set by Hughes was to optimize the allocation of response vehicles across North Wales, which has small urban areas where incidents tend to cluster and large swathes of countryside where the crime rate is low. “Our challenge is to minimize response times without leaving some of our communities unprotected,” he explains. “At the moment we use a statistical process that needs some manual intervention to refine the configuration, which across the whole region can take a couple of months to complete. Through the hackathon we have seen that a quantum neural network can deliver a viable solution.”

Teamwork

While Hughes had no prior experience with using quantum processors, some of the other industry mentors are already investigating the potential benefits of quantum computing for their businesses. At Rolls Royce, for example, quantum scientist Jarred Smalley is working with colleagues to investigate novel approaches for simulating complex physical processes, such as those inside a jet engine. Smalley has mentored a team at all three hackathons, setting use cases that he believes could unlock a key bottleneck in the simulation process.

“Some of our crazy problems are almost intractable on a supercomputer, and from that we extract a specific set of processes where a quantum algorithm could make a real impact,” he says. “At Rolls Royce our research tends to be focused on what we could do in the future with a fault-tolerant quantum computer, and the hackathon offers a way for us to break into the current state of the technology and to see what can be done with today’s quantum processors.”

Since the first hackathon in 2022, Smalley says that there has been an improvement in the size and capabilities of the hardware platforms. But perhaps the biggest advance has been in the software and algorithms available to help the hackers write, test and debug their quantum code. Reflecting that trend in this year’s event was the inclusion of software-based technology providers, such as Q-CTRL’s Fire Opal and Classiq, that provide tools for error suppression and optimizing quantum algorithms. “There are many more software resources for the hackers to dive into, including algorithms that can even analyse the problems themselves,” Smalley says.

Cathy White, a research manager at BT who has mentored a team at all three hackathons, agrees that rapid innovation in hardware and software is now making it possible for the hackers to address real-world problems – which in her case was to find the optimal way to position fault-detecting sensors in optical networks. “I wanted to set a problem for which we could honestly say that our classical algorithms can’t always provide a good approximation,” she explained. “We saw some promising results within the time allowed, and I’m feeling very positive that quantum computers are becoming useful.”

Both White and Smalley could see a significant benefit from the extended format, which gave hackers an extra day to explore the problem and consider different solution pathways. The range of technology providers involved in the event also enabled the teams to test their solutions on different platforms, and to adapt their approach if they ran into a problem. “With the extra time my team was able to use D-Wave’s quantum annealer as well as a gate-model approach, and it was impressive to see the diversity of algorithms and approaches that the students were able to come up with,” White comments. “They also had more scope to explore different aspects of the problem, and to consolidate their results before deciding what they wanted to present.”

One clear outcome from the extended format was more opportunity to benchmark the quantum solutions against their classical counterparts. “The students don’t claim quantum advantage without proper evidence,” adds White. “Every year we see remarkable progress in the technology, but they can help us to see where there are still challenges to be overcome.”

According to Stasja Stanisic from Phasecraft, one of the four-strong judging panel, a robust approach to benchmarking was one of the stand-out factors for the winning team. Mentored by Aioi R&D Lab, the team investigated a risk aggregation problem, which involved modelling dynamic relationships between data such as insurance losses, stock market data and the occurrence of natural disasters. “The winning team took time to really understand the problem, which allowed them to adapt their algorithm to match their use-case scenario,” Stanisic explains. “They also had a thorough and structured approach to benchmarking their results against other possible solutions, which is an important comparison to make.”

The team presenting their results

Teams were judged on various criteria, including the creativity of the solution, its success in addressing the use case, and investigation of scaling and feasibility. The social impact and ethical considerations of each solution were also assessed. Using the NQCC's Quantum STATES principles for responsible and ethical quantum computing (REQC), which were developed and piloted at the centre, the teams considered, for example, the potential impact of their innovation on different stakeholders and the explainability of their solution. They also proposed practical recommendations to maximize societal benefit. While many of their findings were specific to their use cases, one common theme was the need for open and transparent development processes to build trust among the wider community.

“Quantum computing is an emerging technology, and we have the opportunity right at the beginning to create an environment where ethical considerations are discussed and respected,” says Stanisic. “Some of the teams showed some real depth of thought, which was exciting to see, while the diverse use cases from both the public and private sectors allowed them to explore these ethical considerations from different perspectives.”

Also vital for participants was the chance to link with and learn from their peers. “The hackathon is a place where we can build and maintain relationships, whether with the individual hackers or with the technology partners who are also here,” says Smalley. For Hughes, meanwhile, the ability to engage with quantum practitioners has been a game changer. “Being in a room with lots of clever people who are all sparking off each other has opened my eyes to the power of quantum neural networks,” he says. “It’s been phenomenal, and I’m excited to see how we can take this forward at North Wales Police.”

  • To take part in the 2025 Quantum Hackathon – whether as a hacker, an industry mentor or technology provider – please e-mail the NQCC team at nqcchackathon@stfc.ac.uk

The post Quantum hackathon makes new connections appeared first on Physics World.

]]>
Analysis The 2024 UK Quantum Hackathon set new standards for engagement and collaboration https://physicsworld.com/wp-content/uploads/2024/09/frontis-web.png newsletter
Rheo-electric measurements to predict battery performance from slurry processing https://physicsworld.com/a/rheo-electric-measurements-to-predict-battery-performance-from-slurry-processing/ Fri, 20 Sep 2024 06:58:33 +0000 https://physicsworld.com/?p=116835 Join the audience for a live webinar on 6 November 2024 sponsored by TA Instruments – Waters in partnership with The Electrochemical Society

The post Rheo-electric measurements to predict battery performance from slurry processing appeared first on Physics World.

]]>

The market for lithium-ion batteries (LIBs) is expected to grow ~30x, to almost 9 TWh produced annually in 2040, driven by demand from electric vehicles and grid-scale storage. Production of these batteries requires high-yield coating processes using slurries of active material, conductive carbon, and polymer binder applied to metal-foil current collectors. To better understand the connections between slurry formulation, coating conditions, and composite electrode performance, we apply new rheo-electric characterization tools to battery slurries. Rheo-electric measurements reveal differences in carbon black structure in the slurry that go undetected by rheological measurements alone. Rheo-electric results are then connected to the characterization of coated electrodes in LIBs in order to develop methods to predict the performance of a battery system based on the formulation and coating conditions of the composite electrode slurries.

Jeffrey Richards is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on understanding the rheological and electrical properties of soft materials found in emergent energy technologies.

Jeffrey Lopez is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on using fundamental chemical engineering principles to study energy storage devices and design solutions to enable accelerated adoption of sustainable energy technologies.



The post Rheo-electric measurements to predict battery performance from slurry processing appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 6 November 2024 sponsored by TA Instruments – Waters in partnership with The Electrochemical Society https://physicsworld.com/wp-content/uploads/2024/09/2024-11-06-webinarimage.jpg
Simultaneous structural and chemical characterization with colocalized AFM-Raman https://physicsworld.com/a/simultaneous-structural-and-chemical-characterization-with-colocalized-afm-raman/ Thu, 19 Sep 2024 15:27:06 +0000 https://physicsworld.com/?p=116806 HORIBA explores how colocalized AFM-Raman enables dual structural and chemical analysis in a single scan, offering deeper insights across diverse applications

The post Simultaneous structural and chemical characterization with colocalized AFM-Raman appeared first on Physics World.

]]>

The combination of atomic force microscopy (AFM) and Raman spectroscopy provides deep insights into the complex properties of various materials. Raman spectroscopy facilitates the chemical characterization of compounds, interfaces and complex matrices, offering crucial insights into molecular structures and compositions, including microscale contaminants and trace materials, while AFM provides essential data on topography and mechanical properties, such as surface texture, adhesion, roughness and stiffness at the nanoscale.

Traditionally, users must rely on multiple instruments to gather such comprehensive analysis. HORIBA’s AFM-Raman system stands out as a uniquely multimodal tool, integrating an automated AFM with a Raman/photoluminescence spectrometer, providing precise pixel-to-pixel correlation between structural and chemical information in a single scan.

This colocalized approach is particularly valuable in applications such as polymer analysis, where both surface morphology and chemical composition are critical; in semiconductor manufacturing, for detecting defects and characterizing materials at the nanoscale; and in life sciences, for studying biological membranes, cells, and tissue samples. Additionally, it’s ideal for battery research, where understanding both the structural and chemical evolution of materials is key to improving performance.

João Lucas Rangel currently serves as the AFM & AFM-Raman global product manager at HORIBA and holds a PhD in biomedical engineering. Specializing in Raman, infrared and fluorescence spectroscopies, his PhD research focused on biochemical changes in the skin dermis. João joined HORIBA Brazil in 2012 as a molecular spectroscopy consultant before moving into a full-time role as an application scientist and sales support specialist across Latin America, where he expanded his responsibilities to oversee applications sales support and co-manage business activities within the region. In 2022, João joined HORIBA France as a correlative microscopy – Raman application specialist, responsible for developing the correlative business globally by combining HORIBA's existing technologies with complementary techniques. In 2023, João was promoted to AFM & AFM-Raman global product manager, a role in which he oversees strategic initiatives aimed at the company's business sustainability, continued success and future growth.

The post Simultaneous structural and chemical characterization with colocalized AFM-Raman appeared first on Physics World.

]]>
Webinar HORIBA explores how colocalized AFM-Raman enables dual structural and chemical analysis in a single scan, offering deeper insights across diverse applications https://physicsworld.com/wp-content/uploads/2024/09/2024-10-22-webinar-image.jpg
Diagnosing and treating disease: how physicists keep you safe during healthcare procedures https://physicsworld.com/a/diagnosing-and-treating-disease-how-physicists-keep-you-safe-during-healthcare-procedures/ Thu, 19 Sep 2024 14:42:15 +0000 https://physicsworld.com/?p=116888 Two medical physicists talk about the future of treatment and diagnostic technologies

The post Diagnosing and treating disease: how physicists keep you safe during healthcare procedures appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features two medical physicists working at the heart of the UK's National Health Service (NHS). They are Mark Knight, who is chief healthcare scientist at the NHS Kent and Medway Integrated Care Board, and Fiammetta Fedele, who is head of non-ionizing radiation at Guy's and St Thomas' NHS Foundation Trust in London.

They explain how medical physicists keep people safe during healthcare procedures – while innovating new technologies and treatments. They also discuss the role that artificial intelligence could play in medical physics and take a look forward to the future of healthcare.

This episode is supported by RaySearch Laboratories.

RaySearch Laboratories unifies industry solutions, empowering healthcare providers to deliver precise and effective radiotherapy treatment. RaySearch products transform scattered technologies into clarity, elevating the radiotherapy industry.

The post Diagnosing and treating disease: how physicists keep you safe during healthcare procedures appeared first on Physics World.

]]>
Podcasts Two medical physicists talk about the future of treatment and diagnostic technologies https://physicsworld.com/wp-content/uploads/2024/09/Mark-knight-Fiammetta-Fedele.jpg
RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia https://physicsworld.com/a/radcalc-qa-ensuring-safe-and-efficient-radiotherapy-throughout-australia/ Thu, 19 Sep 2024 12:45:15 +0000 https://physicsworld.com/?p=116746 Cancer care provider GenesisCare is using LAP’s RadCalc platform to perform software-based quality assurance of all its radiotherapy treatment plans

The post RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia appeared first on Physics World.

]]>
GenesisCare is the largest private radiation oncology provider in Australia, operating across five states and treating around 30,000 cancer patients each year. At the heart of this organization, ensuring the safety and efficiency of all patient radiotherapy treatments, lies a single server running LAP’s RadCalc quality assurance (QA) software.

RadCalc is a 100% software-based platform designed to streamline daily patient QA. The latest release, version 7.3.2, incorporates advanced 3D algorithms for secondary verification of radiotherapy plans, EPID-based pre-treatment QA and in vivo dosimetry, as well as automated 3D calculation based on treatment log files.

For GenesisCare, RadCalc provides independent secondary verification for 100 to 130 new plans each day, from more than 43 radiation oncology facilities across the country. The use of a single QA platform for all satellite centres helps to ensure that every patient receives the same high standard of care. “With everyone using the same software, we’ve got a single work instruction and we’re all doing things the same way,” says Leon Dunn, chief medical physicist at GenesisCare in Victoria.

“While the individual states operate as individual business units, the physics team operates as one, and the planners operate as one team as well,” adds Peter Mc Loone, GenesisCare’s head of physics for Australia. “We are like one team nationally, so we try to do things the same way. Obviously, it makes sense to make sure everyone’s checking the plans in the same way as well.”

User approved

GenesisCare implemented RadCalc more than 10 years ago, selected in part due to the platform’s impressive reputation amongst its users in Australia. “At that time, RadCalc was well established in radiotherapy and widely used,” explains Dunn. “It didn’t have all the features that it has now, but its basic features met the requirements we needed and it had a pretty solid user base.”

Today, GenesisCare’s physicists employ RadCalc for plan verification of all types of treatment across a wide range of radiotherapy platforms – including Varian and Elekta linacs, Gamma Knife and the Unity MR-linac, as well as superficial treatments and high dose-rate brachytherapy. They also use RadCalc’s plan comparison tool to check that the output from the treatment planning system matches what was imported to the MOSAIQ electronic medical record system.

“Before we had the plan comparison feature, our radiation therapists had to manually check control points in the plan against what was on the machine,” says Mc Loone. “RadCalc checks a wide range of values within the plan. It’s a very quick check that has saved us a lot of time, but also increased the safety aspect. We have certainly picked up errors through its use.”

Keeping treatments safe

The new feature that’s helping to make a big difference, however, is GenesisCare’s recent implementation of RadCalc’s 3D independent recalculation tool. Dunn explains that RadCalc previously performed a 2D comparison between the dose to a single point in the treatment planning system and the calculated dose to that point.

The new module, on the other hand, employs RadCalc’s collapsed-cone convolution algorithm to reconstruct 3D dose on the patient’s entire CT data set. Enabled by the introduction of graphics processing units, the algorithm performs a completely independent 3D recalculation of the treatment plan on the patient’s data.  “We’ve gone from a single point to tens of thousands of points,” notes Dunn.
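
RadCalc's collapsed-cone engine itself is proprietary, but the final step of any secondary dose check of this kind is conceptually simple: compare the treatment planning system's dose grid against the independently recalculated grid, voxel by voxel, and flag plans that disagree. The sketch below illustrates only that comparison step, with synthetic dose grids and hypothetical tolerances; it is not RadCalc's implementation.

```python
# Minimal sketch of a secondary dose check's comparison step (not RadCalc's
# algorithm): compare a TPS dose grid against an independently recalculated
# grid, voxel by voxel, and flag the plan if too many voxels disagree.
# Dose grids and tolerances here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
tps_dose    = rng.uniform(0.0, 2.0, size=(50, 50, 50))                # Gy per fraction (stand-in)
recalc_dose = tps_dose * rng.normal(1.0, 0.01, size=tps_dose.shape)   # independent recalculation (stand-in)

dose_threshold = 0.10 * tps_dose.max()     # only check voxels above 10% of the maximum dose
tolerance      = 0.03                      # 3% local dose difference

mask = tps_dose > dose_threshold
local_diff = np.abs(recalc_dose[mask] - tps_dose[mask]) / tps_dose[mask]
pass_rate = np.mean(local_diff <= tolerance)

print(f"Voxels within ±3%: {pass_rate:.1%}")
print("Plan flagged for review" if pass_rate < 0.95 else "Secondary check passed")
```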

Importantly, this 3D recalculation can discover any errors within a treatment plan before it gets to the point at which it needs to be measured. “Our priority is for every patient to have that second check done, thereby catching anything that is wrong with the treatment plan, hopefully before it is seen by the doctor. So we can fix things before they could become an issue,” Dunn says, pointing out that in the first couple of months of using this tool, it highlighted potentially suboptimal treatment plans to be improved.

Peter Mc Loone

In contrast, previous measurement-based checks had to be performed at the end of the entire planning process, after everyone had approved the plan and it had been exported to the treatment system. “Finding an error at that point puts a lot of pressure on the team to redo the plan and have everything reapproved,” Mc Loone explains. “By removing that stress and allowing checks to happen earlier in the piece, it makes the overall process safer and more efficient.”

Dunn notes that if the second check shows a problem with the plan, the plan can still be sent for measurements if needed, to confirm the RadCalc findings.

Increasing efficiency

As well as improving safety, the ability to detect errors early on in the planning process speeds up the entire treatment pathway. Operational efficiency is additionally helped by RadCalc’s high level of automation.

Once a treatment plan is created, the planning staff need only export it to RadCalc with a single click. RadCalc then takes care of everything else, importing the entire data set, sending it to the server for recalculation and then presenting the results. "We don't have to touch any of the processes until we get the quality checklist out, and that's a real game changer for us," says Dunn.

“We have one RadCalc system that can handle five different states and several different treatment planning systems [Varian’s Eclipse and Elekta’s Monaco and GammaPlan],” notes Mc Loone. “We can have 130 different plans coming in, and RadCalc will filter them correctly and apply the right beam models using that automation that LAP has built in.”

Because RadCalc performs 100% software-based checks, it doesn’t require access to the treatment machine to run the QA (which usually means waiting until the day’s clinical session has finished). “We’re no longer waiting around to perform measurements on the treatment machine,” Dunn explains. “It’s all happening while the patients are being treated during the normal course of the day. That automation process is an important time saver for us.”

This shift from measurement- to software-based QA also has a huge impact on the radiation therapists. As they were already using the machines to treat patients, the therapists were tasked with delivering most of the QA cases – at the end of the day or in between treatment sessions – and informing the physicists of any failures.

“Since we’ve introduced RadCalc, they essentially get all that time back and can focus on doing what they do best, treating patients and making sure it’s all done safely,” says Dunn. “Taking that burden away from them is a great additional bonus.”

Looking to the future, GenesisCare next plans to implement RadCalc’s log file analysis feature, which will enable the team to monitor and verify the performance of the radiotherapy machines. Essentially, the log files generated after each treatment are brought back into RadCalc, which then verifies that what the machine delivered matched the original treatment plan.
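
Conceptually, log-file analysis reduces to comparing what the planning system requested with what the machine recorded delivering. The minimal sketch below illustrates the idea for per-beam monitor units; the beam names, values and tolerance are hypothetical and do not reflect RadCalc's actual implementation.

```python
# Minimal sketch of log-file-based delivery verification (not RadCalc's
# implementation): compare planned monitor units per beam against values
# parsed from a machine log. Beam names and numbers are hypothetical.
planned_mu   = {"Beam1": 212.4, "Beam2": 187.9, "Beam3": 305.0}
delivered_mu = {"Beam1": 212.6, "Beam2": 187.7, "Beam3": 304.1}   # from the treatment log

tolerance = 0.01  # 1% per-beam agreement
for beam, plan in planned_mu.items():
    delivered = delivered_mu[beam]
    deviation = abs(delivered - plan) / plan
    status = "OK" if deviation <= tolerance else "INVESTIGATE"
    print(f"{beam}: planned {plan:.1f} MU, delivered {delivered:.1f} MU "
          f"({deviation:.2%}) -> {status}")
```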

“Because we have so many plans going through, delivered by many different accelerators, we can start to build a picture of machine performance,” says Dunn. “In the future, I personally want to look at the data that we collect through RadCalc. Because everything’s coming through that one system, we’ve got a real opportunity to examine safety and quality at a system level, from treatment planning system through to patient treatment.”

The post RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia appeared first on Physics World.

]]>
Analysis Cancer care provider GenesisCare is using LAP’s RadCalc platform to perform software-based quality assurance of all its radiotherapy treatment plans https://physicsworld.com/wp-content/uploads/2024/09/RadCalc-Physics-World.jpg
The free-to-read Physics World Big Science Briefing 2024 is out now https://physicsworld.com/a/the-free-to-read-physics-world-big-science-briefing-2024-is-out-now/ Thu, 19 Sep 2024 12:00:09 +0000 https://physicsworld.com/?p=116843 Find out more about designs for a muon collider and why gender diversity in big science needs recognition

The post The free-to-read <em>Physics World Big Science Briefing</em> 2024 is out now appeared first on Physics World.

]]>
Over the past few decades, “big science” has become bigger than ever, be it in planning larger particle colliders, fusion tokamaks or space observatories. That development is reflected in the growth of the Big Science Business Forum (BSBF), which has been going from strength to strength since its first meeting in 2018 in Copenhagen.

This year, more than 1000 delegates from 500 organizations and 30 countries will descend on Trieste from 1 to 4 October for BSBF 2024. The meeting will see European businesses and organizations such as the European Southern Observatory, the CERN particle-physics laboratory and Fusion 4 Energy come together to discuss the latest developments and business trends in big science.

A key component of the event – as it was at the previous BSBF in Granada, Spain, in 2022 – is the Women in Big Science group, who will be giving a plenary session about initiatives to boost and help women in big science.

In this year’s Physics World Big Science Briefing, Elizabeth Pollitzer – co-founder and director of Portia, which seeks to improve gender equality in science, technology, engineering and mathematics – explains why we need gender equality in big science and what measures must be taken to tackle the gender imbalance among staff and users of large research infrastructures.

One prime example of big science is particle physics. Some 70 years since the founding of CERN and a decade following the discovery of the Higgs boson at the lab’s Large Hadron Collider (LHC) in 2012, particle physics stands at a crossroads. While the consensus is that a “Higgs factory” should come next after the LHC, there is disagreement over what kind of machine it should be – a large circular collider some 91 km in circumference or a linear machine just a few kilometres long.

As the wrangling goes on, other proposals are also being mooted such as a muon collider. Despite needing new technologies, a muon collider has the advantage that it would only require a circular collider in a tunnel roughly the size of the LHC.

Another huge multinational project is the ITER fusion tokamak currently under construction in Cadarache, France. Hit by cost hikes and delays for decades, the project received more bad news earlier this year when ITER announced that the tokamak will not fire up until 2035. “Full power” operation with deuterium and tritium won’t happen until 2039, some 50 years after the facility was first mooted.

Backers hope that ITER will pave the way towards fusion power plants delivering electricity to the grid, but huge technical challenges lie in store. After all, those reactors will have to breed their own tritium to become fuel independent, as John Evans explains.

Big science also involves dedicated user facilities. In this briefing we talk to Gianluigi Botton from the Diamond Light Source in the UK and Mike Witherell from the Lawrence Berkeley National Laboratory about managing such large-scale research infrastructures and their plans for the future.

We hope you enjoy the briefing and let us know your feedback on the issue.

The post The free-to-read <em>Physics World Big Science Briefing</em> 2024 is out now appeared first on Physics World.

]]>
Blog Find out more about designs for a muon collider and why gender diversity in big science needs recognition https://physicsworld.com/wp-content/uploads/2019/09/cern-cms-crop.jpg 1
Vortex cannon generates toroidal electromagnetic pulses https://physicsworld.com/a/vortex-cannon-generates-toroidal-electromagnetic-pulses/ Thu, 19 Sep 2024 09:34:09 +0000 https://physicsworld.com/?p=116855 Electromagnetic vortex pulses could be employed for information encoding, high-capacity communication and more

The post Vortex cannon generates toroidal electromagnetic pulses appeared first on Physics World.

]]>
Electromagnetic cannons emit electromagnetic vortex pulses thanks to coaxial horn antennas

Toroidal electromagnetic pulses can be generated using a device known as a horn microwave antenna. This electromagnetic “vortex cannon” produces skyrmion topological structures that might be employed for information encoding or for probing the dynamics of light–matter interactions, according to its developers in China, Singapore and the UK.

Examples of toroidal or doughnut-like topology abound in physics, in objects such as Möbius strips and Klein bottles. It is also seen in simpler structures like smoke rings in air and vortex rings in water, as well as in nuclear currents. Until now, however, no one had succeeded in directly generating this topology in electromagnetic waves.

A rotating electromagnetic wave structure

In the new work, a team led by Ren Wang from the University of Electronic Science and Technology of China, Yijie Shen from Nanyang Technological University in Singapore and colleagues from the University of Southampton in the UK employed wideband, radially polarized, conical coaxial horn antennas with an operating frequency range of 1.3–10 GHz. They used these antennas to create a rotating electromagnetic wave structure with a frequency in the microwave range.

The antenna comprises inner and outer metal conductors, with 3D-printed conical and flat-shaped dielectric supports at the bottom and top of the coaxial horn, respectively.

“When the antenna emits, it generates an instantaneous voltage difference that forms the vortex rings,” explains Shen. “These rings are stable over time – even in environments with lots of disturbances – and maintain their shape and energy over long distances.”

Complex features such as skyrmions

The conical coaxial horn antenna generates an electromagnetic field in free space that rotates around the propagation direction of the wave structure. The researchers experimentally mapped the toroidal electromagnetic pulses at propagation distances of 5, 50 and 100 cm from the horn aperture, measuring the spatial electromagnetic fields of the antenna in a planar microwave anechoic chamber (a shielded room covered with electromagnetic absorbers) and using a scanning frame to move the antenna to the desired measurement area. They then connected a vector network analyser to the transmitting and receiving antennas to obtain the magnitude and phase characteristics of the electromagnetic field at different positions.
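
As a minimal illustration of what such a scan yields (not the authors' actual processing code), each scan position gives a complex transmission coefficient S21, from which magnitude and phase maps of the radiated field follow directly. The array sizes and values below are synthetic stand-ins.

```python
# Minimal sketch (synthetic data, not the authors' processing): convert complex
# S21 transmission coefficients measured on a planar scan grid into magnitude
# (in dB) and phase maps of the radiated field.
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 41, 41                                                      # scan grid points (hypothetical)
s21 = rng.normal(size=(nx, ny)) + 1j * rng.normal(size=(nx, ny))     # stand-in VNA data

magnitude_db = 20 * np.log10(np.abs(s21))   # field magnitude at each scan point
phase_deg    = np.degrees(np.angle(s21))    # field phase at each scan point

print(magnitude_db.shape, phase_deg.min(), phase_deg.max())
```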

The researchers found that the toroidal pulses contained complex features such as skyrmions. These are made up of numerous electric field vectors and can be thought of as two-dimensional whirls (or “spin textures”). The pulses also evolved over time to more closely resemble canonical Hellwarth–Nouchi toroidal pulses. These structures, first theoretically identified by the two physicists they are named after, represent a radically different, non-transverse type of electromagnetic pulse with a toroidal topology. These pulses, which are propagating counterparts of localized toroidal dipole excitations in matter, exhibit unique electromagnetic wave properties, explain Shen and colleagues.

A wide range of applications

The researchers say that they got the idea for their new work by observing how smoke rings are generated from an air cannon. They decided to undertake the study because toroidal pulses in the microwave range have applications in a wide range of areas, including cell phone technology, telecommunications and global positioning. “Understanding both the propagation dynamics and characterizing the topological structure of these pulses is crucial for developing these applications,” says Shen.

The main difficulty faced in these experiments was generating the pulses in the microwave part of the electromagnetic spectrum. The researchers attempted to do this by adapting existing optical metasurface methodologies, but failed because a large metasurface aperture of several metres was required, which was simply too impractical to fabricate. They overcame the problem by making use of a microwave horn emitter that’s more straightforward to create.

Looking forward, the researchers now plan to focus on two main areas. The first is to develop communication, sensing, detection and metrology systems based on toroidal pulses, aiming to overcome the limitations of existing wireless applications. Secondly, they hope to generate higher-order toroidal pulses, also known as supertoroidal pulses.

“These possess unique characteristics such as propagation invariance, longitudinal polarization, electromagnetic vortex streets (organized patterns of swirling vortices) and higher-order skyrmion topologies,” Shen tells Physics World. “The supertoroidal pulses have the potential to drive the development of ground-breaking applications across a range of fields, including defence systems or space exploration.”

The study is detailed in Applied Physics Reviews.

The post Vortex cannon generates toroidal electromagnetic pulses appeared first on Physics World.

]]>
Research update Electromagnetic vortex pulses could be employed for information encoding, high-capacity communication and more https://physicsworld.com/wp-content/uploads/2024/09/19-09-24-electromagnetic-cannon-featured.jpg newsletter1
A comprehensive method for assembly and design optimization of single-layer pouch cells https://physicsworld.com/a/a-comprehensive-method-for-assembly-and-design-optimization-of-single-layer-pouch-cells/ Wed, 18 Sep 2024 14:08:38 +0000 https://physicsworld.com/?p=114420 The Electrochemical Society in partnership with BioLogic, EL-Cell and TA Instruments - Waters explains how to optimally test your lithium-ion battery electrode materials

The post A comprehensive method for assembly and design optimization of single-layer pouch cells appeared first on Physics World.

]]>

For academic researchers, the cell format for testing lithium-ion batteries is often overlooked. However, choices in cell format and design can affect cell performance more than one might expect. Coin cells that utilize either a lithium metal or greatly oversized graphite negative electrode are common but can provide unrealistic testing results when compared to commercial pouch-type cells. Instead, single-layer pouch cells provide a format more similar to those used in industry while not requiring large amounts of active material. Moreover, their assembly process enables better positive/negative electrode alignment, allowing single-layer pouch cells to be assembled without negative electrode overhang. This talk presents a comparison between coin, single-layer pouch, and stacked pouch cells, and shows that single-layer pouch cells without negative electrode overhang perform best. Additionally, a careful study of the detrimental effects of excess electrode material is shown. The single-layer pouch cell format can also be used to measure pressure and volume in situ, something that is not possible in a coin cell. Lastly, a guide to assembling reproducible single-layer pouch cells without negative electrode overhang is presented.

An interactive Q&A session follows the presentation.

Matthew Garayt

Matthew D L Garayt is a PhD candidate in the Jeff Dahn, Michael Metzger, and Chongyin Yang research groups at Dalhousie University. His work focuses on materials for lithium- and sodium-ion batteries, with an emphasis on increased energy density and lifetime. Before this, he worked on high-power lithium-ion batteries at E-One Moli Energy, the first rechargeable lithium battery company in the world, and completed a summer research term in the Obrovac Research Group, also at Dalhousie. He received a BSc (Hons) in applied physics from Simon Fraser University.

The post A comprehensive method for assembly and design optimization of single-layer pouch cells appeared first on Physics World.

]]>
Webinar The Electrochemical Society in partnership with BioLogic, EL-Cell and TA Instruments - Waters explains how to optimally test your lithium-ion battery electrode materials https://physicsworld.com/wp-content/uploads/2024/05/2024-10-23ECSimage.jpg
Gallium-doped bioactive glass kills 99% of bone cancer cells https://physicsworld.com/a/gallium-doped-bioactive-glass-kills-99-of-bone-cancer-cells/ Wed, 18 Sep 2024 14:00:07 +0000 https://physicsworld.com/?p=116829 New therapy kills cancerous cells while stimulating growth of new healthy bone

The post Gallium-doped bioactive glass kills 99% of bone cancer cells appeared first on Physics World.

]]>
Osteosarcoma, the most common type of bone tumour, is a highly malignant cancer that mainly affects children and young adults. Patients are typically treated with an aggressive combination of resection and chemotherapy, but survival rates have not improved significantly since the 1970s. With alternative therapies urgently needed, a research team at Aston University has developed a gallium-doped bioactive glass that selectively kills over 99% of bone cancer cells.

The main objective of osteosarcoma treatment is to destroy the tumour and prevent recurrence. But over half of long-term survivors are left with bone mass deficits that can lead to fractures, making bone restoration another important goal. Bioactive glasses are already used to repair and regenerate bone – they bond with bone tissue and induce bone formation by releasing ions such as calcium, phosphorus and silicon. But they can also be designed to release therapeutic ions.

Team leader Richard Martin and colleagues propose that bioactive glasses doped with  gallium ions could address both tasks – helping to prevent cancer recurrence and lowering the  risk of fracture. They designed a novel biomaterial that provides targeted drug delivery to the tumour site, while also introducing a regenerative scaffold to stimulate the new bone growth.

“Gallium is a toxic ion that has been widely studied and is known to be effective for cancer therapy. Cancer cells tend to be more metabolically active and therefore uptake more nutrients and minerals to grow – and this includes the toxic gallium ions,” Martin explains. “Gallium is also known to inhibit bone resorption, which is important as bone cancer patients tend to have lower bone density and are more prone to fractures.”

Glass design

Starting with a silicate-based bioactive glass, the researchers fabricated six glasses doped with between 0 and 5 mol% of gallium oxide (Ga2O3). They then ground the glasses into powders with a particle size between 40 and 63 µm.

Martin notes that gallium is a good choice for incorporating into the glass, as it is effective in a variety of simple molecular forms. “Complex organic molecules would not survive the high processing temperatures required to make bioactive glasses, whereas gallium oxide can be incorporated relatively easily,” he says.

To test the cytotoxic effects of the bioactive glasses on cancer cells, the team created “conditioned media”, by incubating the gallium-doped glass particles in cell culture media at concentrations of 10 or 20 mg/mL.  After 24 h, the particles were filtered out to leave various levels of gallium ions in the media.

The researchers then exposed osteosarcoma cells, as well as normal osteoblasts as controls, to conditioned media from the six gallium-doped powders. Cell viability assays revealed significant cytotoxicity in cancer cells exposed to the conditioned media, with a reduction in cell viability correlating with gallium concentration.

After 10 days, cancer cells exposed to media conditioned with 10 mg/mL of the 4% and 5% gallium-doped glasses showed decreased cell viability, to roughly 60% and less than 10%, respectively. Media conditioned with 20 mg/mL of the 4% and 5% gallium-doped glasses were the most toxic to the cancer cells, causing 60% and more than 99% cell death, respectively, after 10 days.

Exposure to gallium-free bioglass did not significantly impact cell viability – confirming that the toxicity is due to gallium and not the other components of the glass (calcium, sodium, phosphorus and silicate ions).

While the glasses preferentially killed osteosarcoma cells compared with normal osteoblasts, some cytotoxic effects were also seen in the control cells. Martin believes that this slight toxicity to normal healthy cells is within safe limits, noting that the localized nature of the treatment should significantly reduce side effects compared with orally administered gallium.

“Further experiments are needed to confirm the safety of these materials,” he says, “but our initial studies show that these gallium-doped bioactive glasses are not toxic in vivo and have no effects on major organs such as the liver or kidneys.”

The researchers also performed live/dead assays on the osteosarcoma and control cells. The results confirmed the highly cytotoxic effect of gallium-doped bioactive glass on the cancer cells with relatively minor toxicity towards normal cells. They also found that exposure to the gallium-doped glass significantly reduced cancer cell proliferation and migration.

Bone regeneration

To test whether the bioactive glasses could also help to heal bone, the team exposed glass samples to simulated body fluid for seven days. Under these physiological conditions, the glasses gradually released calcium and phosphorous ions.

Fourier-transform infrared (FTIR) spectroscopy and energy-dispersive X-ray spectroscopy revealed that these ions precipitated onto the glass surface to form an amorphous calcium phosphate/hydroxyapatite layer – indicating the initial stages of bone regeneration. For clinical use, the glass particles could be mixed into a paste and injected into the void created during tumour surgery.

“This bioactivity will help generate new bone formation and prevent bone mass deficits and potential future fractures,” Martin and colleagues conclude. “The results when combined strongly suggest that gallium-doped bioactive glasses have great potential for osteosarcoma-related bone grafting applications.”

Next, the team plans to test the materials on a wide range of bone cancers to ensure the treatment is effective against different cancer types, as well as optimizing the dosage and delivery before undertaking preclinical tests.

The researchers report their findings in Biomedical Materials.

The post Gallium-doped bioactive glass kills 99% of bone cancer cells appeared first on Physics World.

]]>
Research update New therapy kills cancerous cells while stimulating growth of new healthy bone https://physicsworld.com/wp-content/uploads/2024/09/18-09-24-gallium-Richard-Martin.jpg newsletter1
Adaptive deep brain stimulation reduces Parkinson’s disease symptoms https://physicsworld.com/a/adaptive-deep-brain-stimulation-reduces-parkinsons-disease-symptoms/ Wed, 18 Sep 2024 09:10:46 +0000 https://physicsworld.com/?p=116800 An intelligent self-adjusting brain pacemaker could improve the quality-of-life for those living with Parkinson’s disease

The post Adaptive deep brain stimulation reduces Parkinson’s disease symptoms appeared first on Physics World.

]]>
Deep brain stimulation (DBS) is an established treatment for patients with Parkinson’s disease who experience disabling tremors and slowness of movements. But because the therapy is delivered with constant stimulation parameters – which are unresponsive to a patient’s activities or variations in symptom severity throughout the day – it can cause breakthrough symptoms and unwanted side effects.

In their latest Parkinson’s disease initiative, researchers led by Philip Starr from the UCSF Weill Institute for Neurosciences have developed an adaptive DBS (aDBS) technique that may offer a radical improvement. In a feasibility study with four patients, they demonstrated that this intelligent “brain pacemaker” can reduce bothersome side effects by 50%.

The self-adjusting aDBS, described in Nature Medicine, monitors a patient’s brain activity in real time and adjusts the level of stimulation to curtail symptoms as they arise. Generating calibrated pulses of electricity, the intelligent aDBS pacemaker provides less stimulation when Parkinson’s medication is active, to ward off excessive movements, and increases stimulation to prevent slowness and stiffness as the drugs wear off.

Starr and colleagues conducted a blinded, randomized feasibility trial to identify neural biomarkers of motor signs during active stimulation, and to compare the effects of aDBS with optimized constant DBS (cDBS) during normal, unrestricted daily life.

The team recruited four male patients with Parkinson’s disease, ranging in age from 47 to 68 years, for the study. Although all participants had implanted DBS devices, they were still experiencing symptom fluctuations that were not resolved by either medication or cDBS therapy. They were asked to identify the most bothersome residual symptom that they experienced.

To perform aDBS, the researchers developed an individualized data-driven pipeline for each participant, which turns the recorded subthalamic or cortical field potentials into personalized algorithms that auto-adjust the stimulation amplitudes to alleviate residual motor fluctuations. They used both in-clinic and at-home neural recordings to provide the data.
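
The UCSF pipeline is personalized and data-driven, but one widely studied adaptive-DBS strategy is to track power in the beta band (roughly 13–30 Hz) of the subthalamic local field potential and ramp the stimulation amplitude up or down between safe limits. The sketch below illustrates such a dual-threshold controller as a concept; all thresholds, amplitudes and signals are hypothetical stand-ins rather than the published algorithm.

```python
# Minimal, illustrative sketch of one widely studied adaptive-DBS strategy
# (not the UCSF team's personalized algorithm): estimate beta-band power from
# a short window of subthalamic local field potential and ramp the stimulation
# amplitude between two limits. All thresholds and amplitudes are hypothetical.
import numpy as np
from scipy.signal import welch

FS = 250                      # sampling rate of the sensing device (Hz)
AMP_MIN, AMP_MAX = 1.5, 3.5   # allowed stimulation amplitudes (mA)
LOW_THR, HIGH_THR = 2.0, 4.0  # beta-power thresholds (arbitrary units)

def beta_power(lfp_window):
    """Average power in the 13-30 Hz band of one LFP window."""
    freqs, psd = welch(lfp_window, fs=FS, nperseg=len(lfp_window))
    band = (freqs >= 13) & (freqs <= 30)
    return psd[band].mean()

def update_amplitude(current_amp, lfp_window, step=0.1):
    """Dual-threshold controller: more stimulation when beta is high, less when low."""
    p = beta_power(lfp_window)
    if p > HIGH_THR:
        current_amp += step
    elif p < LOW_THR:
        current_amp -= step
    return float(np.clip(current_amp, AMP_MIN, AMP_MAX))

# Example: one 2-second window of synthetic LFP with a strong 20 Hz component.
t = np.arange(0, 2, 1 / FS)
lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)
print(update_amplitude(2.5, lfp))
```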

“The at-home data streaming step was important to ensure that biomarkers identified in idealized, investigator-controlled conditions in the clinic could function in naturalistic settings,” the researchers write.

The four participants received aDBS alongside their existing DBS therapy. The team compared the treatments by alternating between cDBS and aDBS every two to seven days, with a cumulative period of one month per condition.

The researchers monitored motor symptoms using wearable devices plus symptom diaries completed daily by the participants. They evaluated the most bothersome symptoms, in most cases bradykinesia (slowness of movements), as well as stimulation-associated side effects such as dyskinesia (involuntary movements). To control for other unwanted side effects, participants also rated other common motor symptoms, their quality of sleep, and non-motor symptoms such as depression, anxiety, apathy and impulsivity.

The study revealed that aDBS improved each participant’s most bothersome symptom by roughly 50%. Three patients also reported improved quality-of-life using aDBS. This change was so obvious to these three participants that, even though they did not know which treatment was being delivered at any time, they could often correctly guess when they were receiving aDBS.

The researchers note that the study establishes the methodology for performing future trials in larger groups of males and females with Parkinson’s disease.

“There are three key pathways for future research,” lead author Carina Oehrn tells Physics World. “First, simplifying and automating the setup of these systems is essential for broader clinical implementation. Future work by Starr and Simon Little at UCSF, and Lauren Hammer (now at the Hospital of the University of Pennsylvania) will focus on automating this process to increase access to the technology. From a practicality standpoint, we think it necessary to develop an AI-driven smart device that can identify and auto-set treatment settings with a clinician-activated button.”

“Second, long-term monitoring for safety and sustained effectiveness is crucial,” Oehrn added. “Third, we need to expand these approaches to address non-motor symptoms in Parkinson’s disease, where treatment options are limited. I am studying aDBS for memory and mood in Parkinson’s at the University of California-Davis. Little is investigating aDBS for sleep disturbances and motivation.”

The post Adaptive deep brain stimulation reduces Parkinson’s disease symptoms appeared first on Physics World.

]]>
Research update An intelligent self-adjusting brain pacemaker could improve the quality-of-life for those living with Parkinson’s disease https://physicsworld.com/wp-content/uploads/2024/09/18-09-24-UCSF-DBS-Parkinsons-06.jpg
Dark-matter decay could have given ancient supermassive black holes a boost https://physicsworld.com/a/dark-matter-decay-could-have-given-ancient-supermassive-black-holes-a-boost/ Tue, 17 Sep 2024 15:19:39 +0000 https://physicsworld.com/?p=116799 Calculations suggest photons may have warmed gas clouds

The post Dark-matter decay could have given ancient supermassive black holes a boost appeared first on Physics World.

]]>
The decay of dark matter could have played a crucial role in triggering the formation of supermassive black holes (SMBHs) in the early universe, according to a trio of astronomers in the US. Using a combination of gas-cloud simulations and theoretical dark matter calculations, Yifan Lu and colleagues at the University of California, Los Angeles, uncovered promising evidence that the decay of dark matter may have provided the radiation necessary to prevent primordial gas clouds from fragmenting as they collapsed.

SMBHs are thought to reside at the centres of most large galaxies, and can be hundreds of thousands to billions of times more massive than the Sun. For decades, astronomers puzzled over how such immense objects could have formed, and the mystery has deepened with recent observations by the James Webb Space Telescope (JWST).

Since 2023, JWST has detected SMBHs that existed less than one billion years after the birth of the universe. This is far too early to be the result of conventional stellar evolution, whereby smaller black holes coalesce to create an SMBH.

Fragmentation problem

An alternative explanation is that vast primordial gas clouds in the early universe collapsed directly into SMBHs. However, as Lu explains, this theory challenges our understanding of how matter behaves. “Detailed calculations show that, in the absence of any unusual radiation, the largest gas clouds tend to fragment and form a myriad of small halos, not a single supermassive black hole,” he says. “This is due to the formation of molecular hydrogen, which cools the rest of the gas by radiating away thermal energy.”

For SMBHs to form under these conditions, molecular hydrogen would have needed to be somehow suppressed, which would require an additional source of radiation from within these ancient clouds. Recent studies have proposed that this extra energy could have come from hypothetical dark-matter particles decaying into photons.

“This additional radiation could cause the dissociation of molecular hydrogen, preventing fragmentation of large gas clouds into smaller pieces,” Lu explains. “In this case, gravity forces the entire large cloud to collapse as a whole into a [SMBH].”
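
As a rough back-of-envelope illustration (not taken from the paper), the energy that decaying dark matter injects into a gas cloud per unit volume and per unit time is approximately f_γ ρ_DM c^2 / τ, where ρ_DM is the local dark-matter density, τ is the particle's decay lifetime and f_γ is the fraction of the rest energy released as photons. For the mechanism described here, the relevant photons are those energetic enough to dissociate molecular hydrogen – the Lyman–Werner band, roughly 11.2–13.6 eV – so only part of any injected energy counts towards suppressing the cooling.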

In several recent studies, researchers have used simulations and theoretical estimates to investigate this possibility. So far, however, most studies have either focused on the mechanics of collapsing gas clouds or on the emissions produced by decaying dark matter, with little overlap between the two.

Extra ingredient needed

“Computer simulations of clouds of gas that could directly collapse to black holes have been studied extensively by groups farther on the astrophysics side of things, and they had examined how additional sources of radiation are a necessary ingredient,” explains Lu’s colleague Zachary Picker.

“Simultaneously, people from the dark matter side had performed some theoretical estimations and found that it seemed unlikely that dark matter could be the source of this additional radiation,” adds Picker.

In their study, Lu, Picker, and Alexander Kusenko sought to bridge this gap by combining both approaches: simulating the collapse of a gas cloud when subjected to radiation produced by the decay of several different candidate dark-matter particles. As they predicted, some of these particles could indeed provide the missing radiation needed to dissociate molecular hydrogen, allowing the entire cloud to collapse into a single SMBH.

However, dark matter is a hypothetical substance that has never been detected directly. As a result, the trio acknowledges that there is currently no reliable way to verify their findings experimentally. For now, this means that their model will simply join a growing list of theories that aim to explain the formation of SMBHs. But if the situation changes in the future, the researchers hope their model could represent a significant step forward in understanding the early universe’s evolution.

“One day, hopefully in my lifetime, we’ll find out what the dark matter is, and then suddenly all of the papers written about that particular type will magically become ‘correct’,” Picker says. “All we can do until then is to keep trying new ideas and hope they uncover something interesting.”

The research is described in Physical Review Letters.

The post Dark-matter decay could have given ancient supermassive black holes a boost appeared first on Physics World.

]]>
Research update Calculations suggest photons may have warmed gas clouds https://physicsworld.com/wp-content/uploads/2024/09/17-9-24-ancient-SMBH.jpg newsletter1
New superconductor has record breaking current density https://physicsworld.com/a/new-superconductor-has-record-breaking-current-density/ Tue, 17 Sep 2024 10:28:06 +0000 https://physicsworld.com/?p=116754 Rare-earth barium copper oxide structure also has the highest pinning force ever reported

The post New superconductor has record breaking current density appeared first on Physics World.

]]>
This article reports on research described in a paper in Nature Communications. The authors of this paper have requested that it be retracted due to an error in converting the magnetic units involved in calculating the current density of their material.

A superconducting wire segment based on rare-earth barium copper oxide (REBCO) is the highest performing yet in terms of current density, carrying 190 MA/cm2 in the absence of any external magnetic field at a temperature of 4.2 K. At warmer temperatures of 20 K (which is the proposed application temperature for magnets used in commercial nuclear fusion reactors), the wires can still carry over 150 MA/cm2. These figures mean that the wire, despite being only 0.2 micron thick, can carry a current comparable to that of commercial superconducting wires that are almost 10 times thicker, according to its developers at the University at Buffalo in the US.
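
A quick worked example, using only the figures quoted above (figures that, per the note at the top of this article, the authors have since called into question), shows what such a current density means in practice. The current a coated conductor can carry per unit width is the critical current density multiplied by the film thickness: (190 MA/cm2) × (0.2 μm) ≈ 3.8 × 10^5 A/m, or roughly 380 A per millimetre of tape width. A film ten times thicker therefore needs only a tenth of this current density to carry the same current per unit width – which is the sense in which the 0.2 micron film is comparable to commercial layers almost 10 times thicker.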

High-temperature superconducting (HTS) wires could be employed in a host of applications, including energy generation, storage and transmission, transportation, and in the defence and medical sectors. They might also be used in commercial nuclear fusion, offering the possibility of limitless clean energy. Indeed, if successful, this niche application could help address the world’s energy supply issues, says Amit Goyal of the University at Buffalo’s School of Engineering and Applied Science, who co-led this new study.

Record-breaking critical current density and pinning force

Before such large-scale applications see the light of day, however, the performance of HTS wires must be improved – and their cost reduced. Goyal and colleagues’ new HTS wire has the highest values of critical current density reported to date. This is particularly true at lower operating temperatures of 4.2–30 K, a range of interest for the fusion application. While still extremely cold, these temperatures are much higher than the near-absolute-zero temperatures at which traditional superconductors operate, says Goyal.

And that is not all: the wires also have the highest pinning force (that is, the ability to hold magnetic vortices) ever reported for such wires – around 6.4 TN/m3 at 4.2 K and about 4.2 TN/m3 at 20 K, both under a 7 T applied magnetic field.

“Prior to this work, we did not know if such levels of critical current density and pinning were possible to achieve,” says Goyal.

The researchers made their wire using a technique called pulsed laser deposition. Here, a laser beam impinges on a target material and ablates material that is deposited as a film on the substrate, explains Goyal. “This technique is employed by a majority of HTS wire manufacturers. In our experiment, the high critical current density was made possible thanks to a combination of pinning effects from rare-earth doping, oxygen-point defects and insulating barium zirconate nanocolumns as well as optimization of deposition conditions.”

This is a very exciting time for the HTS field, he tells Physics World. “We have a very important niche large-scale application – commercial nuclear fusion. Indeed, one company, Commonwealth Fusion, has invested $1.8bn in series B funding. And within the last 5 years, almost 20 new companies have been founded around the world to commercialize this fusion technology.”

Goyal adds that his group’s work is just the beginning and that “significant performance enhancements are still possible”. “If HTS wire manufacturers work on optimizing the conditions under which the wires are deposited, they should be able to achieve a much higher critical current density, which will result in a much better price/performance metric for the wires and enable applications. Not just in fusion, but all other large-scale applications as well.”

The researchers say they now want to further enhance the critical current density and pinning force of their 0.2 micron-thick wires. “We also want to demonstrate thicker films that can carry much higher current,” says Goyal.

They describe their HTS wires in Nature Communications.

The post New superconductor has record breaking current density appeared first on Physics World.

]]>
Research update Rare-earth barium copper oxide structure also has the highest pinning force ever reported https://physicsworld.com/wp-content/uploads/2024/09/HTS-wire.jpg
Magnetically controlled prosthetic hand restores fine motion control https://physicsworld.com/a/magnetically-controlled-prosthetic-hand-restores-fine-motion-control/ Mon, 16 Sep 2024 15:30:27 +0000 https://physicsworld.com/?p=116791 The first user of a myokinetic prosthesis was able to perform everyday actions such as pouring water into a glass, opening a jar, tying shoelaces and grasping fragile objects

The post Magnetically controlled prosthetic hand restores fine motion control appeared first on Physics World.

]]>
A magnetically controlled prosthetic hand, tested for the first time in a participant with an amputated lower arm, provided fine control of hand motion and enabled the user to perform everyday actions and grasp fragile objects. The robotic prosthetic, developed by a team at Scuola Superiore Sant’Anna in Pisa, uses tiny implanted magnets to predict and carry out intended movements.

Losing a hand can severely affect a person’s ability to perform everyday work and social activities, and many researchers are investigating ways to restore lost motor function via prosthetics. Most available or proposed strategies rely on deciphering electrical signals from residual nerves and muscles to control bionic limbs. But this myoelectric approach cannot reproduce the dexterous movements of a human hand.

Instead, Christian Cipriani and colleagues developed an alternative technique that exploits the physical displacement of skeletal muscles to decode the user’s motor intentions. The new myokinetic interface uses permanent magnets implanted into the residual muscles of the user’s amputated arm to accurately control finger movements of a robotic hand.

“Standard myoelectric prostheses collect non-selective signals from the muscle surface and, due to that low selectivity, typically support only two movements,” explains first author Marta Gherardini. “In contrast, myokinetic control enables simultaneous and selective targeting of multiple muscles, significantly increasing the number of control sources and, consequently, the number of recognizable movements.”

First-in-human test

The first patient to test the new prosthesis was a 34-year-old named Daniel, who had recently lost his left hand and had started to use a myoelectric prosthesis. The team selected him as a suitable candidate because his amputation was recent and blunt, he could still feel the lost hand and the residual muscles in his arm moved in response to his intentions.

For the study, the team implanted six cylindrical (2 mm radius and height) neodymium magnets coated with a biocompatible shell into three muscles in Daniel’s residual forearm. In a minimally invasive procedure, the surgeon used plastic instruments to manipulate the magnets into the tip of the target muscles and align their magnetic fields, verifying their placement using ultrasound.

Daniel also wore a customized carbon fibre prosthetic arm containing all of the electronics needed to track the magnets' locations in space. When he activates the residual muscles in his arm, the implanted magnets move in response to the muscle contractions. A grid of 140 magnetic field sensors in the prosthesis detects the position and orientation of these magnets and transmits the data to an embedded computing unit. Finally, a pattern recognition algorithm translates the movements into control signals for a Mia-Hand robotic hand.
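The article does not spell out the classification method, so the following is only a generic illustration (a minimal sketch with hypothetical gesture labels, not the team's actual algorithm) of how a pattern-recognition step of this kind can map each vector of sensor readings to the nearest previously recorded movement:

```python
import numpy as np

# Generic sketch of a pattern-recognition step for myokinetic control
# (not the team's actual algorithm): map a vector of magnetic-field sensor
# readings to an intended hand movement via nearest-centroid classification.

N_SENSORS = 140                                       # sensor count quoted in the article
GESTURES = ["rest", "power_grasp", "pinch", "point"]  # hypothetical labels

def train_centroids(samples: np.ndarray, labels: np.ndarray) -> dict:
    """Average the training readings recorded for each gesture."""
    return {g: samples[labels == g].mean(axis=0) for g in np.unique(labels)}

def classify(reading: np.ndarray, centroids: dict) -> str:
    """Return the gesture whose centroid is closest to the current reading."""
    return min(centroids, key=lambda g: np.linalg.norm(reading - centroids[g]))

# Toy usage with random stand-in data in place of real sensor recordings
rng = np.random.default_rng(0)
train = rng.normal(size=(400, N_SENSORS))
train_labels = rng.choice(GESTURES, size=400)
centroids = train_centroids(train, train_labels)
print(classify(rng.normal(size=N_SENSORS), centroids))
```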

Gherardini notes that the pattern recognition algorithm rapidly learnt to control the hand based on Daniel’s intended movements. “Training the algorithm took a few minutes, and it was immediately able to correctly recognize the movements,” she says.

In addition to the controlled hand motion arising from intended grasping, the team found that elbow movement activated other forearm muscles. Tissue near the elbow was also compressed by the prosthetic socket during elbow flexion, which caused unintended movement of nearby magnets. “We addressed this issue by estimating the elbow movement through the displacement of these magnets, and adjusting the position of the other magnets accordingly,” says Gherardini.

Robotic prosthesis user grasps a fragile plastic cup

During the six-week study, the team performed a series of functional tests commonly used to assess the dexterity of upper limb prostheses. Daniel successfully completed these tests, with comparable performance to that achieved using a traditional myoelectric prosthetic (in tests performed before the implantation surgery).

Importantly, he was able to control finger movements well enough to perform a wide range of everyday activities – such as unscrewing a water bottle cap, cutting with a knife, closing a zip, tying shoelaces and removing pills from a blister pack. He could also control the grasp force to manipulate fragile objects such as an egg and a plastic cup.

The researchers report that the myokinetic interface worked even better than they expected, with the results highlighting its potential to restore natural motor control in people who have lost limbs. “This system allowed me to recover lost sensations and emotions: it feels like I’m moving my own hand,” says Daniel in a press statement.

At the end of the six weeks, the team removed the magnets. Aside from low-grade inflammation around one magnet that had lost its protective shell, all of the surrounding tissue was healthy. "We are currently working towards a long-term solution by developing a magnet coating that ensures long-term biocompatibility, allowing users to eventually use this system at home," Gherardini tells Physics World.

She adds that the team is planning to perform another test of the myokinetic prosthesis within the next two years.

The myokinetic prosthesis is described in Science Robotics.

The post Magnetically controlled prosthetic hand restores fine motion control appeared first on Physics World.

]]>
Research update The first user of a myokinetic prosthesis was able to perform everyday actions such as pouring water into a glass, opening a jar, tying shoelaces and grasping fragile objects https://physicsworld.com/wp-content/uploads/2024/09/16-09-24-prosthetic-hand.jpg newsletter1
NASA suffering from ageing infrastructure and inefficient management practices, finds report https://physicsworld.com/a/nasa-suffering-from-ageing-infrastructure-and-inefficient-management-practices-finds-report/ Mon, 16 Sep 2024 13:34:15 +0000 https://physicsworld.com/?p=116785 NASA has been warned that it may need to sacrifice new missions in order to rebalance the space agency’s priorities and achieve its long-term objectives

The post NASA suffering from ageing infrastructure and inefficient management practices, finds report appeared first on Physics World.

]]>
NASA has been warned that it may need to sacrifice new missions in order to rebalance the space agency’s priorities and achieve its long-term objectives. That is the conclusion of a new report – NASA at a Crossroads: Maintaining Workforce, Infrastructure, and Technology Preeminence in the Coming Decades – that finds a space agency battling on many fronts including ageing infrastructure, China’s growing presence in space, and issues recruiting staff.

The report was requested by Congress and published by the National Academies of Sciences, Engineering, and Medicine. It was written by a 13-member committee, which included representatives from industry, academia and government, and was chaired by Norman Augustine, former chief executive of Lockheed Martin. Members visited all nine NASA centres and talked to about 400 employees to compile the report.

While the panel say that NASA had "motivat[ed] many of the nation's youth to pursue careers in science and technology" and "been a source of inspiration and pride to all Americans", they highlight a variety of problems at the agency. Those include out-of-date infrastructure, a pressure to prioritize short-term objectives, budget mismatches, inefficient management practices, and an unbalanced reliance on commercial partners. Yet according to Augustine, the agency's main problem is "the more mundane tendency to focus on near-term accomplishments at the expense of long-term viability".

As well as external challenges such as China's growing role in space, the committee discovered that many of NASA's problems are homegrown. They found that 83% of NASA's facilities are past their design lifetimes. For example, the capacity of the Deep Space Network, which provides critical communications support for uncrewed missions, "is inadequate" to support future craft and even current missions such as the Artemis Moon programme "without disrupting other projects".

There is also competition from private space firms in both technology development and recruitment. According to the report, NASA is constrained by strict hiring rules and by the salaries it can offer. It takes 81 days, on average, from the initial interview to an offer of employment, and during that period a candidate will probably receive offers from private firms – not only in the space industry but also in the "digital world" – that pay higher salaries.

In addition, Augustine notes, the agency is giving its engineers less opportunity “to get their hands dirty” by carrying out their own research. Instead, they are increasingly managing outside contractors who are doing the development work. At the same time, the report identifies a “major reduction” over the past few decades in basic research that is financed by industry – a trend that the report says is “largely attributable to shareholders seeking near-term returns as opposed to laying groundwork for the future”.

Yet the committee also finds that NASA faces “internal and external pressure to prioritize short-term measures” without considering longer-term needs and implications. “If left unchecked these pressures are likely to result in a NASA that is incapable of satisfying national objectives in the longer term,” the report states. “The inevitable consequence of such a strategy is to erode those essential capabilities that led to the organization’s greatness in the first place and that underpin its future potential.”

Cash woes

Another concern is the US government budget process, which operates year by year and is slowly reducing NASA's proportional share of funding. The report finds that the budget is "often incompatible with the scope, complexity, and difficulty of [NASA's] work" and the funding allocation "has degraded NASA's capabilities to the point where agency sustainability is in question". Indeed, during the agency's lifetime, US government spending on R&D has declined from 1.9% of gross domestic product to 0.7%. The panel also notes a trend of reducing investment in research and technology as a fraction of the funds devoted to missions. "NASA is likely to face budgetary problems in the future that greatly exceed those we've seen in recent years," Augustine told a briefing.

The panel now calls on NASA to work with Congress to establish “an annually replenished revolving fund – such as a working capital fund” to maintain and improve the agency’s infrastructure. It would be financed by the US government as well as users of NASA’s facilities and be “sufficiently capitalized to eliminate NASA’s current maintenance backlog over the next decade”. While it is unclear how the government and the agency will react to that proposal, as Augustine warned, for NASA, “this is not business as usual”.

The post NASA suffering from ageing infrastructure and inefficient management practices, finds report appeared first on Physics World.

]]>
News NASA has been warned that it may need to sacrifice new missions in order to rebalance the space agency’s priorities and achieve its long-term objectives https://physicsworld.com/wp-content/uploads/2024/09/2024-09-16-NASA-report.jpg newsletter
Stop this historic science site in St Petersburg from being sold https://physicsworld.com/a/stop-this-historic-science-site-in-st-petersburg-from-being-sold/ Mon, 16 Sep 2024 07:00:48 +0000 https://physicsworld.com/?p=116484 A historic scientific landmark may soon disappear, says Robert P Crease

The post Stop this historic science site in St Petersburg from being sold appeared first on Physics World.

]]>
In the middle of one of the most expensive neighbourhoods in St Petersburg, Russia, is a vacant and poorly kept lot about half an acre in size. It’s been empty for years for a reason: on it stood the first scientific research laboratory in Russia – maybe even the world – and for over two and a half centuries generations of Russian scientists hoped to restore it. But its days as an empty lot may be over, for the land could soon be sold to the highest bidder.

The laboratory was the idea of Mikhail Lomonosov (1711–1765), Russia’s first scientist in the modern sense. Born in 1711 into a shipping family on an island in the far north of Russia, Lomonosov developed a passion for science that saw him study in Moscow, Kyiv and St Petersburg. He then moved to Germany, where he got involved in the then revolutionary, mathematically informed notion that matter is made up of smaller elements called “corpuscles”.

In 1741, at the age of 30, Lomonosov returned to Russia, where he joined the St Petersburg Academy of Science. There he began agitating for the academy to set up a physico-chemistry laboratory of its own. Until then, experimental labs in Russia and elsewhere had been mainly applied institutions for testing and developing paints, dyes and glasses, and for producing medicines and chemicals for gunpowder. But Lomonosov wanted something very different.

His idea was for a lab devoted entirely to basic research and development that could engage and train students to do empirical research on materials. Most importantly, he wanted the academy to run the lab, but the state to pay for it. After years of agitating, Lomonosov’s plan was approved, and the St Petersburg laboratory opened in 1748 on a handy site in the centre of St Petersburg, just a 20-minute walk from the academy, near the university, museums and the city’s famous bridges.

The laboratory was a remarkable place, equipped with furnaces, ovens, scales, thermometers, microscopes, grindstones and various other instruments for studying materials and their properties. Lomonosov and his students used these to analyse ores, minerals, silicates, porcelain, glasses and mosaics. He also carried out experiments with implications for fundamental theory.

In 1756, for instance, Lomonosov found that certain experiments involving the oxidation of lead carried out by the British chemist Robert Boyle were in error. Indirectly, Lomonosov also suggested a general law of conservation covering the total weight of chemically reacting substances. The law is, these days, usually attributed to the French chemist Antoine Lavoisier, who also came up with the notion three decades later. But Lomonosov’s work had suggested it.

A symbol for science

Lomonosov left the formal leadership of the laboratory in 1757, after which it was headed by several other academy professors. The lab continued to serve the academy’s research until 1793 when several misfortunes, including a flood and a robbery, led to it running down. Still, the lab has had huge significance as a symbol that Russian scientists have appealed to ever since as a model for more state support. It also inspired the setting-up of other chemical laboratories, including a similar facility built at Moscow University in 1755.

For the last two and a half centuries, however, the laboratory's allies have struggled to keep the site from becoming just real estate in a pricey St Petersburg neighbourhood. In 1793 an academician bought the land from the Academy of Sciences and rebuilt the lab as housing, although preserving its foundations and the old walls. Over the next century, the plot passed through a series of private owners, who again rebuilt the laboratory and its associated house.

The area was levelled again during the Siege of Leningrad in the Second World War, though the lab’s foundations remained intact. After the war, the Soviet Union tried to reconstruct the lab, as did the Russian Academy of Sciences. More recently, advocates have tried to rebuild the lab in time for the 300th anniversary of the Russian Academy of Science, which takes place in 2024–2025.

Three photos of a disused plot of land in a city

All these attempts have failed. Meanwhile, ownership of the site was passed around several Russian administrative agencies, most recently to the Russian State Pedagogical University. Last March, the university put the land in the hands of a private real estate agent who advertised the site in a public notice with the statement that the land was “intended for scientific facilities”, without reference to the lab. The plot is supposed to open for bids this fall.

But scientists and historians worry about the vagueness of that phrase and are distrustful of its source. There is nothing to stop the university from succumbing to the extremely high market prices that developers would pay for its enticing location in the centre of St Petersburg.

The critical point

Money, wrote Karl Marx in his famous article on the subject, is “the confounding and confusing of all natural and human qualities”. As he saw it, money strips what it is used for of ties to human life and meaning. Monetizing Lomonosov’s lab makes us speak of it quantitatively in real-estate terms. In such language, the site is simply a flat, featureless half-acre plot of land that, one metre down, has pieces of stone that were once part of an earlier building.

It also encourages us to speak of the history of this plot as just a series of owners, buildings and events. Some might even say that we have already preserved the history of Lomonosov’s lab because much of its surviving contents are on display in a nearby museum called the Kunstkamera (or art chamber). What, therefore, could be the harm of selling the land?

Turning the history of science into nothing more than a tale of instruments promotes the view that science is all about clever individuals who use tools to probe the world for knowledge. But the places where scientists work are integral to science too. The plot of land on the 2nd avenue of Vasilevsky Island is where Lomonosov, his spirited colleagues and students, shared experiences and techniques, made friendships and established networks.

It’s where humans, instruments, materials and funding came together in dynamic events that revealed new knowledge of how materials behave in different conditions. The lab is also historically important because it impressed academy and state authorities enough that they continued to support scientific research as essential to Russia’s future.

Sure, appreciating this dimension of science history requires more than restoring buildings. But preserving the places where science happens keeps alive important symbols of what makes science possible, then and now, in a world that needs more of it. Selling the site of Lomonosov’s lab for money amounts to repudiating the cultural value of science.

The post Stop this historic science site in St Petersburg from being sold appeared first on Physics World.

]]>
Opinion and reviews A historic scientific landmark may soon disappear, says Robert P Crease https://physicsworld.com/wp-content/uploads/2024/08/2024-09-CP-Lomonosov-Lab-model.jpg newsletter
What happens when a warp drive collapses? https://physicsworld.com/a/what-happens-when-a-warp-drive-collapses/ Sat, 14 Sep 2024 13:02:20 +0000 https://physicsworld.com/?p=116750 It emits gravitational waves, say physicists

The post What happens when a warp drive collapses? appeared first on Physics World.

]]>
Simulations of space–times that contain negative energies can help us to better understand wormholes or the interior of black holes. For now, however, the physicists behind the new study – who admit to being big fans of Star Trek – have used their results to model the gravitational waves that would be emitted by a hypothetical failing warp drive.

Gravitational waves, which are ripples in the fabric of space–time, are emitted by cataclysmic events in the universe, like binary black hole and neutron star mergers. They might also be emitted by more exotic space–times such as wormholes or warp drives, which unlike black hole and neutron mergers, are still the stuff of science fiction.

First predicted by Albert Einstein in his general theory of relativity, gravitational waves were observed directly in 2015 by the Advanced LIGO detectors, which are laser interferometers comprising pairs of several-kilometre-long arms positioned at right angles to each other. As a gravitational wave passes through the detector, it slightly expands one arm while contracting the other. This creates a series of oscillations in the lengths of the arms that can be recorded as interference pattern variations.
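To get a sense of the scale involved (a standard order-of-magnitude estimate, not a number from this study), a gravitational-wave strain h changes an arm of length L by roughly h × L:

```python
# Order-of-magnitude estimate (textbook numbers, not from this study):
# a gravitational-wave strain h changes an interferometer arm of length L
# by roughly delta_L ~ h * L.
h = 1e-21   # typical peak strain from a binary black hole merger
L = 4e3     # LIGO arm length in metres
print(f"arm-length change ~ {h * L:.1e} m")  # ~4e-18 m, far smaller than a proton's radius
```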

The first detection by LIGO arose from the collision and merging of two black holes. These observations heralded the start of the era of gravitational-wave astronomy and viewing extreme gravitational events across the entire visible universe. Since then, astrophysicists have been asking themselves if signals from other strongly distorted regions of space–time could be seen in the future, beyond the compact binary mergers already detected.

Warp drives or bubbles

A “warp drive” (or “warp bubble”) is a hypothetical device that could allow space travellers to traverse space at faster-than-light speeds – as measured by some distant observer. Such a bubble contracts spacetime in front of it and expands spacetime behind it. It can do this, in theory, because unlike objects within space–time, space–time itself can bend, expand or contract at any speed. A spacecraft contained in such a drive could therefore arrive at its destination faster than light would in normal space without breaking Einstein’s cosmic speed limit.

The idea of warp drives is not new. They were first proposed in 1994 by the Mexican physicist Miguel Alcubierre who named them after the mode of travel used in the sci-fi series Star Trek. We are not likely to see such drives anytime soon, however, since the only way to produce them is by generating vast amounts of negative energy – perhaps by using some sort of undiscovered exotic matter.
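For reference, Alcubierre's 1994 line element (quoted here in its standard literature form, in units where c = 1; not reproduced from the new paper) describes a bubble centred at x_s(t) moving with velocity v_s(t), with a shape function f(r_s) equal to 1 inside the bubble and falling to 0 far away:

```latex
% Alcubierre's warp-bubble line element (standard literature form, c = 1):
\[
  \mathrm{d}s^{2} = -\,\mathrm{d}t^{2}
  + \bigl[\mathrm{d}x - v_{s}(t)\,f(r_{s})\,\mathrm{d}t\bigr]^{2}
  + \mathrm{d}y^{2} + \mathrm{d}z^{2},
  \qquad
  v_{s}(t) = \frac{\mathrm{d}x_{s}(t)}{\mathrm{d}t}
\]
```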

A warp drive that is functioning normally, and travelling at a constant velocity, does not emit any gravitational waves. When it collapses, accelerates or decelerates, however, this should generate gravitational waves.

A team of physicists from Queen Mary University of London (QMUL), the University of Potsdam, the Max Planck Institute (MPI) for Gravitational Physics in Potsdam and Cardiff University decided to study the case of a collapsing warp drive. The warp drive is interesting, say the researchers, since it uses gravitational distortion of spacetime to propel a spaceship forward, rather than a usual kind of fuel/reaction system.

Decomposing spacetime

The team, led by Katy Clough of QMUL, Tim Dietrich from Potsdam and Sebastian Khan at Cardiff, began by describing the initial bubble using the original Alcubierre definition, giving it a fixed wall thickness. They then developed a formalism to describe the warp fluid and how it evolved. They varied its initial velocity at the point of collapse (which is related to the amplitude of the warp bubble). Finally, they analysed the resulting gravitational-wave signatures and quantified the radiation of energy from the space–time region.

While Einstein’s equations of general relativity treat space and time on an equal footing, we have to split the time and space dimensions to do a proper simulation of how the system evolves, explains Dietrich. This approach is normally referred to as the 3+1 decomposition of spacetime. “We followed this very common approach, which is routinely used to study binary black hole or binary neutron star mergers.”
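In this 3+1 (ADM) form, the metric is split into a lapse α, a shift vector β^i and a spatial metric γ_ij that are evolved forward in time (this is the generic textbook expression, not the specific gauge choices used in the paper):

```latex
% Standard 3+1 (ADM) split of the spacetime line element:
\[
  \mathrm{d}s^{2} = -\alpha^{2}\,\mathrm{d}t^{2}
  + \gamma_{ij}\,\bigl(\mathrm{d}x^{i} + \beta^{i}\,\mathrm{d}t\bigr)
                  \bigl(\mathrm{d}x^{j} + \beta^{j}\,\mathrm{d}t\bigr)
\]
```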

It was not that simple, however: “given the particular spacetime that we were investigating, we also had to determine additional equations for the simulation of the material that is sustaining the warp bubble from collapse,” says Dietrich. “We also had to find a way to introduce the collapse that then triggers the emission of gravitational waves.”

Since they were solving Einstein’s field equation directly, the researchers say they could read off how spacetime evolves and the gravitational waves emitted from their simulation.

Very speculative work

Dietrich says that he and his colleagues are big Star Trek fans and that the idea for the project, which they detail in The Open Journal of Astrophysics, came to them a few years ago in Göttingen in Germany, where Clough was doing her postdoc. “Sebastian then had the idea of using the simulations that we normally use to help detect black holes to look for signatures of the Alcubierre warp drive metric,” recalls Dietrich. “We thought it would be a quick project, but it turned out to be much harder than we expected.”

The researchers found that, for warp ships around a kilometre in size, the gravitational waves emitted are of a high frequency and, therefore, not detectable with current gravitational-wave detectors. "While there are proposals for new gravitational-wave detectors at higher frequencies, our work is very speculative, and so it probably wouldn't be sufficient to motivate anyone to build anything," says Dietrich. "It does have a number of theoretical implications for our understanding of exotic spacetimes though," he adds. "Since this is one of the few cases in which consistent simulations have been performed for spacetimes containing exotic forms of matter, namely negative energy, our work could be extended to also study wormholes, the inside of black holes, or the very early stages of the universe, where negative energy might prevent the formation of singularities."

Even though they “had a lot of fun” during this proof-of-principle project, the researchers say that they will now probably go back to their “normal” work, namely the study of compact binary systems.

The post What happens when a warp drive collapses? appeared first on Physics World.

]]>
Research update It emits gravitational waves, say physicists https://physicsworld.com/wp-content/uploads/2024/09/Gravitational-wave.jpg newsletter1
UK reveals next STEPs toward prototype fusion power plant https://physicsworld.com/a/uk-reveals-next-steps-toward-prototype-fusion-power-plant/ Fri, 13 Sep 2024 08:30:01 +0000 https://physicsworld.com/?p=116737 Engineers and physicists have met to discuss the challenges and opportunities of building a practical fusion power plant in the UK

The post UK reveals next STEPs toward prototype fusion power plant appeared first on Physics World.

]]>
“Fiendish”, “technically tough”, “difficult”, “complicated”. Those were just a few of the choice words used at an event last week in Oxfordshire, UK, to describe ambitious plans to build a prototype fusion power plant. Held at the UK Atomic Energy Authority (UKAEA) Culham campus, the half-day meeting on 5 September saw engineers and physicists discuss the challenges that lie ahead as well the opportunities that this fusion “moonshot” represents.

The prototype fusion plant in question is known as the Spherical Tokamak for Energy Production (STEP), which was first announced by the UK government in 2019 when it unveiled a £220m package of funding for the project. STEP will be based on “spherical” tokamak technology currently being pioneered at the UK’s Culham Centre for Fusion Energy (CCFE). In 2022 a site for STEP was chosen at the former coal-fired power station at West Burton in Nottinghamshire. Operations are expected to begin in the 2040s with STEP aiming to prove the commercial viability of fusion by demonstrating net energy, fuel self-sufficiency and a viable route to plant maintenance.

A spherical tokamak is more compact than a traditional tokamak, such as the ITER experimental fusion reactor currently being built in Cadarache, France, which has been hit with cost hikes and delays in recent years. The compact nature of the spherical tokamak, which was pioneered in the UK in the 1980s, is expected to minimize costs, maximize energy output and possibly make the device easier to maintain when scaled up to a fully-fledged fusion power plant.

The current leading spherical tokamaks worldwide are the Mega Amp Spherical Tokamak (MAST-U) at the CCFE and the National Spherical Torus Experiment at the Princeton Plasma Physics Laboratory (PPPL) in the US, which is nearing the completion of an upgrade. Despite much progress, however, those tokamaks are yet to demonstrate fusion conditions through the use of the hydrogen isotope tritium in the fuel, which is necessary to achieve a "burning" plasma. This goal has, though, already been achieved in traditional tokamaks such as the Joint European Torus, which was switched off in 2023.

“STEP is a big extrapolation from today’s machines,” admitted STEP chief engineer Chris Waldon at the event. “It is complex and complicated but we are now beginning to converge on a single design [for STEP]”.

A fusion ‘moonshot’

The meeting at Culham was held to mark the publication of 15 papers on the technical progress made on STEP over the past four years. They cover STEP’s plasma, its maintenance, magnets, tritium-breeding programme as well as pathways for fuel self-sufficiency (Philosophical Transactions A 382 20230416). Officials were keen to stress, however, that the papers were a snapshot of progress to date and that since then some aspects of the design have progressed.

One issue that cropped up during the talks was the challenge of extrapolating every element of tokamak technology to STEP – a feat described by one panellist as being "so far off our graphs". While theory and modelling have come a long way in the last decade, even the best models will not be a substitute for the real thing. "Until we do STEP we won't know everything," says physicist Steve Cowley, director of the PPPL. Those challenges involve managing potential instabilities and disruptions in the plasma – which at worst could obliterate the wall of a reactor – as well as operating high-temperature superconducting magnets, needed to confine the plasma, that have yet to be tested under the intensity of fusion conditions.

Another significant challenge is self-breeding tritium via neutron capture in lithium, which would be done in a roughly one-metre thick “blanket” surrounding the reactor. This is far from straightforward and the STEP team are still researching what technology might prevail – whether to use a solid pebble-bed or liquid lithium. While liquid lithium is good at producing tritium, for example, extracting the isotope to put back into the reactor is complex.
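The breeding itself relies on neutron capture in the two natural lithium isotopes; the standard reactions (textbook values, not STEP-specific figures) are:

```latex
% Standard tritium-breeding reactions in a lithium blanket (textbook values):
\[
  n + {}^{6}\mathrm{Li} \;\longrightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV}
\]
\[
  n + {}^{7}\mathrm{Li} \;\longrightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + n - 2.5\ \mathrm{MeV}
  \quad (\text{endothermic, requires a fast neutron})
\]
```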

Howard Wilson, fusion pilot plant R&D lead at the Oak Ridge National Laboratory in the US, was keen to stress that STEP will not be a commercial power plant. Instead, its job is to demonstrate "a pathway towards commercialisation". That is likely to come in several stages, the first being to generate 1 GW of power, of which 100 MW would go to the "grid" (the other 900 MW being needed to power the plant's own systems). The second stage will be to test whether that power production is sustainable via the self-breeding of tritium back into the reactor – what is known as a "closed fuel cycle".

Ian Chapman, chief executive of the UKAEA, outlined what he called the "fiendish" challenges that lie ahead for fusion, even if STEP demonstrates that it is possible to deliver energy to the grid in a sustainable way. "We need to produce a project that will deliver energy someone will buy," he said. That will be achieved in part via STEP's third objective, which is to get a better understanding of the maintenance requirements of a fusion power plant and the impact that would have on reactor downtime. "We fail if there is not a cost-effective solution," added STEP engineering director Debbie Kempton.

STEP officials are now selecting industry partners in engineering and construction to work alongside the UKAEA on the design. Indeed, STEP is as much about physically building a plant as it is about creating a whole fusion industry. A breathless two-minute pre-event promotional film – which loftily compared the development of fusion to the advent of the steam train and vaccines – was certainly given a much-needed reality check.

The post UK reveals next STEPs toward prototype fusion power plant appeared first on Physics World.

]]>
Analysis Engineers and physicists have met to discuss the challenges and opportunities of building a practical fusion power plant in the UK https://physicsworld.com/wp-content/uploads/2024/09/STEP.jpeg newsletter1
Annular eclipse photograph bags Royal Observatory Greenwich prize https://physicsworld.com/a/annular-eclipse-photograph-bags-royal-observatory-greenwich-prize/ Thu, 12 Sep 2024 18:30:32 +0000 https://physicsworld.com/?p=116707 The image captures the progression of Baily’s beads, which are only visible when the Moon either enters or exits an eclipse

The post Annular eclipse photograph bags Royal Observatory Greenwich prize appeared first on Physics World.

]]>
US photographer Ryan Imperio has beaten thousands of amateur and professional photographers from around the world to win the 2024 Astronomy Photographer of the Year.

The image – Distorted Shadows of the Moon’s Surface Created by an Annular Eclipse – was taken during the 2023 annular eclipse.

It captures the progression of Baily’s beads, which are only visible when the Moon either enters or exits an eclipse. They are formed when sunlight shines through the valleys and craters of the Moon’s surface, breaking the eclipse’s well-known ring pattern.

“This is an impressive dissection of the fleeting few seconds during the visibility of the Baily’s beads,” noted meteorologist and competition judge Kerry-Ann Lecky Hepburn. “This image left me captivated and amazed. It’s exceptional work deserving of high recognition.”

As well as winning the £10,000 top prize, the image will go on display along with other selected pictures from the competition at an exhibition at the National Maritime Museum that opens on 13 September.

The award – now in its 16th year – is run by the Royal Observatory Greenwich in association with insurer Liberty Specialty Markets and BBC Sky at Night Magazine.

The competition received over 3500 entries from 58 countries.

The post Annular eclipse photograph bags Royal Observatory Greenwich prize appeared first on Physics World.

]]>
Blog The image captures the progression of Baily’s beads, which are only visible when the Moon either enters or exits an eclipse https://physicsworld.com/wp-content/uploads/2024/09/Distorted-Shadows-of-the-Moons-Surface-Created-by-an-Annular-Eclipse-©-Ryan-Imperio.jpg
Looking to the future of statistical physics, how intense storms can affect your cup of tea https://physicsworld.com/a/looking-to-the-future-of-statistical-physics-how-intense-storms-can-affect-your-cup-of-tea/ Thu, 12 Sep 2024 14:47:40 +0000 https://physicsworld.com/?p=116741 In this podcast we chat about active matter, artificial intelligence and storm Ciarán

The post Looking to the future of statistical physics, how intense storms can affect your cup of tea appeared first on Physics World.

]]>
In this episode of the Physics World Weekly podcast we explore two related areas of physics, statistical physics and thermodynamics.

First up we have two leading lights in statistical physics who explain how researchers in the field are studying phenomena as diverse as active matter and artificial intelligence.

They are Leticia Cugliandolo who is at Sorbonne University in Paris and Marc Mézard at Bocconi University in Italy.

Cugliandolo is also chief scientific director of the Journal of Statistical Mechanics: Theory and Experiment (JSTAT), a role from which Mézard has just stepped down. They both talk about how the journal and statistical physics have evolved over the past two decades and what the future could bring.

The second segment of this episode explores how intense storms can affect your cup of tea. Our guests are the meteorologists Caleb Miller and Giles Harrison, who measured the boiling point of water as storm Ciarán passed through the University of Reading in 2023. They explain the thermodynamics of what they found, and how the storm could have affected the quality of the millions of cups of tea brewed that day.

The post Looking to the future of statistical physics, how intense storms can affect your cup of tea appeared first on Physics World.

]]>
Podcasts In this podcast we chat about active matter, artificial intelligence and storm Ciarán https://physicsworld.com/wp-content/uploads/2024/09/12-9-24-maths-and-physics-equations-131120717-Shutterstock_agsandrew.jpg
Carbon defect in boron nitride creates first omnidirectional magnetometer https://physicsworld.com/a/carbon-defect-in-boron-nitride-creates-first-omnidirectional-magnetometer/ Thu, 12 Sep 2024 09:43:45 +0000 https://physicsworld.com/?p=116696 Quantum sensor can detect magnetic fields in any direction and monitor temperature changes in a sample at the same time

The post Carbon defect in boron nitride creates first omnidirectional magnetometer appeared first on Physics World.

]]>
A newly discovered carbon-based defect in the two-dimensional material hexagonal boron nitride (hBN) could be used as a quantum sensor to detect magnetic fields in any direction – a feat that is not possible with existing quantum sensing devices. Developed by a research team in Australia, the sensor can also detect temperature changes in a sample using the boron vacancy defect present in hBN. And thanks to its atomically thin structure, the sensor can conform to the shape of a sample, making it useful for probing structures that aren’t perfectly smooth.

The most sensitive magnetic field detectors available today exploit quantum effects to map the presence of extremely weak fields. To date, most of these have been made out of diamond and rely on the nitrogen vacancy (NV) centres contained within. NV centres are naturally occurring defects in the diamond lattice in which two carbon atoms are replaced with a single nitrogen atom, leaving one lattice site vacant. Together, the nitrogen atom and the vacancy can behave as a negatively charged entity with an intrinsic spin. NV centres are isolated from their surroundings, which means that their quantum behaviour is robust and stable.

When a photon hits an NV– centre, it can excite an electron to a higher-energy state. As it then decays back to the ground state, it may emit a photon of a different wavelength. The NV– centre has three spin sublevels, and the excited state of each sublevel has a different probability of emitting a photon when it decays.

By exciting an individual NV– centre repeatedly and collecting the emitted photons, researchers can detect its spin state. And since the spin state can be influenced by external variables such as magnetic field, electric field, temperature, force and pressure, NV– centres can therefore be used as atomic-scale sensors. Indeed, they are routinely employed today to study a wide variety of biological and physical systems.
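As a toy illustration of this optical readout (made-up numbers, not data from any experiment), the spin state is inferred from the average number of photons collected over many excite-and-detect cycles, because the "bright" m_s = 0 sublevel fluoresces more strongly than the "dark" m_s = ±1 sublevels:

```python
import numpy as np

# Toy model of optical spin readout (illustrative numbers only):
# the bright spin sublevel emits slightly more photons per excitation cycle
# than the dark ones, so repeating the excite-and-collect cycle many times
# lets the spin state be distinguished from the accumulated photon count.
rng = np.random.default_rng(1)
MEAN_COUNTS = {"bright": 0.03, "dark": 0.02}   # assumed photons detected per cycle

def readout(state: str, n_cycles: int = 100_000) -> float:
    """Average photons per cycle after n_cycles excite/collect repetitions."""
    return rng.poisson(MEAN_COUNTS[state], size=n_cycles).mean()

print("bright:", readout("bright"))
print("dark:  ", readout("dark"))
```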

There is a problem though – NV centres can only detect magnetic fields that are aligned in the same direction as the sensor. Devices must therefore contain many sensors placed at different alignment angles, which makes them difficult to use and limits them to specific applications. What's more, the fact that they are rigid (diamond being the hardest material known) means they cannot conform to the sample being studied.

A new carbon-based defect

Researchers recently discovered a new carbon-based defect in hBN, in addition to the boron vacancy that it is already known to contain. In this latest work, and thanks to a carefully calibrated Rabi experiment (a method for measuring nuclear spin), a team led by Jean-Philippe Tetienne of RMIT University and Igor Aharonovich of the University of Technology Sydney found that the carbon-based defect behaves as a spin-half system (S=1/2). In comparison, the spin in the boron defect is equal to one. And it’s this spin-half nature of the former that enables it to detect magnetic fields in any direction, say the researchers.
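In a Rabi experiment the spin is driven by an oscillating field and the probability of it flipping oscillates in time; the textbook two-level result (quoted here only for context, not the paper's full analysis) is:

```latex
% Textbook Rabi formula for a driven two-level spin
% (drive Rabi frequency Omega, detuning Delta from resonance):
\[
  P_{\mathrm{flip}}(t) = \frac{\Omega^{2}}{\Omega^{2} + \Delta^{2}}
  \,\sin^{2}\!\left(\tfrac{1}{2}\sqrt{\Omega^{2} + \Delta^{2}}\;t\right)
\]
```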

Team members Sam Scholten and Priya Singh

“Having two different independently addressable spin species within the same material at room temperature is unique, not even diamond has this capability,” explains Priya Singh from RMIT University, one of the lead authors of this study. “This is exciting because each spin species has its advantages and limitations, and so with hBN we can combine the best of both worlds. This is important especially for quantum sensing, where the spin half enables omnidirectional magnetometry, with no blind spot, while the spin one provides directional information when needed and is also a good temperature sensor.”

Until now, the spin multiplicity of the carbon defect was under debate in the hBN community, adds co-first author Sam Scholten from the University of Melbourne. “We have been able to unambiguously prove its spin-half nature, or more likely a pair of weakly coupled spin-half electrons.”

The new S=1/2 sensor can be controlled using light in the same way as the boron vacancy-based sensor. What’s more, the two defects can be tuned to interact with each other and thus used together to detect both magnetic fields and temperature at the same time. Singh points out that the carbon-based defects were also naturally present in pretty much every hBN sample the team studied, from commercially sourced bulk crystals and powders to lab-made epitaxial films. “To create the boron vacancy defects in the same sample, we had to perform just one extra step, namely irradiating the samples with high-energy electrons, and that’s it,” she explains.

To create the hBN sensor, the researchers simply drop-cast a hBN powder suspension onto the target object, or transferred an epitaxial film or an exfoliated flake. "hBN is very versatile and easy to work with," says Singh. "It is also low cost and easy to integrate with various other materials so we expect lots of applications will emerge in nanoscale sensing – especially thanks to the omnidirectional magnetometry capability, unique for solid-state quantum sensors."

The researchers are now trying to determine the exact crystallographic structure of the S=1/2 carbon defects and how they can engineer them on-demand in a few layers of hBN. “We are also planning sensing experiments that leverage the omnidirectional magnetometry capability,” says Scholten. “For instance, we can now image the stray field from a van der Waals ferromagnet as a function of the azimuthal angle of the applied field. In this way, we can precisely determine the magnetic anisotropy, something that has been a challenge with other methods in the case of ultrathin materials.”

The study is detailed in Nature Communications.

The post Carbon defect in boron nitride creates first omnidirectional magnetometer appeared first on Physics World.

]]>
Research update Quantum sensor can detect magnetic fields in any direction and monitor temperature changes in a sample at the same time https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_hBN-quantum-sensor-set.jpg newsletter1
Dancing humans embody topological properties https://physicsworld.com/a/dancing-humans-embody-topological-properties/ Wed, 11 Sep 2024 16:08:29 +0000 https://physicsworld.com/?p=116665 Choreographed high school students have fun simulating curious phase of matter

The post Dancing humans embody topological properties appeared first on Physics World.

]]>
High school students and scientists in the US have used dance to illustrate the physics of topological insulators. The students followed carefully choreographed instructions developed by scientists in what was a fun outreach activity that explained topological phenomena. The exercise demonstrates an alternative analogue for topologically nontrivial systems, which could be potentially useful for research.

“We thought that the way all of these phenomena are explained is rather contrived, and we wanted to, in some sense, democratize the notions of topological phases of matter to a broader audience,” says Joel Yuen-Zhou who is a theoretical chemist at the University of California, San Diego (UCSD). Yuen-Zhou led the research, which was done in collaboration with students and staff at Orange Glen High School near San Diego.

Topological insulators are a type of topological material where the bulk is an electrical insulator but the surface or edges (depending on whether the system is 3D or 2D) conduct electricity. The conducting states arise due to a characteristic of the electronic band structure associated with the system as a whole, which means they persist despite defects or distortions in the system so long as the fundamental topology of the system is undisturbed. Topology can be understood in terms of a coffee mug being topologically equivalent to a ring doughnut, because they both have a hole all the way through. This is unlike a jam doughnut, which does not have a hole and is therefore not topologically equivalent to a coffee mug.

Insulators without the conducting edge or surface states are “topologically trivial” and have insulating properties throughout. Yuen-Zhou explains that for topologically nontrivial properties to emerge, the system must be able to support wave phenomena and have something that fulfils the role of a magnetic field in condensed matter topological insulators. As such, analogues of topological insulators have been reported in systems ranging from oceanic and atmospheric fluids to enantiomeric molecules and active matter. Nonetheless, and despite the interest in topological properties for potential applications, they can still seem abstract and arcane.

Human analogue

Yuen-Zhou set about devising a human analogue of a topological insulator with then PhD student Matthew Du, who is now at the University of Chicago. The first step was to establish a Hamiltonian that defines how each site in a 2D lattice interacts with its neighbours and a magnetic field. They then formulated the Schrödinger equation of the system as an algorithm that updates after discrete steps in time and reproduces essential features of topological insulator behaviour. These are chiral propagation around the edges when initially excited at an edge; robustness to defects; propagation around the inside edge when the lattice has a hole in it; and an insulating bulk.

The UCSD researchers then explored how this quantum behaviour could be translated into human behaviour. This was a challenge because quantum mechanics operates in the realm of complex numbers, which have both real and imaginary components. Fortunately, they were able to identify initial conditions that lead to only real number values for the interactions at each time step of the algorithm. That way the humans, for whom imaginary interactions might be hard to simulate, could legitimately manifest only real numbers as they step through the algorithm. These real values were either one (choreographed as waving flags up), minus one (waving flags down) or zero (standing still).
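The choreography encodes the researchers' own lattice model, but the qualitative behaviour – an excitation that hugs the edge while the bulk stays quiet – can be reproduced in a minimal numerical sketch (this one uses the standard Harper–Hofstadter hopping Hamiltonian with a magnetic flux, not the paper's specific model):

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch (standard Harper-Hofstadter model, not the paper's Hamiltonian):
# a particle hops on an L x L lattice threaded by magnetic flux alpha per plaquette.
# Evolving an edge excitation in discrete time steps and measuring how much of the
# wavepacket stays on the boundary illustrates the edge-dominated propagation.
L, alpha = 10, 0.25
idx = lambda x, y: x * L + y                       # map lattice site to matrix index

H = np.zeros((L * L, L * L), dtype=complex)
for x in range(L):
    for y in range(L):
        if x + 1 < L:
            H[idx(x + 1, y), idx(x, y)] = -1.0     # hop in x
        if y + 1 < L:
            H[idx(x, y + 1), idx(x, y)] = -np.exp(2j * np.pi * alpha * x)  # Peierls phase
H += H.conj().T

U = expm(-1j * 0.5 * H)                            # one discrete time step
psi = np.zeros(L * L, dtype=complex)
psi[idx(0, L // 2)] = 1.0                          # start the excitation on the left edge
for _ in range(40):
    psi = U @ psi

prob = np.abs(psi.reshape(L, L)) ** 2
edge = np.zeros((L, L), dtype=bool)
edge[0, :] = edge[-1, :] = edge[:, 0] = edge[:, -1] = True
print("probability remaining on the boundary:", prob[edge].sum())
```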

“The structure isn’t actually specific just to the model that we focus on,” explains Du. “There’s actually a whole class of these kinds of models, and we demonstrate this for another example – actually a more famous model – the Haldane model, which has a honeycomb lattice.”

The researchers then created a grid on a floor at Orange Glen High School, with lines in blue or red joining neighbouring squares. They defined whether the interaction between those sites was parallel or antiparallel (that is, whether the occupants of the squares should wave the flags in the same or opposite direction to each other when prompted).

Commander and caller

A “commander” acts as the initial excitation that starts things off. This is prompted by someone who is not part of the 2D lattice, whom the researchers liken to a caller in line, square or contra dancing. The caller then prompts the commander to come to a standstill, at which point all those who have their flags waving determine if they have a “match”, that is, if they are dancing in kind or opposite to their neighbours as designated by the blue and red lines. Those with a match then stop moving, after which the “commander” or excitation moves to the one site where there is no such match.

Yuen-Zhou and Du taught the choreography to second- and third-year high school students. The result was that excitations propagated around the edge of the lattice, but bulk excitations fizzled out. There was also resistance to "defects".

“The main point about topological properties is that they are characterized by mathematics that are insensitive to many details,” says Yuen-Zhou. “While we choreograph the dance, even if there are imperfections and the students mess up, the dance remains and there is the flow of the dance along the edges of the group of people.”

The researchers were excited about showing that even a system as familiar as a group of people could provide an analogue of a topological material, since so far these properties have been “restricted to very highly engineered systems or very exotic materials,” as Yuen-Zhou points out.

“The mapping of a wave function to real numbers then to human movements clearly indicates the thought process of the researchers to make it more meaningful to students as an outreach activity,” says Shanti Pise, a principal technical officer at the Indian Institute of Science, Education and Research in Pune. She was not involved in this research project but specializes in using dance to teach mathematical ideas. “I think this unique integration of wave physics and dance would also give a direction to many researchers, teachers and the general audience to think, experiment and share their ideas!”

The research is described in Science Advances.

The post Dancing humans embody topological properties appeared first on Physics World.

]]>
Research update Choreographed high school students have fun simulating curious phase of matter https://physicsworld.com/wp-content/uploads/2024/09/crowd-of-people-connections-1166492679-iStock_gremlin.jpg newsletter1
Almost 70% of US students with an interest in physics leave the subject, finds survey https://physicsworld.com/a/almost-70-of-us-students-with-an-interest-in-physics-leave-the-subject-finds-survey/ Wed, 11 Sep 2024 11:30:40 +0000 https://physicsworld.com/?p=116659 The survey followed almost 4000 first-year students taking introductory physics courses at four US universities

The post Almost 70% of US students with an interest in physics leave the subject, finds survey appeared first on Physics World.

]]>
More than two-thirds of college students in the US who initially express an interest in studying physics drop out to pursue another degree. That is according to a five-year-long survey by the American Institute of Physics, which found that students often quit due to a lack of confidence in mathematics or poor experiences with physics departments and instructors. Most students, however, ended up in another science, technology, engineering and mathematics (STEM) field.

Carried out by AIP Statistical Research, the survey initially followed almost 4000 students in their first year of high school or college who were doing an introductory physics course at four large, predominantly white universities.

Students highlighted “learning about the universe”, “applying their problem-solving and maths skills”, “succeeding in a challenging subject” and “pursuing a satisfying career” as reasons why they choose to study physics.

Anne Marie Porter and her colleagues Raymond Chu and Rachel Ivie concentrated on the 745 students who had expressed interest in pursuing physics, following them for five academic years.

Over that period, only 31% graduated with a physics degree, with most of those switching to another degree during their first or second year. Under-represented groups, including women, African-American and Hispanic students, were the most likely to avoid physics degree courses.

Pull and push

While many who quit physics enjoyed their experience, they left due to “issues with poor teaching quality and large class sizes” as well as “negative perceptions that physics employment consists only of academic positions and desk jobs”. Self-appraisal played a role in the decision to leave too. “They may feel unable to succeed because they lack the necessary skills in physics,” Porter says. “That’s a reason for concern.”

Porter adds that early intervention in college is essential to retain physicists, with introductory physics courses being "incredibly important". Indeed, the survey comes at a time when the number of bachelor's degrees in physics awarded by US universities is growing more slowly than in other STEM fields.

Meanwhile, a separate report published by the National Academies of Science, Engineering, and Medicine has called on the US government to adopt a new strategy to recruit and retain talent in STEM subjects. In particular, the report urges Congress to smooth the path to permanent residency and US citizenship for foreign-born individuals working in STEM fields.

The post Almost 70% of US students with an interest in physics leave the subject, finds survey appeared first on Physics World.

]]>
News The survey followed almost 4000 first-year students taking introductory physics courses at four US universities https://physicsworld.com/wp-content/uploads/2024/09/student-in-library-24422647-iStock_Sam-Edwards.jpg newsletter
Improved antiproton trap could shed more light on antimatter-matter asymmetry https://physicsworld.com/a/improved-antiproton-trap-could-shed-more-light-on-antimatter-matter-asymmetry/ Wed, 11 Sep 2024 08:30:14 +0000 https://physicsworld.com/?p=116667 Maxwell's demon cooling trap measures the magnetic moment of antiprotons with higher precision than ever before

The post Improved antiproton trap could shed more light on antimatter-matter asymmetry appeared first on Physics World.

]]>
The "Maxwell’s demon cooling double trap" developed by the BASE collaboration can cool antiprotons very quickly

A novel particle trap invented at CERN could allow physicists to measure the magnetic moments of antiprotons with higher precision than ever before. The experiment, carried out by the international BASE collaboration, revealed that the magnetic moments of the antiparticles differ by at most one part in a billion (10⁻⁹) from those of their matter counterparts.

One of the biggest mysteries in physics today is why the universe appears to be made up almost entirely of matter and contains only tiny amounts of antimatter. According to the Standard Model, our universe should be largely matter-less. This is because when the universe formed nearly 14 billion years ago, equal amounts of antimatter and matter were generated. When pairs of these antimatter and matter particles collided, they annihilated and produced a burst of energy. This energy created new antimatter and matter particles, which annihilated each other again, and so on.

Physicists have been trying to solve this enigma by looking for tiny differences between a particle (such as a proton) and its antiparticle. If successful, such differences (even if extremely small) would shed more light on antimatter–matter asymmetry and perhaps even reveal physics beyond the Standard Model.

The aim of the BASE (Baryon Antibaryon Symmetry Experiment) collaboration is to measure the magnetic moment of the antiproton to extremely high precision and compare it with the magnetic moment of the proton. To do this, the researchers are using Penning traps, which employ magnetic and electric fields to hold a negatively charged antiproton, and can store antiprotons for years.

Quicker cooling

Preparing individual antiprotons so that their spin quantum states can be measured, however, involves cooling them down to extremely cold temperatures of 200 mK. Previous techniques took 15 h to achieve this, but BASE has now shortened this cooling time to just eight minutes.

The BASE team achieved this feat by joining two Penning traps to make a so-called “Maxwell’s demon cooling double trap”. The first trap cools the antiprotons. The second (referred to as the analysis trap in this study) has the highest magnetic field gradient for a device of its kind, as well as improved noise-protection electronics, a cryogenic cyclotron motion detector and ultrafast transport between the two traps.

The new instrument allowed the researchers to prepare only the coldest antiprotons for measurement, while at the same time rejecting any that had a higher temperature. This means that they did not have to waste time cooling down these warmer particles.
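The selection step can be pictured with a simple toy model (illustrative only – the bath temperature below is an assumption, and this is not the collaboration's analysis): draw a particle's energy from a thermal distribution and accept it only if it corresponds to less than 200 mK, otherwise send it back for another cooling cycle.

```python
import numpy as np

# Toy model of "Maxwell's demon" style selection (illustrative only):
# draw a particle's cyclotron energy from a thermal (Boltzmann) distribution,
# accept it if it lies below the threshold equivalent to 200 mK, otherwise
# send it back for further cooling and try again.
K_B = 1.380649e-23          # J/K
T_BATH = 5.0                # assumed effective temperature of the cooling trap, in kelvin
T_CUT = 0.2                 # acceptance threshold, 200 mK

rng = np.random.default_rng(42)

def prepare_cold_antiproton() -> int:
    """Return the number of cooling attempts before a sub-threshold energy is drawn."""
    attempts = 1
    while rng.exponential(K_B * T_BATH) > K_B * T_CUT:
        attempts += 1
    return attempts

trials = [prepare_cold_antiproton() for _ in range(1000)]
print("mean attempts per cold antiproton:", np.mean(trials))
```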

“With our new trap we need a measurement time of around one month, compared with almost 10 years using the old technique, which would be impossible to realize experimentally,” explains BASE spokesperson Stefan Ulmer, an experimental physicist at Heinrich Heine University Düsseldorf and a researcher at CERN and RIKEN.
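
As a quick back-of-the-envelope check – using only the figures quoted above, and assuming the campaign time simply scales with the per-particle preparation time – the two speed-ups are of the same order. A minimal sketch in Python:

# Rough consistency check using only the numbers quoted in the article:
# 15 h vs 8 min to prepare a cold antiproton; ~10 years vs ~1 month for
# a full measurement campaign.
cooling_old_min = 15 * 60      # 15 hours expressed in minutes
cooling_new_min = 8            # eight minutes with the new double trap
campaign_old_months = 10 * 12  # roughly ten years, in months
campaign_new_months = 1        # roughly one month

print(f"cooling speed-up:  {cooling_old_min / cooling_new_min:.0f}x")          # ~112x
print(f"campaign speed-up: {campaign_old_months / campaign_new_months:.0f}x")  # ~120x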

Ulmer says that he and his colleagues have already been able to show that the magnetic moments of protons and antiprotons differ by at most one part in a billion (10⁻⁹). They have also reduced the error rate in identifying the antiproton’s spin by more than a factor of 1000. Reducing this error rate was one of the team’s main motivations for this project.

The new cooling device could be of benefit to the Penning trap community at large, since colder particles generally result in more precise measurements. For example, it could be used for phase sensitive detection methods or spin state analysis, says Barbara Maria Latacz, CERN team member and lead author of this study. “Our trap is particularly interesting because it is relatively simple and robust compared to laser cooling systems,” she tells Physics World. “Specifically, it allows us to cool a single proton or antiproton to temperatures below 200 mK in less than eight minutes, which is not achievable with other cooling methods.”

The new device will now be a key element of the BASE experimental set-up, she says.

Looking forward, the researchers hope to improve the detection accuracy of the antiproton magnetic moment to 10⁻¹⁰ in their next measurement campaign. They report their current work in Physical Review Letters.

The post Improved antiproton trap could shed more light on antimatter-matter asymmetry appeared first on Physics World.

]]>
Research update Maxwell's demon cooling trap measures the magnetic moment of antiprotons with higher precision than ever before https://physicsworld.com/wp-content/uploads/2024/09/11-09-24-antiproton-trap-featured.jpg newsletter1
Vacuum for physics research https://physicsworld.com/a/vacuum-for-physics-research/ Wed, 11 Sep 2024 07:28:38 +0000 https://physicsworld.com/?p=116423 Available to watch now. Gain an understanding of vacuum to better enable your critical research, with Agilent Technologies.

The post Vacuum for physics research appeared first on Physics World.

]]>

Your research can’t happen without vacuum! If you’re pushing the boundaries of science or technology, you know that creating a near-perfect empty space is crucial. Whether you’re exploring the mysteries of subatomic particles, simulating the harsh conditions of outer space, or developing advanced materials, mastering ultra-high (UHV) and extreme-high vacuum (XHV) is necessary.

In this live webinar:

  • You will learn how vacuum enables physics research, from quantum computing, to fusion, to the fundamental nature of the universe.
  • You will discover why ultra-low-pressure environments directly impact the success of your experiments.
  • We will dive into the latest techniques and technologies for creating and maintaining UHV and XHV.

Join us to gain practical insights and stay ahead in your field – because in your research, vacuum isn’t just important; it’s critical.

John Screech graduated in 1986 with a BA in physics and has worked in analytical instrumentation ever since. His career has spanned general mass spectrometry, vacuum system development, and contraband detection. John joined Agilent in 2011 and currently leads training and education programmes for the Vacuum Products division. He also assists Agilent’s sales force and end-users with pre- and post-sales applications support. He is based near Toronto, Canada.

The post Vacuum for physics research appeared first on Physics World.

]]>
Webinar Available to watch now. Gain an understanding of vacuum to better enable your critical research, with Agilent Technologies. https://physicsworld.com/wp-content/uploads/2024/08/2024-09-19-webinar-image.jpg
Flagship journal Reports on Progress in Physics marks 90th anniversary with two-day celebration https://physicsworld.com/a/flagship-journal-reports-on-progress-in-physics-marks-90th-anniversary-with-two-day-celebration/ Tue, 10 Sep 2024 16:05:02 +0000 https://physicsworld.com/?p=116670 A new future lies in store for Reports on Progress in Physics as the journal turns 90

The post Flagship journal <em>Reports on Progress in Physics</em> marks 90th anniversary with two-day celebration appeared first on Physics World.

]]>
When the British physicist Edward Andrade wrote a review paper on the structure of the atom in the first volume of the journal Reports on Progress in Physics (ROPP) in 1934, he faced a problem familiar to anyone seeking to summarize the latest developments in a field. So much exciting research had happened in atomic physics that Andrade was finding it hard to cram everything in. “It is obvious, in view of the appalling number of papers that have appeared,” he wrote, “that only a small fraction can receive reference.”

Review articles are the ideal way to get up to speed with developments and offer a gateway into the scientific literature

Apologizing that “many elegant pieces of work have been deliberately omitted” due to a lack of space, Andrade pleaded that he had “honestly tried to maintain a just balance between the different schools [of thought]”. Nine decades on, Andrade’s struggles will be familiar to anyone who has ever tried to write a review paper, especially of a fast-moving area of physics. Readers, however, appreciate the efforts authors put in because review articles are the ideal way to get up to speed with developments and offer a gateway into the scientific literature.

Writing review papers also benefits authors because such articles are usually widely read and cited by other scientists – much more in fact than a paper containing new research findings. As a result, most review journals have an extraordinarily high “impact factor”, which is the yearly mean number of citations received by articles published in the last two years in the journal. ROPP, for example, has an impact factor of 19.0. While there are flaws with using impact factor to judge the quality of a journal, it’s still a well-respected metric in many parts of the world. And who wouldn’t want to appear in a journal with that much influence?
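
To illustrate how that metric works in practice, here is a minimal sketch of the calculation; the citation and article counts below are invented purely for the example and are not ROPP’s actual figures.

# Toy impact-factor calculation: citations received this year to papers
# published in the previous two years, divided by the number of citable
# papers published in those two years. All numbers are invented.
citations = {2022: 2100, 2023: 1900}   # hypothetical citation counts
papers = {2022: 106, 2023: 105}        # hypothetical numbers of citable papers

impact_factor = sum(citations.values()) / sum(papers.values())
print(f"impact factor: {impact_factor:.1f}")   # prints 19.0 for these made-up numbers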

New dawn for ROPP

Celebrating its 90th anniversary this year, ROPP is the flagship journal of IOP Publishing, which also publishes Physics World. As a learned-society publisher, IOP Publishing does not have shareholders, with any financial surplus ploughed back into the Institute of Physics (IOP) to support everyone from physics students to physics teachers. In contrast to journals owned by commercial publishers, therefore, ROPP has the international physics community at its heart.

Over the last nine decades, ROPP has published over 2500 review papers. There have been more than 20 articles by Nobel-prize-winning physicists, including famous figures from the past such as Hans Bethe (stellar evolution), Lawrence Bragg (protein crystallography) and Abdus Salam (field theory). More recently, ROPP has published papers by still-active Nobel laureates including Konstantin Novoselov (2D materials), Ferenc Krausz (attosecond physics) and Isamu Akasaki (blue LEDs) – see the box below for a full list.

Subir Sachdev

But the journal isn’t resting on its laurels. ROPP has recently started accepting articles containing new scientific findings for the first time, with the plan being to publish 150–200 very-high-quality primary-research papers each year. They will be in addition to the usual output of 50 or so review papers, most of which will still be commissioned by ROPP’s active editorial board. IOP Publishing hopes the move will cement the journal’s place at the pinnacle of its publishing portfolio.

“ROPP will continue as before,” says Subir Sachdev, a condensed-matter physicist from Harvard University, who has been editor-in-chief of the journal since 2022. “There’s no change to the review format, but what we’re doing is really more of an expansion. We’re adding a new section containing original research articles.” The journal is also offering an open-access option for the first time, thereby increasing the impact of the work. In addition, authors have the option to submit their papers for “double anonymous” and transparent peer review.

Maintaining high standards

Those two new initiatives – publishing primary research and offering an open-access option – are probably the biggest changes in the journal’s 90-year history. But Sachdev is confident the journal can cope. “Of course, we want to maintain our high standards,” he says. “ROPP has over the years acquired a strong reputation for very-high-quality articles. With the strong editorial board and the support we have from referees, we hope we will be able to maintain that.”

Early signs are promising. Among the first primary-research papers in ROPP are CERN’s measurement of the speed of sound in a quark–gluon plasma (87 077801), a study into flaws in the Earth’s gravitational field (87 078301), and an investigation into whether supersymmetry could be seen in 2D materials (10.1088/1361-6633/ad77f0). A further paper looks into creating an overarching equation of state for liquids based on phonon theory (87 098001).

The idea is to publish a relatively small number of papers but ensure they’re the best of what’s going on in physics and provide a really good cross section of what the physics community is doing

David Gevaux

David Gevaux, ROPP’s chief editor, who is in charge of the day-to-day running of the journal, is pleased with the quality and variety of primary research published so far. “The idea is to publish a relatively small number of papers – no more than 200 max – but ensure they’re the best of what’s going on in physics and provide a really good cross section of what the physics community is doing,” he says. “Our first papers have covered a broad range of physics, from condensed matter to astronomy.”

Another benefit of ROPP only publishing a select number of papers is that each article can have, as Gevaux explains, “a little bit more love” put into it. “Traditionally, publishers were all about printing journals and sending them around the world – it was all about distribution,” he says. “But with the Internet, everything’s immediately available and researchers almost have too many papers to trawl through. As a flagship journal, ROPP gives its published authors extra visibility, potentially through a press release or coverage in Physics World.”

Nobel laureates who have published in ROPP

Since its launch in 1934, Reports on Progress in Physics has published papers by numerous top scientists, including more than 20 current or future Nobel-prize-winning physicists. A selection of those papers written or co-authored by Nobel laureates over the journal’s first 90 years is given chronologically below. For brevity, papers by multiple authors list only the contributing Nobel winner.

Nevill Mott 1938 “Recent theories of the liquid state” (5 46) and 1939 “Reactions in solids” (6 186)
Hans Bethe 1939 “The physics of stellar interiors and stellar evolution” (6 1)
Max Born 1942 “Theoretical investigations on the relation between crystal dynamics and x-ray scattering” (9 294)
Martin Ryle 1950 “Radio astronomy” (13 184)
Willis Lamb 1951 “Anomalous fine structure of hydrogen and singly ionized helium” (14 19)
Abdus Salam 1955 “A survey of field theory” (18 423)
Alexei Abrikosov 1959 “The theory of a fermi liquid” (22 329)
David Thouless 1964 “Green functions in low-energy nuclear physics” (27 53)
Lawrence Bragg 1965 “First stages in the x-ray analysis of proteins” (28 1)
Melvin Schwartz 1965 “Neutrino physics” (28 61)
Pierre-Gilles de Gennes 1969 “Some conformation problems for long macromolecules” (32 187)
Dennis Gabor 1969 “Progress in holography” (32 395)
John Clauser 1978 “Bell’s theorem. Experimental tests and implications” (41 1881)
Norman Ramsey 1982 “Electric-dipole moments of elementary particles” (45 95)
Martin Perl 1992 “The tau lepton” (55 653)
Charles Townes 1994 “The nucleus of our galaxy” (57 417)
Pierre Agostini 2004 “The physics of attosecond light pulses” (67 813)
Takaaki Kajita 2006 “Discovery of neutrino oscillations” (69 1607)
Konstantin Novoselov 2011 “New directions in science and technology: two-dimensional crystals” (74 082501)
John Michael Kosterlitz 2016 “Kosterlitz–Thouless physics: a review of key issues” (79 026001)
Anthony Leggett 2016 “Liquid helium-3: a strongly correlated but well understood Fermi liquid” (79 054501)
Ferenc Krausz 2017 “Attosecond physics at the nanoscale” (80 054401)
Isamu Akasaki 2018 “GaN-based vertical-cavity surface-emitting lasers with AlInN/GaN distributed Bragg reflectors” (82 012502)

An event for the community

As another reminder of its place in the physics community, ROPP is hosting a two-day event at the IOP’s headquarters in London and online. Taking place on 9–10 October 2024, the hybrid event will present the latest cutting-edge condensed-matter research, from fundamental work to applications in superconductivity, topological insulators, superfluids, spintronics and beyond. Confirmed speakers at Progress in Physics 2024 include Piers Coleman (Rutgers University), Susannah Speller (University of Oxford), Nandini Trivedi (Ohio State University) and many more.

Artist's impression of a superconducting cube levitating

“We’re taking the journal out into the community,” says Gevaux. “IOP Publishing is very heavily associated with the IOP and of course the IOP has a large membership of physicists in the UK, Ireland and beyond. With the meeting, the idea is to bring that community and the journal together. This first meeting will focus on condensed-matter physics, with some of the ROPP board members giving plenary talks along with lectures from invited, external scientists and a poster session too.”

Longer-term, IOP Publishing plans to put ROPP at the top of a wider series of journals under the “Progress in” brand. The first of those journals is Progress in Energy, which was launched in 2019 and – like ROPP – has now also expanded its remit to include primary-research papers. Other, similar spin-off journals in different topic areas will be launched over the next few years, giving IOP Publishing what it hopes is a series of journals to match the best in the world.

For Sachdev, publishing with ROPP is all about having “the stamp of approval” from the academic community. “So if you think your field has now reached a point where a scholarly assessment of recent advances is called for, then please consider ROPP,” he says. “We have a very strong editorial board to help you produce a high-quality, impactful article, now with the option of open access and publishing really high-quality primary research papers too.”

The post Flagship journal <em>Reports on Progress in Physics</em> marks 90th anniversary with two-day celebration appeared first on Physics World.

]]>
Analysis A new future lies in store for Reports on Progress in Physics as the journal turns 90 https://physicsworld.com/wp-content/uploads/2024/09/superconductor-cube-levitiating-1663588811-iStock_koto_feja.jpg newsletter
Quantum growth drives investment in diverse skillsets https://physicsworld.com/a/quantum-growth-drives-investment-in-diverse-skillsets/ Tue, 10 Sep 2024 13:20:05 +0000 https://physicsworld.com/?p=116590 Scientific equipment makers are building a diverse workforce to feed into expanding markets in quantum technologies and low-temperature materials measurement

The post Quantum growth drives investment in diverse skillsets appeared first on Physics World.

]]>
The meteoric rise of quantum technologies from research curiosity to commercial reality is creating all the right conditions for a future skills shortage, while the ongoing pursuit of novel materials continues to drive demand for specialist scientists and engineers. Within the quantum sector alone, headline figures from McKinsey & Company suggest that less than half of available quantum jobs will be filled by 2025, with global demand being driven by the burgeoning start-up sector as well as enterprise firms that are assembling their own teams to explore the potential of quantum technologies for transforming their businesses.

While such topline numbers focus on the expertise that will be needed to design, build and operate quantum systems, a myriad of other skilled professionals will be needed to enable the quantum sector to grow and thrive. One case in point is the diverse workforce of systems engineers, measurement scientists, service engineers and maintenance technicians who will be tasked with building and installing the highly specialized equipment and instrumentation that is needed to operate and monitor quantum systems.

“Quantum is an incredibly exciting space right now, and we need to prepare for the time when it really takes off and explodes,” says Matt Martin, Managing Director of Oxford Instruments NanoScience, a UK-based company that manufactures high-performance cryogenics systems and superconducting magnets. “But for equipment makers like us the challenge is not just about quantum, since we are also seeing increased demand from both academia and industry for emerging applications in scientific measurement and condensed-matter physics.”

Martin points out that Oxford Instruments already works hard to identify and nurture new talent. Within the UK the company has for many years sponsored doctoral students to foster a deeper understanding of physics in the ultracold regime, and it also offers placements to undergraduates to spark an early interest in the technology space. The firm is also dialled into the country’s apprenticeship scheme, which offers an effective way to train young people in the engineering skills needed to manufacture and maintain complex scientific instruments.

Despite these initiatives, Martin acknowledges that NanoScience faces the same challenges as other organizations when it comes to recruiting high-calibre technical talent. In the past, he says, a skilled scientist would have been involved in all stages of the development process, but now the complexity of the systems and the depth of focus required to drive innovation across multiple areas of science and engineering have led to the need for greater specialization. While collaboration with partners and sister companies can help, the onus remains on each business to develop a core multidisciplinary team.

Building ultracold and measurement expertise

The key challenge for companies like Oxford Instruments NanoScience is finding physicists and engineers who can create the ultracold environments that are needed to study both quantum behaviour and the properties of novel materials. Compounding that issue is the growing trend towards providing the scientific community with more automated solutions, which has made it much easier for researchers to configure and conduct experiments at ultralow temperatures.

Harriet van der Vliet

“In the past PhD students might have spent a significant amount of time building their experiments and the hardware needed for their measurements,” explains Martin. “With today’s push-button solutions they can focus more on the science, but that changes their knowledge because there’s no need for them to understand what’s inside the box. Today’s measurement scientists are increasingly skilled in Python and integration, but perhaps less so in hardware.”

Developing such comprehensive solutions demands a broader range of technical specialists, such as software programmers and systems engineers, that are in short supply across all technology-focused industries. With many other enticing sectors vying for their attention, such as the green economy, energy and life sciences, and the rise of AI-enabled robotics, Martin understands the importance of inspiring young people to devote their energies to the technologies that underpin the quantum ecosystem. “We’ve got to be able to tell our story, to show why this new and emerging market is so exciting,” he says. “We want them to know that they could be part of something that will transform the future.”

To raise that awareness Oxford Instruments has been working to establish a series of applications centres in Japan, the US and the UK. One focus for the centres will be to provide training that helps users get the most out of the company’s instruments, particularly for those without direct experience of building and configuring an ultracold system. But another key objective is to expose university-level students to research-grade technology, which in turn should help to highlight future career options within the instrumentation sector.

To build on this initiative Oxford Instruments is now actively discussing opportunities to collaborate with other companies on skills development and training in the US. “We all want to provide some hands-on learning for students as they progress through their university education, and we all want to find ways to work with government programmes to stimulate this training,” says Martin. “It’s better for us to work together to deliver something more substantial rather than doing things in a piecemeal way.”

That collaboration is likely to centre around an initiative launched by US firm Quantum Design back in 2015. Under the scheme, now badged Discovery Teaching Labs, the company has donated one of its commercial systems for low-temperature material analysis, the PPMS VersaLab, to several university departments in the US. As part of the initiative the course professors are also asked to create experimental modules that enable undergraduate students to use this state-of-the-art technology to explore key concepts in condensed-matter physics.

“Our initial goal was to partner with universities to develop a teaching curriculum that uses hands-on learning to inspire students to become more interested in physics,” says Quantum Design’s Barak Green, who has been a passionate advocate for the scheme. “By enabling students to become confident with using these advanced scientific instruments, we have also found that we have equipped them with vital technical skills that can open up new career paths for them.”

One of the most successful partnerships has been with California State University San Marcos (CSUSM), a small college that mainly attracts students from communities with no prior tradition of pursuing a university education. “There is no way that the students at CSUSM would have been able to access this type of equipment in their undergraduate training, but now they have a year-long experimental programme that enhances their scientific learning and makes them much more comfortable with using such an advanced system,” says Green. “Many of these students can’t afford to stay in school to study for a PhD, and this programme has given them the knowledge and experience they need to get a good job.”

California State University San Marcos (CSUSM)

Indeed, Quantum Design has already hired around 20 students from CSUSM and other local programmes. “We didn’t start the initiative with that in mind, but over the years we discovered that we had all these highly skilled people who could come and work for us,” Green continues. “Students who only do theory are often very nervous around these machines, but the CSUSM graduates bring a whole extra layer of experience and know-how. Not everyone needs to have a PhD in quantum physics, we also need people who can go into the workforce and build the systems that the scientists rely on.”

This overwhelming success has given greater impetus to the programme, with Quantum Design now seeking to bring in other partners to extend its reach and impact. LakeShore Cryotronics, a long-time collaborator that designs and builds low-temperature measurement systems that can be integrated into the VersaLab, was the first company to make the commitment. In 2023 the US-based firm donated one of its M91 FastHall measurement platforms to join the VersaLab already installed at CSUSM, and the two partners are now working together to establish an undergraduate teaching lab at Stony Brook University in New York.

“It’s an opportunity for like-minded scientific companies to give something back to the community, since most of our products are not affordable for undergraduate programmes,” says LakeShore’s Chuck Cimino, who has now joined the board of advisors for the Discovery Teaching Labs programme. “Putting world-class equipment into the hands of students can influence their decisions to continue in the field, and in the long term will help to build a future workforce of skilled scientists and engineers.”

Conversations with other equipment makers at the 2024 APS March Meeting also generated significant interest, potentially paving the way for Oxford Instruments to join the scheme. “It’s a great model to build on, and we are now working to see how we might be able to commit some of our instruments to those training centres,” says Martin, who points out that the company’s Proteox S platform offers the ideal entry-level system for teaching students how to manage a cold space for experiments with qubits and condensed-matter systems. “We’ve developed a lot of training on the hardware and the physicality of how the systems work, and in that spirit of sharing there’s lots of useful things we could do.”

While those discussions continue, Martin is also looking to a future when quantum-powered processors become a practical reality in compute-intensive settings such as data centres. “At that point there will be huge demand for ultracold systems that are capable of hosting and operating large-scale quantum computers, and we will suddenly need lots of people who can install and service those sorts of systems,” he says. “We are already thinking about ways to set up training centres to develop that future workforce, which will primarily be focused around service engineers and maintenance technicians.”

Martin believes that partnering with government labs could offer a solution, particularly in the US where various initiatives are already in place to teach technical skills to college-level students. “It’s about taking that forward view,” he says. “We have already built a product that can be used for training purposes, and we have started discussions with US government agencies to explore how we could work together to build the workforce that will be needed to support the big industrial players.”

The post Quantum growth drives investment in diverse skillsets appeared first on Physics World.

]]>
Analysis Scientific equipment makers are building a diverse workforce to feed into expanding markets in quantum technologies and low-temperature materials measurement https://physicsworld.com/wp-content/uploads/2024/09/web-Matt-Martin-headshot.jpg newsletter
Quantum brainwave: using wearable quantum technology to study cognitive development https://physicsworld.com/a/quantum-brainwave-using-wearable-quantum-technology-to-study-cognitive-development/ Tue, 10 Sep 2024 10:00:04 +0000 https://physicsworld.com/?p=116530 Margot Taylor and David Woolger explain to Physics World why quantum-sensing technology is a game-changer for studying children’s brains

The post Quantum brainwave: using wearable quantum technology to study cognitive development appeared first on Physics World.

]]>
Though she isn’t a physicist or an engineer, Margot Taylor has spent much of her career studying electrical circuits. As the director of functional neuroimaging at the Hospital for Sick Children in Toronto, Canada, Taylor has dedicated her research to the most complex electrochemical device on the planet – the human brain.

Taylor uses various brain imaging techniques including MRI to understand cognitive development in children. One of her current projects uses a novel quantum sensing technology to map electrical brain activity. Magnetoencephalography with optically pumped magnetometry (OPM-MEG) is a wearable technology that uses quantum spins to localize electrical impulses coming from different regions of the brain.

Physics World’s Hamish Johnston caught up with Taylor to discover why OPM-MEG could be a breakthrough for studying children, and how she’s using it to understand the differences between autistic and non-autistic people.

The OPM-MEG helmets Taylor uses in this research were developed by Cerca Magnetics, a company founded in 2020 as a spin-out from the University of Nottingham’s Sir Peter Mansfield Imaging Centre in the UK. Johnston also spoke to Cerca’s chief executive David Woolger, who explained how the technology works and what other applications they are developing.

Margot Taylor: understanding the brain

What is magnetoencephalography, and how is it used in medicine?

Magnetoencephalography (MEG) is the most sensitive non-invasive means we have of assessing brain function. Specifically, the technique gives us information about electrical activity in the brain. It doesn’t give us any information about the structure of the brain, but the disorders that I’m interested in are disorders of brain function, rather than disorders of brain structure. There are some other techniques, but MEG gives us amazing temporal and spatial resolution, which makes it very valuable.

So you’re measuring electrical signals. Does that mean that the brain is essentially an electrical device?

Indeed, they are hugely complex electrical devices. Technically it’s electrochemical, but we are measuring the electrical signals that are the product of the electrochemical reactions in the brain.

When you perform MEG, how do you know where that signal’s coming from?

We usually get a structural MRI as well, and then we have very good source localization approaches so that we can tell exactly where in the brain different signals are coming from. We can also get information about how the signals are connecting with each other, the interactions among different brain regions, and the timing of those interactions.

Three complex-looking helmets on shelves next to a fun child-friendly mural

Why does quantum MEG make it easier to do brain scans on children?

The quantum technology is called optically pumped magnetometry (OPM) and it’s a wearable system, where the sensors are placed in a helmet. This means that movement is allowed, because the helmet moves with the child. We’re able to record brain signals in very young children because they can move or sit on their parents’ laps; they don’t have to be lying perfectly still.

Conventional MEG uses cryogenic technology and is typically one size fits all. It’s designed for an adult male head and if you put in a small child, their head is a long way from the sensors. With OPM, however, the helmet can be adapted for different sized heads. We have little tiny helmets up to bigger helmets. This is a game changer in terms of recording signals in little children.

Can you tell us more about the study you’re leading at the Hospital for Sick Children in Toronto using a quantum MEG system from the UK’s Cerca Magnetics?

We are looking at early brain function in autistic and non-autistic children. Autism is usually diagnosed by about three years of age, although sometimes it’s not diagnosed until they’re older. But if a child could be diagnosed with autism earlier, then interventions could start earlier. And so we’re looking at autistic and non-autistic children as well as children that have a high likelihood of being autistic to see if we can get brain signals that will predict whether they will go on to get a diagnosis or not.

How do the responses you measure using quantum MEG differ between autistic and non-autistic people, or those with a high likelihood of developing autism?

We don’t have that data yet because we’re looking at the children who have a high likelihood of being autistic, so we have to wait until they grow up and for another year or so to see if they get a diagnosis. For the children who do have a diagnosis of autism already, it seems like the responses are atypical, but we haven’t fully analysed that data. We think that there is a signal there that we’ll be able to report in the foreseeable future, but we have only tested 32 autistic children so far, and we’d like to get more data before we publish.

A woman sits with her back to the camera wearing a helmet covered with electronics. Two more women stand either side

Do you have any preliminary results or published papers based on this data yet?

We’re still analysing data. We’re seeing beautiful, age-related changes in our cohort of non-autistic children. Because nobody has been able to do these studies before, we have to establish the foundational datasets with non-autistic children before we can compare it to autistic children or children who have a high likelihood of being autistic. And those will be published very shortly.

Are you using the quantum MEG system for anything else at the moment?

With the OPM system, we’re also setting up studies looking at children with epilepsy. We want to compare the OPM technology with the cryogenic MEG and other imaging technologies and we’re working with our colleagues to do that. We’re also looking at children who have a known genetic disorder to see if they have brain signals that predict whether they will also go on to experience a neurodevelopmental disorder. We’re also looking at children who are born to mothers with HIV to see if we can get an indication of what is happening in their brains that may affect their later development.

David Woolger: expanding applications

Can you give us a brief description of Cerca Magnetics’ technology and how it works?

When a neuron fires, you get an electrical current and a corresponding magnetic field. Our technology uses optically pumped magnetometers (OPMs), which are very sensitive to magnetic fields. Effectively, we’re sensing magnetic fields 500 million times weaker than the Earth’s magnetic field.

To enable us to do that, as well as the quantum sensors, we need to shield against the Earth’s magnetic field, so we do this in a shielded environment with both active and passive shielding. We are then able to measure the magnetic fields from the brain, which we can use to understand functionally what’s going on in that area.

Are there any other applications for this technology beyond your work with Margot Taylor?

There’s a vast number of applications within the field of brain health. For example, we’re working with a team in Oxford at the moment, looking at dementia. So that’s at the other end of the life cycle, studying ways to identify the disease much earlier. If you can do that you can potentially start treatment with drugs or other interventions earlier.

Outside brain health, there are a number of groups who are using this quantum technology in other areas of medical science. One group in Arkansas is looking at foetal imaging during pregnancy, using it to see much more clearly than has previously been possible.

There’s another group in London looking at spinal imaging using OPM. Concussion is another potential application of these sensors, for sports or military injuries. There’s a vast range of medical-imaging applications that can be done with these sensors.

Have you looked at non-medical applications?

Cerca is very much a medical-imaging company, but I am aware of other applications of the technology. For example, applications with car batteries have potential to be a big market. When they make car batteries, there’s a lot of electrochemistry that goes into the cells. If you can image those processes during production, you can effectively optimize that production cycle, and therefore reduce the costs of the batteries. This has a real potential benefit for use in electric cars.

What’s next for Cerca Magnetics’ technology?

We are in a good position in that we’ve been able to deliver our initial systems to the research market and actually earn revenue. We have made a profit every year since we started trading. We have then reinvested that profit back into further development. For example, we are looking at scanning two people at once, looking at other techniques that will continue to develop the product, and most importantly, working on medical device approval. At the moment, our system is only sold to research institutes, but we believe that if the product were available in every hospital and every doctor’s surgery, it could have an incredible societal impact across the human lifespan.

Magnetoencephalography with optically pumped magnetometers

Schematic showing the working principle behind optically pumped magnetometry

Like any electrical current, signals transmitted by neurons in the brain generate magnetic fields. Magnetoencephalography (MEG) is an imaging technique that detects these signals and locates them in the brain. MEG has been used to plan brain surgery to treat epilepsy. It is also being developed as a diagnostic tool for disorders including schizophrenia and Alzheimer’s disease.

MEG traditionally uses superconducting quantum interference devices (SQUIDs), which are sensitive to very small magnetic fields. However, SQUIDs must be cryogenically cooled, which makes the technology bulky and immobile. Magnetoencephalography with optically pumped magnetometers (OPM-MEG) is an alternative technology that operates at room temperature. Optically pumped magnetometers (OPMs) are small quantum devices that can be integrated into a helmet, which is an advantage for imaging children’s brains.

The key components of an OPM device are a cloud of alkali atoms (generally rubidium), a laser and a photodetector. Initially, the spins of the atoms point in random directions (top row in figure), but applying polarized laser light of the correct frequency aligns the spins along the direction of the light (middle row in figure). When the atoms are in this state, they are transparent to the laser, so the signal reaching the photodetector is at a maximum.

However, in the presence of a magnetic field, such as that from a brain wave, the spins of the atoms are perturbed and they are no longer aligned with the laser (bottom row in figure). The atoms can now absorb some of the laser light, which reduces the signal reaching the photodetector.

In OPM-MEG, these devices are placed around the patient’s head and integrated into a helmet. By measuring the signal from the devices and combining this with structural images and computer modelling, it’s possible to work out where in the brain the signal came from. This can be used to understand how electrical activity in different brain regions is linked to development, brain disorders and neurodivergence.
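
To give a feel for the detection principle described above, the sketch below models the photodetector signal as an idealized resonance centred on zero magnetic field – maximal transmission when the spins stay aligned, falling off as the field perturbs them. The line shape, width and depth are illustrative assumptions, not Cerca’s measured response.

# Idealized zero-field resonance of an optically pumped magnetometer:
# the photodetector signal is maximal at B = 0 (spins aligned, vapour
# transparent) and falls off as the field perturbs the spins. The width
# and depth below are made-up illustrative values.
def transmitted_signal(field_nT, width_nT=10.0, depth=0.5):
    x = (field_nT / width_nT) ** 2
    return 1.0 - depth * x / (1.0 + x)

for field in (0.0, 5.0, 10.0, 50.0):
    print(f"B = {field:5.1f} nT  ->  relative signal = {transmitted_signal(field):.3f}")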

Katherine Skipper

The post Quantum brainwave: using wearable quantum technology to study cognitive development appeared first on Physics World.

]]>
Interview Margot Taylor and David Woolger explain to Physics World why quantum-sensing technology is a game-changer for studying children’s brains https://physicsworld.com/wp-content/uploads/2024/09/2024-09-Cerca-frontis.jpg 1
Electro-active material ‘learns’ to play Pong https://physicsworld.com/a/electro-active-material-learns-to-play-pong/ Tue, 10 Sep 2024 08:45:39 +0000 https://physicsworld.com/?p=116611 Memory-like behaviour emerges in a polymer gel

The post Electro-active material ‘learns’ to play Pong appeared first on Physics World.

]]>
An electro-active polymer hydrogel can be made to “memorize” experiences in the same way as biological neurons do, say researchers at the University of Reading, UK. The team demonstrated this finding by showing that when the hydrogel is configured to play the classic video game Pong, it improves its performance over time. While it would be simplistic to say that the hydrogel truly learns like humans and other sentient beings, the researchers say their study has implications for studies of artificial neural networks. It also raises questions about how “simple” such a system can actually be, if it is capable of such complex behaviour.

Artificial neural networks are machine-learning algorithms that are configured to mimic structures found in biological neural networks (BNNs) such as human brains. While these forms of artificial intelligence (AI) can solve problems through trial and error without being explicitly programmed with pre-defined rules, they are not generally regarded as being adaptive, as BNNs are.

Playing Pong with neurons

In a previous study, researchers led by neuroscientist Karl Friston of University College London, UK and Brett Kagan of Cortical Labs in Melbourne, Australia, integrated a BNN with computing hardware by growing a large cluster of human neurons on a silicon chip. They then connected this chip to a computer programmed to play a version of Pong, a table-tennis-like game that originally involved a player and the computer bouncing an electronic ball between two computerized paddles. In this case, however, the researchers simplified the game so that there was only a single paddle on one side of the virtual table.

To find out whether this paddle had contacted the ball, Friston, Kagan and colleagues transmitted electrical signals to the neuronal network via the chip. At first, the neurons did not play Pong very well, but over time, they hit the ball more frequently and made more consecutive hits, allowing for longer rallies.

In this earlier work, the researchers described the neurons as being able to “learn” the game thanks to the concept of free energy as defined by Friston in 2010. He argued that neurons endeavour to minimize free energy, and therefore “choose” the option that allows them to do this most efficiently.

An even simpler version

Inspired by this feat and by the technique employed, the Reading researchers wondered whether such an emergent memory function could be generated in media that were even simpler than neurons. For their experiments, they chose to study a hydrogel (a complex polymer that jellifies when hydrated) that contains free-floating ions. These ions make the polymer electroactive, meaning that its behaviour is influenced by an applied electric field. As the ions move, they draw water with them, causing the gel to swell in the area where the electric field is applied.

The time it takes for the hydrogel to swell is much greater than the time it takes to de-swell, explains team member Vincent Strong. “This means there is a form of hysteresis in the ion motion because each consecutive stimulation moves the ions less and less as they gather,” says Strong, a robotics engineer at Reading and the first author of a paper in Cell Reports Physical Science on the new research. “This acts as a form of memory since the result of each stimulation on the ion’s motion is directly influenced by previous stimulations and ion motion.”

This form of memory allows the hydrogel to build up experience about how the ball moves in Pong, and thus to move its paddle with greater accuracy, he tells Physics World. “The ions within the gel move in a way that maps a memory of the ball’s motion not just at any given point in time but over the course of the entire game.”
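
That history dependence can be caricatured with a toy state variable whose response to each identical stimulus shrinks as previous stimuli accumulate. The sketch below is an illustrative simplification of the idea, not the Reading team’s model of the gel.

# Toy caricature of a history-dependent response: the "displacement"
# produced by each identical stimulus shrinks as previous stimuli
# accumulate, so the current state encodes how the system has been
# stimulated in the past. Illustration only, not the actual gel model.
def stimulate(state, strength=1.0, saturation=10.0):
    increment = strength * (1.0 - state / saturation)   # diminishing response
    return state + max(increment, 0.0)

state = 0.0
for n in range(1, 6):
    new_state = stimulate(state)
    print(f"stimulus {n}: response = {new_state - state:.2f}, state = {new_state:.2f}")
    state = new_state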

The researchers argue that their hydrogel represents a different type of “intelligence”, and one that could be used to develop algorithms that are simpler than existing AI algorithms, most of which are derived from neural networks.

“We see this work as an example of how a much simpler system, in the form of an electro-active polymer hydrogel, can perform similar complex tasks to biological neural networks,” Strong says. “We hope to apply this as a stepping stone to finding the minimum system required for such tasks that require memory and improvement over time, looking both into other active materials and tasks that could provide further insight.

“We’ve shown that memory is emergent within the hydrogels, but the next step is to see whether we can also show specifically that learning is occurring.”

The post Electro-active material ‘learns’ to play Pong appeared first on Physics World.

]]>
Research update Memory-like behaviour emerges in a polymer gel https://physicsworld.com/wp-content/uploads/2024/09/hydrogel-pong.jpg
Fusion’s public-relations drive is obscuring the challenges that lie ahead https://physicsworld.com/a/fusions-public-relations-drive-is-obscuring-the-challenges-that-lie-ahead/ Mon, 09 Sep 2024 10:00:40 +0000 https://physicsworld.com/?p=116472 Guy Matthews says that the focus on public relations is masking the challenges of commercializing nuclear fusion

The post Fusion’s public-relations drive is obscuring the challenges that lie ahead appeared first on Physics World.

]]>
“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” So stated the Nobel laureate Richard Feynman during a commission hearing into NASA’s Challenger space shuttle disaster in 1986, which killed all seven astronauts onboard.

Those famous words have since been applied to many technologies, but they are becoming especially apt to nuclear fusion where public relations currently appears to have the upper hand. Fusion has recently been successful in attracting public and private investment and, with help from the private sector, it is claimed that fusion power can be delivered in time to tackle climate change in the coming decades.

Yet this rosy picture hides the complexity of the novel nuclear technology and plasma physics involved. As John Evans – a physicist who has worked at the Atomic Energy Research Establishment in Harwell, UK – recently highlighted in Physics World, there is a lack of proven solutions for the fusion fuel cycle, which involves breeding and reprocessing unprecedented quantities of radioactive tritium with extremely low emissions.

Unfortunately, this is just the tip of the iceberg. Another stubborn roadblock lies in instabilities in the plasma itself – for example, so-called Edge Localised Modes (ELMs), which originate in the outer regions of tokamak plasmas and are akin to solar flares. If not strongly suppressed they could vaporize areas of the tokamak wall, causing fusion reactions to fizzle out. ELMs can also trigger larger plasma instabilities, known as disruptions, that can rapidly dump the entire plasma energy and apply huge electromagnetic forces that could be catastrophic for the walls of a fusion power plant.

In a fusion power plant, the total thermal energy stored in the plasma needs to be about 50 times greater than that achieved in the world’s largest machine, the Joint European Torus (JET). JET operated at the Culham Centre for Fusion Energy in Oxfordshire, UK, until it was shut down in late 2023. I was responsible for upgrading JET’s wall to tungsten/beryllium and subsequently chaired the wall protection expert group.

JET was an extremely impressive device, and just before it ceased operation it set a new world record for controlled fusion energy production of 69 MJ. While this was a scientific and technical tour de force, in absolute terms the fusion energy created and plasma duration achieved at JET were minuscule. A power plant with a sustained fusion power of 1 GW would produce 86 million MJ of fusion energy every day. Furthermore, large ELMs and disruptions were a routine feature of JET’s operation and occasionally caused local melting. Such behaviour would render a power plant inoperable, yet these instabilities remain to be reliably tamed.
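
Putting those figures side by side is straightforward arithmetic on the numbers quoted above:

# Simple arithmetic on the figures quoted in the text: a plant sustaining
# 1 GW of fusion power for one day, compared with JET's 69 MJ record.
power_watts = 1e9                    # 1 GW of sustained fusion power
seconds_per_day = 24 * 3600
daily_energy_MJ = power_watts * seconds_per_day / 1e6   # joules -> megajoules
jet_record_MJ = 69                   # JET's 2023 world record

print(f"daily output: {daily_energy_MJ:.3g} MJ")                       # ~8.64e7 MJ, i.e. ~86 million MJ
print(f"ratio to JET record: {daily_energy_MJ / jet_record_MJ:.2e}")   # over a million times larger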

Complex issues

Fusion is complex – solutions to one problem often exacerbate other problems. Furthermore, many of the physics and technology features that are essential for fusion power plants and require substantial development and testing in a fusion environment were not present in JET. One example is the technology to drive the plasma current sustainably using microwaves. The purpose of the international ITER project, which is currently being built in Cadarache, France, is to address such issues.

ITER, which is modelled on JET, is a “low duty cycle” physics and engineering experiment. Delays and cost increases are the norm for large nuclear projects and ITER is no exception. It is now expected to start scientific operation in 2034, but the first experiments using “burning” fusion fuel – a mixture of deuterium and tritium (D–T) – are only set to begin in 2039. ITER, which is equipped with many plasma diagnostics that would not be feasible in a power plant, will carry out an extensive research programme that includes testing tritium-breeding technologies on a small scale, ELM suppression using resonant magnetic perturbation coils and plasma-disruption mitigation systems.

The challenges ahead cannot be overstated. For fusion to become commercially viable with an acceptably low output of nuclear waste, several generations of power-plant-sized devices could be needed

Yet the challenges ahead cannot be overstated. For fusion to become commercially viable with an acceptably low output of nuclear waste, several generations of power-plant-sized devices could be needed following any successful first demonstration of substantial fusion-energy production. Indeed, EUROfusion’s Research Roadmap, which the UK co-authored when it was still part of ITER, sees fusion as only making a significant contribution to global energy production in the course of the 22nd century. This may be politically unpalatable, but it is a realistic conclusion.

The current UK strategy is to construct a fusion power plant – the Spherical Tokamak for Energy Production (STEP) – at West Burton, Nottinghamshire, by 2040 without awaiting results from intermediate experiments such as ITER. This strategy would appear to be a consequence of post-Brexit politics. However, it looks unrealistic scientifically, technically and economically. The total thermal energy of the STEP plasma needs to be about 5000 times greater than has so far been achieved in the UK’s MAST-U spherical tokamak experiment. This will entail an extreme, and unprecedented, extrapolation in physics and technology. Furthermore, the compact STEP geometry means that during plasma disruptions its walls would be exposed to far higher energy loads than ITER, where the wall protection systems are already approaching physical limits.

I expect that the complexity inherent in fusion will continue to provide its advocates, both in the public and private sphere, with ample means to obscure both the severity of the many issues that lie ahead and the timescales required. Returning to Feynman’s remarks, sooner or later reality will catch up with the public relations narrative that currently surrounds fusion. Nature cannot be fooled.

The post Fusion’s public-relations drive is obscuring the challenges that lie ahead appeared first on Physics World.

]]>
Opinion and reviews Guy Matthews says that the focus on public relations is masking the challenges of commercializing nuclear fusion https://physicsworld.com/wp-content/uploads/2024/09/2024-09-Forum-fusion-ITER.jpg newsletter
To make Mars warmer, just add nanorods https://physicsworld.com/a/to-make-mars-warmer-just-add-nanorods/ Mon, 09 Sep 2024 08:00:38 +0000 https://physicsworld.com/?p=116609 Releasing engineered nanoparticles into the Martian atmosphere could warm the planet by over 30 K

The post To make Mars warmer, just add nanorods appeared first on Physics World.

]]>
If humans released enough engineered nanoparticles into the atmosphere of Mars, the planet could become more than 30 K warmer – enough to support some forms of microbial life. This finding is based on theoretical calculations by researchers in the US, and it suggests that “terraforming” Mars to support temperatures that allow for liquid water may not be as difficult as previously thought.

“Our finding represents a significant leap forward in our ability to modify the Martian environment,” says team member Edwin Kite, a planetary scientist at the University of Chicago.

Today, Mars is far too cold for life as we know it to thrive there. But it may not have always been this way. Indeed, streams may have flowed on the red planet as recently as 600 000 years ago. The idea of returning Mars to this former, warmer state – terraforming – has long kindled imaginations, and scientists have proposed several ways of doing it.

One possibility would be to increase the levels of artificial greenhouse gases, such as chlorofluorocarbons, in Mars’ currently thin atmosphere. However, this would require volatilizing roughly 100 000 megatons of fluorine, an element that is scarce on the red planet’s surface. This means that essentially all the fluorine required would need to be transported to Mars from somewhere else – something that is not really feasible.

An alternative would be to use materials already present on Mars’ surface, such as those in aerosolized dust. Natural Martian dust is mainly made of iron-rich minerals distributed in particles roughly 1.5 microns in radius, which are easily lofted to altitudes of 60 km and more. In its current form, this dust actually lowers daytime surface temperatures by attenuating infrared solar radiation. A modified form of dust might, however, experience different interactions. Could this modified dust make the planet warmer?

Nanoparticles designed to trap escaping heat and scatter sunlight

In a proof-of-concept study, Kite and colleagues at the University of Chicago, the University of Central Florida and Northwestern University analysed the atmospheric effects of nanoparticles shaped like short rods about nine microns long, which is about the same size as commercially available glitter. These particles have an aspect ratio of around 60:1, and Kite says they could be made from readily-available Martian materials such as iron or aluminium.

Finite-difference time-domain calculations showed that such nanorods, which are randomly oriented due to Brownian motion, would strongly scatter and absorb upwelling thermal infrared radiation in certain spectral windows. The nanorods would also scatter sunlight down towards the surface, adding to the warming, and would settle out of the atmosphere and onto the Martian surface more than 10 times more slowly than natural dust. This implies that, once airborne, the nanorods would be lofted to high altitudes and remain in the atmosphere for long periods.
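
For a rough sense of scale, a textbook single-layer “grey atmosphere” model – not the authors’ detailed radiative-transfer calculation – suggests how strongly such a layer would have to absorb infrared radiation to deliver roughly 30 K of warming, taking Mars’ effective temperature as about 210 K (an assumed round number).

# One-layer grey-atmosphere toy model: T_surface^4 = T_effective^4 / (1 - eps/2),
# where eps is the infrared absorptivity/emissivity of the layer.
# Illustrative estimate only, not the study's radiative-transfer result.
T_effective = 210.0     # K, approximate effective temperature of Mars (assumed)
warming = 30.0          # K, warming reported in the study

T_surface = T_effective + warming
eps = 2.0 * (1.0 - (T_effective / T_surface) ** 4)
print(f"required infrared absorptivity of the layer: {eps:.2f}")   # roughly 0.8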

More efficient than previous Martian warming proposals

These factors give the nanorod idea several advantages over comparable schemes, Kite says. “Our approach is over 5000 times more efficient than previous global warming proposals (on a per-unit-mass-in-the-atmosphere basis) because it uses much less mass of material to achieve significant warming,” he tells Physics World. “Previous schemes required importing large amounts of gases from Earth or mining rare Martian resources, [but] we find that nanoparticles can achieve similar warming with a much smaller total mass.”

However, Kite stresses that the comparison only applies to approaches that aim to warm Mars’ atmosphere on a global scale. Other approaches, including one developed by researchers at Harvard University and NASA’s Jet Propulsion Laboratory (JPL) that uses silica aerogels, would be better suited for warming the atmosphere locally, he says, adding that a recent workshop on Mars terraforming provides additional context.

While the team’s research is theoretical, Kite believes it opens new avenues for exploring planetary climate modification. It could inform future Mars exploration or even long-term plans for making Mars more habitable for microbes and plants. Extensive further research would be required, however, before any practical efforts in this direction could see the light of day. In particular, more work is needed to assess the very long-term sustainability of a warmed Mars. “Atmospheric escape to space would take at least 300 million years to deplete the atmosphere at the present-day rate,” he observes. “And nanoparticle warming, by itself, is not sufficient to make the planet’s surface habitable again either.”

Kite and colleagues are now studying the effects of particles of different shapes and compositions, including very small carbon nanoparticles such as graphene nanodisks. They report their present work in Science Advances.

The post To make Mars warmer, just add nanorods appeared first on Physics World.

]]>
Research update Releasing engineered nanoparticles into the Martian atmosphere could warm the planet by over 30 K https://physicsworld.com/wp-content/uploads/2024/09/Mars.jpg newsletter1
Taking the leap – how to prepare for your future in the quantum workforce https://physicsworld.com/a/taking-the-leap-how-to-prepare-for-your-future-in-the-quantum-workforce/ Fri, 06 Sep 2024 15:16:22 +0000 https://physicsworld.com/?p=116506 Katherine Skipper and Tushna Commissariat interview three experts in the quantum arena, to get their advice on careers in the quantum market

The post Taking the leap – how to prepare for your future in the quantum workforce appeared first on Physics World.

]]>
It’s official: after endorsement from 57 countries and the support of international physics societies, the United Nations has officially declared that 2025 is the International Year of Quantum Science and Technology (IYQ).

The year has been chosen as it marks the centenary of Werner Heisenberg laying out the foundations of quantum mechanics – a discovery that would earn him the Nobel Prize for Physics in 1932. As well as marking one of the most significant breakthroughs in modern science, the IYQ also reflects the recent quantum renaissance. Applications that use the quantum properties of matter are transforming the way we obtain, process and transmit information, and physics graduates are uniquely positioned to make their mark on the industry.

It’s certainly big business these days. According to estimates from McKinsey, in 2023 global quantum investments were valued at $42bn. Whether you want to build a quantum computer, an unbreakable encryption algorithm or a high-precision microscope, the sector is full of exciting opportunities. With so much going on, however, it can be hard to make the right choices for your career.

To make the quantum landscape easier to navigate as a jobseeker, Physics World has spoken to Abbie Bray, Araceli Venegas-Gomez and Mark Elo – three experts in the quantum sector, from academia and industry. They give us their exclusive perspectives and advice on the future of the quantum marketplace; job interviews; choosing the right PhD programme; and managing risk and reward in this emerging industry.

Quantum going mainstream: Abbie Bray

According to Abbie Bray, lecturer in quantum technologies at University College London (UCL) in the UK, the second quantum revolution has broadened opportunities for graduates. Until recently, there was only one way to work in the quantum sector – by completing a PhD followed by a job in academia. Now, however, more and more graduates are pursuing research in industry, where established companies such as Google, Microsoft and BT – as well as numerous start-ups like Rigetti and Universal Quantum – are racing to commercialize the technology.

Abbie Bray

While a PhD is generally needed for research, Bray is seeing more jobs for bachelor’s and master’s graduates as quantum goes mainstream. “If you’re an undergrad who’s loving quantum but maybe not loving the research or some of the really high technical skills, there’s other ways to still participate within the quantum sphere,” says Bray. With so many career options in industry, government, consulting or teaching, Bray is keen to encourage physics graduates to consider these as well as a more traditional academic route.

She adds that it’s important to have physicists involved in all parts of the industry. “If you’re having people create policies who maybe haven’t quite understood the principles or impact or the effort and time that goes into research collaboration, then you’re lacking that real understanding of the fundamentals. You can’t have that right now because it’s a complex science, but it’s a complex science that is impacting society.”

So whether you’re a PhD student or an undergraduate, there are pathways into the quantum sector, but how can you make yourself stand out from the crowd? Bray has noticed that quantum physics is not taught in the same way across universities, with some students getting more exposure to the practical applications of the field than others. If you find yourself in an environment that isn’t saturated with quantum technology, don’t panic – but do consider getting additional experience outside your course. Bray highlights PennyLane, a Python library for programming quantum computers whose developers also produce learning resources.
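
As a flavour of what that involves, the snippet below builds a minimal two-qubit circuit using PennyLane’s bundled simulator. The gate choices and rotation angle are arbitrary and are only meant to show the library’s basic workflow, not anything Bray specifically recommends.

```python
import pennylane as qml
from pennylane import numpy as np

# Two-qubit circuit on PennyLane's built-in state-vector simulator
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(theta):
    qml.RY(theta, wires=0)             # rotate the first qubit
    qml.CNOT(wires=[0, 1])             # entangle it with the second
    return qml.expval(qml.PauliZ(1))   # measure <Z> on the second qubit

print(circuit(np.pi / 4))
```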

Consider your options

Something else to be aware of, particularly for those contemplating a PhD, is that “quantum technologies” is a broad umbrella term, and while there is some crossover between, say, sensing and computing, switching between disciplines can be a challenge. It’s therefore important to consider all your options before committing to a project and Bray thinks that Centres for Doctoral Training (CDTs) are a step in the right direction. UCL has recently launched a quantum computing and quantum communications CDT where students will undergo a six-month training period before writing their project proposal. She thinks this enables them to get the most out of their research, particularly if they haven’t covered some topics in their undergraduate degree. “It’s very important that during a PhD you do the research that you want to do,” Bray says.

When it comes to securing a job, PhD position or postdoc, non-technical skills can be just as valuable as quantum know-how. Bray says it’s important to demonstrate that you’re passionate and deeply knowledgeable about your favourite quantum topic, but graduates also need to be flexible and able to work in an interdisciplinary team. “If you think you’re a theorist, understand that it also does sometimes mean looking at and working with experimental data and computation. And if you’re an experimentalist, you’ve got to understand that you need to have a rigorous understanding of the theory before you can make any judgements on your experimentation.” As Bray summarises: “theorists and experimentalists need to move at the same time”.

The ability to communicate technical concepts effectively is also vital. You might need to pitch to potential investors, apply for grants or even communicate with the HR department so that they shortlist the best candidates. Bray adds that in her experience, physicists are conditioned to communicate their research very directly, which can be detrimental in interviews where panels want to hear narratives about how certain skills were demonstrated. “They want to know how you identified a situation, then you identified the action, then the resolution. I think that’s something that every single student, every single person right now should focus on developing.”

The quantum industry is still finding its feet and earlier this year it was reported that investment has fallen by 50% since a high in 2022. However, Bray argues that “if there has been a de-investment, there’s still plenty of money to go around” and she thinks that even if some quantum technologies don’t pan out, the sector will continue to provide valuable skills for graduates. “No matter what you do in quantum, there are certain skills and experiences that can cross over into other parts of tech, other parts of science, other parts of business.”

In addition, quantum research is advancing everything from software to materials science and Bray thinks this could kick-start completely new fields of research and technology. “In any race, there are horses that will not cross the finish line, but they might run off and cross some other finish line that we didn’t know existed,” she says.

Building the quantum workforce: Araceli Venegas-Gomez

While working in industry as an aerospace engineer, Araceli Venegas-Gomez was looking for a new challenge and decided to pursue her passion for physics, getting her master’s degree in medical physics alongside her other duties. Upon completing that degree in 2016, she decided to take on a second master’s followed by a PhD in quantum optics and simulation at the University of Strathclyde, UK. By the time the COVID-19 pandemic hit in 2020, she had defended her thesis, registered her company, and joined the University of Bristol Quantum Technology Enterprise Centre as an executive fellow.

Araceli Venegas-Gomez

It was during her studies at Strathclyde that Venegas-Gomez decided to use her vast experience across industry and academia, as well as her quantum knowledge. Thanks to a fellowship from the Optica Foundation, she was able to launch QURECA (Quantum Resources and Careers). Today, it’s a global company that helps to train and recruit individuals, while also providing business development advice for both individuals and companies in the quantum sphere. As founder and chief executive of the firm, her aims were to link the different stakeholders in the quantum ecosystem and to raise the quantum awareness of the general public. Crucially, she also wanted to ease the skills bottleneck in the quantum workforce and to bring newcomers into the quantum ecosystem.

As Venegas-Gomez points out, there is a significant scarcity of skilled quantum professionals for the many roles that need filling. This shortage is exacerbated by the competition between academia and industry for the same pool of talent. “Five or ten years ago, it was difficult enough to find graduate students who would like to pursue a career in quantum science, and that was just in academia,” explains Venegas-Gomez. “With the quantum market booming, industry is also looking to hire from the same pool of candidates, so you have more competition, for pretty much the same number of people.”

Slow progress

Venegas-Gomez highlights that the quantum arena is very broad. “You can have a career in research, or work in industry, but there are so many different quantum technologies that are coming onto the market, at different stages of development. You can work on software or hardware or engineering; you can do communications; you can work on developing the business side; or perhaps even in patent law.” While some of these jobs are highly technical and would require a master’s or a PhD in that specific area of quantum tech, there are plenty of roles that would accept graduates with only an MSc in physics or even a more interdisciplinary experience. “If you have a background in physics and business, everyone is looking for you,” she adds.

From what she sees in the quantum recruitment market today, there is no job shortage for physicists – instead there is a dearth of physicists with the right skills for a specific role. Venegas-Gomez explains that graduates with a physics degree in many fields have transferable skills that allow them to work in “absolutely any sector that you could imagine”. But depending on the specific area of academia or industry within the quantum marketplace that you might be interested in, you will likely require some specific competences.

As Bray also stated, Venegas-Gomez acknowledges that the skills and knowledge that physicists pick up can vary significantly between universities – making it challenging for employers to find the right candidates. To avoid picking the wrong course for you, Venegas-Gomez recommends that potential master’s and PhD students speak to a number of alumni from any given institute to find out more about the course, and see what areas they work in today. This can also be a great networking strategy, especially as some cohorts can have as few as 10–15 students all keen to work with these companies or university departments in the future.

Despite the interest and investment in the quantum industry, new recruits should note that it is still in its early stages. This slow progress can lead to high expectations that are not met, causing frustration for both employers and potential employees. “Only today, we had an employer approach us (QURECA) saying that they wanted someone with three to four years’ experience in Python, and a bachelor’s or master’s degree – it didn’t have to be quantum or even physics specifically,” reveals Venegas-Gomez. “This means that [to get this particular job] you could have a background in computer science or software engineering. Having an MSc in quantum per se is not going to guarantee that you get a job in quantum technologies, unless that is something very specific that employer is looking for.”

So what specific competencies are employers across the board looking for? If a company isn’t looking for a specific technical qualification, what happens if they get two similar CVs for the same role? Do they look at an applicant’s research output and publications, or are they looking for something different? “What I find is that employers are looking for candidates who can show that, alongside their academic achievements, they have been doing outreach and communication activities,” says Venegas-Gomez. “Maybe you took on a business internship and have a good idea of how the industry works beyond university – this is what will really stand out.”

She adds that so-called soft skills – such as demonstrating good leadership, teamwork and excellent communication – are highly valued. “This is an industry where highly skilled technical people need to be able to work with people vastly beyond their area of expertise. You need to be able to explain Hamiltonians or error corrections to someone who is not quantum-literate and explain the value of what you are working on.”

Venegas-Gomez is also keen that job-seekers realize that the chances of finding a role at a large firm such as Google, IBM or Microsoft are still slim-to-none for most quantum graduates. “I have seen a lot of people complete their master’s in a quantum field and think that they will immediately find the perfect job. The reality is that they likely need to be patient and get some more experience in the field before they get that dream job.” Her main advice to students is to clearly define their career goals, within the context of the booming and ever-growing quantum market, before pursuing a specific degree. The skills you acquire with a quantum degree are also highly transferable to other fields, meaning there are lots of alternatives out there even if you can’t find the right job in the quantum sphere. For example, experience in data science or software development can complement quantum expertise, making you a versatile and coveted contender in today’s job market.

Approaching “quantum advantage”: Mark Elo

Last year, IBM broke records by building the first quantum chip with more than 1000 qubits. The project represents millions of dollars of investment and the company is competing with the likes of Intel and Google to achieve “quantum advantage”, which refers to a quantum computer that can solve problems that are out of reach for classical machines.

Despite the hype, there is work to be done before the technology becomes widespread – a commercial quantum computer needs millions of qubits, and challenges in error correction and algorithm efficiency must be addressed.

Mark Elo

“We’re trying to move it away from a science experiment to something that’s more an industrial product,” says Mark Elo, chief marketing officer at Tabor Electronics. Tabor has been building electronic signal equipment for over 50 years and recently started applying this technology to quantum computing. The company’s focus is on control systems – classical electronic signals that interact with quantum states. At the 2024 APS March Meeting, Tabor, alongside its partners FormFactor and QuantWare, unveiled the first stage of the Echo-5Q project, a five-qubit quantum computer.

Elo describes the five years he’s worked on quantum computing as a period of significant change. Whereas researchers once relied on “disparate pieces of equipment” to build experiments, he says that the industry has changed such that “there are [now] products designed specifically for quantum computing”.

The ultimate goal of companies like Tabor is a “full-stack” solution where software and hardware are integrated into a single platform. However, the practicalities of commercializing quantum computing require a workforce with the right skills. Two years ago the consultancy company McKinsey reported that companies were already struggling to recruit, and it predicted that by 2025 half of the jobs in quantum computing would not be filled. Like many in the industry, Elo sees skills gaps in the sector that must be addressed to realize the potential of quantum technology.

Elo’s background is in solid-state electronics, and he worked for nearly three decades on radio-frequency engineering for companies including HP and Keithley. Most quantum-computing control systems use radio waves to interface with the qubits, so when he moved to Tabor in 2019, Elo saw his career come “full circle”, combining the knowledge from his degree with his industry experience. “It’s been like a fusion of two technologies,” he says.

It’s at this interface between physics and electronic engineering where Elo sees a skills shortage developing. “You need some level of electrical engineering and radio-frequency knowledge to lay out a quantum chip,” he explains. “The most common qubit is a transmon, and that is all driven by radio waves. Deep knowledge of how radio waves propagate through cables, through connectors, through the sub-assemblies and the amplifiers in the refrigeration unit is very important.” Elo encourages physics students interested in quantum computing to consider adding engineering – specifically radio-frequency electronics – courses to their curricula.

Transferable skills

The Tabor team brings together engineers and physicists, but there are some universal skills it looks for when recruiting. People skills, for example, are a must. “There are some geniuses in the world, but if they can’t communicate it’s no good in an industrial environment,” says Elo.

Elo describes his work as “super exciting” and says “I feel lucky in the career and the technology I’ve been involved in because I got to ride the wave of the cellular revolution all the way up to 5G and now I’m on to the next new technology.” However, because quantum is an emerging field, he thinks that graduates need to be comfortable with some risk before embarking on a career. He explains that companies don’t always make money right now in the quantum sector – “you spend a lot to make a very small amount”. But, as Elo’s own career shows, the right technical skills will always allow you to switch industries if needed.

Like many others, Elo is motivated by the excitement of competing to commercialize this new technology. “It’s still a market that’s full of ideas and people marketing their ideas to raise money,” he says. “The real measure of success is to be able to look at when those ideas become profitable. And that’s when we know we’ve crossed a threshold.”

The post Taking the leap – how to prepare for your future in the quantum workforce appeared first on Physics World.

]]>
Feature Katherine Skipper and Tushna Commissariat interview three experts in the quantum arena, to get their advice on careers in the quantum market https://physicsworld.com/wp-content/uploads/2024/09/2024-09-GRADCAREERS-computing-abstract-1190168517-iStock_blackdovfx.jpg newsletter1
BepiColombo takes its best images yet of Mercury’s peppered landscape https://physicsworld.com/a/bepicolombo-takes-its-best-images-yet-of-mercurys-peppered-landscape/ Fri, 06 Sep 2024 10:18:45 +0000 https://physicsworld.com/?p=116616 The spacecraft had a clear view of Mercury’s south pole for the first time during a recent flyby

The post BepiColombo takes its best images yet of Mercury’s peppered landscape appeared first on Physics World.

]]>
The BepiColombo mission to Mercury – Europe’s first craft to the planet – has successfully completed its fourth gravity-assist flyby as it uses the planet’s gravity to enter orbit around Mercury in November 2026. As it did so, the craft captured its best images yet of some of Mercury’s largest impact craters.

BepiColombo, which launched in 2018, comprises two science orbiters that will circle Mercury – the European Space Agency’s Mercury Planetary Orbiter (MPO) and the Japan Aerospace Exploration Agency’s Mercury Magnetospheric Orbiter (MMO).

The two spacecraft are travelling to Mercury as part of a coupled system. When they reach the planet, the MMO will study Mercury’s magnetosphere while the MPO will survey the planet’s surface and internal composition.

The aim of the BepiColombo mission is to provide information on the composition, geophysics, atmosphere, magnetosphere and history of Mercury.

The closest approach so far for the mission – about 165 km above the planet’s surface – took place on 4 September. For the first time, the spacecraft had a clear view of Mercury’s south pole.

Mercury by BepiColombo

One image (top), taken by the craft’s M-CAM2 camera, features a large “peak ring basin” inside a crater measuring 210 km across, which is named after the famous Italian composer Antonio Vivaldi. The visible gap in the peak ring is thought to be where more recent lava flows have entered and flooded the crater.

BepiColombo will now conduct a fifth and sixth flyby of the planet on 1 December 2024 and 8 January 2025, respectively, before arriving in November 2026. The mission is planned to operate until 2029.

The post BepiColombo takes its best images yet of Mercury’s peppered landscape appeared first on Physics World.

]]>
Blog The spacecraft had a clear view of Mercury’s south pole for the first time during a recent flyby https://physicsworld.com/wp-content/uploads/2024/09/Mercury_reveals_its_Four_Seasons-small.jpg
Hybrid quantum–classical computing chips and neutral-atom qubits both show promise https://physicsworld.com/a/hybrid-quantum-classical-computing-chips-and-neutral-atom-qubits-both-show-promise/ Thu, 05 Sep 2024 15:33:03 +0000 https://physicsworld.com/?p=116604 Equal1’s Elena Blokhina and Harvard’s Brandon Grinkemeyer are our guests

The post Hybrid quantum–classical computing chips and neutral-atom qubits both show promise appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast looks at quantum computing from two different perspectives.

Our first guest is Elena Blokhina, who is chief scientific officer at Equal1 – an award-winning company that is developing hybrid quantum–classical computing chips. She explains why Equal1 is using quantum dots as qubits in its silicon-based quantum processor unit.

Next up is Brandon Grinkemeyer, who is a PhD student at Harvard University working in several cutting-edge areas of quantum research. He is a member of Misha Lukin’s research group, which is active in the fields of quantum optics and atomic physics and is at the forefront of developing  quantum processors that use arrays of trapped atoms as qubits.

The post Hybrid quantum–classical computing chips and neutral-atom qubits both show promise appeared first on Physics World.

]]>
Podcasts Equal1’s Elena Blokhina and Harvard’s Brandon Grinkemeyer are our guests https://physicsworld.com/wp-content/uploads/2024/09/Brandon-and-Elena.jpg
Researchers with a large network of unique collaborators have longer careers, finds study https://physicsworld.com/a/researchers-with-a-large-network-of-unique-collaborators-have-longer-careers-finds-study/ Thu, 05 Sep 2024 15:00:02 +0000 https://physicsworld.com/?p=116555 Female scientists tend to work in more tightly connected groups than men, which can negatively impact their careers

The post Researchers with a large network of unique collaborators have longer careers, finds study appeared first on Physics World.

]]>
Are you keen to advance your scientific career? If so, it helps to have a big network of colleagues and a broad range of unique collaborators, according to a new analysis of physicists’ publication data. The study also finds that female scientists tend to work in more tightly connected groups than men, which can hamper their career progression.

The study was carried out by a team led by Mingrong She, a data analyst at Maastricht University in the Netherlands. It examined the article history of more than 23,000 researchers who had published at least three papers in American Physical Society (APS) journals. Each scientist’s last paper had been published before 2015, suggesting their research career had ended (arXiv:2408.02482).

To measure “collaboration behaviour”, the study noted the size of each scientist’s collaborative network, the reoccurrence of collaborations, the “interconnectivity” of the co-authors and the average number of co-authors per publication. Physicists with larger networks and a greater number of unique collaborators were found to have had longer careers and been more likely to become principal investigators, as given by their position in the author list.

On the other hand, publishing repeatedly with the same highly interconnected co-authors is associated with shorter careers and a lower chance of achieving principal investigator status, as is having a larger average number of co-authors.

The team also found that the more that physicists publish with the same co-authors, the more interconnected their networks become. Conversely, as network size increases, networks tended to be less dense and repeat collaboration less frequent.
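
To make these measures concrete, here is a small hypothetical sketch using the networkx library: it computes the number of unique collaborators and a clustering coefficient (one common way of quantifying how interconnected a researcher’s co-authors are) for an invented co-authorship graph. The names and links are made up, and the study’s own metric definitions may differ in detail.

```python
import networkx as nx

# Invented co-authorship links (pairs of authors who have shared a paper)
links = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"),
    ("eve", "frank"), ("eve", "grace"), ("frank", "grace"),
]

G = nx.Graph()
G.add_edges_from(links)

for author in ("alice", "eve"):
    unique_collaborators = G.degree[author]        # size of the personal network
    interconnectivity = nx.clustering(G, author)   # fraction of collaborator pairs who also co-publish
    print(f"{author}: {unique_collaborators} unique collaborators, "
          f"clustering coefficient {interconnectivity:.2f}")
```

In this toy example, “alice” has a larger, loosely connected network while “eve” sits in a small, fully interconnected cluster – the sort of pattern the study associates with shorter careers.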

Close-knit collaboration

In terms of gender, the study finds that women have more interconnected networks and a higher average number of co-authors than men. Female physicists are also more likely to publish repeatedly with the same co-authors, with women therefore being less likely than men to become principal investigators. Male scientists also have longer overall careers and stay in science longer after achieving principal investigator status than women, the study finds.

“Collaborating with experts from diverse backgrounds introduces novel perspectives and opportunities [and] increases the probability of establishing connections with prominent researchers and institutions,” She told Physics World. Diverse collaboration also “mitigates the risk of being confined to a narrow niche and enhances adaptability,” She adds, “both of which are indispensable for long-term career growth”.

Close-knit collaboration networks can be good for fostering professional support, the study authors state, but they reduce opportunities for female researchers to form new professional connections and lower their visibility within the broader scientific community. Similarly, larger numbers of co-authors dilute individual contributions, making it harder for female researchers to stand out.

She says the study “highlights how the structure of collaboration networks can reinforce existing inequalities, potentially limiting opportunities for women to achieve career longevity and progression”. Such issues could be improved with policies that help scientists to engage a wider array of collaborators, rewarding and encouraging small-team publications and diverse collaboration. Policies could include adjustments to performance evaluations and grant applications, and targeted training programmes.

The study also highlights lower mobility as a major obstacle for female scientists, suggesting that better childcare support, hybrid working and financial incentives could help improve the mobility and network size of female scientists.

The post Researchers with a large network of unique collaborators have longer careers, finds study appeared first on Physics World.

]]>
News Female scientists tend to work in more tightly connected groups than men, which can negatively impact their careers https://physicsworld.com/wp-content/uploads/2024/09/social-network-505782242-iStock_Ani_Ka.jpg
Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner https://physicsworld.com/a/shrinivas-kulkarni-curiosity-and-new-technologies-inspire-shaw-prize-in-astronomy-winner/ Thu, 05 Sep 2024 10:58:03 +0000 https://physicsworld.com/?p=116565 "No shortage of phenomena to explore," says expert on variable and transient objects

The post Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner appeared first on Physics World.

]]>
What does Shrinivas Kulkarni find fascinating? When I asked him that question I expected an answer related to his long and distinguished career in astronomy. Instead, he talked about how the skin of sharks has a rough texture, which seems to reduce drag – allowing the fish to swim faster. He points out that you might not win a Nobel prize for explaining the hydrodynamics of shark skin, but it is exactly the type of scientific problem that captivates Kulkarni’s inquiring mind.

But don’t think that Kulkarni – who is George Ellery Hale Professor of Astronomy and Planetary Sciences at the California Institute of Technology (Caltech) – is whimsical when it comes to his research interests. He says that he is an opportunist, especially when it comes to technology, which he says makes some research questions more answerable than others. Indeed, the scientific questions he asks are usually guided by his ability to build technology that can provide the answers.

Kulkarni won the 2024 Shaw Prize in Astronomy for his work on variable and transient astronomical objects. He says that the rapid development of new and powerful technologies has meant that the last few decades have been a great time to study such objects. “Thirty years ago, the technology was just not there,” he recalls. “Optical sensors were too expensive and the necessary computing power was not available.”

Transient and variable objects

Kulkarni told me that there are three basic categories of transient and variable objects. One category covers objects that change position in the sky – with examples including planets and asteroids. A second category includes objects that oscillate in terms of their brightness.

“About 10% of stars in the sky do not shine steadily like the Sun,” he explains. “We are lucky that the Sun is an extremely steady star. If its output varied by just 1% it would have a huge impact on Earth – much larger than the current global warming. But many stars do vary at the 1% level for a variety of reasons.” These can be rotating stars with large sunspots or stars eclipsing in binary systems, he explains.

The third and most spectacular category involves stars that undergo rapid and violent changes – for example, exploding as supernovae. “It might surprise you that every second, somewhere in the universe, there is a supernova. Some are very faint, so we don’t see all of them, but with the Zwicky Transient Facility (ZTF) we see about 20,000 supernovae per year.” Kulkarni is principal investigator for the ZTF, and his leadership at that facility is mentioned in his Shaw Prize citation.

Kulkarni explains that astronomers are interested in transient and variable objects for many different reasons. Closer to home, scientists monitor the skies for asteroids that may be on collision courses with Earth.

“In 1908 there was a massive blast in Siberia called the Tunguska event,” he says. This is believed to be the result of the air explosion of a rocky meteor that was about 55 m in diameter. Because it happened in a remote part of the world, only three people are known to have been killed. Kulkarni points out that if such a meteor struck a populated area like Southern California, it would be catastrophic. By studying and cataloguing asteroids that could potentially strike Earth, Kulkarni believes that we could someday launch space missions that nudge away objects on collision courses with Earth.

Zwicky Transient Facility

At the other end of the mass and energy range, Kulkarni says that studying spectacular events such as supernovae provides important insights into origins of many of the elements that make up the Earth and indeed ourselves. He says that over the past 70 years astronomers have made “amazing progress” in understanding how different elements are created in these explosions.

Kulkarni was born in 1956 in Kurundwad, which is in the Indian state of Maharashtra. In 1978, he graduated with an MS degree in physics from the Indian Institute of Technology in New Delhi. His next stop was the University of California, Berkeley, where he completed a PhD in astronomy in 1983. He joined Caltech in 1985 and has been there ever since.

A remarkable aspect of Kulkarni’s career is his ability to switch fields every 5–10 years, something that he puts down to his curious nature. “After I understand something to a reasonable level, I lose interest because the curiosity is gone,” he says. Kulkarni adds that his choice of a new project is guided by his sense of whether rapid progress can be made in the field. “You could say that I live on adrenaline and I want to produce something very fast, making significant progress in a short time”.

He gives the example of his work on gamma-ray bursts, which are some of the most powerful explosions in the universe. He says that this was a very fruitful field when astronomers were discovering about one burst per month. But then the Neil Gehrels Swift Observatory was launched in 2004 and it was able to detect 100 or so gamma-ray bursts per year.

Looking for new projects

At this point, Kulkarni says that studying bursts became a “little industry” and that’s why he left the field. “All the low-hanging fruit had been picked – and when the fruit is higher on the tree, that is when I start looking for new projects”.

It is this restlessness that first got him involved in the planning and operation of two important instruments, the Palomar Transient Factory (PTF) and its successor the Zwicky Transient Facility (ZTF). These are wide-field sky astronomical surveys that look for rapid changes in the brightness or position of astronomical objects. The PTF began observing in 2009 and the ZTF took over in 2018.

Kulkarni says that he is fascinated by the engineering aspects of astronomy and points out that technological advances in sensors, electronics, computing and automation continue to transform how observational astronomy is done. He explains that all of these technological factors came together in the design and operation of the PTF and the ZTF.

His involvement with PTF and ZTF allowed Kulkarni to make many exciting discoveries during his career. However, his favourite variable object is one that he discovered in 1982 while doing a PhD under Donald Backer. Called PSR B1937+21, it is the first millisecond pulsar ever to be observed. It is a neutron star that rotates more than 600 times per second while broadcasting a beam of radio waves much like a lighthouse.
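
To get a feel for that spin rate, a quick back-of-envelope estimate – using the 600 rotations per second quoted above and a typical textbook neutron-star radius of about 10 km, a value not given in the article – puts the rotation period below two milliseconds and the equatorial speed at roughly a tenth of the speed of light.

```python
import math

spin_frequency = 600.0    # rotations per second, the figure quoted above
radius = 10e3             # metres; a typical textbook neutron-star radius (assumed)

period_ms = 1e3 / spin_frequency
equatorial_speed = 2 * math.pi * radius * spin_frequency   # metres per second

print(f"rotation period: {period_ms:.2f} ms")
print(f"equatorial speed: {equatorial_speed:.2e} m/s "
      f"(about {equatorial_speed / 3.0e8:.0%} of the speed of light)")
```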

“I was there [at the Arecibo Observatory] all alone… it was very thrilling,” he says. The discovery provided insights into the density of neutron stars and revitalized the study of pulsars, leading to large-scale surveys that target pulsars.

Another important moment for Kulkarni occurred in 1994, when he and his graduate students were the first to observe a cool brown dwarf. These are objects that weigh in between gas-giant planets (like Jupiter) and small main-sequence stars. “When you find a new class of objects, there’s a certain thrill knowing that you and your students are the only people in the world to have seen something. That was kind of fun.”

Kulkarni is proud of his early achievements, but don’t think that he dwells on the past. “This is a fantastic time to do astronomy. The instruments that we’re building today have an enormous capacity for information delivery.”

First brown dwarf

He mentions images released by the European Space Agency’s Euclid space telescope, which launched last year. He describes them as “gorgeous pictures” but points out that the real wonder is that he could zoom in on the images by a factor of 10 before the pixels became apparent. “It was just so rich, a single image is maybe a square degree of the sky. The resolution is just amazing.”

And when it comes to technology, Kulkarni is adamant that it’s not only bigger and more expensive telescopes that are pushing the frontiers of astronomy. “There is more room sideways,” he says, meaning that much progress can be made by repurposing existing facilities.

Indeed, the ZTF uses – and the PTF used – the Samuel Oschin telescope at the Palomar Observatory in California. This is a 48-inch (1.3 metre) facility that saw first light 75 years ago. With new instruments, old telescopes can be used to study the sky “ferociously”, he says.

Kulkarni told me that even he was surprised at the number of papers that ZTF data have spawned since the facility came online in 2018. One important reason, says Kulkarni, is that ZTF immediately shares its data freely with astronomers around the world. Indeed, it is the explosion in data from facilities like the ZTF along with rapid improvements in data processing that Kulkarni believes has put us in a  golden age of astronomy.

Beyond the technology, Kulkarni says that the very nature of the cosmos means that there will always be opportunities for astronomers. He muses that the universe has been around for nearly 14 billion years and has had “many opportunities to do some very strange things – and a very long time to cook up those things – so there’s no shortage of phenomena to explore”.

Great time to be an astronomer

So it is a great time to consider a career in astronomy and Kulkarni’s advice to aspiring astronomers is to be pragmatic about how they approach the field. “Figure out who you are and not who you want to be,” he says. “If you want to be an astronomer, there are roughly three categories open to you. You can be a theorist who puts a lot of time into understanding the physics, and especially the mathematics, that are used to make sense of astronomical observations.”

At the other end of the spectrum are the astronomers who build the “gizmos” that are used to scan the heavens – generating the data that the community rely on. The third category, says Kulkarni, falls somewhere between these two extremes and includes the modellers. These are the people who take the equations developed by the theorists and create computer models that help us understand observational data.

“Astronomy is a fantastic field and things are really happening in a very big way.” He asks new astronomers to, “Bring a fresh perspective, bring energy, and work hard”. He also says that success comes to those who are willing to reflect on their strengths and weaknesses. “Life is a process of continual improvement, continual education, and continual curiosity.”

The post Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner appeared first on Physics World.

]]>
Analysis "No shortage of phenomena to explore," says expert on variable and transient objects https://physicsworld.com/wp-content/uploads/2024/09/5-9-24-Kulkarni_Shri-Faculty.jpg newsletter
Twisted fibres capture more water from fog https://physicsworld.com/a/twisted-fibres-capture-more-water-from-fog/ Wed, 04 Sep 2024 14:00:41 +0000 https://physicsworld.com/?p=116579 New finding could allow more fresh water to be harvested from the air

The post Twisted fibres capture more water from fog appeared first on Physics World.

]]>
Twisted fibres are more efficient at capturing and transporting water from foggy air than straight ones. This finding, from researchers at the University of Oslo, Norway, could make it possible to develop advanced fog nets for harvesting fresh water from the air.

In many parts of the world, fresh water is in limited supply and not readily accessible. Even in the driest deserts, however, the air still contains some humidity, and with the right materials, it is possible to retrieve it. The simplest way of doing this is to use a net to catch water droplets that condense on the material for later release. The most common types of net for this purpose are made from steel extruded into wires; plastic fibres and strips; or woven poly-yarn. All of these have uniform cross-sections and are therefore relatively smooth and straight.

Nature, however, abounds with slender, grooved and bumpy structures that plants and animals have evolved to capture water from ambient air and quickly transport droplets where they need to go. Cactus spines, nepenthes plants, spider spindle silk and Namib desert beetle shells are just a few examples.

From “barrel” to “clamshell”

Inspired by these natural structures, Vanessa Kern and Andreas Carlson of the mechanics section in Oslo’s Department of Mathematics placed water droplets on two vertical threads that they had mechanically twisted together. They then recorded the droplets’ flow paths using high-speed imaging.

By changing the tightness, or wavelength, of the twist, the researchers were able to control when the droplet changed from its originally symmetric “barrel” shape to an asymmetric “clamshell” configuration. This allowed the researchers to speed up or slow down the droplets’ flow. While this is not the first time that scientists have succeeded in changing the shapes of droplets sliding down fibres, most previous work focused on perfectly wetting liquids, rather than partially wetting ones as was the case here.

Once they understood the droplets’ dynamics, Kern and Carlson designed nets that could be pre-programmed with anti-clogging properties. They then analysed the twisted fibres’ ability to collect water from fog flowing through an experimental wind tunnel, plotting the fibres’ water yield as a function of how much they were twisted.

Grooves that work as a water slide

The Oslo team found that the greater the number of twists, the more water the fibres captured. Notably, the increase was greater than would be expected from an increase in surface area alone. The team say this implies that the geometry of the twists is more important than area in increasing fog capture.

“Introducing a twist allowed us to effectively form grooves that work as a water slide as it stabilises a liquid film,” Kern explains. “This alleviates the well-known problem of straight fibres, where droplets would get stuck/pinned.”

The twisted fibres would make good fog nets, adds Carlson. “Fog nets are typically made up of plastic fibres and used to harvest fresh water from fog in arid regions such as in Morocco. Our results indicate that these twisted fibres could indeed be beneficial in terms of increasing the efficiency of such nets compared to straight fibres.”

The researchers are now working on testing their twisted fibres in a wider range of wind and fog conditions. They hope these tests will show which environments the fibres work best in, and where they might be most suitable for water harvesting. “We also want to move towards conditions closer to those found in the field,” they say. “There are still many open questions about the small-scale physics of the flow inside the grooves between these fibres that we want to answer too.”

The study is detailed in PNAS.

The post Twisted fibres capture more water from fog appeared first on Physics World.

]]>
Research update New finding could allow more fresh water to be harvested from the air https://physicsworld.com/wp-content/uploads/2024/09/24-02252-1.jpg
Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us https://physicsworld.com/a/robot-cooked-pizza-delivered-to-your-door-heres-what-zumes-failure-tells-us/ Wed, 04 Sep 2024 10:00:33 +0000 https://physicsworld.com/?p=116354 James McKenzie looks at the reasons behind the failure of the Zume robotic pizza-delivery business

The post Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us appeared first on Physics World.

]]>
A red truck and small car behind it, from the Zume Pizza company

“The $500 million robot pizza start-up you never heard of has shut down, report says.”

Click-bait headlines don’t always tell the full story and this one is no exception. It appeared  last year on the Business Insider website and concerned Zume – a Silicon Valley start-up backed by Japanese firm SoftBank, which once bought chip-licensing firm Arm Holdings. Zume proved to be one of the biggest start-up failures in 2023, burning through nearly half a billion dollars of investment (yes, half a billion dollars) before closing down.

Zume was designed to deliver pizzas to customers in vans, with the food prepared by robots and cooked in GPS-equipped automated ovens. The company was founded in 2015 as Zume Pizza, delivering its first pizzas the year after. But according to Business Insider, which retold a story from The Information, Zume struggled with problems like “stopping melted cheese from sliding off its pizzas while they cooked in moving trucks”.

It’s easy to laugh, but the headline from Business Insider belittled the start-up founders and their story. Unless you’ve set up your own firm, you probably won’t understand the passion, dedication and faith needed to found or join a start-up team. Still, from a journalistic point of view, the headline did the trick in that it encouraged me to delve further into the story. Here’s what I think we can learn from the episode.

A new spin on pizza

On the face of it, Zume is a brilliant and compelling idea. You’re taking the two largest costs of the pizza-delivery business – chefs to cook the food and premises to house the kitchen – and removing them from the equation. Instead, you’re replacing them with delivery vehicles that make the food automatically en-route, potentially even using autonomous vehicles that don’t need human drivers either. What could possibly go wrong?

Zume, which quickly raised $6m in Series A investment funding in 2016, delivered its first pizzas in September of that year. The company secured a patent on cooking during delivery, which included algorithms to predict customer choices. It also planned to work with other firms to provide further robot-prepared food, such as salads and desserts.

By 2018 the concept had captured the imagination of major investors such as SoftBank, which saw the potential for Zume to become the “Amazon of pizza”. The scope was huge: the overall global pizza market was worth $197bn in 2021 and is set to grow to $551bn by 2031, according to market research firm Business Research Insights. So it should be possible to grab a piece of the pie with enough funding and focused, disruptive innovation.

But with customers complaining about the robotic pizzas, the company said in 2018 it was moving in new directions. Instead, it now planned to use artificial intelligence (AI) and its automated production technology for automated food trucks and would form a larger umbrella company – Zume, Inc. It also planned to start licensing its automation technology.

In November 2018 the company raised $375m from SoftBank, now making it worth an eye-popping $2.25bn. It then started focusing on automated production and packaging for other food firms, including buying Pivot – a company that made sustainable, plant-based packaging. By 2020 it was concentrating fully on compostable food packaging and then laid off more than 500 staff, including its entire robotics and delivery truck teams.

Sadly, Zume, Inc was unable to sustain enough sales or bring in enough external funding. Investment cash started running precariously low and in June 2023 the firm was eventually shut down, leaving “joke” headlines about cheese sliding off. How very sad for all involved, but this was only a small part of the issues the company faced.

Inside Zume

Many have speculated where it all went wrong for Zume. To me, the problem seemed to be execution and understanding of the market. The food industry is dominated by established brands, big advertising budgets and huge promotions. When faced with these kinds of challenges, any new business must work out how to compete, break into and disrupt this kind of established market.

Once I looked into what happened at Zume, it wasn’t quite as amazing as I initially thought. To my mind, the logical thing would have been to have all the operations on the truck. But according to a video released by the company in 2016, that’s not what they did. Instead, Zume built an overly complex robot production line in a larger space than a traditional pizza outlet to make the pizzas.

The food was then loaded onto trucks and cooked en route in a van equipped with 56 automated ovens. Each was timed so that the pizza would be ready shortly before it arrived at the customer’s address. Zume had an app and aimed to cut the time from order to delivery to 22 minutes – which was pretty good. But the app in itself wasn’t a big innovation; after all, Domino’s had one as far back as 2010.

In American start-up culture, failure is not an embarrassment. It’s seen as a learning experience, and looking at the mistakes of history can yield some valuable insights. But then I stumbled upon a really great article by a firm called Legendary Supply Chain that spelled out clearly what happened. Turns out, what really went wrong was Zume’s lack of understanding of the drivers and economics of the pizza-delivery business.

The 3Ps of pizza

Pizzas have a tiny profit margin. But Zume created massive capital costs by developing automation systems, which meant they’d have to sell loads of pizza to make enough return on investment. Worse still, using FedEx-sized trucks to deliver individual pizzas was inherently wasteful and impractical. That’s why you’ll usually see most pizza delivery drivers on bicycles, mopeds or cars, which are a far more cost-effective means of delivery.

You could say that Zume re-invented the wheel by re-creating – at great cost – the automation you find in frozen-pizza factories and applying it to a much smaller scale operation. It also seems that the firm didn’t focus enough on the product or what the customers wanted – and instead seemed to solve problems that didn’t exist. In short, the execution was poor and the $400m raised rather went to managers’ heads.

Countless successful companies prove what’s vital are the “3Ps”: product, price and promotion. People buy pizza on an impulse. For me, whenever the idea of a pizza pops into my head, I want something that’s yummy, saves me from cooking and perhaps reminds me of Italian holidays. According to customer feedback, Zume’s pizza was only “okay”. Apart from the cheese occasionally sliding off, it wasn’t any better or worse than anything else.

As far as price was concerned, Zume’s pizzas ought to have been cheaper to make and deliver than rival firms. However, Zume charged a premium price on account of the food being slightly fresher as it was cooked while being delivered. Customers, unfortunately, didn’t buy into this argument sufficiently. I’m not sure what Zume did to promote their products, but with all that money sloshing around, they certainly had more than enough to create a brand.

I’m sure Zume’s failure won’t be the last attempt to disrupt or break into the pizza-delivery market – and learning from past mistakes could well help. In fact, I can see why putting sufficiently low-cost automation on a fleet of small vans – coupled with low-cost, central supply depots – might make the economics more favourable. But anyone wanting to revolutionize pizza delivery will have to map out the costs and economics of the business to get funded, and have some good answers to where Zume went wrong.

The odds for start-up success are not good. As I’ve mentioned before, almost 90% of start-ups in the UK survive their first year, but fewer than half make it beyond five years. To get there – whether you’re making pizzas or photodetectors – you’ll need a good plan, a great team, a degree of luck and good timing to compete in the market. But if you do succeed, the rewards are clear.

The post Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us appeared first on Physics World.

]]>
Opinion and reviews James McKenzie looks at the reasons behind the failure of the Zume robotic pizza-delivery business https://physicsworld.com/wp-content/uploads/2024/08/2024-09-Transactions-Pizzaslice_feature.jpg newsletter
Quark distribution in light–heavy mesons is mapped using innovative calculations https://physicsworld.com/a/quark-distribution-in-light-heavy-mesons-is-mapped-using-innovative-calculations/ Wed, 04 Sep 2024 07:27:27 +0000 https://physicsworld.com/?p=116554 Form factors can be tested by collider experiments

The post Quark distribution in light–heavy mesons is mapped using innovative calculations appeared first on Physics World.

]]>
The distribution of quarks inside flavour-asymmetric mesons has been mapped by Yin-Zhen Xu of the University of Huelva and Pablo de Olavide University in Spain. These mesons are strongly interacting particles composed of a quark and an antiquark, one heavy and one light.

Xu employed the Dyson–Schwinger/Bethe–Salpeter equation technique to calculate the heavy–light meson electromagnetic form factors, which can be measured in collider experiments. These form factors provide invaluable information about the properties of the strong interactions as described by quantum chromodynamics.

“The electromagnetic form factors, which describe the response of composite particles to electromagnetic probes, provide an important tool for understanding the structure of bound states in quantum chromodynamics,” explains Xu. “In particular, they can be directly related to the charge distribution inside hadrons.”

From numerous experiments, we know that particles that interact via the strong force (such as protons and neutrons) consist of quarks bound together by gluons. This is similar to how nuclei and electrons are bound into atoms through the exchange of photons, as described by quantum electrodynamics. However, doing precise calculations in quantum chromodynamics is nearly impossible, and this makes predicting the internal structure of hadrons extremely challenging.

Approximation techniques

To address this challenge, scientists have developed several approximation techniques. One such method is the lattice approach, which replaces the infinite number of points in real space with a finite grid, making calculations more manageable. Another effective method involves solving the Dyson–Schwinger/Bethe–Salpeter equations. They ignore certain subtle effects in the strong interactions of quarks with gluons, as well as the virtual quark–antiquark pairs that are constantly being born and disappearing in the vacuum.

Xu’s new study, described in the Journal of High Energy Physics, utilized the Dyson–Schwinger/Bethe–Salpeter approach to investigate the properties of hadrons made of quarks and antiquarks of different types (or flavours) with significant mass differences. For instance, K-mesons are composed of a strange antiquark with a mass of around 100 MeV and an up or down quark with a mass of only a few megaelectronvolts. The substantial difference in quark masses simplifies their interaction, allowing Xu to extract more information about the structure of flavour-asymmetric mesons.

Xu began his study by calculating the masses of mesons and compared these results with experimental data. He found that the Dyson–Schwinger/Bethe–Salpeter method produced results comparable to the best previously used methods, validating his approach.

Deducing quark distributions

Xu’s next step was to deduce the distribution of quarks within the mesons. Quantum effects prevent particles from being localized in space, so he calculated the probability of their presence in certain regions, whose size depends on the properties of the quarks and their interactions with surrounding particles.

Xu discovered that the heavier the quark, the more localized it is within the meson, with the difference in distribution range exceeding a factor of ten. For instance, in B-mesons, the distribution range of the bottom antiquark (0.07 fm) is much smaller than that of the much lighter up or down quark (0.80 fm). In contrast, the distribution ranges of the two light quarks inside π-mesons are almost equal.

Using these quark distributions, Xu then computed the electromagnetic form factors, which encode the details of charge and current distribution within the mesons. The values he obtained closely matched the available experimental data.

In his work, Xu has shown that the Dyson–Schwinger/Bethe–Salpeter technique is particularly well-suited for studying heavy-light mesons, often surpassing even the most sophisticated and resource-intensive methods used previously.

Room for refinement

Although Xu’s results are promising, he admits that there is room for refinement. On the experimental side, measuring some currently unknown form factors could allow comparisons with his computed values to further verify the method’s consistency.

From a theoretical perspective, more details about strong interactions within mesons could be incorporated into the Dyson–Schwinger/Bethe–Salpeter method to enhance computational accuracy. Additionally, other meson parameters can be computed using this approach, allowing more extensive comparisons with experimental data.

“Based on the theoretical framework applied in this work, other properties of heavy–light mesons, such as various decay rates, can be further investigated,” concludes Xu.

The study also provides a powerful tool for exploring the intricate world of strongly interacting subatomic particles, potentially opening new avenues in particle physics research.

The calculations are described in the Journal of High Energy Physics.

The post Quark distribution in light–heavy mesons is mapped using innovative calculations appeared first on Physics World.

]]>
Research update Form factors can be tested by collider experiments https://physicsworld.com/wp-content/uploads/2024/09/3-09-24-quantum-entanglement-web-465535389-iStock_Traffic-Analyzer.jpg newsletter1
Estonia becomes first Baltic state to join CERN https://physicsworld.com/a/estonia-becomes-first-baltic-state-to-join-cern/ Tue, 03 Sep 2024 12:45:08 +0000 https://physicsworld.com/?p=116548 The Baltic nation is now the 24th member state of the Geneva-based particle-physics lab

The post Estonia becomes first Baltic state to join CERN appeared first on Physics World.

]]>
Estonia is the first Baltic state to become a full member of the CERN particle-physics lab near Geneva. The country, which has a population of 1.3 million, formally became the 24th CERN member state on 30 August. Estonia is now expected to pay around €1.5m each year in membership fees.

CERN, which celebrates its 70th anniversary this year, is funded by its member countries – including France, Germany and the UK – which pay costs towards CERN’s programmes and sit on the lab’s governing council. Full membership also allows a country’s nationals to become CERN staff and for its firms to bid for CERN contracts. The lab also has 10 “associate member” states and four countries or organizations with “observer” status, such as the US.

Accelerating collaborations

A first cooperation agreement between Estonia and CERN was signed in 1996, which was followed by a second agreement in 2010 with the country paying about €300,000 each year to the lab. Estonia formally applied for CERN membership in 2018 and on 1 February 2021 the country became an associate member state “in the pre-stage” to fully joining CERN.

Physicists in Estonia are already part of the CMS collaboration at the lab’s Large Hadron Collider (LHC) and they participate in data analysis and the Worldwide LHC Computing Grid (WLCG), in which a “tier 2” centre is located in Tallinn. Scientists from Estonia also contribute to other CERN experiments including CLOUD, COMPASS, NA66 and TOTEM, as well as work on future collider designs.

Estonia’s president, Alar Karis, who trained as a bioscientist, says he is “delighted” with the country’s full membership. “CERN accelerates more than tiny particles, it also accelerates international scientific collaboration and our economies,” Karis adds. “We have seen this potential during our time as associate member state and we are keen to begin our full contribution.”

CERN director general Fabiola Gianotti says she is “very pleased to welcome Estonia” as a full member. “I am sure the country and its scientific community will benefit from increased opportunities in fundamental research, technology development, and education and training.”

The post Estonia becomes first Baltic state to join CERN appeared first on Physics World.

]]>
News The Baltic nation is now the 24th member state of the Geneva-based particle-physics lab https://physicsworld.com/wp-content/uploads/2024/09/Estonia-flag-1495336833-iStock_Peter-Ekvall.jpg
Akiko Nakayama: the Japanese artist skilled in fluid mechanics https://physicsworld.com/a/akiko-nakayama-the-japanese-artist-skilled-in-fluid-mechanics/ Tue, 03 Sep 2024 10:00:11 +0000 https://physicsworld.com/?p=116458 Sidney Perkowitz explores the science behind the work of Japanese painter Akiko Nakayama

The post Akiko Nakayama: the Japanese artist skilled in fluid mechanics appeared first on Physics World.

]]>
Any artist who paints is intuitively an expert in empirical fluid mechanics, manipulating liquid and pigment for aesthetic effect. The paint is usually brushed onto a surface material, although it can also be splattered onto a horizontal canvas in a technique made famous by Jackson Pollock or even layered on with a palette knife, as in the works of Paul Cezanne or Henri Matisse. But however the paint is delivered, once it dries, the result is always a fixed, static image.

Japanese artist Akiko Nakayama is different. Based in Tokyo, she makes the dynamic real-time flow of paint, ink and other liquids the centre of her work. Using a variety of colours, she encourages the fluids to move and mix, creating gorgeous, intricate patterns that transmute into unexpected forms and shades.

What also sets Nakayama apart is that she doesn’t work in private. Instead, she performs public “Alive painting” sessions, projecting her creations onto large surfaces, to the accompaniment of music. Audiences see the walls of the venue covered with coloured shapes that arise from natural processes modified by her intervention. The forms look abstract, but in their mutations often resemble living creatures in motion.

Inspired by ink

Born in 1988, Nakayama was trained in conventional techniques of Eastern and Western painting, earning degrees in fine art from Tokyo Zokei University in 2012 and 2014. Her interest in dynamic art goes back to a childhood calligraphy class, where she found herself enthralled by the beauty of the ink flowing in the water while washing her brush.

“It was more beautiful than the characters [I had written],” she recalls, finding herself “fascinated by the freedom of the ink”. Later, while learning to draw, she always preferred to capture a “moment of time” in her sketches. Eventually, Nakayama taught herself how to make patterns from moving fluids, motivated by Johann Wolfgang von Goethe’s treatise Theory of Colours (1810).

Best known as a writer, Goethe also took a keen interest in science and his book critiques Isaac Newton’s work on the physical properties of light. Goethe instead offered his own more subjective insights into his experiments with colour and the beauty they produce. Despite its flaws as a physical theory of light, reading the book encouraged Nakayama to develop methods to pour and agitate various paints in Petri dishes, and to project the results in real time using a camera designed for close-up viewing.

Akiko Nakayama stands bottom right of a large screen that displays the artwork she is creating on stage

She started learning about liquids, reading research papers and even began examining the behaviour of water droplets under strobe lights. Nakayama also looked into studies of zero gravity on liquids by JAXA, the Japanese space agency. After finding a 10 ml sample of ferrofluid – a nanoscale ferromagnetic colloidal liquid – in a student science kit, she started using the material in her presentations, manipulating it with a small, permanent magnet.

Nakayama’s art has an unexpected link with space science because ferrofluids were invented in 1963 by NASA engineer Steve Papell, who sought a way to pump liquid rocket fuel in microgravity environments. By putting tiny iron oxide particles into the fuel, he found that the liquid could be drawn into the rocket engine by an electromagnet. Ferrofluids were never used by NASA, but they have many applications in industry, medicine and consumer products.

Secret science of painting

Having presented dozens of live performances, exhibitions and commissioned works in Japan and internationally over the last decade, other scientific connections have emerged for Nakayama. She has, for example, mixed acrylic ink with alcohol, dropping the fluid onto a thin layer of acrylic paint to create wonderfully intricate branched, tree-like dendritic forms.

In 2023 her painting caught the attention of materials scientists San To Chan and Eliot Fried at the Okinawa Institute of Science and Technology in Japan. They ended up working with Nakayama to analyse dendritic spreading in terms of the interplay of the different viscosities and surface tensions of the fluids (figure 1).

1 Magic mixtures

Images of 15 ink blots that have spread different amounts

When pure ink is dropped onto an acrylic resin substrate 400 microns thick, it remains fairly static over time (top). But if isopropanol (IPA) is mixed into the ink, the combined droplet spreads out to yield intricate, tree-like dendritic patterns. Shown here are drops with IPA at two different volume concentrations: 16.7% (middle) and 50% (bottom).

Chan and Fried published their findings, concluding that the structures have a fractal dimension of 1.68, which is characteristic of “diffusion-limited aggregation” – a process that involves particles clustering together as they diffuse through a medium (PNAS Nexus 3 59).
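Diffusion-limited aggregation is simple enough to simulate directly. The sketch below is a minimal, illustrative Python version of the textbook lattice model – random walkers wander in from the boundary and stick to a growing cluster on first contact – and is not the analysis code used by Chan and Fried; the lattice size and particle count are arbitrary choices.

```python
import random
import numpy as np

# Minimal diffusion-limited aggregation (DLA) on a square lattice:
# walkers wander in from the boundary and stick to the cluster on contact.
L = 101                       # lattice size (odd, so there is a central seed site)
N = 300                       # number of particles to aggregate (arbitrary)
grid = np.zeros((L, L), dtype=bool)
grid[L // 2, L // 2] = True   # seed particle at the centre

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def touches_cluster(x, y):
    """True if any nearest neighbour of (x, y) is already part of the cluster."""
    return any(grid[(x + dx) % L, (y + dy) % L] for dx, dy in STEPS)

for _ in range(N):
    # launch a walker from a random site on the lattice edge
    x, y = random.choice([(0, random.randrange(L)), (L - 1, random.randrange(L)),
                          (random.randrange(L), 0), (random.randrange(L), L - 1)])
    while not touches_cluster(x, y):
        dx, dy = random.choice(STEPS)
        x, y = (x + dx) % L, (y + dy) % L   # random walk with periodic wrapping
    grid[x, y] = True                        # stick to the cluster

print(grid.sum(), "particles in the aggregate")
```

Counting how the cluster’s mass grows with its radius gives a fractal dimension close to 1.7 for this model, in line with the value Chan and Fried measured for the ink patterns.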

The two researchers also investigated the liquid parameters so that an experimentalist or artist could tune the arrangement to vary the dendritic results. Nakayama calls this result a “map” that allows her to purposefully create varied artistic patterns rather than “going on an adventure blindly”. Chan and Fried have even drawn up a list of practical instructions so that anyone inclined can make their own dendritic paintings at home.

Another researcher who has also delved into the connection between fluid dynamics and art is Roberto Zenit, a mechanical engineer at Brown University in the US. Zenit has shown that Jackson Pollock created his famous abstract compositions by carefully controlling the motion of viscous filaments (Phys. Rev. Fluids 4 110507). Pollock also avoided hydrodynamic instabilities that would have otherwise made the paint break up before it hit the canvas (PLOS One 14 e0223706).

Deeper meanings

Although Nakayama likes to explore the science behind her artworks, she has not lost sight of the deeper meanings in art. She told me, for example, that the bubbles that sometimes arise as she creates liquid shapes have a connection with the so-called “Vanitas” tradition in art that emerged in western Europe in the 16th and 17th centuries.

Derived from the Latin word for “vanity”, this kind of art was not about having an over-inflated belief in oneself as the word might suggest. Instead, these still-life paintings, largely by Dutch artists, would often have symbols and images that indicate the transience and fragility of life, such as snuffed-out candles with wisps of smoke, or fragile soap bubbles blown from a pipe.

A large screen showing a bubble in a field of blue

The real bubbles in Nakayama’s artworks always stay spherical thanks to their strong surface tension, thereby displaying – in her mind – a human-like mixture of strength and vulnerability. It’s not quite the same as the fragility of the Vanitas paintings, but for Nakayama – who acknowledges that she’s not a scientist – her works are all about creating “a visual conversation between an artist and science”.

Asked about her future directions in art, however, Nakayama’s response makes immediate sense to any scientist. “Finding universal forms of natural phenomena in paintings is a joy and discovery for me,” she says. “I would be happy to continue to learn about the physics and science that make up this world, and to use visual expression to say ‘the world is beautiful’.”

The post Akiko Nakayama: the Japanese artist skilled in fluid mechanics appeared first on Physics World.

]]>
Feature Sidney Perkowitz explores the science behind the work of Japanese painter Akiko Nakayama https://physicsworld.com/wp-content/uploads/2024/09/2024-09-Perkowitz-Nagayama-EternalArt.jpg newsletter
Open problem in quantum entanglement theory solved after nearly 25 years https://physicsworld.com/a/open-problem-in-quantum-entanglement-theory-solved-after-nearly-25-years/ Tue, 03 Sep 2024 08:30:44 +0000 https://physicsworld.com/?p=116537 Non-existence of universal maximally entangled isospectral mixed states has implications for research on quantum technologies

The post Open problem in quantum entanglement theory solved after nearly 25 years appeared first on Physics World.

]]>
A quarter of a century after it was first posed, a fundamental question about the nature of quantum entanglement finally has an answer – and that answer is “no”. In a groundbreaking study, Julio I de Vicente from the Universidad Carlos III de Madrid, Spain showed that so-called maximally entangled mixed states for a fixed spectrum do not always exist, challenging long-standing assumptions in quantum information theory in a way that has broad implications for quantum technologies.

Since the turn of the millennium, the Institute for Quantum Optics and Quantum Information (IQOQI) in Vienna, Austria, has maintained a conspicuous list of open problems in the quantum world. Number 5 on this list asks: “Is it true that for arbitrary entanglement monotones one gets the same maximally entangled states among all density operators of two qubits with the same spectrum?” In simpler terms, this question is essentially asking whether a quantum system can maintain its maximally entangled state in a realistic scenario, where noise is present.

This question particularly suited de Vicente, who has long been fascinated by foundational issues in quantum theory and is drawn to solving well-defined mathematical problems. Previous research had suggested that such a maximally entangled mixed state might exist for systems of two qubits (quantum bits), thereby maximizing multiple entanglement measures. In a study published in Physical Review Letters, however, de Vicente concludes otherwise, demonstrating that for certain rank-2 mixed states, no state can universally maximize all entanglement measures across all states with the same spectrum.

“I had tried other approaches to this problem that turned out not to work,” de Vicente tells Physics World. “However, once I came up with this idea, it was very quick to see that this gave the solution. I can say that I felt very excited seeing that such a relatively simple argument could be used to answer this question.”

Importance of entanglement

Mathematics aside, what does this result mean for real-world applications and for physics? Well, entanglement is a unique quantum phenomenon with no classical counterpart, and it is essential for various quantum technologies. Since our present experimental reach is limited to a restricted set of quantum operations, entanglement is also a resource, and a maximally entangled state (meaning one that maximizes all measures of entanglement) is an especially valuable resource.

One example of a maximally entangled state is a Bell state, which is one of four possible states for a system of two qubits that are each in a superposition of 0 and 1. Bell states are pure states, meaning that they can, in principle, be known with complete precision. This doesn’t necessarily mean they have definite values for properties like energy and momentum, but it distinguishes them from a statistical mixture of different pure states.

Maximally entangled mixed states

The concept of maximally entangled mixed states (MEMS) is a departure from the traditional view of entanglement, which has been primarily associated with pure states. Conceptually, when we talk about a pure state, we imagine a scenario where a device consistently produces the same quantum state through a specific preparation process. However, practical scenarios often involve mixed states due to noise and other factors.
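For readers who want to see the pure/mixed distinction in numbers, the short Python sketch below computes a standard two-qubit entanglement measure – the Wootters concurrence – for a Bell state and for a “Werner” mixture of that state with white noise. It is purely illustrative and is not taken from de Vicente’s paper; the noise levels are arbitrary.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])   # Pauli-Y matrix
YY = np.kron(sy, sy)                  # two-qubit spin-flip operator

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    rho_tilde = YY @ rho.conj() @ YY
    evals = np.linalg.eigvals(rho @ rho_tilde).real
    lam = np.sqrt(np.sort(np.abs(evals))[::-1])     # square roots, decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): a maximally entangled pure state
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(phi, phi)

# Werner state: the Bell state mixed with white noise of weight (1 - p)
werner = lambda p: p * bell + (1 - p) * np.eye(4) / 4

print(concurrence(bell))         # ~1.00: maximal entanglement
print(concurrence(werner(0.6)))  # ~0.40: noise degrades the entanglement
print(concurrence(werner(0.3)))  #  0.00: too much noise, no entanglement left
```

Only the pure Bell state maximizes every entanglement measure at once; de Vicente’s result concerns whether any single noisy state can play that role for a fixed spectrum.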

In effect, MEMS are a bridge between theoretical models and practical applications, offering robust entanglement even in less-than-ideal conditions. This makes them particularly valuable for technologies like quantum encryption and quantum computing, where maintaining entanglement is crucial for performance.

What next?

de Vicente’s result relies on an entanglement measure that is constructed ad hoc and has no clear operational meaning. A more relevant version of this result for applications, he says, would be to “identify specific quantum information protocols where the optimal state for a given level of noise is indeed different”.

While de Vicente’s finding addresses an existing question, it also introduces several new ones, such as the conditions needed to simultaneously optimize various entanglement measures within a system. It also raises the possibility of investigating whether de Vicente’s theorems hold under other notions of “the same level of noise”, particularly if these arise in well-defined practical contexts.

The implications of this research extend beyond theoretical physics. By enabling better control and manipulation of quantum states, MEMS could revolutionize how we approach problems in quantum mechanics, from computing to material science. Now that we understand their limitations better, researchers are poised to explore their potential applications, including their role in developing quantum technologies that are robust, scalable, and practical.

The post Open problem in quantum entanglement theory solved after nearly 25 years appeared first on Physics World.

]]>
Research update Non-existence of universal maximally entangled isospectral mixed states has implications for research on quantum technologies https://physicsworld.com/wp-content/uploads/2024/09/entanglement_4132506_iStock_Kngkyle21.jpg newsletter1
Metasurface makes thermal sources emit laser-like light https://physicsworld.com/a/metasurface-makes-thermal-sources-emit-laser-like-light/ Mon, 02 Sep 2024 10:41:54 +0000 https://physicsworld.com/?p=116527 Pillar-studded surface just hundreds of nanometres thick allows researchers to control direction, polarization and phase of thermal radiation

The post Metasurface makes thermal sources emit laser-like light appeared first on Physics World.

]]>
Incandescent light bulbs and other thermal radiation sources can produce coherent, polarized and directed emissions with the help of a structured thin film known as a metasurface. Created by Andrea Alù and colleagues at the City University of New York (CUNY), US, the new metasurface uses a periodic structure with tailored local perturbations to transform ordinary thermal emissions into something more like a laser beam – an achievement heralded as “just the beginning” for thermal radiation control.

Scientists have previously shown that metasurfaces can perform tasks such as wavefront shaping, beam steering, focusing and vortex beam generation that normally require bulky traditional optics. However, these metasurfaces only work with the highly coherent light typically emitted by lasers. “There is a lot of hype around compactifying optical devices using metasurfaces,” says Alù, the founding director of CUNY’s Photonics Initiative. “But people tend to forget that we still need a bulky laser that is exciting them.”

Unlike lasers, most light sources – including LEDs as well as incandescent bulbs and the Sun – produce light that is highly incoherent and unpolarized, with spectra and propagation directions that are hard to control. While it is possible to make thermal emissions coherent, doing so requires special silicon carbide materials, and the emitted light has several shortcomings. Notably, a device designed to emit light to the right will also emit it to the left – a fundamental symmetry known as reciprocity.

Some researchers have argued that reciprocity fundamentally limits how asymmetric the wavefront emitted from such structures can be. However, in 2021 members of Alù’s group showed theoretically that a metasurface could produce coherent thermal emission for any polarization, travelling in any direction, without relying on special materials. “We found that the reciprocity constraint could be overcome with a sufficiently complicated geometry,” Alù says.

Smart workarounds

The team’s design incorporated two basic elements. The first is a periodic array that interacts with the light in a highly non-local way, creating a long-range coupling that forces the random oscillations of thermal emission to become coherent across long time scales and distances. The second element is a set of tailored local perturbations to this periodic structure that make it possible to break the symmetry in emission direction.

The only problem was that this structure proved devilishly difficult to construct, as it would have required aligning two independent nanostructured arrays within a 10 nm tolerance. In the latest work, which is described in Nature Nanotechnology, Alù and colleagues found a way around this by backing one structured film with a thin layer of gold. This metallic backing effectively creates an image of the structure, which breaks the vertical symmetry as needed to realize the effect. “We were surprised this worked,” Alù says.

The final structure was made from silicon and structured as an array of rectangular pillars (for the non-local interactions) interspersed with elliptical pillars (for the asymmetric emission). Using this structure, the team demonstrated coherent directed emission for six different polarizations, at frequencies of their choice. They also used it to send circularly polarized light in arbitrary directions, and to split thermal emissions into orthogonally polarized components travelling in different directions. While this so-called photonic Rashba effect has been demonstrated before in circularly polarized light, the new thermal metasurface produces the same effect for arbitrary polarizations – something not previously thought possible.

According to Alù, the new metasurface offers “interesting opportunities” for lighting, imaging, and thermal emission management and control, as well as thermal camouflaging. George Alexandropoulos, who studies metasurfaces for informatics and telecommunication at the National and Kapodistrian University of Athens, Greece but was not involved in the work, agrees. “Metasurfaces controlling thermal radiation could direct thermal emission to energy-harvesting wireless devices,” he says.

Riccardo Sapienza, a physicist at Imperial College London, UK, who also studies metamaterials and was also not involved in this research, agrees that communication could benefit and suggests that infrared sensing could, too. “This is a very exciting result which brings closer the dream of complete control of thermal radiation,” he says. “I am sure this is just the beginning.”

The post Metasurface makes thermal sources emit laser-like light appeared first on Physics World.

]]>
Research update Pillar-studded surface just hundreds of nanometres thick allows researchers to control direction, polarization and phase of thermal radiation https://physicsworld.com/wp-content/uploads/2024/09/02-09-2024-Thermal-metasurface-artwork.png newsletter1
Researchers cut to the chase on the physics of paper cuts https://physicsworld.com/a/researchers-cut-to-the-chase-on-the-physics-of-paper-cuts/ Sun, 01 Sep 2024 09:00:22 +0000 https://physicsworld.com/?p=116517 A paper cut “sweet spot” just happens to be close to the thickness of paper in print magazines

The post Researchers cut to the chase on the physics of paper cuts appeared first on Physics World.

]]>
If you have ever been on the receiving end of a paper cut, you will know how painful they can be.

Kaare Jensen from the Technical University of Denmark (DTU), however, has found intrigue in this bloody occurrence. “I’m always surprised that thin blades, like lens or filter paper, don’t cut well, which is unexpected because we usually consider thin blades to be efficient,” Jensen told Physics World.

To find out why paper is so successful at cutting skin, Jensen and fellow DTU colleagues carried out over 50 experiments with a range of paper thicknesses to make incisions into a piece of gelatine at various angles.

Through these experiments and modelling, they discovered that paper cuts are a competition between slicing and “buckling”. Thin paper with a thickness of about 30 microns, or 0.03 mm, doesn’t cut so well because it buckles – a mechanical instability that happens when a slender object like paper is compressed. Once this occurs, the paper can no longer transfer force to the tissue, so is unable to cut.

Thick paper, with a thickness greater than around 200 microns, is also ineffective at making an incision. This is because it distributes the load over a greater area, resulting in only small indentations.

The team found, however, a paper cut “sweet spot” at around 65 microns and when the incision was made at an angle of about 20 degrees from the surface. This paper thickness just happens to be close to that of the paper used in print magazines, which goes some way to explain why it annoyingly happens so often.
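The competition can be illustrated with two textbook scalings: the Euler buckling load of a thin strip grows as the cube of its thickness, while the pressure it exerts on the skin for a given push falls as the thickness increases. The Python sketch below uses entirely assumed numbers (Young’s modulus, strip width, free length and applied force) and standard formulas – not the DTU team’s model – simply to show the trend.

```python
import numpy as np

# Assumed, order-of-magnitude inputs (not values from the DTU study)
E = 2e9        # Young's modulus of paper, ~2 GPa
w = 0.02       # width of the edge in contact with the skin, 2 cm
L_free = 0.01  # unsupported length of paper doing the pushing, 1 cm
F_push = 0.5   # force applied along the sheet, 0.5 N

for t in [30e-6, 65e-6, 200e-6]:                  # paper thickness in metres
    I = w * t**3 / 12                             # second moment of area of the strip
    F_buckle = np.pi**2 * E * I / L_free**2       # Euler buckling load (pinned ends)
    pressure = F_push / (w * t)                   # contact pressure if the push is transmitted
    print(f"t = {t*1e6:5.0f} um | buckling load {F_buckle*1e3:7.1f} mN | "
          f"contact pressure {pressure/1e6:5.2f} MPa")
```

Thin sheets hit their buckling load long before they can press hard enough to slice, while very thick ones transmit the force but spread it over too large an area – the trade-off the experiments mapped out.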

Using the results from the work, the researchers created a 3D-printed scalpel that uses scrap paper for the cutting edge. Using this so-called “papermachete” they were able to slice through apple, banana peel, cucumber and even chicken.

Jensen notes that the findings are interesting for two reasons. “First, it’s a new case of soft-on-soft interactions where the deformation of two objects intertwines in a non-trivial way,” he says. “Traditional metal knives are much stiffer than biological tissues, while paper is still stiffer than skin but around 100 times weaker than steel.”

The second is that it is a “great way” to teach students about forces given that the experiments are straightforward to do in the classroom. “Studying the physics of paper cuts has revealed a surprising potential use for paper in the digital age: not as a means of information dissemination and storage, but rather as a tool of destruction,” the researchers write.

The post Researchers cut to the chase on the physics of paper cuts appeared first on Physics World.

]]>
Blog A paper cut “sweet spot” just happens to be close to the thickness of paper in print magazines https://physicsworld.com/wp-content/uploads/2024/08/30-08-24-papermachete2-small.jpg newsletter
LUX-ZEPLIN ‘digs deeper’ for dark-matter WIMPs https://physicsworld.com/a/lux-zeplin-puts-new-limit-on-dark-matter-mass/ Sat, 31 Aug 2024 13:09:34 +0000 https://physicsworld.com/?p=116521 Announcement makes us pine for the Black Hills

The post LUX-ZEPLIN ‘digs deeper’ for dark-matter WIMPs appeared first on Physics World.

]]>
This article has been updated to correct a misinterpretation of this null result.  

Things can go a bit off-topic at Physics World and recent news about dark matter got us talking about the beauty of the Black Hills of South Dakota. This region of forest and rugged topography is smack dab in the middle of the Great Plains of North America and is most famous for the giant sculpture of four US presidents at Mount Rushmore.

A colleague from Kansas fondly recalled a family holiday in the Black Hills – and as an avid skier, I was pleased to learn that the region is home to the highest ski lift between the Alps and the Rockies.

The Black Hills also have a special place in the hearts of physicists – especially those who are interested in dark matter and neutrinos. The region is home to the Sanford Underground Research Facility, which is located 1300 m below the hills in a former gold mine. It was there that Ray Davis and colleagues first detected neutrinos from the Sun, for which Davis shared the 2002 Nobel Prize for Physics.

Today, the huge facility is home to nearly 30 experiments that benefit from the mine’s low background radiation. One of the biggest experiments is LUX–ZEPLIN, which is searching for dark-matter particles.

Hypothetical substance

Dark matter is a hypothetical substance that is invoked to explain the dynamics of galaxies, the large-scale structure of the cosmos, and more. While dark matter is believed to account for 85% of mass in the universe, physicists have little understanding of what it is – or indeed if it actually exists.

So far, the best that experiments like LUX–ZEPLIN have done is to tell physicists what dark matter isn’t. Now, the latest result from LUX–ZEPLIN places the best-ever limits on the nature of dark-matter particles called WIMPs.

The measurement involved watching several tonnes of liquid xenon for 280 days, looking for flashes of light that would be created when a WIMP collides with a xenon nucleus. However, no evidence was seen for collisions with WIMPs heavier than 9 GeV/c² – which is about 10 times the mass of the proton.

The team says that the result is “nearly five times better” than previous WIMP searches. “These are new world-leading constraints by a sizable margin on dark matter and WIMPs,” explains Chamkaur Ghag, who speaks for the LUX–ZEPLIN team and is based at University College London.

Digging for treasure

“If you think of the search for dark matter like looking for buried treasure, we’ve dug almost five times deeper than anyone else has in the past,” says Scott Kravitz of the University of Texas at Austin who is the deputy physics coordinator for the experiment.

This will not be the last that we hear from LUX–ZEPLIN, which will collect a total of 1000 days of data before it switches off in 2028. And it’s not only dark matter that the experiment is looking for. Because it is in a low background environment, LUX–ZEPLIN is also being used to search for other rare or hypothetical events such as the radioactive decay of xenon, neutrinoless double beta decay and neutrinos from the beta decay of boron nuclei in the Sun.

LUX–ZEPLIN is not the only experiment at Sanford that is looking for neutrinos. The Deep Underground Neutrino Experiment (DUNE) is currently under construction at the lab and is expected to be completed in 2028. DUNE will detect neutrinos in four huge tanks that will each be filled with 17,000 tonnes of liquid argon. Some neutrinos will be beamed from 1300 km away at Fermilab near Chicago and together the facilities will comprise the Long-Baseline Neutrino Facility.

One aim of the facility is to study the flavour oscillation of neutrinos as they travel over long distances. This could help explain why there is much more matter than antimatter in the universe. By detecting neutrinos from exploding stars, DUNE could also shed light on the nuclear processes that occur during supernovae. And, it might even detect the radioactive decay of the proton, a hypothetical process that could point to physics beyond the Standard Model.

The post LUX-ZEPLIN ‘digs deeper’ for dark-matter WIMPs appeared first on Physics World.

]]>
Blog Announcement makes us pine for the Black Hills https://physicsworld.com/wp-content/uploads/2024/08/31-8-24-Sanford-vista.jpg newsletter
Gold nanoparticles could improve radiotherapy of pancreatic cancer https://physicsworld.com/a/gold-nanoparticles-could-improve-radiotherapy-of-pancreatic-cancer/ Fri, 30 Aug 2024 09:49:16 +0000 https://physicsworld.com/?p=116502 Irradiating tumours containing gold nanoparticles should enhance radiotherapy effectiveness while minimizing potential side effects

The post Gold nanoparticles could improve radiotherapy of pancreatic cancer appeared first on Physics World.

]]>
Dose distributions for pancreatic radiotherapy

The primary goal of radiotherapy is to effectively destroy the tumour while minimizing side effects to nearby normal tissues. Focusing on the challenging case of pancreatic cancer, a research team headed up at Toronto Metropolitan University in Canada has demonstrated that gold nanoparticles (GNPs) show potential to optimize this fine balance between tumour control probability (TCP) and normal tissue complication probability (NTCP).

GNPs are under scrutiny as candidates for improving the effectiveness of radiation therapy by enhancing dose deposition within the tumour. The dose enhancement observed when irradiating GNP-infused tumour tissue is mainly due to the Auger effect, in which secondary electrons generated within the nanoparticles can damage cancer cells.

“Nanoparticles like GNPs could be delivered to the tumour using targeting agents such as [the cancer drug] cetuximab, which can specifically bind to the epidermal growth factor receptor expressed on pancreatic cancer cells, ensuring a high concentration of GNPs in the tumour site,” says first author Navid Khaledi, now at CancerCare Manitoba.

This increased localized energy deposition should improve tumour control; but it’s also crucial to consider possible toxicity to normal tissues due to the presence of GNPs. To investigate this further, Khaledi and colleagues simulated treatment plans for five pancreatic cancer cases, using CT images from the Cancer Imaging Archive database.

Plan comparison

For each case, the team compared plans generated using a 2.5 MV photon beam in the presence of GNPs with conventional 6 MV plans. “We chose a 2.5 MV beam due to the enhanced photoelectric effect at this energy, which increases the interaction probability between the beam and the GNPs,” Khaledi explains.

The researchers created the treatment plans using the MATLAB-based planning program matRad. They first determined the dose enhancement conferred by 50-nm diameter GNPs by calculating the relative biological effectiveness (RBE, the ratio of dose without to dose with GNPs for equal biological effects) using custom MATLAB codes. The average RBE for the 2.5 MV beam, using α and β radiosensitivity values for pancreatic tumour, was 1.19. They then applied RBE values to each tumour voxel to calculate dose distributions and TCP and NTCP values.

The team considered four treatment scenarios, based on a prescribed dose of 40 Gy in five fractions: 2.5 MV plus GNPs, designed to increase TCP (using the prescribed dose, but delivering an RBE-weighted dose of 40 Gy × 1.19); 2.5 MV plus GNPs, designed to reduce NTCP (lowering the prescribed dose to deliver an RBE-weighted dose of 40 Gy); 6 MV using the prescribed dose; and 6 MV with the prescribed dose increased to 47.6 Gy (40 Gy × 1.19).
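To make the four scenarios concrete, the snippet below simply works through the dose arithmetic implied by the quoted average RBE of 1.19 for a 40 Gy, five-fraction prescription. It is illustrative bookkeeping only, not the matRad planning workflow used in the study.

```python
RBE = 1.19          # average RBE quoted for the 2.5 MV beam with GNPs
D = 40.0            # prescribed dose in Gy
n = 5               # number of fractions

# physical dose delivered in each scenario (Gy)
scenarios = {
    "2.5 MV + GNPs, increased TCP":  D,          # RBE-weighted dose = 47.6 Gy
    "2.5 MV + GNPs, reduced NTCP":   D / RBE,    # RBE-weighted dose brought back to 40 Gy
    "6 MV, prescribed dose":         D,          # no GNP enhancement
    "6 MV, escalated dose":          D * RBE,    # 47.6 Gy physical, no GNP enhancement
}

for name, dose in scenarios.items():
    print(f"{name:30s} {dose:5.1f} Gy total, {dose/n:4.2f} Gy per fraction")
```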

The analysis showed that the presence of GNPs significantly increased TCP values, from around 59% for the standard 6 MV plans to 93.5% for the 2.5 MV plus GNPs (increased TCP) plans. Importantly, the GNPs helped to maintain low NTCP values of below 1%, minimizing the risk of complications in normal tissues. Using a conventional 6 MV beam with an increased dose also resulted in high TCP values, but at the cost of raising NTCP to 27.8% in some cases.

Minimizing risks

The team next assessed the dose to the duodenum, the main dose-limiting organ for pancreatic radiotherapy. The mean dose to the duodenum was highest for the increased-dose 6 MV photon beam, and lowest for the 2.5 MV plus GNPs plans. Similarly, D2%, the maximum dose received by 2% of the volume, was highest with the increased-dose 6 MV beam, and lowest with 2.5 MV plus GNPs.

It’s equally important to consider dose to the liver and kidney, as these organs may also uptake GNPs. The analysis revealed relatively low doses to the liver and left kidney for all treatment options, with mean dose and D2% generally below clinically significant thresholds. The highest mean doses to the liver and left kidney for 2.5 MV plus GNPs were 3.3 and 7.7 Gy, respectively, compared with 2.3 and 8 Gy for standard 6 MV photons.

The researchers conclude that the use of GNPs in radiation therapy has potential to significantly improve treatment outcomes and benefit cancer patients. Khaledi notes, however, that although GNPs have shown promise in preclinical studies and animal models, they have not yet been tested for radiotherapy enhancement in human subjects.

Next, the team plans to investigate new linac targets that could potentially enable therapeutic applications. “One limitation of the current 2.5 MV beam is its low dose rate (60 MU/min) on TrueBeam linacs, primarily due to the copper target’s heat tolerance,” Khaledi tells Physics World. “Increasing the dose rate could make the beam clinically useful, but it risks melting the copper target. Future work will evaluate the beam spectrum for different target designs and materials.”

The researchers report their findings in Physics in Medicine & Biology.

The post Gold nanoparticles could improve radiotherapy of pancreatic cancer appeared first on Physics World.

]]>
Research update Irradiating tumours containing gold nanoparticles should enhance radiotherapy effectiveness while minimizing potential side effects https://physicsworld.com/wp-content/uploads/2024/08/30-08-24-PMB-GNP-featured.jpg newsletter1
The Wow! signal: did a telescope in Ohio receive an extraterrestrial communication in 1977? https://physicsworld.com/a/the-wow-signal-did-a-telescope-in-ohio-receive-an-extraterrestrial-communication-in-1977/ Thu, 29 Aug 2024 14:37:42 +0000 https://physicsworld.com/?p=116495 This podcast features an astrobiologist who has identified similar radio signals

The post The Wow! signal: did a telescope in Ohio receive an extraterrestrial communication in 1977? appeared first on Physics World.

]]>
On 15 August 1977 the Big Ear radio telescope in the US was scanning the skies in a search for signs of intelligent extraterrestrial life. Suddenly, it detected a strong, narrow bandwidth signal that lasted a little longer than one minute – as expected if Big Ear’s field of vision swept across a steady source of radio waves. That source, however, had vanished 24 hours later when the Ohio-based telescope looked at the same patch of sky.

This was the sort of technosignature that searches for extraterrestrial intelligence (SETI) were seeking. Indeed, one scientist wrote the word “Wow!” next to the signal on a paper print-out of the Big Ear data.

Ever since, the origins of the Wow! signal have been debated – and now, a trio of scientists have an astrophysical explanation that does not involve intelligent extraterrestrials. One of them, Abel Méndez, is our guest in this episode of the Physics World Weekly podcast.

Méndez is an astrobiologist at the University of Puerto Rico at Arecibo and he explains how observations made at the Arecibo Telescope have contributed to the trio’s research.

  • Abel Méndez, Kevin Ortiz Ceballos and Jorge I Zuluaga describe their research in a preprint on arXiv.

The post The Wow! signal: did a telescope in Ohio receive an extraterrestrial communication in 1977? appeared first on Physics World.

]]>
Podcasts This podcast features an astrobiologist who has identified similar radio signals https://physicsworld.com/wp-content/uploads/2024/08/29-8-24-Wow-signal.jpg newsletter
Heavy exotic antinucleus gives up no secrets about antimatter asymmetry https://physicsworld.com/a/heavy-exotic-antinucleus-gives-up-no-secrets-about-antimatter-asymmetry/ Thu, 29 Aug 2024 13:08:31 +0000 https://physicsworld.com/?p=116491 Antihyperhydrogen-4 is observed by the Star Collaboration

The post Heavy exotic antinucleus gives up no secrets about antimatter asymmetry appeared first on Physics World.

]]>
An antihyperhydrogen-4 nucleus – the heaviest antinucleus ever produced – has been observed in heavy ion collisions by the STAR Collaboration at Brookhaven National Laboratory in the US. The antihypernucleus contains a strange quark, making it a heavier cousin of antihydrogen-4. Physicists hope that studying such antimatter particles could shed light on why there is much more matter than antimatter in the visible universe – however in this case, nothing new beyond the Standard Model of particle physics was observed.

In the first millionth of a second after the Big Bang, the universe is thought to have been too hot for quarks to have been bound into hadrons. Instead it comprised a strongly interacting fluid called a quark–gluon plasma. As the universe expanded and cooled, bound baryons and mesons were created.

The Standard Model forbids the creation of matter without the simultaneous creation of antimatter, and yet the universe appears to be made entirely of matter. While antimatter is created by nuclear processes – both naturally and in experiments – it is swiftly annihilated on contact with matter.

The Standard Model also says that matter and antimatter should be identical after charge, parity and time are reversed. Therefore, finding even tiny asymmetries in how matter and antimatter behave could provide important information about physics beyond the Standard Model.

Colliding heavy ions

One way forward is to create quark–gluon plasma in the laboratory and study particle–antiparticle creation. Quark–gluon plasma is made by smashing together heavy ions such as lead or gold. A variety of exotic particles and antiparticles emerge from these collisions. Many of them decay almost immediately, but their decay products can be detected and compared with theoretical predictions.

Quark–gluon plasma can include hypernuclei, which are nuclei containing one or more hyperons. Hyperons are baryons containing one or more strange quarks, making hyperons the heavier cousins of protons and neutrons. These hypernuclei are thought to have been present in the high-energy conditions of the early universe, so physicists are keen to see if they exhibit any matter/antimatter asymmetries.

In 2010, the STAR collaboration unveiled the first evidence of an antihypernucleus, which was created by smashing gold nuclei together at 200 GeV. This was the antihypertriton, which is the antimatter version of an exotic counterpart to tritium in which one of the down quarks in one of the neutrons is replaced by a strange quark.

Now, STAR physicists have created a heavier antihypernucleus. They recorded over 6 billion collisions using pairs of uranium, ruthenium, zirconium and gold ions moving at more than 99.9% of the speed of light. In the resulting quark–gluon plasma, the researchers found evidence of antihyperhydrogen-4 (antihypertriton with an extra antineutron). Antihyperhydrogen-4 decays almost immediately by emitting a pion, producing antihelium-4 – a nucleus the collaboration had already detected in 2011. The researchers therefore knew what to look for among the debris of their collisions.

Sifting through the collisions

Sifting through the collision data, the researchers found 22 events that appeared to be antihyperhydrogen-4 decays. After subtracting the expected background, they were left with approximately 16 events, which was statistically significant enough to claim that they had observed antihyperhydrogen-4.
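As a rough illustration of why roughly 16 signal events over the handful of background events implied by those numbers is convincing, the snippet below evaluates the standard “Asimov” approximation for the significance of a counting excess. Both the input numbers and the formula are textbook illustrations, not the STAR collaboration’s full statistical analysis.

```python
import math

def asimov_significance(s, b):
    """Approximate discovery significance for s signal events over an expected
    background b (asymptotic formula for a simple counting experiment)."""
    return math.sqrt(2 * ((s + b) * math.log(1 + s / b) - s))

s = 16   # approximate signal events after background subtraction (assumed)
b = 6    # approximate expected background events (assumed from the totals above)
print(f"~{asimov_significance(s, b):.1f} sigma")   # what the formula gives for these inputs
```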

The researchers also observed evidence of the decays of hyperhydrogen-4, antihypertriton and hypertriton. In all cases, the results were consistent with the predictions of charge–parity–time (CPT) symmetry. This is a central tenet of modern physics that says that if the charge and internal quantum numbers of a particle are reversed, the spatial co-ordinates are reversed and the direction of time is reversed, the outcome of an experiment will be identical.

STAR member Hao Qiu of the Institute of Modern Physics at the Chinese Academy of Sciences says that, in his view, the most important feature of the work is the observation of the hyperhydrogen-4. “In terms of the CPT test, it’s just that we’re able to do it…The uncertainty is not very small compared with some other tests.”

Qiu says that he, personally, hopes the latest research may provide some insight into violation of charge–parity symmetry (i.e. without flipping the direction of time). This has already been shown to occur in some systems. “Ultimately, though, we’re experimentalists – we look at all approaches as hard as we can,” he says; “but if we see CPT symmetry breaking we have to throw out an awful lot of current physics.”

“I really do think it’s an incredibly impressive bit of experimental science,” says theoretical nuclear physicist Thomas Cohen of University of Maryland, College Park; “The idea that they make thousands of particles each collision, find one of these in only a tiny fraction of these events, and yet they’re able to identify this in all this really complicated background – truly amazing!”

He notes, however, that “this is not the place to look for CPT violation…Making precision measurements on the positron mass versus the electron mass or that of the proton versus the antiproton is a much more promising direction simply because we have so many more of them that we can actually do precision measurements.”    

The research is described in Nature.

The post Heavy exotic antinucleus gives up no secrets about antimatter asymmetry appeared first on Physics World.

]]>
Research update Antihyperhydrogen-4 is observed by the Star Collaboration https://physicsworld.com/wp-content/uploads/2024/08/29-8-24-antihypernucleus.jpg newsletter1
Metamaterial gives induction heating a boost for industrial processing https://physicsworld.com/a/metamaterial-gives-induction-heating-a-boost-for-industrial-processing/ Wed, 28 Aug 2024 15:05:27 +0000 https://physicsworld.com/?p=116469 Technology could help heavy industry transition from fossil fuels

The post Metamaterial gives induction heating a boost for industrial processing appeared first on Physics World.

]]>
A thermochemical reactor powered entirely by electricity has been unveiled by Jonathan Fan and colleagues at Stanford University. The experimental reactor was used to convert carbon dioxide into carbon monoxide with close to 90% efficiency. This makes it a promising development in the campaign to reduce carbon dioxide emissions from industrial processes that usually rely on fossil fuels.

Industrial processes account for a huge proportion of carbon emissions worldwide – accounting for roughly a third of carbon emissions in the US, for example. In part, this is because many industrial processes require huge amounts of heat, which can only be delivered by burning fossil fuels. To address this problem, a growing number of studies are exploring how combustion could be replaced with electrical sources of heat.

“There are a number of ways to use electricity to generate heat, such as through microwaves or plasma,” Fan explains. “In our research, we focus on induction heating, owing to its potential for supporting volumetric heating at high power levels, its ability to scale to large power levels and reactor volumes, and its strong safety record.”

Induction heating uses alternating magnetic fields to induce electric currents in a conductive material, generating heat via the electrical resistance of the material. It is used in a wide range of applications from domestic cooking to melting scrap metal. However, it has been difficult to use induction heating for complex industrial applications.

In its study, Fan’s team focused on using inductive heating in thermochemical reactors, where gases are transformed into valuable products through reactions with catalysts.

Onerous requirements

The heating requirements for these reactors are especially onerous, as Fan explains. “They need to produce heat in a 3D space; they need to feature exceptionally high heat transfer rates from the heat-absorbing material to the catalyst; and the energy efficiency of the process needs to be nearly 100%.”

To satisfy these requirements, the Stanford researchers created a new design for internal reactor structures called baffles. Conventional baffles are used to enhance heat transfer and mixing within a reactor, improving its reaction rates and yields.

In their design, Fan’s team reimagined these structures as integral components of the heating process itself. Their new baffles comprised a 3D lattice made from a conductive ceramic, which can be heated via magnetic induction at megahertz frequencies.

“The lattice structure can be modelled as a medium whose electrical conductivity depends on both the material composition of the ceramic and the geometry of the lattice,” Fan explains. “Therefore, it can be conceptualized as a metamaterial, whose physical properties can be tailored via their geometric structuring.”

Encouraging heat transfer

This innovative design addressed three key requirements of a thermochemical reactor. First, by occupying the entire reactor volume, it ensures uniform 3D heating. Second, the metamaterial’s large surface area encourages heat transfer between the lattice and the catalyst. Finally, the combination of the high induction frequency and low electrical conductivity in the lattice delivers high energy efficiency.
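One way to see why that combination matters is through the classical skin depth, which sets how far an oscillating magnetic field penetrates a conductor. The sketch below applies the standard skin-depth formula with assumed, order-of-magnitude conductivities and frequencies (not figures from the Stanford paper): a poorly conducting ceramic driven at megahertz frequencies is penetrated on the centimetre scale, so the whole lattice volume can be heated, whereas a good metal is heated only in a thin surface layer.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (H/m)

def skin_depth(sigma, f):
    """Classical skin depth: delta = sqrt(2 / (mu0 * sigma * 2*pi*f))."""
    return math.sqrt(2.0 / (MU0 * sigma * 2 * math.pi * f))

# assumed conductivities (S/m) and drive frequencies (Hz), for illustration only
cases = [("copper-like metal", 6e7), ("conductive ceramic lattice", 1e3)]
for label, sigma in cases:
    for f in (50e3, 2e6):
        d = skin_depth(sigma, f)
        print(f"{label:26s} at {f/1e6:4.2f} MHz: skin depth {d*100:7.3f} cm")
```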

To demonstrate these advantages, Fan says, “we tailored the metamaterial reactor for the ‘reverse water gas shift’ reaction, which converts carbon dioxide into carbon monoxide – a useful chemical for the synthesis of sustainable fuels”.

To boost the efficiency of the conversion, the team used a carbonate-based catalyst to minimize unwanted side reactions. A silicon carbide foam lattice baffle and a novel megahertz-frequency power amplifier were also used.

As Fan explains, initial experiments with the reactor yielded very promising results. “These demonstrations indicate that our reactor operates with electricity to internal heat conversion efficiencies of nearly 90%,” he says.

The team hopes that its design offers a promising step towards electrically powered thermochemical reactors that are suited for a wide range of useful chemical processes.

“Our concept could not only decarbonize the powering of chemical reactors but also make them smaller and simpler,” Fan says. “We have also found that as our reactor concept is scaled up, its energy efficiency increases. These implications are important, as economics and ease of implementation will dictate how quickly decarbonized reactor technologies could translate to real-world practice.”

The research is described in Joule.

The post Metamaterial gives induction heating a boost for industrial processing appeared first on Physics World.

]]>
Research update Technology could help heavy industry transition from fossil fuels https://physicsworld.com/wp-content/uploads/2024/08/28-08-24-metamaterial-reactor.jpg newsletter1
‘Kink states’ regulate the flow of electrons in graphene https://physicsworld.com/a/kink-states-regulate-the-flow-of-electrons-in-graphene/ Wed, 28 Aug 2024 12:00:11 +0000 https://physicsworld.com/?p=116456 New valleytronics-based switch could have applications in quantum networks

The post ‘Kink states’ regulate the flow of electrons in graphene appeared first on Physics World.

]]>
A new type of switch sends electrons propagating in opposite directions along the same paths – without ever colliding with each other. The switch works by controlling the presence of so-called topological kink states in a material known as Bernal bilayer graphene, and its developers at Penn State University in the US say that it could lead to better ways of transmitting quantum information.

Bernal bilayer graphene consists of two atomically-thin sheets of carbon stacked on top of each other and shifted slightly. This arrangement gives rise to several unusual electronic behaviours. One such behaviour, known as the quantum valley Hall effect, gets its name from the dips or “valleys” that appear in graphs of an electron’s energy relative to its momentum. Because graphene’s conduction and valence bands meet at discrete points (known as Dirac points), it has two such valleys. In the quantum valley Hall effect, the electrons in these different valleys flow in opposite directions. Hence, by manipulating the population of the valleys, researchers can alter the flow of electrons through the material.

This process of controlling the flow of electrons via their valley degree of freedom is termed “valleytronics” by analogy with spintronics, which uses the internal degree of freedom of electron spin to store and manipulate bits of information. For valleytronics to be effective, however, the materials the electrons flow through need to be of very high quality. This is because any atomic defects can produce intervalley backscattering, which causes electrons travelling in opposite directions to collide with each other.

A graphite/hBN global gate

Researchers led by Penn State physicist Jun Zhu have now succeeded in producing a device that is pristine enough to support such behaviour. They did this by incorporating a stack made from graphite and a two-dimensional material called hexagonal boron nitride (hBN) into their design. This stack, which acts as a global “gate” that allows electrons to flow through the device, is free of impurities, and team member Ke Huang explains that it was key to the team’s technical advance.

The principle behind the improvement is that while graphite is an excellent electrical conductor, hBN is an insulator. By combining the two materials, Zhu, Huang and colleagues created a structure known as a topological insulator – a material that conducts electricity very well along its edges or surfaces while acting as an insulator in its bulk. Within the edge states of such a topological insulator, electrons can only travel along one pathway. This means that, unlike in a normal conductor, they do not experience backscatter. This remarkable behaviour allows topological insulators to carry electrical current with near-zero dissipation.

In the present work, which is described in Science, the researchers confined electrons to special, topologically protected electrically conducting pathways known as kink states that formed by electrically gating the stack. By controlling the presence or absence of these states, they showed that they could regulate the flow of electrons in the system.

A quantized resistance value

“The amazing thing about our devices is that we can make electrons moving in opposite directions not collide with one another even though they share the same pathways,” Huang says. “This corresponds to the observation of a quantized resistance value, which is key to the potential application of the kink states as quantum wires to transmit quantum information.”

Importantly, this quantization of the kink states persists even when the researchers increased the temperature of the system from near absolute zero to 50 K. Zhu describes this as surprising because quantum states are fragile, and often only exist at temperatures of a few Kelvin. Operation at elevated temperatures will, of course, be important for real-world applications, she adds.

The new switch is the latest addition to a group of kink state-based quantum electronic devices the team has already built. These include valves, waveguides and beamsplitters. While the researchers admit that they have a long way to go before they can assemble these components into a fully functioning quantum interconnect system, they say their current set-up is potentially scalable and can already be programmed to direct current flow. They are now planning to study how electrons behave like coherent waves when travelling along the kink state pathways. “Maintaining quantum coherence is a key requirement for any quantum interconnect,” Zhu tells Physics World.

The post ‘Kink states’ regulate the flow of electrons in graphene appeared first on Physics World.

]]>
Research update New valleytronics-based switch could have applications in quantum networks https://physicsworld.com/wp-content/uploads/2024/08/graphene-web-206705884_Shutterstock_Inozemtsev-Konstantin.jpg newsletter1
A breezy tour of what gaseous materials do for us https://physicsworld.com/a/a-breezy-tour-of-what-gaseous-materials-do-for-us/ Wed, 28 Aug 2024 10:00:50 +0000 https://physicsworld.com/?p=116336 Margaret Harris reviews It’s a Gas: the Magnificent and Elusive Elements that Expand Our World by Mark Miodownik

The post A breezy tour of what gaseous materials do for us appeared first on Physics World.

]]>
A row of gas lamps outside the Louvre in Paris

The first person to use gas for illumination was a French engineer by the name of Philippe Lebon. In 1801 his revolutionary system of methane pipes and jets lit up the Hôtel de Seignelay so brilliantly that ordinary Parisians paid three francs apiece just to marvel at it. Overnight guests may have been less enthusiastic. Although methane itself is colourless and odourless, Lebon’s process for extracting it left the gas heavily contaminated with hydrogen sulphide, which – as Mark Miodownik cheerfully reminds us in his latest book – is a chemical that “smells of farts”.

The often odorous and frequently dangerous world of gases is a fascinating subject for a popular-science book. It’s also a logical one for Miodownik, a materials researcher at University College London, UK, whose previous books were about solids and liquids. The first, Stuff Matters, was a huge critical and commercial success, winning the 2014 Royal Society Winton Prize for science books (and Physics World’s own Book of the Year award) on its way to becoming a New York Times bestseller. The second, Liquid, drew more muted praise, with some critics objecting to a narrative gimmick that shoehorned liquid-related facts into the story of a hypothetical transatlantic flight.

Miodownik writes about the science of substances such as breath, fragrance and wind as well as methane, hydrogen and other gases with precise chemical formulations

Miodownik’s third book It’s a Gas avoids this artificial structure and is all the better for it. It also adopts a very loose definition of “gas”, which leaves Miodownik free to write about the science of substances such as breath, fragrance and wind as well as methane, hydrogen and other gases with precise chemical formulations. The result is a lively, free-associating mixture of personal, scientific and historical anecdotes very reminiscent of Stuff Matters, though inevitably one that feels less exceptional than it did the first time around.

The chapter on breath shows how this mixture works. It begins with a story about the young Miodownik watching a brass band march past. Next, we get an explanation of how air travels through brass instruments. By the end of the chapter, Miodownik has moved on, via Air Jordan sneakers and much else, to pneumatic bicycle tyres and their surprising impact on English genetic diversity.

Though the connection might seem fanciful at first, it seems that after John Dunlop patented his air-filled rubber bicycle tyre in 1888, many people (especially women) were suddenly able to traverse bumpy roads cheaply, comfortably and without assistance. As their horizons expanded, their inclination to marry someone from the same parish plummeted: between 1887 and the early years of the 20th century, marriages of this nature dropped from 77% to 41% of the total.

Miodownik is not the first to make the link between bicycle tyres and longer-distance courtships. (He credits the geneticist Steve Jones for the insight, building on work by the 20th-century geographer P J Parry.) However, his decision to include the tale is a deft one, as it illustrates just how important gases and their associated technologies have been to human history.

Anaesthetics are another good example. Though medical professionals were scandalously slow to accept nitrous oxide, ether and chloroform, these beneficial gases eventually revolutionized surgery, saving millions of patients from the agony of their predecessors. Interestingly, criminals proved far less hide-bound than doctors, swiftly adopting chloroform as a way of subduing victims – though the ever-responsible Miodownik notes that this tactic seldom works as quickly as it does in the movies, and errors in dosage can be fatal.

Not every gas-related invention had such far-reaching effects. Inflatable mattresses never really caught on; as Miodownik observes, “beds were for sleeping and sex, and neither was enhanced by being unexpectedly launched into the air every time your partner made a move”.

The history of balloons is similarly chequered. Around the same time as Lebon was filling the Hôtel de Seignelay with aromas, an early balloonist, Sophie Blanchard, was appointed Napoleon’s “aeronaut of the official festivals”. Though Blanchard went on to hold a similar post under the restored King Louis XVIII, Miodownik notes that her favourite party trick – launching fireworks from a balloon filled with highly flammable and escape-prone hydrogen – eventually caught up with her. In 1819, when she was just 41, her firework-festooned craft crashed into the roof of a house and Blanchard fell to her death.

Miodownik brings a pleasingly childlike wonder to his tales of gaseous derring-do

The lessons of this calamity were not learned. More than a century later, 35 passengers and crew on the hydrogen-filled Hindenburg airship (which included a smoking area among its many luxuries) met a similarly fiery end.

Occasional tragedies aside, Miodownik brings a pleasingly childlike wonder to his tales of gaseous derring-do. He often opens chapters with stories from his actual childhood, and while a few of these (like the brass band) are merely cute, others are genuinely jaw-dropping. Some readers may recall that Miodownik began Stuff Matters by describing the time he got stabbed on the London Underground; while there is nothing quite so dramatic in It’s a Gas (and no spoilers in this review), he clearly had an eventful youth.

At times, it becomes almost a game to guess which gas these opening anecdotes will lead to. Though some readers may find the connections a little tenuous, Miodownik is a good enough writer to make his leaps of logic seem effortless even when they are noticeable. The result is a book as delightfully light as its subject matter, and a worthy conclusion to Miodownik’s informal phases-of-matter trilogy – although if he wants to write about plasmas next, I certainly won’t stop him.

  • 2024 Viking 304pp £22.00hb

The post A breezy tour of what gaseous materials do for us appeared first on Physics World.

]]>
Opinion and reviews Margaret Harris reviews It’s a Gas: the Magnificent and Elusive Elements that Expand Our World by Mark Miodownik https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Harris_gaslamp_feature.jpg newsletter
Free-space optical communications with FPGA-based instrumentation https://physicsworld.com/a/free-space-optical-communications-with-fpga-based-instrumentation/ Wed, 28 Aug 2024 09:03:13 +0000 https://physicsworld.com/?p=116418 Available to watch now. Liquid Instruments explores how to implement optical modulation and detection techniques with a reconfigurable, FPGA-based device

The post Free-space optical communications with FPGA-based instrumentation appeared first on Physics World.

]]>

As the world becomes more connected by global communications networks, the field of free-space optical communications has grown as an alternative to traditional data transmission at radio frequencies. While optical communications setups deliver scalability and security advantages along with a smaller infrastructure footprint, they also bring distinct challenges, including attenuation, interference, and beam divergence.

During this presentation, Liquid Instruments will give an overview of the FPGA-based Moku platform, a reconfigurable suite of test and measurement instruments that provide a flexible and efficient approach to optical communications development. You’ll learn how to use the Moku Lock-in Amplifier and Time & Frequency Analyzer for both coherent and direct detection of optical signals, as well as how to frequency-stabilize lasers with the Laser Lock Box.

You’ll also see how to deploy these instruments simultaneously in Multi-instrument Mode for maximum versatility, as well as a live demo of digital and analog modulation methods such as phase-shift keying (PSK) and pulse-position modulation (PPM).

A Q&A session will follow the demonstration.

Jason Ball is an engineer at Liquid Instruments, where he focuses on applications in quantum physics, particularly quantum optics, sensing, and computing. He holds a PhD in physics from the Okinawa Institute of Science and Technology and has a comprehensive background in both research and industry, with hands-on experience in quantum computing, spin resonance, microwave/RF experimental techniques, and low-temperature systems.

The post Free-space optical communications with FPGA-based instrumentation appeared first on Physics World.

]]>
Webinar Available to watch now. Liquid Instruments explores how to implement optical modulation and detection techniques with a reconfigurable, FPGA-based device https://physicsworld.com/wp-content/uploads/2024/08/2024-10-02-webinar-image.jpg
Management insights catalyse scientific success https://physicsworld.com/a/management-insights-catalyse-scientific-success/ Wed, 28 Aug 2024 09:00:38 +0000 https://physicsworld.com/?p=114112 Effective management training can equip scientists and engineers with powerful tools to boost the impact of their work, identify opportunities for innovation, and build high-performing teams

The post Management insights catalyse scientific success appeared first on Physics World.

]]>
Most scientific learning is focused on gaining knowledge, both to understand fundamental concepts and to master the intricacies of experimental tools and techniques. But even the most qualified scientists and engineers need other skills to build a successful career, whether they choose to continue in academia, pursue different pathways in the industrial sector, or exploit their technical prowess to create a new commercial enterprise.

“Scientists and engineers can really benefit from devoting just a small amount of time, in the broad scope of their overall professional development, to understanding and implementing some of the ideas from management science,” says Peter Hirst, who originally trained as a physicist at the University of St Andrews in the UK and now leads the executive education programme at MIT’s Sloan School of Management. “Whether you’re running a lab with just a few post-docs, or you have a leadership role in a large organization, a few simple tools can help to drive innovation and creativity while also making your team more effective and efficient.”

MIT Sloan Executive Education, part of the MIT Sloan School of Management – the business school of the Massachusetts Institute of Technology in Cambridge, US – offers more than 100 short courses and programmes covering all aspects of business innovation, personal skills development, and organizational management, many of which can be accessed in different online formats. Delivered by expert faculty who can share their experiences and insights from their own research work, they are designed to introduce frameworks and tools that enable participants to apply key concepts from management science to real-world situations.

Research groups are really a type of enterprise, with people working together to produce clearly defined outputs

Peter Hirst, MIT Sloan School of Management

One obvious example is the process of transforming a novel lab-based technology into a compelling commercial proposition. “Lots of scientists develop intellectual property during their research work, but may not be aware of the opportunities for commercialization,” says Hirst. “Even here at MIT, which is known for its culture of innovation, many researchers don’t realize that educational support is available to help them to understand what’s needed to transfer a new technology into a viable product, or even to become more aware of what might be possible.”

For academic researchers who want to remain focused on the science, Hirst believes that management tools originally developed in the business sector can offer valuable support to help build more effective teams and nurture the talents of diverse individuals. “Research groups are really a type of enterprise, with people working together to produce clearly defined outputs,” he says. “When I was working as a scientist, I really didn’t think about the human system that was doing that work, but that’s a really important dimension that can contribute to the success or failure of the whole enterprise.”

Modern science also depends on forging successful collaborations between research groups, or between academia and industry, while researchers are under mounting pressure to demonstrate the impact of their work – whether for scientific progress or commercial benefit. “Even if you’re working in academia, it’s really important to understand the contribution that your work is making to the whole value chain,” Hirst comments. “It provides context that helps to guide the work, but it’s also vital for sustainably securing the resources that are needed to pursue the science.”

The training offered by MIT Sloan takes different formats, including short courses and longer programmes that take a deeper dive into key topics. In each case, however, the faculty designs tasks, simulations and projects that allow participants to gain a deeper understanding of key concepts and how they might be exploited in their own workplace. “People believe by seeing, but they learn by doing,” says Hirst. “Our guiding philosophy is that the learning is always more effective if it can be done in the context of real work, real problems, and real challenges.”

Business team

Many of the courses are taught on the MIT campus, offering the opportunity for delegates to discuss key ideas, work together on training tasks, and network with people who have different backgrounds and experience. For those unable to attend in person, the same ethos extends to the two types of online training available through the executive education programme. One stream, developed in response to the Covid pandemic, offers live tutoring through the Zoom platform, while the other provides access to pre-recorded digital programmes that participants complete within a set time window. Some of these self-paced courses adopt a sprint format inspired by the concepts of agile product development, enabling participants to break down a complex challenge or opportunity into a series of smaller questions that can be tackled to reach a more effective solution.

“It’s not just sitting and watching, people really have the opportunity to work with the material and apply what they are learning,” explains Hirst. “In each case we have worked hard with the faculty to figure out how to achieve the same outcomes through a different type of experience, and it’s been great to see how compelling that can be.”

Evidence that the approach is working can be found in the retention rate for the self-paced courses, with more than 90% of participants completing all the modules and assignments. The Zoom-based programmes also remain popular amid the more general post-pandemic return to in-person training, providing greater flexibility for learners in different parts of the world. “We have tried to find the sweet spot between effectiveness and accessibility, and many people who can’t come to campus have told us they find these courses valuable and impactful,” says Hirst. “We have put the choice in the hands of the learners.”

Plenty of scientists and engineers have already taken the opportunity to develop their management capabilities through the courses offered by MIT Sloan, particularly those who have been thrown into leadership positions within a rapidly growing organization. “Perhaps because we’re at MIT, we are already seeing scientists and engineers who recognize the value of engaging with ideas and tools that some people might dismiss as corporate nonsense,” says Hirst. “Generally speaking, they have really great experiences and discover new approaches that they can use in their labs and businesses to improve their own work and that of their teams and organizations.”

For those who may not yet be ready to make the leap into developing their personal management style, Hirst advocates courses that analyse the dynamics of an organization – whether it’s a start-up company, a manufacturing business or a research collaboration. The central idea here is to apply concepts from systems engineering to organizations – examining how work gets done by a human system – in order to improve overall productivity and performance.

One case study that Hirst cites from the biomedical sector is the Broad Institute, a research organization with links to MIT and Harvard that has developed a platform for generating human genomic information. “Originally they were taking months to extract the genomic data from a sample, but they have reduced that to a week by implementing some fairly simple ideas to manage their operational processes,” he says. “It’s a great example of a scientific organization that has used systems-based thinking to transform their business.”

Others may benefit from courses that focus on technology development and product strategy, or an entrepreneurship development programme that immerses participants in the process of creating a successful business from a novel idea or technology. “That programme can be transformational for many people,” Hirst observes. “Most people who come into it with a background in science and engineering are focused on demonstrating the technical superiority of their solution, but one of the big lessons is the importance of understanding the needs of the customer and the value they would derive from implementing the technology.”

For those who are keen to develop their skills in one particular area, MIT Sloan also offers a series of Executive Certificates that enable learners to choose four complementary courses focusing on topics such as strategy and innovation, or technology and operations. Once all four courses in the track have been completed – which can be achieved in just a few weeks as well as over several months or years – participants are awarded an Executive Certificate to demonstrate the commitment they have made to their own personal development.

More information can be found in a digital brochure that provides details of all the courses available through MIT Sloan, while the website for the executive education programme provides an easy way to search for relevant courses and programmes. Hirst also recommends reading the feedback and reviews from previous participants, which appear alongside each course description on the website. “Prospective learners find it really useful to see how people in similar situations, or with similar needs, have described their experience.”

The post Management insights catalyse scientific success appeared first on Physics World.

]]>
Analysis Effective management training can equip scientists and engineers with powerful tools to boost the impact of their work, identify opportunities for innovation, and build high-performing teams https://physicsworld.com/wp-content/uploads/2024/04/Business-woman-sitting-front-laptop-1382663132-shutterstock-SFIO-CRACHO-web.jpg newsletter
Sunflowers ‘dance’ together to share sunlight https://physicsworld.com/a/sunflowers-dance-together-to-share-sunlight/ Tue, 27 Aug 2024 14:34:41 +0000 https://physicsworld.com/?p=116449 Zigzag patterns created by circular motion of growing stems

The post Sunflowers ‘dance’ together to share sunlight appeared first on Physics World.

]]>
Yasmine Meroz

Sunflowers in a field can co-ordinate the circular motions of their growing stems to minimize the amount of shade each plant experiences – a study done in the US and Israel has revealed. By doing a combination of experiments and simulations, a team led by Yasmine Meroz at Tel Aviv University discovered that seemingly random movements within groups of plants can lead to self-organizing patterns that optimize growing conditions.

Unlike animal motion, plant motion is usually related to growth – an irreversible process that defines a plant’s morphology. One movement frequently observed in plants is called circumnutation, which describes repeating, circular motions at the tips of growing plant stems.

“Charles Darwin and his son, Francis, already identified circumnutations in their book, The Power of Movement in Plants, in 1880,” Meroz explains. “While they documented these movements in a number of species, it was not clear whether these have a function. It is only in recent years that some research has started to identify possible roles of circumnutations, such as the ability of roots to circumvent obstacles.”

Understanding self-organization

Circumnutation was not the initial focus of the team’s study. Instead, they sought a deeper understanding of self-organization. This is a process whereby a system that starts out in a disorderly state can gain order through local interactions between its individual components.

In nature, self-organization has been widely studied in groups of animals, including fish, birds, and insects. The coordinated movements of many individuals help animals source food, evade predators, and conserve energy.

But in 2017 a fascinating example of self-organization in plants was discovered by a team of researchers in Argentina. While observing a field of sunflowers growing in dense rows, the team found that the plants’ stems self-organized into zigzag patterns as they grew. This arrangement minimized the shade the sunflowers cast on one another, ensuring each plant received the maximum possible amount of sunlight.

Meroz’s team has now studied this phenomenon in a controlled laboratory environment. “Unlike previous work, we tracked the movement of sunflower crowns during the whole experiment,” Meroz describes. “This is when we found that sunflowers move a lot via circumnutations, and we asked ourselves whether these movements might play a role in the self-organization process.”

To inform the analysis, Meroz’s team considered two key ingredients of self-organization. The first involved local interactions between individual plants – in this case, their ability to adapt their growth to avoid shading each other.

The second ingredient was the random, noisy motion that allows self-organized systems to explore a variety of possible states. This randomness enables plants to adapt to short-term environmental changes while maintaining stability in their growth patterns.

Tweaking noise

For their sunflowers, the researchers predicted that these random motions could be provided by the circumnutations first described by Charles and Francis Darwin. To investigate this idea, they ran simulations of groups of sunflowers based closely on the movements they had observed in the lab. In these simulations, they tweaked the amount of noise generated by circumnutation with a level of control that is not yet possible in real-world experiments.

“By comparing what we saw in the group experiments with our simulation data, we figured out the best balance of these factors,” explains Meroz’s colleague, Orit Peleg at the University of Colorado Boulder. “We also confirmed that real plants balance these factors in a way that leads to near-optimal minimization of shading.”

As expected, the results confirmed that the random movements of individual sunflowers play a vital role in minimizing the amount of shading experienced by each plant.

Peleg believes that their discovery has fascinating implications for our understanding of how plants behave. “It’s a bit surprising because we don’t usually think of random movement as having a purpose,” she says. “Yet, it’s vital for minimizing shading. This finding prompts us to view plants as active matter, with unique constraints imposed by their anchoring and growth-movement coupling.”

The research is described in Physical Review X.

The post Sunflowers ‘dance’ together to share sunlight appeared first on Physics World.

]]>
Research update Zigzag patterns created by circular motion of growing stems https://physicsworld.com/wp-content/uploads/2024/08/27-8-24-sunflower-1143037052-Shutterstock_Mykhailo-Baidala.jpg newsletter1
The most precise timekeeping device ever built https://physicsworld.com/a/the-most-precise-timekeeping-device-ever-built/ Tue, 27 Aug 2024 12:00:18 +0000 https://physicsworld.com/?p=116432 Colorado-based researchers have reduced the systematic uncertainty in their optical lattice clock to a record low. Ali Lezeik explains how they did it

The post The most precise timekeeping device ever built appeared first on Physics World.

]]>
If you want to make a clock, all you need is an oscillation – preferably one that is stable in frequency and precisely determined. Many systems will fit the bill, from the Earth’s rotation to pendulums and crystal oscillators. But if you want the world’s most precise clock, you’ll need to go to the US state of Colorado, where researchers from JILA and the University of Colorado, Boulder have measured the frequency of an optical lattice clock (OLC) with a record-low systematic uncertainty of 8.1 × 10⁻¹⁹ – equivalent to a fraction of a second throughout the age of the universe.
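To put that fractional uncertainty in perspective, here is a rough back-of-the-envelope conversion (an illustrative estimate, not a figure quoted by the team), taking the age of the universe to be about 13.8 billion years, or roughly 4.4 × 10¹⁷ s:

Δt ≈ (8.1 × 10⁻¹⁹) × (4.4 × 10¹⁷ s) ≈ 0.4 s

In other words, a clock with this level of systematic uncertainty would be off by less than half a second had it been running since the Big Bang.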

OLCs are atomic clocks that mark the passage of time using an electron that oscillates between two energy levels (the ground state ¹S₀ and the clock state ³P₀) in an atom such as strontium. The high frequency and narrow linewidth of this atomic transition make these clocks orders of magnitude more precise than the atomic clocks used to redefine the second in 1967, which were based on a microwave transition in caesium atoms.

The high precision of OLCs gives them the potential to unlock technologies that can be used to sense quantities such as distances, the Earth’s gravitational field and even atomic properties such as the fine structure constant at extremely small scales. To achieve this precision, however, they must be isolated from external effects that can cause them to “tick” irregularly. This is why the atoms in an OLC are trapped in a lattice formed by laser beams and confined within a vacuum chamber.

An OLC that is isolated entirely from its environment would oscillate at the constant, natural frequency of the atomic transition, with an uncertainty of 0 Hz/Hz. In other words, its frequency would not change. However, in the real world, temperature, magnetic and electric fields, and even the collisional motion of the atoms in the lattice all influence the clock’s oscillations. These parameters therefore need to be very well controlled for the clock to operate at maximum precision.

Controlling blackbody radiation

According to Alexander Aeppli, a PhD student at JILA who was involved in setting the new record, the most detrimental environmental effect on their OLC is blackbody radiation (BBR). All thermal objects – light bulbs, human bodies, the vacuum chamber the atoms are trapped in – emit such radiation, and the electric field of this radiation couples to the atom’s energy levels. This causes a systematic shift that translates to an uncertainty in the clock’s frequency.

To minimize the effects of BBR, Aeppli and colleagues enclosed their entire system, including the vacuum chamber and optics for creating the clock, within a temperature-controlled box equipped with numerous temperature sensors. By running temperature-stabilized liquid around different parts of their experimental apparatus, they stabilized the air temperature and controlled the vacuum system temperature.

This didn’t completely solve the problem, though. The BBR shift is the sum of a static component that scales with the fourth power of temperature and a dynamic component that scales with higher powers. Even after limiting the lab’s temperature fluctuations to a few millikelvin per day, the team still needed to carry out a systematic evaluation of the shift due to the dynamic component.
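In rough schematic form (the notation below is illustrative and the coefficients are not values from the paper), the BBR-induced frequency shift can be written as a power series in temperature, with the static term dominating:

Δν_BBR(T) ≈ Δν_stat (T/T₀)⁴ + Δν_dyn (T/T₀)⁶ + … ,   with T₀ = 300 K

The steep temperature dependence means that even millikelvin-level fluctuations around room temperature feed through into the clock frequency, which is why the dynamic contribution required its own systematic evaluation.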

For this, the JILA-Boulder researchers turned to a 2013 study in which physicists in the US and Russia found a correlation between the uncertainty of the BBR shift and the lifetime of an electron occupying a higher-energy state (³D₁) in strontium atoms. By measuring the lifetime of this ³D₁ state, the team was able to calculate an uncertainty of 7.3 × 10⁻¹⁹ in the BBR shift.

To fully understand the atoms’ response to BBR, Aeppli explains that they also needed to measure the strength of transitions from the clock states. “The dominant transition that is perturbed by BBR radiation is at a relatively long wavelength,” he says. “This wavelength is longer than the spacing between the atoms, meaning that atoms can behave collectively, modifying the physics of this interaction. It took us quite some time to characterize this effect and involved almost a year of measurements to reduce its uncertainty.”

Photo of the vacuum chamber bathed in purple-blue light

Other environmental effects

BBR wasn’t the only environmental effect that needed systematic study. The in-vacuum mirrors used to create the lattice tend to accumulate electric charges, and the resulting stray electric fields produce a systematic DC Stark shift that changes the clock transition frequency. By shielding the mirrors with a copper structure, the researchers reduced these DC Stark shifts to below the 1 × 10⁻¹⁹ uncertainty level.

OLCs are also sensitive to magnetic fields. This is due to the Zeeman effect, which shifts the energy levels of an atom by different amounts in the presence of such fields. The researchers chose the least magnetically sensitive sub-states to operate their clock, but that still leaves a weaker second-order Zeeman shift for them to calibrate. In the latest work, they reached an uncertainty in this second-order Zeeman shift of 0.1 × 10⁻¹⁸, which is a factor of two smaller than previous measurements.

Even the lattice beams themselves cause an unwanted shift in the atoms’ transition frequency. This is known as the light or AC Stark shift, and it is due to the power of the laser beam. The researchers minimized this shift by ramping down the beam power just before starting the clock, but even at such low trapping powers, atoms in the different lattice sites can still interact, and atoms at the same site can collide. These events lead to a tunnelling and a density shift, respectively. While both are rather weak, the team nevertheless investigated their effect on the clock’s uncertainty and constrained them to below the 10⁻¹⁹ level.

How low can you go?

In early 2013, JILA scientists reported a then-record-low systematic uncertainty in their strontium OLC of 6.4 × 10⁻¹⁸. A year later, they managed to reduce this uncertainty by a factor of three, to 2.1 × 10⁻¹⁸. Ten years on, however, progress seems to have slowed: the latest uncertainty record improves on this value by a mere factor of two. Is there an intrinsic lower bound?

“The largest source of systematic uncertainty continues to be the BBR shift since it goes as temperature to the fourth power,” Aeppli says. “Even a small reduction in temperature can significantly reduce the shift uncertainty.”

To go below the 1 × 10⁻¹⁹ level, he explains that it would be advantageous to cool the system to cryogenic temperatures. Indeed, many OLC research groups are using this approach for their next-generation systems. Ultimately, though, while progress on optical clocks might not be quite as fast as it was 20 years ago, Aeppli says there is no obvious “floor”, no fundamental limit to the systematic uncertainty of optical lattice clocks. “There are plenty of clever people working on pushing uncertainty as low as possible,” he says.

The JILA-Boulder team reports its work in Physical Review Letters.

The post The most precise timekeeping device ever built appeared first on Physics World.

]]>
Analysis Colorado-based researchers have reduced the systematic uncertainty in their optical lattice clock to a record low. Ali Lezeik explains how they did it https://physicsworld.com/wp-content/uploads/2024/08/27-08-2024-Precise-optical-clock_Main-image.jpg newsletter
Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist https://physicsworld.com/a/abdus-salam-honouring-the-first-muslim-nobel-prize-winning-scientist/ Tue, 27 Aug 2024 10:00:13 +0000 https://physicsworld.com/?p=116317 Claudia de Rham and Ian Walmsley pay tribute to the contributions of the great theorist Abdus Salam

The post Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist appeared first on Physics World.

]]>

A child prodigy born in a humble village in British India on 29 January 1926, Abdus Salam became one of the world’s greatest theorists who tackled some of the most fundamental questions in physics. He shared the 1979 Nobel Prize for Physics with Sheldon Glashow and Steven Weinberg for unifying the weak and electromagnetic interactions. In doing so, Salam became the first Muslim scholar to win a science-related Nobel prize – and is so far the only Pakistani to achieve that feat.

After moving to the UK in 1946 just before the partition of India, Salam gained a double-first in mathematics and physics from the University of Cambridge and later did a PhD there in quantum electrodynamics. Following a couple of years back home in Pakistan, Salam returned to Cambridge, before spending the bulk of his career at Imperial College, London. He died aged 70 on 21 November 1996, his later life cruelly ravaged by a neurodegenerative disease.

Yet to many people, Salam’s life and contributions to science are not so well known despite his founding of the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, exactly 60 years ago. Upon joining Imperial, he also became the first academic from Asia to hold a full professorship at a UK university. Keen to put Salam in the spotlight ahead of the centenary of his birth are Claudia de Rham, a theoretical physicist at Imperial, and quantum-optics researcher Ian Walmsley, who is currently provost of the college.

De Rham and Walmsley recently appeared on the Physics World Weekly podcast. An edited version of our conversation appears below.

How would you summarize Abdus Salam’s contributions to science?

CdR: Salam was one of the founders of modern physics. He pioneered the study of symmetries and unification, which helped contribute to the formulation of the Standard Model of particle physics. In 1967 he incorporated the Higgs mechanism – co-discovered by his Imperial colleague Tom Kibble – into electroweak theory, which unifies the electromagnetic and weak forces. It changed the way we see the world by underlining the importance of symmetry and by showing how some forces – which may appear different – are actually linked.

This breakthrough led him to win the 1979 Nobel Prize for Physics with Steven Weinberg and Sheldon Glashow, making him the first – in fact, so far, the only – Nobel laureate from Pakistan. Salam was also the first person from the Islamic world to win a Nobel prize in science and the most recent person from Imperial College to do so, which makes us very proud of him.

How did his connection to Imperial College come about?

CdR: After studying at Cambridge, he went back to Pakistan but realized that the scientific opportunities there were limited. So he returned to Cambridge for a while, before being appointed a professor of applied mathematics at Imperial in 1957. That made him the first Asian academic to hold a professorship at any UK university. He then moved to the physics department at Imperial and stayed at the college for almost 40 years – for the rest of his life.

Large photo of Abdus Salam at the entrance to the main library at Imperial College

For Salam, Imperial was his scientific home. He founded the theoretical physics group here, doing the work on quantum electrodynamics and quantum field theory that led to his Nobel prize. But he also did foundational work on renormalization, grand unification, supersymmetry and so on, making Imperial one of the world’s leading centres for fundamental physics research. Many of his students, like Michael Duff and Ray Rivers, also had an incredible impact in physics, paving the way for how we do quantum field theory today.

What was Salam like as a person?

IW: I had the privilege of meeting Salam when I was an undergraduate here in Imperial’s physics department in 1977. In the initial gathering of new students, he gave a short talk on his work and that of the theoretical physics group and the wider department. I didn’t understand much of what he said, but Salam’s presence was really important for motivating young people to think about – and take on – the hard problems and to get a sense of the kind of problems he was tackling. His enthusiasm was really fantastic for a young student like myself.

When he won the Nobel prize in 1979, I was by then a second-year student and there were a lot of big celebrations and parties in the department. There were a number of other luminaries at Imperial like Kibble, who’d made lots of important contributions. In fact, I think Salam’s group was probably the leading theoretical particle group in the UK and among the best in the world. He set it up and it was fantastic for the department to have someone of his calibre: it was a real boost.

How would you describe Salam’s approach to science?

CdR: Salam thought about science on many different levels. There wasn’t just the unification within science itself, but he saw science as a unifying force. As he showed when he set up the theoretical physics group at Imperial and, later, the ICTP in Trieste, he saw science as something that could bring people from all over the world together.

We’re used to that kind of approach today. But at the time, driving collaboration across the world was revolutionary. Salam wasn’t just an incredible scientist, but an incredible human being. He was eager to champion diversity – recognizing that it’s the best thing not just for science but for humanity too. Salam was ahead of his time in realizing the unifying power of science and being able to foster it throughout the world.

What impact has the ICTP had over the last 60 years?

CdR: The goal of the ICTP has been to combat the isolation and lack of resources that people in some parts of the world, especially the global south, were facing. It’s had a huge impact over the last 60 years and has now grown into a network of five institutions spread over four continents, all of which are devoted to advancing international collaboration and scientific expertise to the non-western world. It hosts around 6000 scientists every year, about 50% of whom are from the global south.

How well known do you think Salam is around the world?

IW: Is he well known in the physics community globally? Absolutely. I also think he is well regarded and known across the Muslim community. But is he well known to the general public as one of the UK’s greatest adopted scientists? Probably not. And I think that’s a shame because his skills as a pedagogue and his concern for people as a whole – and for science as a motivating force – are really important messages and things he really championed.

What activities has Imperial got planned for the centenary of Salam’s birth?

CdR: We want to use the centenary not only to promote and celebrate excellence in fundamental science but also to engage with people from the global south. In fact, we already had a 98th birthday celebration on campus earlier this year, where we renamed the Imperial Central Library, which is now called the Abdus Salam Library. Then there were public talks by various physicists, including the ICTP director Atish Dabholkar and Tasneem Husain, who is Pakistan’s first female string theorist.

5 people stood in front of the Abdus Salam Library

We also held an exhibition here on campus about many aspects of Salam’s life for school children all around London to come and visit. It’s now moved to a permanent virtual home online. And we held an essay contest for school children from Pakistan to see how Salam has inspired them, selecting a few to go online. We also had a special documentary made about Salam, called “A unifying force”.

What impact do you think those events have had?

IW: It was really great to name a building after him, especially as it’s the library where students congregate all the time. There’s a giant display on the wall outside that describes him and has a great picture of Salam. You can see it even without entering the library, which is great because you often have families taking their children and showing them the picture and reading the narrative. It’ll spread his fame a bit more, which is really important and really lovely.

CdR: One thing that was clear in the build-up to the event in January was just how much his life story resonates with people at absolutely every level. No matter your background or whether you’re a scientist or not, I think Salam’s life awakens the scientist in all of us – he connects with people. But as the centenary of his birth draws closer, we want to build on those initiatives. Fundamental, curiosity-driven research is a way to make connections with the global south so we’re very much looking forward to an even bigger celebration for his 100th birthday in 2026.

  • A full version of this interview can be heard on the 8 August 2024 episode of the Physics World Weekly podcast.

Abdus Salam: driven to success

Abdus Salam

Abdus Salam, like all geniuses, was not a straightforward character. That much is made clear in the 2018 documentary movie Salam: the First ****** Nobel Laureate directed by Anand Kamalakar and produced by Zakir Thaver and Omar Vandal. Containing interviews with Salam’s friends, family members and former colleagues, Salam is variously described as being “charismatic”, “humane”, “difficult”, “impatient”, “sensitive”, “gorgeous”, “bright”, “dismissive” and “charming”.

The film also wonders why, despite being the first Nobel-prize winner from Pakistan, Salam is relatively poorly known and unrecognized in his homeland. The movie argues that this was down to his religious beliefs. Most Pakistanis are Sunnis but Salam was an Ahmadi, part of a minority Islamic movement. Opposition in Pakistan to the Ahmadis even led to its parliament declaring them non-Muslims in 1974, forbidding them from professing their creed in public or even worshipping in their own mosques.

Those edicts, which led to Salam’s religious beliefs being re-awakened, also saw him effectively being ignored by Pakistan (hence the title of the movie). However, Salam was throughout his life keen to support scientists from less wealthy nations, such as his own, which is why he founded the International Centre for Theoretical Physics (ICTP) in Trieste in 1964.

Celebrating its 60th anniversary this year, the ICTP now has 45 permanent research staff and brings together more than 6000 leading and early-career scientists from over 150 nations to attend workshops, conferences and scientific meetings. It also has international outposts in Brazil, China, Mexico and Rwanda, as well as eight “affiliated centres” – institutes or university departments with which the ICTP has formal collaborations.

Matin Durrani

The post Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist appeared first on Physics World.

]]>
Interview Claudia de Rham and Ian Walmsley pay tribute to the contributions of the great theorist Abdus Salam https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Durrani-Abdus-Salam-featured.jpg
3D printing creates strong, stretchy hydrogels that stick to tissue https://physicsworld.com/a/3d-printing-creates-strong-stretchy-hydrogels-that-stick-to-tissue/ Mon, 26 Aug 2024 10:00:41 +0000 https://physicsworld.com/?p=116440 A new 3D printing method fabricates entangled hydrogels for medical applications

The post 3D printing creates strong, stretchy hydrogels that stick to tissue appeared first on Physics World.

]]>
A new method for 3D printing, described in Science, makes inroads into hydrogel-based adhesives for use in medicine.

3D printers, which deposit individual layers of a variety of materials, enable researchers to create complex shapes and structures. Medical applications often require strong and stretchable biomaterials that also stick to moving tissues, such as the beating human heart or tough cartilage covering the surfaces of bones at a joint.

Many researchers are pursuing 3D printed tissues, organs and implants created using biomaterials called hydrogels, which are made from networks of crosslinked polymer chains. While significant progress has been made in the field of fabricated hydrogels, traditional 3D printed hydrogels may break when stretched or crack under pressure. Others are too stiff to sculpt around deformable tissues.

Researchers at the University of Colorado Boulder, in collaboration with the University of Pennsylvania and the National Institutes of Standards and Technology (NIST), realized that they could incorporate intertwined chains of molecules to make 3D printed hydrogels stronger and more elastic – and possibly even allow them to stick to wet tissue. The method, known as CLEAR, sets an object’s shape using spatial light illumination (photopolymerization) while a complementary redox reaction (dark polymerization) gradually yields a high concentration of entangled polymer chains.

To their knowledge, the researchers say, this is the first time that light and dark polymerization have been combined simultaneously to enhance the properties of biomaterials fabricated using digital light processing methods. No special equipment is needed – CLEAR relies on conventional fabrication methods, with some tweaks in processing.

“This was developed by a graduate student in my group, Abhishek Dhand, and research associate Matt Davidson, who were looking at the literature on entangled polymer networks. In most of these cases, the entangled networks that form hydrogels with high levels of certain material properties…are made with very slow reactions,” explains Jason Burdick from CU-Boulder’s BioFrontiers Institute. “This is not compatible with [digital light processing], where each layer is reacted through short periods of light. The combination of the traditional [digital light processing] with light and the slow redox dark polymerization overcomes this.”

Experiments confirmed that hydrogels produced with CLEAR were fourfold to sevenfold tougher than hydrogels produced with conventional digital light processing methods for 3D printing. The CLEAR-fabricated hydrogels also conformed and stuck to animal tissues and organs.

“We illustrated in the paper the application of hydrogels printed with CLEAR as tissue adhesives, as others had previously defined material toughness as an important material property in adhesives. Through CLEAR, we can then process these adhesives into any structures, such as porous lattices or introduce spatial adhesion that may be of interest for biomedical applications,” Burdick says. “What is also interesting is that CLEAR can be used with other types of materials, such as elastomers, and we believe that it can be used across broad manufacturing methods.”

CLEAR could also have environmentally friendly implications for manufacturing and research, the researchers suggest, by eliminating the need for additional light or heat energy to harden parts. The researchers have filed for a provisional patent and will be conducting additional studies to better understand how tissues react to the printed hydrogels.

“Our work so far was mainly proof-of-concept of the method and showing a range of applications,” says Burdick. “The next step is to identify those applications where CLEAR can make an impact and then further explore those topics, whether this is specific to biomedicine or more broadly beyond this.”

The post 3D printing creates strong, stretchy hydrogels that stick to tissue appeared first on Physics World.

]]>
Research update A new 3D printing method fabricates entangled hydrogels for medical applications https://physicsworld.com/wp-content/uploads/2024/08/26-08-24-3D-Printer-Matt-Davidson.jpg newsletter1
Drowsiness-detecting earbuds could help drivers stay safe at the wheel https://physicsworld.com/a/drowsiness-detecting-earbuds-could-help-drivers-stay-safe-at-the-wheel/ Thu, 22 Aug 2024 15:00:41 +0000 https://physicsworld.com/?p=116407 In-ear electroencephalography could protect drivers, pilots and machine operators from the dangers of fatigue

The post Drowsiness-detecting earbuds could help drivers stay safe at the wheel appeared first on Physics World.

]]>
Drowsiness plays a major role in traffic crashes, injuries and deaths, and is considered the most critical hazard in construction and mining. A wearable device that can monitor fatigue could help protect drivers, pilots and machine operators from the life-threatening dangers of fatigue.

With this aim, researchers at UC Berkeley are developing techniques to detect signs of drowsiness in the brain, using a pair of prototype earbuds to perform electroencephalography (EEG) and other physiological measurements. Describing the device in Nature Communications, the team reports successful tests on volunteers.

“Wireless earbuds are something we already wear all the time,” says senior author Rikky Muller in a press statement. “That’s what makes ear EEG such a compelling approach to wearables. It doesn’t require anything extra. I was inspired when I bought my first pair of Apple’s AirPods in 2017. I immediately thought, ‘What an amazing platform for neural recording’.”

Improved design

EEG uses multiple electrodes placed on the scalp to non-invasively monitor the brain’s electrical activity – such as the alpha waves that increase when a person is relaxed or sleepy. Researchers have also demonstrated that multi-channel EEG signals can be recorded from inside the ear canal, using in-ear sensors and electrodes.

Existing in-ear devices, however, mostly use wet electrodes (which necessitate skin preparation and hydrogel on the electrodes), contain bulky electronics and require customized earpieces for each user. Instead, Muller and colleagues aimed to create an in-ear EEG with long-lifespan dry electrodes, wireless electronics and a generic earpiece design.

In-ear EEG device

The researchers developed a fabrication process based on 3D printing of a polymer earpiece body and electrodes. They then plated the electrodes with copper, nickel and gold, making them stable over months of use. To ensure comfort for all users, they designed small, medium and large earpieces (with slightly different electrode sizes to maximize electrode surface area).

The final medium-sized earpiece contains four 60 mm² in-ear electrodes, which apply outward pressure to lower the electrode–skin impedance and improve mechanical stability, plus two 3 cm² out-ear electrodes. Signals from the earpiece are read out and transmitted to a base station by a low-power wireless neural recording platform (the WANDmini) affixed to a headband.

Drowsiness study

To assess the earbuds’ performance, the team recorded 35 h of electrophysiological data from nine volunteers. Subjects wore two earpieces and did not prepare their skin beforehand or apply hydrogel to the electrodes. As well as EEG, the device measured signals such as heart beats (using electrocardiography) and eye movements (via electrooculography), collectively known as ExG.

To induce drowsiness, subjects played a repetitive reaction time game for 40–50 min. During this task, they rated their drowsiness every 5 min on the Karolinska Sleepiness Scale (KSS). The measured ExG data, reaction times and KSS ratings were used to generate labels for classifier models. Data were labelled as “drowsy” if the user reported a KSS score of 5 or higher and their reaction time had more than doubled since the first 5 min.

To create the alert/drowsy classifier, the researchers extracted relevant temporal and spectral features in standard EEG frequency bands (delta, theta, alpha, beta and gamma). They used these data to train three low-complexity machine learning models: logistic regression, support vector machines (SVM) and random forest. They note that spectral features associated with eye movement, relaxation and drowsiness were the most important for model training.
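As a rough illustration of the kind of pipeline described above – this is not the authors' code, and the sampling rate, band edges, array shapes and function names are all assumptions for the sake of the sketch – a band-power-plus-SVM drowsiness classifier might look like this in Python:

import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Standard EEG frequency bands in Hz (delta, theta, alpha, beta, gamma)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(epochs, fs):
    # epochs: array of shape (n_epochs, n_samples); returns (n_epochs, n_bands)
    freqs, psd = welch(epochs, fs=fs, nperseg=int(2 * fs), axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.log(np.column_stack(feats))  # log band power per epoch

def train_drowsiness_classifier(epochs, labels, fs=250):
    # labels: 0 = alert, 1 = drowsy, derived from KSS scores and reaction times
    X = band_power_features(epochs, fs)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, labels, cv=5)  # rough accuracy estimate
    clf.fit(X, labels)
    return clf, scores.mean()

Swapping the SVC for a logistic regression model in the same pipeline would give the lighter-weight, memory-efficient variant mentioned below.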

All three classifier models achieved high accuracy, with comparable performance to state-of-the-art wet electrode systems. The best-performing model (utilizing an SVM classifier) achieved an average accuracy of 93.2% when evaluating users it had seen before and 93.3% with never-before-seen users. The logistic regression model, meanwhile, is more computationally efficient and requires significantly less memory.

The researchers conclude that the results show promise for developing next-generation wearables that can monitor brain activity in work environments and everyday scenarios. Next, they will integrate the classifiers on-chip to enable real-time brain-state classification. They also intend to miniaturize the hardware to eliminate the need for the WANDmini.

“We plan to incorporate all of the electronics into the earbud itself,” Muller tells Physics World. “We are working on earpiece integration, and new applications, including the use of earbuds during sleep.”

The post Drowsiness-detecting earbuds could help drivers stay safe at the wheel appeared first on Physics World.

]]>
Research update In-ear electroencephalography could protect drivers, pilots and machine operators from the dangers of fatigue https://physicsworld.com/wp-content/uploads/2024/08/22-08-24-ear-EEG-fig1.jpg
Physics for a better future: mammoth book looks at science and society https://physicsworld.com/a/physics-for-a-better-future-mammoth-book-looks-at-science-and-society/ Thu, 22 Aug 2024 12:24:13 +0000 https://physicsworld.com/?p=116412 Our podcast guest is Christophe Rossel, co-author of EPS Grand Challenges

The post Physics for a better future: mammoth book looks at science and society appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast explores how physics can be used as a force for good – helping society address important challenges such as climate change, sustainable development, and improving health.

Our guest is the Swiss physicist Christophe Rossel, who is a former president of the European Physical Society (EPS) and an emeritus scientist at IBM Research in Zurich.

Rossel is a co-editor and co-author of the book EPS Grand Challenges, which looks at how science and physics can help drive positive change in society and raise standards of living worldwide as we approach the middle of the century. The huge tome weighs in at 829 pages, was written by 115 physicists and honed by 13 co-editors.

Rossel talks to Physics World’s Matin Durrani about the intersection of science and society and what physicists can do to make the world a better place.

The post Physics for a better future: mammoth book looks at science and society appeared first on Physics World.

]]>
Podcasts Our podcast guest is Christophe Rossel, co-author of EPS Grand Challenges https://physicsworld.com/wp-content/uploads/2024/08/21-8-24-Christophe-Rossel-list.jpg newsletter
Quantum sensor detects magnetic and electric fields from a single atom https://physicsworld.com/a/quantum-sensor-detects-magnetic-and-electric-fields-from-a-single-atom/ Thu, 22 Aug 2024 09:30:12 +0000 https://physicsworld.com/?p=116396 New device is like an MRI machine for quantum materials, say physicists

The post Quantum sensor detects magnetic and electric fields from a single atom appeared first on Physics World.

]]>
Researchers in Germany and Korea have fabricated a quantum sensor that can detect the electric and magnetic fields created by individual atoms – something that scientists have long dreamed of doing. The device consists of an organic semiconducting molecule attached to the metallic tip of a scanning tunnelling microscope, and its developers say that it could have applications in biology as well as physics. Some possibilities include sensing the presence of spin-labelled biomolecules and detecting the magnetic states of complex molecules on a surface.

Today’s most sensitive magnetic field detectors exploit quantum effects to map the presence of extremely weak fields. Among the most promising of these new-generation quantum sensors are nitrogen vacancy (NV) centres in diamond. These structures can be fabricated inside a nanopillar on the tip of an atomic force microscope (AFM), and their spatial resolution is an impressively small 10–100 nm. However, this is still a factor of 100 to 1000 larger than the diameter of an atom.

A spatial resolution of 0.1 nm

The new sensor developed by Andreas Heinrich and colleagues at the Forschungszentrum Jülich and Korea’s IBS Center for Quantum Nanoscience (QNS) can also be placed on a microscope tip – in this case, a scanning tunnelling microscope (STM). The difference is that the spatial resolution of this atomic-scale device is just 0.1 nm, making it 100 to 1000 times more sensitive than devices based on NV centres.

The team made the sensor by attaching a molecule with an unpaired electron – a molecular spin – to the apex of an STM’s metallic tip. “Typically, the lifetime of a spin in direct contact with a metal is very short and cannot be controlled,” explains team member Taner Esat, who was previously at QNS and is now at Jülich. “In our approach, we brought a planar molecule known as 3,4,9,10-perylenetetracarboxylic-dianhydride (or PTCDA for short) into a special configuration on the tip using precise atomic-scale manipulation, thus decoupling the molecular spin.”

Determining the magnetic field of a single atom

In this configuration, Esat explains that the molecule is a spin ½ system, and in the presence of a magnetic field, it behaves like a two-level quantum system. This behaviour is due to the Zeeman effect, which splits the molecule’s ground state into spin-up and spin-down states with an energy difference that depends on the strength of the magnetic field. Using electron spin resonance in the STM, the researchers were able to detect this energy difference with a resolution of around ~100 neV. “This allowed us to determine the magnetic field of a single atom (which finds itself only a few atomic distances away from the sensor) that caused the change in spin states,” Esat tells Physics World.
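For a sense of scale (an illustrative estimate rather than a number from the paper), a spin-½ system with a g-factor close to the free-electron value of about 2 has a Zeeman splitting of

ΔE = g μ_B B,   with μ_B ≈ 57.9 µeV/T,

so an energy resolution of ~100 neV would translate into a magnetic field resolution of roughly ΔB ≈ (100 neV)/(2 × 57.9 µeV/T) ≈ 0.9 mT at the position of the sensor molecule.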

The team demonstrated the feasibility of its technique by measuring the magnetic and electric dipole fields from a single iron atom and a silver dimer on a gold substrate with greater than 0.1 nm resolution.

The next step, says Esat, is to increase the new device’s magnetic field sensitivity by implementing more advanced sensing protocols based on pulsed electron spin resonance schemes and by finding molecules with longer spin decoherence times. “We hope to increase the sensitivity by a factor of about 1000, which would allow us to detect nuclear spins at the atomic scale,” he says.

A holy grail for quantum sensing

The new atomic-scale quantum magnetic field sensor should also make it possible to resolve spins in certain emerging two-dimensional quantum materials. These materials are predicted to have many complex magnetic orders, but they cannot be measured with existing instruments, Heinrich and his QNS colleague Yujeong Bae note. Another possibility would be to use the sensor to study so-called encapsulated spin systems such as endohedral-fullerenes, which comprise a magnetic core surrounded by an inert carbon cage.

“The holy grail of quantum sensing is to detect individual nuclear spins in complex molecules on surfaces,” Heinrich concludes. “Being able to do so would make for a magnetic resonance imaging (MRI) technique with atomic-scale spatial resolution.”

The researchers detail their sensor in Nature Nanotechnology. They have also prepared a video to illustrate the working principle of the device and how they fabricated it.

The post Quantum sensor detects magnetic and electric fields from a single atom appeared first on Physics World.

]]>
Research update New device is like an MRI machine for quantum materials, say physicists https://physicsworld.com/wp-content/uploads/2024/08/Low-Res_2024_06_19_Esat_005.jpg
Software expertise powers up quantum computing https://physicsworld.com/a/software-expertise-powers-up-quantum-computing/ Wed, 21 Aug 2024 14:37:58 +0000 https://physicsworld.com/?p=116387 Combining research excellence with a direct connection to the National Quantum Computing Centre, the Quantum Software Lab is focused on delivering effective solutions to real-world problems

The post Software expertise powers up quantum computing appeared first on Physics World.

]]>
Making a success of any new venture can be a major challenge, but it always helps to have powerful partnerships. In the case of the Quantum Software Lab (QSL), established in April 2023 as part of the University of Edinburgh’s School of Informatics, its position within one of the world’s leading research centres for computer science offers direct access to expertise spanning everything from artificial intelligence through to high-performance computing. But the QSL also has a strategic alliance with the UK’s National Quantum Computing Centre (NQCC), providing a gateway to emerging hardware platforms and opening up new opportunities to work with end users on industry-relevant problems.

Bringing those worlds together is Elham Kashefi, who is both the director of the QSL and Chief Scientist of the NQCC. In her dual role, Kashefi is able to connect and engage with the global research community, while also exploiting her insights and ideas to shape the technology programme at the national lab. “Elham Kashefi is the most vibrant and exuberant character, and she has all the right attitudes to bring diverse people together to tackle the big challenges we are facing in quantum computing,” says Sir Peter Knight, the architect behind the UK’s National Quantum Technologies Programme. “Elham has the ability to apply insights from her background in computer science in a way that helps physicists like me to make the hardware work more effectively.”

The QSL’s connection to the NQCC imbues its activities with a strong focus on innovation, centring its development programme around the objective of demonstrating quantum utility – in other words, delivering reliable and accurate quantum solutions that offer a genuine improvement over classical computing. “Our partnership with the QSL is all about driving user adoption,” says NQCC director Michael Cuthbert. “The NQCC can provide a front door to the end-user community and raise awareness of the potential of quantum computing, while our colleagues in Edinburgh bring the academic expertise and rigour to translate the mathematics of quantum theory into use cases and applications that benefit all parts of our society and the economy.”

Since its launch, the QSL has become the largest research group for quantum software and algorithm development in the UK, with more than 50 researchers and PhD students. This core team is also supported by a number of affiliate members from across the University of Edinburgh, notably the EPCC supercomputing centre, as well as from the Sorbonne University in France, where Kashefi also has a research role.

Within this extended network Kashefi and her faculty team have been working to establish a research culture that is based on collective success rather than individual endeavour. “There is so much discovery and innovation happening right now, and we set ourselves the goal of bringing disparate pieces together to establish a coherent programme,” she explains. “What has made me very happy is that we are now focusing on what we can achieve by combining our knowledge and expertise, rather than what we can do on our own.”

Within the Lab’s core programme, the Quantum Advantage Pathfinder, the primary goal is to work with end users in industry and the public sector to identify key computational roadblocks and translate them into research problems that can be addressed with quantum techniques. Once an algorithm has been devised and implemented, a crucial step of the process is to benchmark the solution to assess what sort of benefit it might offer over a conventional supercomputer.

“We are all academic researchers, but within the QSL we are nurturing a start-up culture where we want to understand and address the needs of the ecosystem,” says Kashefi. “For each project we are following the full pathway from the initial pain point identified by our industry partners through to a commercial application where we can show that quantum computing has delivered a genuine advantage.”

In just one example, application engineers from the NQCC and software developers from the QSL have been working with the high-street bank HSBC to explore the benefits of quantum computing for tackling the growing problem of financial fraud. HSBC already exploits classical machine learning to detect anomalous transactions that could indicate criminal behaviour, and the project team – which also includes hardware provider Rigetti – has been investigating whether quantum machine learning could deliver an advantage that would reduce risk and enable the bank to improve its anti-fraud services.

Quantum Software Lab

Alongside these problem-focused projects, the discovery-led nature of the academic environment also provides the QSL with the freedom to reverse the pipeline: to develop optimal approaches for a class of quantum algorithms or protocols that could be relevant for many different application areas. One project, for example, is investigating how hybrid quantum/classical algorithms could be exploited to solve big data problems using a small-scale quantum computer, while another is developing a unified benchmarking approach that could be applied across different hardware architectures.

For the NQCC, meanwhile, Cuthbert believes that the insights gained from this more universal approach will be crucial for planning future activities at the national lab. “Theoretical advances that are focused on the practical utilization of quantum computing will inform our technology programme and help us to build an effective quantum ecosystem,” he says. “It is vitally important that we understand how different elements of theory are developing, and what new techniques and discoveries are emerging in classical computing.”

Indeed, the importance of theory and informatics for accelerating the development of useful quantum computing is underlined by the QSL’s leading role in two of the new quantum hubs that were launched by the UK government at the end of July. Within the hub focused on quantum computing, which is based at the University of Oxford, QSL researchers will take the lead on developing software tools – such as quantum error correction, distributed quantum computing and hybrid quantum/classical algorithms – that will help to extract more power from emerging quantum hardware. The QSL team will also investigate novel protocols for secure multi-party computing through its partnership with the Integrated Quantum Networks hub, which is being led by Heriot-Watt University.

Sir Peter Knight

At the same time, the QSL’s direct link to the NQCC will help to ensure that these software tools advance in tandem with the rapidly evolving capabilities of the quantum processors. “You need a marriage between the hardware and software to drive progress and work out where the roadblocks are,” comments Sir Peter. “Continuous feedback between algorithm development, the design of the quantum computing stack, and the physical constraints of the hardware creates a virtuous circle that produces better results within a shorter timeframe.”

An integral part of that accelerated co-development is the NQCC’s development of hardware platforms based on superconducting qubits, trapped ions and neutral atoms, while the national lab is also set to host seven quantum testbeds that are now being installed by commercial hardware developers. Once the testbeds are up and running in March 2025, there will be a two-year evaluation phase in which QSL researchers and the UK’s wider quantum community will be able to work with the NQCC and the hardware companies to understand the unique capabilities of each technology platform, and to investigate which qubit modalities are most suited to solving particular types of problems.

One key focus for this collaborative work will be developing and testing novel schemes for error correction, since it is becoming clear that quantum machines with even modest numbers of qubits can address complex problems if the noise levels can be reduced. Researchers at the QSL are now working to translate recent theoretical advances into software that can run on real computer architectures, with the testbeds providing a unique opportunity to investigate which error-correction codes can deliver the optimal results for each qubit modality.

Supporting these future endeavours will be a new Centre for Doctoral Training (CDT) for Quantum Informatics, led by the University of Edinburgh in collaboration with the University of Oxford, University College London, the University of Strathclyde and Heriot-Watt University.

“As part of their training, each cohort will spend two weeks at the NQCC, enabling the students to learn key technical skills as well as gaining an understanding of wider issues, such as the importance of responsible and ethical quantum computing,” says CDT director Chris Heunen, a senior member of the QSL team. “During their placement the students will also work with the NQCC’s applications engineers to solve a specific industry problem, exposing them to real-world use cases as well as the hardware resources installed at the national lab.”

With the CDT set to train around 80 PhD students over the next eight years, Kashefi believes that it will play a vital role in ensuring the long-term sustainability of the QSL’s programme and the wider quantum ecosystem. “We need to train a new generation of quantum innovators,” she says. “Our CDT will provide a unique programme for enabling young people to learn how to use a quantum computer, which will help us in our goal to deliver innovative solutions that derive real value from quantum technologies.”

The post Software expertise powers up quantum computing appeared first on Physics World.

]]>
Analysis Combining research excellence with a direct connection to the National Quantum Computing Centre, the Quantum Software Lab is focused on delivering effective solutions to real-world problems https://physicsworld.com/wp-content/uploads/2024/08/web-QSL-launch-2023.jpg newsletter
Vacuum-sealed tubes could form the backbone of a long-distance quantum network https://physicsworld.com/a/vacuum-sealed-tubes-could-form-the-backbone-of-a-long-distance-quantum-network/ Wed, 21 Aug 2024 14:00:15 +0000 https://physicsworld.com/?p=116394 Theoretical study proposes a "revolutionary" new method for constructing the future quantum Internet

The post Vacuum-sealed tubes could form the backbone of a long-distance quantum network appeared first on Physics World.

]]>
A network of vacuum-sealed tubes inspired by the “arms” of the LIGO gravitational wave detector could provide the foundations for a future quantum Internet. The proposed design, which its US-based developers describe as both “revolutionary” and feasible, could support communication rates as high as 10¹³ quantum bits (qubits) per second. This would exceed currently available quantum channels based on satellites or optical fibres by at least four orders of magnitude, though members of the team note that implementing the design will be challenging.

Quantum computers outperform their classical counterparts at certain problems. Realizing their full potential, however, will require connecting multiple quantum machines via a network that can transmit quantum information over long distances, just as the Internet does with classical information.

One way of creating such a network would be to use existing technologies such as fibre-optic cables or satellites. Both technologies transmit classical information using photons, and in principle they can transmit quantum information using photonic qubits, too. The problem is that they are inherently “lossy”, with photons being absorbed by the fibre or (to a lesser degree) by the Earth’s atmosphere on their way to and from the vacuum of space. This loss of information is particularly challenging for quantum networks, as qubits cannot be “copied” in the same way that classical bits can.
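
The scale of the problem is easy to estimate: transmission through fibre falls off exponentially with length, T = 10^(−αL/10) for an attenuation of α dB/km. The sketch below is purely illustrative and assumes a typical telecom-fibre attenuation of about 0.2 dB/km, a figure not taken from this study.

```python
# Illustrative sketch: single-photon survival probability in optical fibre,
# assuming a typical telecom attenuation of ~0.2 dB/km (not from this study).
# Transmission follows T = 10^(-alpha * L / 10).

ALPHA_DB_PER_KM = 0.2  # assumed fibre attenuation

def fibre_transmission(length_km: float, alpha_db_per_km: float = ALPHA_DB_PER_KM) -> float:
    """Probability that a photon survives a fibre of the given length."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

for length in (50, 500, 1000):
    print(f"{length:5d} km: transmission ~ {fibre_transmission(length):.2e}")
# At 1000 km the survival probability is around 1e-20, which is why lossless
# channels or quantum repeaters are needed at continental scale.
```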

Inspired by LIGO

The proposal put forward by Liang Jiang and colleagues at the University of Chicago’s Pritzker School of Molecular Engineering, Stanford University and the California Institute of Technology aims to solve this problem by combining the advantages of satellite- and fibre-based communications. “In a vacuum, you can send a lot of information without attenuation,” explains team member Yesun Huang, the lead author of a Physical Review Letters paper on the proposal. “But being able to do that on the ground would be ideal.”

The new design for a long-distance quantum network involves connecting quantum channels made from vacuum-sealed tubes fitted with a series of lenses. These vacuum beam guides (VBGs), as they are known, measure around 20 cm in diameter, and Huang says they could span thousands of kilometres while supporting the transmission of 10 trillion qubits per second. “Photons carrying quantum information could travel through these tubes with the lenses placed every few kilometres in the tubes to ensure they do not spread out too much and stay focused,” he explains.
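
The few-kilometre lens spacing is consistent with a simple diffraction estimate: a Gaussian beam stays roughly collimated over its Rayleigh range, zR = πw₀²/λ. The sketch below assumes a 1550 nm telecom wavelength and a 5 cm beam waist – illustrative values chosen to fit comfortably inside a 20 cm tube, not figures from the paper.

```python
import math

# Back-of-the-envelope check of the relay-lens spacing in a vacuum beam guide.
# The 1550 nm wavelength and 5 cm beam waist are illustrative assumptions.

WAVELENGTH_M = 1550e-9   # assumed telecom wavelength
BEAM_WAIST_M = 0.05      # assumed beam waist, well inside a 20 cm tube

rayleigh_range_m = math.pi * BEAM_WAIST_M ** 2 / WAVELENGTH_M
print(f"Rayleigh range: {rayleigh_range_m / 1e3:.1f} km")
# -> roughly 5 km, consistent with lenses placed every few kilometres
```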

Infographic showing a map of the US with "backbone" vacuum quantum channels connecting several major cities, supplemented with shorter fibre-based communication channels reaching smaller hubs. A smaller diagram shows the positioning of lenses along the vacuum channel between quantum nodes.

The new design is inspired by the system that the Laser Interferometer Gravitational-Wave Observatory (LIGO) experiment employs to detect gravitational waves. In LIGO, twin laser beams travel down two tubes – the “arms” of the interferometer – that are arranged in an L-shape and kept under ultrahigh vacuum. Mirrors precisely positioned at the ends of each arm reflect the laser light back down the tubes and onto a detector. When a gravitational wave passes through this set-up, it distorts the distance travelled by each laser beam by a tiny but detectable amount.

Engineering challenges, but a big payoff

While LIGO’s arms measure 4 km in length, the tubes in Jiang and colleagues’ experiments could be much smaller. They would also need only a moderate vacuum of 10⁻⁴ atmospheres of pressure, as opposed to LIGO’s 10⁻¹¹ atm. Even so, the researchers acknowledge that implementing their technology will not be simple, with several civil engineering issues still to be addressed.

For the moment, the team is focusing on small-scale experiments to characterize the VBGs’ performance. But members are thinking big. “Our hope is to realize these channels over a continental scale,” Huang tells Physics World.

The benefits of doing so would be significant, he argues. “As well as benefiting secure quantum communication (quantum key distribution protocols, for example), the new VBG channels might also be employed in other quantum applications,” he says. As examples, he cites ultra-long-baseline optical telescopes, quantum networks of clocks, quantum data centres and delegated quantum computing.

Jiang adds that with the entanglement created from VBG channels, the researchers also hope to improve the performance of coordinating decisions between remote parties using so-called quantum telepathy – a phenomenon whereby two non-communicating parties can exhibit correlated behaviours that would be impossible to achieve using classical methods.

The post Vacuum-sealed tubes could form the backbone of a long-distance quantum network appeared first on Physics World.

]]>
Research update Theoretical study proposes a "revolutionary" new method for constructing the future quantum Internet https://physicsworld.com/wp-content/uploads/2024/08/21-08-2024-Liang-Jiang.jpg newsletter1
Solar-driven atmospheric water extractor provides continuous freshwater output https://physicsworld.com/a/solar-driven-atmospheric-water-extractor-provides-continuous-freshwater-output/ Wed, 21 Aug 2024 10:15:52 +0000 https://physicsworld.com/?p=116399 Standalone device harvests water out of air without requiring maintenance, solely using sunlight

The post Solar-driven atmospheric water extractor provides continuous freshwater output appeared first on Physics World.

]]>
Freshwater scarcity affects 2.2 billion people around the world, especially in arid and remote regions. More work needs to be done to develop new technologies that can provide freshwater in regions where there is a lack of suitable water for drinking and irrigation. Harvesting moisture from the air is one approach that has been trialled over the years with varying degrees of success.

“Water scarcity is one of the major challenges faced by the globe, which is particularly important in Middle East regions. Depending on the local conditions, one needs to identify all possible water sources to get fresh water for our daily use,” explains Qiaoqiang Gan, from King Abdullah University of Science and Technology (KAUST).

Gan and his team have recently developed a solar-driven atmospheric water extraction (SAWE) device that can continuously harvest moisture from the air to supply clean water to people in humid climates.

New development in an existing area

Technologies for harvesting water from the air have been around for many years, but SAWEs have faced various obstacles – one of the main ones being slow kinetics in the sorbent materials. In SAWEs, the sorbent material first captures moisture from the air. Once saturated, the system is sealed and exposed to sunlight to extract the water.

The slow kinetics means that only one cycle is possible per day with most devices, so they have traditionally worked using a two-stage approach – moisture capture at night and desorption via sunlight during the day. Many systems have low outputs, and require manual switching between cycles, so they cannot provide continuous water harvesting.

This could be about to change, because the system developed by Gan and colleagues can produce water continuously. “We can use the extracted water from the air for irrigation with no need for tap water. This is an attractive technology for regions with humid air but no access to fresh water,” says Gan.

Continuous water production

The SAWE developed at KAUST passively alternates between the two stages and can cycle continuously without human intervention. This was made possible by the inclusion of mass transport bridges (MTBs) that provide a connection between the water capture and water generation mechanisms.

The MTBs comprise vertical microchannels filled with a salt solution to absorb water from the atmosphere. Once saturated, the water-rich salt solution is pulled up via capillary action into an enclosed high-temperature chamber. Here, a solar absorber generates concentrated vapour, which then condenses on the chamber wall, producing freshwater. The concentrated salt solution then diffuses back down the channel to collect more water.

Under 1-sun illumination at 90% relative humidity, a prototype SAWE system with an evaporation area of 3 × 3 cm consistently produced fresh water at a rate of 0.65 L/m²/h. The researchers found that the system could also function in more arid environments with relative humidity as low as 40% and that – in regions with abundant solar irradiance and high humidity – it had a maximum water production potential of 4.6 L/m² per day.
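
As a rough consistency check, the hourly figure can be converted into a daily yield. The short sketch below assumes about seven hours of effective full-sun conditions per day, an illustrative figure that is not stated in the study.

```python
# Rough consistency check: convert the measured hourly rate into a daily yield.
# The ~7 effective full-sun hours per day is an illustrative assumption.

RATE_L_PER_M2_PER_H = 0.65   # measured under 1-sun illumination at 90% relative humidity
PEAK_SUN_HOURS = 7.0         # assumed effective full-sun hours per day

daily_yield = RATE_L_PER_M2_PER_H * PEAK_SUN_HOURS
print(f"Estimated daily yield: {daily_yield:.1f} L/m^2 per day")
# -> about 4.6 L/m^2 per day, in line with the maximum potential quoted above
```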

Scaling up in Saudi Arabia

Following the initial tests, the researchers built a scaled-up system (with an evaporation area of 13.5 × 24 cm) in Thuwal, Saudi Arabia, that was just as affordable and simple to produce as the small-scale prototype. They tested the system over 35 days across two seasons.

“Saudi Arabia launched an aggressive initiative known as Saudi Green Initiative, aiming to plant 10 billion trees in the country. The key challenge is to get fresh water for irrigation,” Gan explains. “Our technology provided a potential solution to address the water needs in suitable regions like the core area near the Red Sea and Arabic Bay, where they have humid air but no sufficient fresh water.”

The tests in Saudi Arabia showed that the scaled-up system could produce 2–3 L/m² of freshwater per day during summer and 1–2.8 L/m² per day during the autumn. The water harvested was also used for off-grid irrigation of Chinese cabbage plants in the local harvesting area, showing its potential for use in remote areas that lack access to large-scale water sources.

Looking ahead, Gan tells Physics World that “we are developing prototypes for the atmospheric water extraction module to irrigate plants and trees, as the water productivity can meet the water needs of many plants in their seeding stage”.

The research is described in Nature Communications.

The post Solar-driven atmospheric water extractor provides continuous freshwater output appeared first on Physics World.

]]>
Research update Standalone device harvests water out of air without requiring maintenance, solely using sunlight https://physicsworld.com/wp-content/uploads/2024/08/21-08-24-solar-powered-water-extractor.jpg newsletter1
Half-life measurement of samarium-146 could help reveal secrets of the early solar system https://physicsworld.com/a/half-life-measurement-of-samarium-146-could-help-reveal-secrets-of-the-early-solar-system/ Tue, 20 Aug 2024 15:51:10 +0000 https://physicsworld.com/?p=116378 Isotope is extracted from an accelerator target

The post Half-life measurement of samarium-146 could help reveal secrets of the early solar system appeared first on Physics World.

]]>
The radioactive half-life of samarium-146 has been measured to the highest accuracy and precision so far. Researchers at the Paul Scherrer Institute (PSI) in Switzerland and the Australian National University in Canberra made their measurement using waste from the PSI’s neutron source and the result should help scientists gain a better understanding of the history of the solar system.

With a half-life of 92 million years, samarium-146 is ideally suited for dating events that occurred early in the history of the solar system. These include volcanic activity on the Moon, the formation of meteorites, and the differentiation of Earth’s interior into distinct layers.

The samarium-146 present in the early solar system was probably produced in a nearby supernova as the solar system was forming about 4.5 billion years ago. Thanks to the isotope’s relatively long half-life, it would have been incorporated into nascent planets and asteroids. The isotope then slowly vanished from the solar system. It is now so rare that it is considered an extinct isotope, whose previous existence is inferred from the presence of the neodymium isotope to which it decays.

There is another isotope, samarium-147, with a half-life that is 1000 times longer than that of samarium-146. While the two isotopes have identical chemical properties, samarium-147 currently accounts for about 15% of samarium on Earth. Together, these two isotopes can be used for dating rocks, but only if their half-lives are known to sufficiently high accuracy.
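
All such chronometers rest on the exponential decay law, N(t) = N₀e^(−λt) with λ = ln 2/t½. The sketch below illustrates the arithmetic using the 92-million-year half-life and purely hypothetical sample numbers; it is not a reconstruction of the samarium–neodymium dating scheme itself.

```python
import math

# The exponential decay law behind samarium-based chronometers:
# N(t) = N0 * exp(-lam * t), with lam = ln(2) / t_half.
# The sample numbers below are hypothetical, not data from the study.

HALF_LIFE_MYR = 92.0  # samarium-146 half-life in millions of years

def remaining_fraction(age_myr: float, half_life_myr: float = HALF_LIFE_MYR) -> float:
    """Fraction of the original isotope remaining after a given time."""
    decay_const = math.log(2) / half_life_myr
    return math.exp(-decay_const * age_myr)

def age_from_fraction(fraction: float, half_life_myr: float = HALF_LIFE_MYR) -> float:
    """Invert the decay law: the time (in Myr) at which this fraction remains."""
    decay_const = math.log(2) / half_life_myr
    return -math.log(fraction) / decay_const

# After 4.5 billion years essentially none is left - hence an "extinct" isotope:
print(f"Fraction left after 4500 Myr: {remaining_fraction(4500):.1e}")
# A hypothetical sample retaining 25% of its samarium-146 is two half-lives old:
print(f"Age at 25% remaining: {age_from_fraction(0.25):.0f} Myr")
```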

Huge range

Unfortunately, the half-life of samarium-146 has proven notoriously difficult to measure. Over the past few decades, numerous studies have placed its value somewhere between 60 and 100 million years, but its exact value within this range has remained uncertain. The main reason for this uncertainty is that the isotope does not occur naturally on Earth and instead is made in tiny quantities in nuclear physics experiments.

In previous studies, the isotope was created by irradiating other samarium isotopes with protons or neutrons. However, this approach has drawbacks. “The main disadvantages are the cost and time required for dedicated irradiation and the fact that the desired isotope is made of the same element as the target material itself,” explains Rugard Dressler at PSI’s Laboratory for Radiochemistry. “This rules out the possibility of separating samarium-146 by chemical means alone.”

To overcome these limitations, a team led by Dorothea Schumann at PSI looked to the Swiss Spallation Neutron Source (SINQ) as a source of the isotope. SINQ creates neutrons by smashing protons into solid targets, which are damaged in the process. To better understand how this damage occurs, a range of different target materials has been irradiated at SINQ. These included tantalum, which Schumann identified as the most promising source material; the samarium-146 was then extracted in solution using a sequence of highly selective radiochemical separation and purification steps.

“Only in this way was it possible to obtain a sufficient amount of samarium-146 for the precise determination of its half-life – a possibility that is not available anywhere else around the world,” explains PSI’s Zeynep Talip.

Then they used some of the solution to create a thin layer of samarium oxide on a graphite substrate. Using mass spectrometers at PSI and in Australia to study their original solution, the team determined that there were 6.28 × 10¹³ samarium-146 nuclei in their sample.

Alpha particles

The sample was placed at a well-defined distance from a carefully calibrated alpha radiation detector. By measuring the energy of emitted alpha particles, the team confirmed that the particles were produced by the decay of samarium-146. Over the course of three months, they measured the isotope’s decay rate and found it to be just under 54 decays per hour.

From this, they calculated the samarium-146 half-life to be 92 million years, with an uncertainty of just 2.6 million years.
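
The arithmetic behind that figure follows directly from the decay law: the activity of N nuclei is A = λN with λ = ln 2/t½, so t½ = ln 2 × N/A. A minimal sketch using the numbers quoted above (the activity value is approximate):

```python
import math

# Half-life from the counted number of nuclei and the measured activity:
# A = lam * N and lam = ln(2) / t_half, so t_half = ln(2) * N / A.
# Uses the figures quoted in the article (the activity is approximate).

N_NUCLEI = 6.28e13            # samarium-146 nuclei in the sample
ACTIVITY_PER_HOUR = 54.0      # measured decays per hour (just under 54)

HOURS_PER_YEAR = 24 * 365.25
half_life_years = math.log(2) * N_NUCLEI / ACTIVITY_PER_HOUR / HOURS_PER_YEAR
print(f"Half-life: {half_life_years / 1e6:.0f} million years")
# -> roughly 92 million years, as reported
```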

“The half-life derived in our study shows that the results from the last century are compatible with our value within their uncertainties,” Dressler notes. “Furthermore, we were able to reduce the uncertainty considerably.”

This result marks an important breakthrough in an experimental challenge that has persisted for decades, and could soon provide a new window into the distant past. “A more precise determination of the half-life of samarium-146 will pave the way for a more detailed and accurate chronology of processes in our solar system and geological events on Earth,” says Dressler.

The research is described in Scientific Reports.

The post Half-life measurement of samarium-146 could help reveal secrets of the early solar system appeared first on Physics World.

]]>
Research update Isotope is extracted from an accelerator target https://physicsworld.com/wp-content/uploads/2024/08/20-8-24-Samarium-half-life.jpg
Enabling battery quality at scale https://physicsworld.com/a/enabling-battery-quality-at-scale/ Tue, 20 Aug 2024 13:59:59 +0000 https://physicsworld.com/?p=115702 The Electrochemical Society in partnership with BioLogic explores how high-throughput inspection can enable battery quality at scale

The post Enabling battery quality at scale appeared first on Physics World.

]]>

Battery quality lies at the heart of major issues relating to battery safety, reliability, and manufacturability. This talk reviews the challenges and opportunities to enable battery quality at scale. First, the interplay between various battery failure modes and their numerous root causes is described. Then, which failure modes are best detected by electrochemistry, and which are not, is discussed. Finally, how improved inspection – specifically, high-throughput computed tomography (CT) – can play a role in solving the battery quality challenge is reviewed.

An interactive Q&A session follows the presentation.

Peter Attia is co-founder and chief technical officer of Glimpse. Previously, he worked as an engineering lead on some of Tesla’s toughest battery failure modes and managed a team focused on battery data analysis. Peter holds a PhD from Stanford, where he developed seminal machine learning methods for battery lifetime prediction and optimization. He has received honours such as Forbes 30 Under 30 but has not written a bestselling book on aging.

The Electrochemical Society

 

The post Enabling battery quality at scale appeared first on Physics World.

]]>
Webinar The Electrochemical Society in partnership with BioLogic explores how high-throughput inspection can enable battery quality at scale https://physicsworld.com/wp-content/uploads/2024/07/ECS_image_2024_09_18.jpg
AI-assisted photonic detector identifies fake semiconductor chips https://physicsworld.com/a/ai-assisted-photonic-detector-identifies-fake-semiconductor-chips/ Tue, 20 Aug 2024 12:35:14 +0000 https://physicsworld.com/?p=116361 New technique could reduce risks of unwanted surveillance, chip failure and theft, say researchers

The post AI-assisted photonic detector identifies fake semiconductor chips appeared first on Physics World.

]]>
Diagram of the RAPTOR detection system

The semiconductor industry is an economic powerhouse, but it is not without its challenges. As well as shortages of new semiconductor chips, it increasingly faces an oversupply of counterfeit ones. The spread of these imitations poses real dangers for the many sectors that rely on computer chips, including aviation, finance, communications, artificial intelligence and quantum technologies.

Researchers at Purdue University in the US have now combined artificial intelligence (AI) and photonics technology to develop a robust new method for detecting counterfeit chips. The new method could reduce the risks of unwanted surveillance, chip failure and theft within the $500 bn global semiconductor industry by reining in the market for fake chips, which is estimated at $75 bn.

The main way of detecting counterfeit semiconductor chips relies on “baking” security tags into chips or their packaging. Such tags are often based on physical unclonable functions made from media such as arrays of metallic nanomaterials. These structures can be engineered to scatter light strongly in specific patterns that can be detected and used as a “fingerprint” for the tagged chip.

The problem is that these security structures are not tamper-proof. They can degrade naturally – for example, if temperatures get too high. If they are printed on packaging, they can also be rubbed off, either accidentally or intentionally.

Embedded gold nanoparticles

The Purdue researchers developed an alternative optical anti-counterfeiting technique for semiconductor devices based on identifying modifications in the patterns of light scattered off nanoparticle arrays embedded in chips or chip packaging. Their approach, which they call residual attention-based processing of tampering response (RAPTOR), relies on analysing the light scattered before and after an array has degraded naturally or been tampered with.

To make the technique work, a team led by electrical and computer engineer Alexander Kildishev embedded gold nanoparticles in the packaging of a packet of semiconductor chips. The team then took several dark-field microscope images of random places on the packaging to record the nanoparticle scattering patterns. This made it possible to produce high-contrast images even though the samples being imaged are transparent to light and provide little to no light absorption contrast. The team then stored these measurements for later authentication.

“If someone then tries to swap the chip, they not only have to embed the gold nanoparticles, but they also have to place them all in the original locations,” Kildishev explains.

The role of artificial intelligence

To guard against false positives caused by natural abrasions disrupting the nanoparticles, or a malicious actor getting close to replacing the nanoparticles in the right way, the team trained an AI model to distinguish between natural degradation and malicious tampering. This was the biggest challenge, Kildishev tells Physics World. “It [the model] also had to identify possible adversarial nanoparticle filling to cover up a tampering attempt,” he says.

Writing in Advanced Photonics, the Purdue researchers show that RAPTOR outperforms current state-of-the-art counterfeit detection methods (known as the Hausdorff, Procrustes and average Hausdorff metrics) by 40.6%, 37.3%, and 6.4% respectively. The analysis process takes just 27 ms, and it can verify a pattern’s authenticity in 80 ms with nearly 98% accuracy.
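
For context, those baseline metrics quantify how far a measured set of nanoparticle positions lies from the pattern recorded at enrolment. The sketch below shows one simple variant, an average Hausdorff distance between two 2D point sets, on synthetic data; it is an illustrative implementation, not the authors’ code or the full RAPTOR pipeline.

```python
import numpy as np

# Illustrative average Hausdorff distance between two sets of nanoparticle
# coordinates (synthetic data; not the authors' code or the RAPTOR model).
# A small distance suggests the measured pattern matches the enrolled one.

def average_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric average Hausdorff distance between 2D point sets a and b."""
    # Pairwise distances between every point in a and every point in b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)
enrolled = rng.uniform(0, 1, size=(50, 2))               # pattern recorded at manufacture
genuine = enrolled + rng.normal(0, 0.005, size=(50, 2))  # same chip, slight measurement noise
fake = rng.uniform(0, 1, size=(50, 2))                   # a different nanoparticle pattern

print(f"genuine: {average_hausdorff(enrolled, genuine):.4f}")
print(f"fake:    {average_hausdorff(enrolled, fake):.4f}")
# The genuine pattern scores far lower, illustrating the matching principle.
```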

“We took on this study because we saw a need to improve chip authentication methods and we leveraged our expertise in AI and nanotechnology to do just this,” Kildishev says.

The Purdue researchers hope that other research groups will pick up on the possibilities of combining AI and photonics for the semiconductor industry. This would help advance deep-learning-based anti-counterfeiting methods, they say.

Looking forward, Kildishev and colleagues plan to improve their nanoparticle embedding process and streamline the authentication steps further. “We want to quickly convert our approach into an industry solution,” Kildishev says.

The post AI-assisted photonic detector identifies fake semiconductor chips appeared first on Physics World.

]]>
Research update New technique could reduce risks of unwanted surveillance, chip failure and theft, say researchers https://physicsworld.com/wp-content/uploads/2024/08/circuit-board-20436508-iStock_Henrik5000.jpg
Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power https://physicsworld.com/a/fast-monte-carlo-dose-calculation-with-precomputed-electron-tracks-and-gpu-power/ Tue, 20 Aug 2024 08:59:15 +0000 https://physicsworld.com/?p=116282 Join the audience for a live webinar on 24 September 2024 sponsored by LAP GmbH Laser Applikationen

The post Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power appeared first on Physics World.

]]>

In this webinar, we will explore innovative advancements in Monte Carlo based dose calculations that are poised to impact radiation oncology quality assurance. This expert session will focus on new developments in 3D dose calculation engines and improved dosimetry capabilities.

Designed for medical physics and dosimetrist experts, the discussion will outline the latest developments and emphasize how these can improve dose calculation accuracy, treatment verification processes, and clinical workflows in general. Join us in understanding better how fast Monte Carlo can contribute to advancing quality assurance in radiation therapy.

An interactive Q&A session follows the presentation.

Veng Jean Heng, PhD, is a medical physics resident at Stanford University. He received both an MSc and a PhD from McGill University. During his MSc, he performed Monte Carlo beam and dose-to-outcome modelling for CyberKnife patients. His PhD was on the clinical implementation of a mixed photon-electron beam radiation therapy technique. His current research interests revolve around the development of dose calculation and optimization methods.

 

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and medical device industry, Carlos’ passion for clinical quality assurance is demonstrated in the research and development of RadCalc into the future.

 

The post Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 24 September 2024 sponsored by LAP GmbH Laser Applikationen https://physicsworld.com/wp-content/uploads/2024/08/20240924_LAP-image-1.jpg