Saturday 19 2024

To make nuclear fusion a reliable energy source one day, scientists will first need to design heat- and radiation-resilient materials

A fusion experiment ran so hot that the wall materials facing the plasma retained defects. Christophe Roux/CEA IRFM, CC BY
Sophie Blondel, University of Tennessee

Fusion energy has the potential to be an effective clean energy source, as its reactions generate incredibly large amounts of energy. Fusion reactors aim to reproduce on Earth what happens in the core of the Sun, where very light elements merge and release energy in the process. Engineers can harness this energy to heat water and generate electricity through a steam turbine, but the path to fusion isn’t completely straightforward.

Controlled nuclear fusion has several advantages over other power sources for generating electricity. For one, the fusion reaction itself doesn’t produce any carbon dioxide. There is no risk of meltdown, and the reaction doesn’t generate any long-lived radioactive waste.

I’m a nuclear engineer who studies materials that scientists could use in fusion reactors. Fusion takes place at incredibly high temperatures. So to one day make fusion a feasible energy source, reactors will need to be built with materials that can survive the heat and irradiation generated by fusion reactions.

Fusion material challenges

Several types of elements can merge during a fusion reaction. The one most scientists prefer is deuterium plus tritium. These two elements have the highest likelihood of fusing at temperatures that a reactor can maintain. This reaction generates a helium atom and a neutron, which carries most of the energy from the reaction.

Humans have successfully generated fusion reactions on Earth since 1952 – some even in their garage. But the trick now is to make the process worth it: you need to get more energy out than you put in to initiate the reaction.

Fusion reactions happen in a very hot plasma, which is a state of matter similar to gas but made of charged particles. The plasma needs to stay extremely hot – over 100 million degrees Celsius – and condensed for the duration of the reaction.

To keep the plasma hot and condensed and create a reaction that can keep going, you need special materials making up the reactor walls. You also need a cheap and reliable source of fuel.

While deuterium is very common and obtained from water, tritium is very rare. A 1-gigawatt fusion reactor is expected to burn 56 kilograms of tritium annually. But the world has only about 25 kilograms of tritium commercially available.
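To put those two figures side by side, a quick back-of-the-envelope calculation (using only the numbers quoted above) shows how short the current supply would fall:

```python
# Rough supply check using the figures quoted in the article above.
annual_burn_kg = 56.0   # tritium a 1-gigawatt fusion reactor is expected to burn per year
world_stock_kg = 25.0   # tritium commercially available worldwide

years_of_fuel = world_stock_kg / annual_burn_kg
print(f"Current world stock would fuel one reactor for about {years_of_fuel:.2f} years")
```

That is less than six months of fuel for a single plant, which is why generating tritium on site matters so much.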

Researchers need to find alternative sources of tritium before fusion energy can get off the ground. One option is to have each reactor generate its own tritium through a system called the breeding blanket.

The breeding blanket makes up the first layer of the plasma chamber walls and contains lithium that reacts with the neutrons generated in the fusion reaction to produce tritium. The blanket also converts the energy carried by these neutrons to heat.

The fusion reaction chamber at ITER will electrify the plasma.

Fusion devices also need a divertor, which extracts the heat and ash produced in the reaction. The divertor helps keep the reactions going for longer.

These materials will be exposed to unprecedented levels of heat and particle bombardment. And there aren’t currently any experimental facilities to reproduce these conditions and test materials in a real-world scenario. So, the focus of my research is to bridge this gap using models and computer simulations.

From the atom to full device

My colleagues and I work on producing tools that can predict how the materials in a fusion reactor erode, and how their properties change when they are exposed to extreme heat and lots of particle radiation.

As they get irradiated, defects can form and grow in these materials, which affect how well they react to heat and stress. In the future, we hope that government agencies and private companies can use these tools to design fusion power plants.

Our approach, called multiscale modeling, consists of looking at the physics in these materials over different time and length scales with a range of computational models.

We first study the phenomena happening in these materials at the atomic scale through accurate but expensive simulations. For instance, one simulation might examine how hydrogen moves within a material during irradiation.

From these simulations, we look at properties such as diffusivity, which tells us how much the hydrogen can spread throughout the material.

We can integrate the information from these atomic level simulations into less expensive simulations, which look at how the materials react at a larger scale. These larger-scale simulations are less expensive because they model the materials as a continuum instead of considering every single atom.

The atomic-scale simulations could take weeks to run on a supercomputer, while the continuum one will take only a few hours.
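As a concrete, deliberately simplified illustration of the continuum end of this pipeline, the sketch below solves the one-dimensional diffusion equation for hydrogen moving through a slab of material. The diffusivity value and the geometry are invented for demonstration; in practice the diffusivity would come from the atomic-scale simulations described above.

```python
import numpy as np

# Hypothetical illustration: a 1D continuum model of hydrogen diffusing
# through a material slab. The diffusivity D stands in for a value that
# atomic-scale simulations would supply; all numbers are invented.
D = 1e-9          # diffusivity, m^2/s (assumed, not a measured value)
L = 1e-4          # slab thickness, m
n = 101           # grid points across the slab
dx = L / (n - 1)
dt = 0.4 * dx**2 / D   # time step chosen for numerical stability

c = np.zeros(n)   # hydrogen concentration profile across the slab
c[0] = 1.0        # one side held at a fixed concentration (the gas side)

# Explicit finite-difference update of the diffusion equation dc/dt = D d2c/dx2
for _ in range(20000):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0   # boundaries: fixed supply, far side swept clean

# The flux leaving the far side tells us how much hydrogen "leaks" through
flux_out = D * (c[-2] - c[-1]) / dx
print(f"steady-state leak flux ~ {flux_out:.3e} (arbitrary units)")
```

A model like this predicts how much hydrogen reaches the far side of the slab, which is exactly the kind of quantity that can be checked against laboratory permeation experiments.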

All this modeling work happening on computers is then compared with experimental results obtained in laboratories.

For example, if one side of the material has hydrogen gas, we want to know how much hydrogen leaks to the other side of the material. If the model and the experimental results match, we can have confidence in the model and use it to predict the behavior of the same material under the conditions we would expect in a fusion device.

If they don’t match, we go back to the atomic-scale simulations to investigate what we missed.

Additionally, we can couple the larger-scale material model to plasma models. These models can tell us which parts of a fusion reactor will be the hottest or have the most particle bombardment. From there, we can evaluate more scenarios.

For instance, if too much hydrogen leaks through the material during the operation of the fusion reactor, we could recommend making the material thicker in certain places, or adding something to trap the hydrogen.

Designing new materials

As the quest for commercial fusion energy continues, scientists will need to engineer more resilient materials. The field of possibilities is daunting – engineers can combine multiple elements in countless proportions and arrangements.

You could combine two elements to create a new material, but how do you know what the right proportion is of each element? And what if you want to try mixing five or more elements together? It would take way too long to try to run our simulations for all of these possibilities.

Thankfully, artificial intelligence is here to assist. By combining experimental and simulation results, analytical AI can recommend combinations that are most likely to have the properties we’re looking for, such as heat and stress resistance.

The aim is to reduce the number of materials that an engineer would have to produce and test experimentally to save time and money.

Sophie Blondel, Research Assistant Professor of Nuclear Engineering, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. 


Wednesday 28 2024

New forms of steel for stronger, lighter cars

Automakers are tweaking production processes to create a slew of new steels with just the right properties, allowing them to build cars that are both safer and more fuel-efficient

Like many useful innovations, it seems, the creation of high-quality steel by Indian metallurgists more than two thousand years ago may have been a happy confluence of clever workmanship and dumb luck.

Firing chunks of iron with charcoal in a special clay container produced something completely new, which the Indians called wootz. Roman armies were soon wielding wootz steel swords to terrify and subdue the wild, hairy tribes of ancient Europe.

Twenty-four centuries later, automakers are relying on electric arc furnaces, hot stamping machines and quenching and partitioning processes that the ancients could never have imagined. These approaches are yielding new ways to tune steel to protect soft human bodies when vehicles crash into each other, as they inevitably do — while curbing car weights to reduce their deleterious impact on the planet.

“It is a revolution,” says Alan Taub, a University of Michigan engineering professor with many years in the industry. The new steels, dozens of varieties and counting, combined with lightweight polymers and carbon fiber-spun interiors and underbodies, hark back to the heady days at the start of the last century when, he says, “Detroit was Silicon Valley.”

Such materials can reduce the weight of a vehicle by hundreds of pounds — and every pound of excess weight that is shed saves roughly $3 in fuel costs over the lifetime of the car, so the economics are hard to deny. The new maxim, Taub says, is “the right material in the right place.”

The transition to battery-powered vehicles underscores the importance of these new materials. Electric vehicles may not belch pollution, but they are heavy — the Volvo XC40 Recharge, for example, is 33 percent heavier than the gas version (and would be heavier still if the steel surrounding passengers were as bulky as it used to be). Heavy can be dangerous.

“Safety, especially when it comes to new transportation policies and new technologies, cannot be overlooked,” Jennifer Homendy, chief of the National Transportation Safety Board, told the Transportation Research Board in 2023. Plus, reducing the weight of an electric vehicle by 10 percent delivers a roughly 14 percent improvement in range.

As recently as the 1960s, the steel cage around passengers was made of what automakers call soft steel. The armor from Detroit’s Jurassic period was not much different from what Henry Ford had introduced decades earlier. It was heavy and there was a lot of it.

With the 1965 publication of Ralph Nader’s Unsafe at Any Speed: The Designed-In Dangers of the American Automobile, big automakers realized they could no longer pursue speed and performance exclusively. The oil embargos of the 1970s only hastened the pace of change: Auto steel now had to be both stronger and lighter, requiring less fuel to push around.

In response, over the past 60 years, like chefs operating a sous vide machine to produce the perfect bite, steelmakers — their cookers arc furnaces reaching thousands of degrees Fahrenheit, with robots doing the cooking — have created a vast variety of steels to match every need. There are high-strength, hardened steels for the chassis; corrosion-resistant stainless steels for side panels and roofs; and highly stretchable metals in bumpers to absorb impacts without crumpling.

Tricks with the steel

Most steel is more than 98 percent iron. It is the other couple of percent — sometimes only hundredths of a single percent, in the case of metals added to confer desired properties — that make the difference. Just as important are treatment methods: the heating, cooling and processing, such as rolling the sheets prior to forming parts. Modifying each, sometimes by only seconds, changes the metal’s structure to yield different properties. “It’s all about playing tricks with the steel,” says John Speer, director of the Advanced Steel Processing and Products Research Center at the Colorado School of Mines.

At the most basic level, the properties of steel are about microstructure: the arrangement of different types, or phases, of steel in the metal. Some phases are harder, while others confer ductility, a measure of how much the metal can be bent and twisted out of shape without shearing and creating jagged edges that penetrate and tear squishy human bodies. At the atomic level, there are principally four phases of auto steel, including the hardest yet most brittle, called martensite, and the more ductile austenite. Carmakers can vary these by manipulating the times and temperatures of the heating process to produce the properties they want.

Academic researchers and steelmakers, working closely with automakers, have developed three generations of what is now called advanced high-strength steel. The first, adopted in the 1990s and still widely employed, had a good combination of strength and ductility. A second generation used more exotic alloys to achieve even greater ductility, but those steels proved expensive and challenging to manufacture.

The third generation, which Speer says is beginning to make its way onto the factory floor, uses heating and cooling techniques to produce steels that are stronger and more formable than the first generation; nearly ten times as strong as common steels of the past; and much cheaper (though less ductile) than second-generation steels.

Steelmakers have learned that cooling time is a critical factor in creating the final arrangements of atoms and therefore the properties of the steel. The most rapid cooling, known as quenching, freezes and stabilizes the internal structure before it undergoes further change during the hours or days it could otherwise take to reach room temperature.

One of the strongest types of modern auto steel — used in the most critical structural components, such as side panels and pillars — is made by superheating the metal with boron and manganese to a temperature above 850 degrees Celsius. After becoming malleable, the steel is transferred within 10 seconds to a die, or form, where the part is shaped and rapidly cooled.

In one version of what is known as transformation-induced plasticity, the steel is heated to a high temperature, cooled to a lower temperature and held there for a time and then rapidly quenched. This produces islands of austenite surrounded by a matrix of softer ferrite, with regions of harder bainite and martensite. This steel can absorb a large amount of energy without fracturing, making it useful in bumpers and pillars.

Recipes can be further tweaked by the use of various alloys. Henry Ford was employing alloys of steel and vanadium more than a century ago to improve the performance of steel in his Model T, and alloy recipes continue to improve today. One modern example of the use of lighter metals in combination with steel is the Ford Motor Company’s aluminum-intensive F-150 truck, the 2015 version weighing nearly 700 pounds less than the previous model.

A process used in conjunction with new materials is tube hydroforming, in which a metal is bent into complex shapes by the high-pressure injection of water or other fluids into a tube, expanding it into the shape of a surrounding die. This allows parts to be made without welding two halves together, saving time and money. A Corvette aluminum frame rail, the largest hydroformed part in the world, saved 20 percent in mass from the steel rail it replaced, according to Taub, who coauthored a 2019 article on automotive lightweighting in the Annual Review of Materials Research.

New alloys

More recent introductions are alloys such as those using titanium and particularly niobium, which increase strength by stabilizing a metal’s microstructure. In a 2022 paper, Speer called the introduction of niobium “one of the most important physical metallurgy developments of the 20th century.”

One tool now shortening the distance between trial and error is the computer. “The idea is to use the computer to develop materials faster than through experimentation,” Speer says. New ideas can now be tested down to the atomic level without workmen bending over a bench or firing up a furnace.

The ever-continuing search for better materials and processes led engineer Raymond Boeman and colleagues to found the Institute for Advanced Composites Manufacturing Innovation (IACMI) in 2015, with a $70 million federal grant. Also known as the Composites Institute, it is a place where industry can develop, test and scale up new processes and products.

“The field is evolving in a lot of ways,” says Boeman, who now directs the institute’s research on upscaling these processes. IACMI has been working on finding more climate-friendly replacements for conventional plastics such as the widely used polypropylene. In 1960, less than 100 pounds of plastic were incorporated into the typical vehicle. By 2017, the figure had risen to nearly 350 pounds, because plastic is cheap to make and has a high strength-to-weight ratio, making it ideal for automakers trying to save on weight.

By 2019, according to Taub, 10 to 15 percent of a typical vehicle was made of polymers and composites, everything from seat components to trunks, door parts and dashboards. And when those cars reach the end of their lives, their plastic and other difficult-to-recycle materials, known as automotive shredder residue, end up in landfills – 5 million tons of it – or, worse, in the wider environment.

Researchers are working hard to develop stronger, lighter and more environmentally friendly plastics. At the same time, new carbon fiber products are enabling these lightweight materials to be used even in load-bearing places such as structural underbody parts, further reducing the amount of heavy metal used in auto bodies.

Clearly, work remains to make autos less of a threat, both to human bodies and the planet those bodies travel over every day, to work and play. But Taub says he is optimistic about Detroit’s future and the industry’s ability to solve the problems that came with the end of the horse-and-buggy days. “I tell students they will have job security for a long time.”

Knowable Magazine

Saturday 13 2024

Meteorites from Mars help scientists understand the red planet’s interior

A Martian meteorite in cross-polarized light. This meteorite is dominated by the mineral olivine. Each grain is about half a millimeter across. James Day
James Day, University of California, San Diego

Of the more than 74,000 known meteorites – rocks that fall to Earth from asteroids or from planets colliding together – only 385 or so stones have come from the planet Mars.

It’s not that hard for scientists to work out that these meteorites come from Mars. Various landers and rovers have been exploring Mars’ surface for decades. Some of the early missions – the Viking landers – had the equipment to measure the composition of the planet’s atmosphere. Scientists have shown that you can see this unique Martian atmospheric composition reflected in some of these meteorites.

Mars also has unique oxygen. Everything on Earth, including humans and the air we breathe, is made up of a specific composition of the three isotopes of the element oxygen: oxygen-16, oxygen-17 and oxygen-18. But Mars has an entirely different composition – it’s like a geochemical fingerprint for being Martian.

The Martian meteorites found on Earth give geologists like me hints about the makeup of the red planet and its history of volcanic activity. They allow us to study Mars without sending a spacecraft 140 million miles away.

A planet of paradoxes

These Martian meteorites formed from once red-hot magma within Mars. Once these volcanic rocks cooled and crystallized, radioactive elements within them started to decay, acting as a radiometric clock that enables scientists to tell when they formed.

From these radiometric ages, we know that some Martian meteorites are as little as 175 million years old, which is – geologically speaking – quite young. Conversely, some of the Martian meteorites are older, and formed close to the time Mars itself formed.
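The radiometric clock mentioned above can be sketched in a few lines. The half-life below is the real one for rubidium-87, a system commonly used to date rocks, but the measured daughter-to-parent ratio is invented purely for illustration, and for simplicity the rock is assumed to have crystallized with none of the daughter isotope.

```python
import math

# Sketch of a radiometric clock: after a rock crystallizes, a parent
# isotope decays into a daughter isotope, and the measured
# daughter/parent ratio gives the crystallization age via
#   t = (1 / lambda) * ln(1 + daughter / parent)
# (assuming no daughter isotope was present at the start).
half_life_yr = 4.96e10                 # Rb-87 -> Sr-87 half-life, in years
lam = math.log(2) / half_life_yr       # decay constant, 1/years

def age_from_ratio(daughter_over_parent):
    """Age in years implied by a measured radiogenic daughter/parent ratio."""
    return math.log(1 + daughter_over_parent) / lam

# An invented ratio, chosen so the result lands near a geologically
# interesting age for Martian rocks
t = age_from_ratio(0.0183)
print(f"implied age: {t / 1e9:.2f} billion years")
```

The invented ratio of 0.0183 implies an age of about 1.3 billion years, comparable to some of the younger Martian volcanic rocks.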

These Martian meteorites tell a story of a planet that has been volcanically active throughout its entire history. In fact, there’s potential for Martian volcanoes to erupt even today, though scientists have never seen such an eruption.

The rocks themselves also preserve chemical information that indicates some of the major events on Mars happened early in its history. Mars formed quite rapidly, 4.5 billion years ago, from gas and dust that made up the early solar system. Then, very soon after formation, its interior separated out into a metallic core and a solid rocky mantle and crust.

Since then, very little seems to have disturbed Mars’ interior – unlike Earth, where plate tectonics has acted to stir and homogenize its deep interior. To use a food analogy, the Earth’s interior is like a smoothie and Mars’ is like a chunky fruit salad.

Two fume hoods with vials of sample under them.
Martian meteorite samples are prepared for analysis in a clean lab. James Day

Martian volcano remnants

Understanding how Mars underwent such an early and violent adolescence, yet still may remain volcanically active today, is an area of great interest to me. I would like to know what the inside of Mars looks like, and how its interior makeup might explain features, like volcanoes, on the red planet’s surface.

When geologists set out to answer questions about volcanism on Earth, we typically examine lava samples that erupted at different places or times from the same volcano. These samples allow us to disentangle local processes specific to each volcano from planetary processes that take place at a larger scale.

It turns out we can do the same thing for Mars. The rather exotically named nakhlite and chassignite meteorites are a group of rocks from Mars that erupted from the same volcanic system some 1.3 billion years ago.

Nakhlites are basaltic rocks, similar to lavas you would find in Iceland or Hawaii, with beautiful large crystals of a mineral known as clinopyroxene. Chassignites are rocks made almost entirely of the green mineral olivine – you might know the gem-quality variety of this mineral, peridot.

Along with the much more common shergottites, which are also basaltic rocks, and a few other more exotic Martian meteorite types, these categories of meteorite constitute all the rocks researchers possess from the red planet.

When studied together, nakhlites and chassignites tell researchers several things about Mars. First, as the molten rock that formed them oozed to the surface and eventually cooled and crystallized, some surrounding older rocks melted into them.

That older rock doesn’t exist in our meteorite collection, so my team had to tease out its composition from the chemical information we obtained from nakhlites. From this information, we learned that the older rock was basaltic in composition and chemically distinct from other Martian meteorites. We found that it had been chemically weathered by exposure to water and brine.

This older rock is quite different from the Martian crust samples in our meteorite collection today. In fact, it is much more like what we would expect the Martian crust to look like, based on data gathered by rover missions and satellites orbiting Mars.

We know that the magmas that made nakhlites and chassignites come from a distinct portion of Mars’ mantle. The mantle is the rocky portion between Mars’ crust and metallic core. These nakhlites and chassignites come from the solid rigid shell at the top of Mars’ mantle, known as the mantle lithosphere, and this source makes them distinct from the more common shergottites.

Shergottites come from at least two sources within Mars. They may come from parts of the mantle just beneath the lithosphere, or even the deep mantle, which is closer to the planet’s metallic core.

The interior structure of Mars, with the sources of meteorites indicated. James Day

Understanding how volcanoes on Mars work can inform future research questions to be addressed by missions to the planet. It can also help scientists understand whether the planet has ever been habitable for life, or if it could be in the future.

Hints at habitability

Earth’s active geological processes and volcanoes are part of what makes our planet habitable. The gases emanating from volcanoes are a major part of our atmosphere. So if Mars has similar geological processes, that could be good news for the potential habitability of the red planet.

Mars is much smaller than Earth, however, and studies suggest that it’s been losing the chemical elements essential for a sustainable atmosphere since it formed. It likely won’t look anything like Earth in the future.

Our next steps for understanding Mars lie in learning how the basaltic shergottite meteorites formed. These are a diverse and richly complex set of rocks, ranging in age from 175 million years to 2.4 billion years or so.

Studying these meteorites in greater detail will help to prepare the next generation of scientists to analyze rocks collected using the Perseverance rover for the forthcoming NASA Mars Sample Return mission.

James Day, Professor of Geosciences, University of California, San Diego

This article is republished from The Conversation under a Creative Commons license. 


Sunday 02 2024

I’m an astrophysicist mapping the universe with data from the Chandra X-ray Observatory − clear, sharp photos help me study energetic black holes

NASA’s Chandra X-ray Observatory detects X-ray emissions from astronomical events. NASA/CXC & J. Vaughan
Giuseppina Fabbiano, Smithsonian Institution

When a star is born or dies, or when any other very energetic phenomenon occurs in the universe, it emits X-rays, which are high-energy light particles that aren’t visible to the naked eye. These X-rays are the same kind that doctors use to take pictures of broken bones inside the body. But instead of looking at the shadows produced by the bones stopping X-rays inside of a person, astronomers detect X-rays flying through space to get images of events such as black holes and supernovae.

Images and spectra – charts showing the distribution of light across different wavelengths from an object – are the two main ways astronomers investigate the universe. Images tell them what things look like and where certain phenomena are happening, while spectra tell them how much energy the photons, or light particles, they are collecting have. Spectra can clue them in to how the event they came from formed. When studying complex objects, they need both imaging and spectra.
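As a small illustration of the physics behind a spectrum, each photon's energy E and wavelength are linked by E = hc/lambda, so converting between the two is a one-line calculation:

```python
# Converting photon energy to wavelength via E = h * c / lambda.
# Constants are standard physical values; the example energy is typical
# of the X-rays an observatory like Chandra detects.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
keV = 1.602e-16  # joules per kiloelectronvolt

def wavelength_nm(energy_keV):
    """Wavelength in nanometers of a photon with the given energy in keV."""
    return h * c / (energy_keV * keV) * 1e9

print(f"1 keV X-ray photon: {wavelength_nm(1.0):.2f} nm")
```

A 1 keV X-ray photon has a wavelength of about 1.2 nanometers, hundreds of times shorter than visible light; higher-energy photons are shorter still.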

Scientists and engineers designed the Chandra X-ray Observatory to detect these X-rays. Since 1999, Chandra’s data has given astronomers incredibly detailed images of some of the universe’s most dramatic events.

The Chandra craft, which looks like a long metal tube with six solar panels coming off it in two wings.
The Chandra spacecraft and its components. NASA/CXC/SAO & J.Vaughan

Stars forming and dying create supernova explosions that send chemical elements out into space. Chandra watches as gas and stars fall into the deep gravitational pulls of black holes, and it bears witness as gas that’s a thousand times hotter than the Sun escapes galaxies in explosive winds. It can see when the gravity of huge masses of dark matter trap that hot gas in gigantic pockets.

An explosion of light and color, and a cloud with points of bright light.
On the left is the Cassiopeia A supernova. The image is about 19 light years across, and different colors in the image identify different chemical elements (red indicates silicon, yellow indicates sulfur, cyan indicates calcium, purple indicates iron and blue indicates high energy). The point at the center could be the neutron star remnant of the exploded star. On the right are the colliding ‘Antennae’ galaxies, which form a gigantic structure about 30,000 light years across. Chandra X-ray Center

NASA designed Chandra to orbit around the Earth because it would not be able to see any of this activity from Earth’s surface. Earth’s atmosphere absorbs X-rays coming from space, which is great for life on Earth because these X-rays can harm biological organisms. But it also means that even if NASA placed Chandra on the highest mountaintop, it still wouldn’t be able to detect any X-rays. NASA needed to send Chandra into space.

I am an astrophysicist at the Smithsonian Astrophysical Observatory, part of the Center for Astrophysics | Harvard and Smithsonian. I’ve been working on Chandra since before it launched 25 years ago, and it’s been a pleasure to see what the observatory can teach astronomers about the universe.

Supermassive black holes and their host galaxies

Astronomers have found supermassive black holes, which have masses ten to 100 million times that of our Sun, in the centers of all galaxies. These supermassive black holes are mostly sitting there peacefully, and astronomers can detect them by looking at the gravitational pull they exert on nearby stars.

But sometimes, stars or clouds fall into these black holes, which activates them and makes the region close to the black hole emit lots of X-rays. Once activated, they are called active galactic nuclei, AGN, or quasars.

My colleagues and I wanted to better understand what happens to the host galaxy once its black hole turns into an AGN. We picked one galaxy, ESO 428-G014, to look at with Chandra.

An AGN can outshine its host galaxy, which means that more light comes from the AGN than all the stars and other objects in the host galaxy. The AGN also deposits a lot of energy within the confines of its host galaxy. This effect, which astronomers call feedback, is an important ingredient for researchers who are building simulations that model how the universe evolves over time. But we still don’t quite know how much of a role the energy from an AGN plays in the formation of stars in its host galaxy.

Luckily, images from Chandra can provide important insight. I use computational techniques to build and process images from the observatory that can tell me about these AGNs.

Three images of a black hole, from low to high resolution, with a bright spot above and right from the center surrounded by clouds.
Getting the ultimate Chandra resolution. From left to right, you see the raw image, the same image at a higher resolution and the image after applying a smoothing algorithm. G. Fabbiano

The active supermassive black hole in ESO 428-G014 produces X-rays that illuminate a large area, extending as far as 15,000 light years away from the black hole. The basic image that I generated of ESO 428-G014 with Chandra data tells me that the region near the center is the brightest, and that there is a large, elongated region of X-ray emission.

The same data, at a slightly higher resolution, shows two distinct regions with high X-ray emissions. There’s a “head,” which encompasses the center, and a slightly curved “tail,” extending down from this central region.

I can also process the data with an adaptive smoothing algorithm that brings the image into an even higher resolution and creates a clearer picture of what the galaxy looks like. This shows clouds of gas around the bright center.
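To give a feel for what an adaptive smoothing algorithm does, here is a minimal toy version; it is a simplified stand-in, not the actual algorithm used in the Chandra analysis. Each pixel is averaged over a window that grows until it has collected enough counts, so bright regions keep their sharpness while faint regions are smoothed heavily.

```python
import numpy as np

def adaptive_smooth(counts, min_counts=25, max_radius=8):
    """Toy adaptive smoother: average each pixel over the smallest square
    window that contains at least min_counts total counts."""
    n, m = counts.shape
    out = np.zeros_like(counts, dtype=float)
    for i in range(n):
        for j in range(m):
            for r in range(max_radius + 1):
                win = counts[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                if win.sum() >= min_counts or r == max_radius:
                    out[i, j] = win.mean()  # bigger window where counts are sparse
                    break
    return out

# Toy X-ray image: a bright "nucleus" sitting on a faint, noisy background
rng = np.random.default_rng(0)
img = rng.poisson(0.5, size=(64, 64)).astype(float)
img[30:34, 30:34] += 200.0
smoothed = adaptive_smooth(img)
```

On the toy image, the bright center survives at full resolution while the noisy background is averaged over large windows, which is roughly the effect described above for the processed images.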

My team has been able to see some of the ways the AGN interacts with the galaxy. The images show nuclear winds sweeping the galaxy, dense clouds and interstellar gas reflecting X-ray light, and jets shooting out radio waves that heat up clouds in the galaxy.

These images are teaching us how this feedback process operates in detail and how to measure how much energy an AGN deposits. These results will help researchers produce more realistic simulations of how the universe evolves.

The next 25 years of X-ray astronomy

The year 2024 marks the 25th year since Chandra started making observations of the sky. My colleagues and I continue to depend on Chandra to answer questions about the origin of the universe that no other telescope can.

By providing X-ray data, Chandra supplements information from the Hubble Space Telescope and the James Webb Space Telescope to give astronomers unique answers to open questions in astrophysics, such as where the supermassive black holes found at the centers of all galaxies came from.

For this particular question, astronomers used Chandra to observe a faraway galaxy first observed by the James Webb Space Telescope. This galaxy emitted the light captured by Webb 13.4 billion years ago, when the universe was young. Chandra’s X-ray data revealed a bright supermassive black hole in this galaxy and suggested that supermassive black holes may form from collapsing clouds in the early universe.

Sharp imaging has been crucial for these discoveries. But Chandra is expected to last only another 10 years. To keep the search for answers going, astronomers will need to start designing a “super Chandra” X-ray observatory that could succeed Chandra in future decades, though NASA has not yet announced any firm plans to do so.

Giuseppina Fabbiano, Senior Astrophysicist, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license.


Sunday 21 2024

Exploding stars send out powerful bursts of energy − I’m leading a citizen scientist project to classify and learn about these bright flashes

Gamma-ray bursts, as shown in this illustration, come from powerful astronomical events. NASA, ESA and M. Kornmesser
Amy Lien, University of Tampa

When faraway stars explode, they send out flashes of energy called gamma-ray bursts that are bright enough that telescopes back on Earth can detect them. Studying these pulses, which can also come from mergers of some exotic astronomical objects such as black holes and neutron stars, can help astronomers like me understand the history of the universe.

Space telescopes detect on average one gamma-ray burst per day, adding to the thousands of bursts detected over the years, and a community of volunteers is making research into these bursts possible.

On Nov. 20, 2004, NASA launched the Neil Gehrels Swift Observatory, also known as Swift. Swift is a multiwavelength space telescope that scientists are using to find out more about these mysterious gamma-ray flashes from the universe.

Gamma-ray bursts usually last for only a very short time, from a few seconds to a few minutes, and the majority of their emission is in the form of gamma rays, which are part of the light spectrum that our eyes cannot see. Gamma rays contain a lot of energy and can damage human tissues and DNA.

Fortunately, Earth’s atmosphere blocks most gamma rays from space, but that also means the only way to observe gamma-ray bursts is through a space telescope like Swift. Throughout its 19 years of observations, Swift has observed over 1,600 gamma-ray bursts. The information it collects from these bursts helps astronomers back on the ground measure the distances to these objects.

A cylindrical spacecraft, with two flat solar panels, one on each side.
NASA’s Swift observatory, which detects gamma rays. NASA E/PO, Sonoma State University/Aurore Simonnet

Looking back in time

The data from Swift and other observatories has taught astronomers that gamma-ray bursts are among the most powerful explosions in the universe. They’re so bright that space telescopes like Swift can detect them from across the entire universe.

In fact, gamma-ray bursts are among the farthest astrophysical objects ever observed by telescopes.

Because light travels at a finite speed, astronomers are effectively looking back in time as they look farther into the universe.

The farthest gamma-ray burst ever observed occurred so far away that its light took 13 billion years to reach Earth. So when telescopes took pictures of that gamma-ray burst, they observed the event as it looked 13 billion years ago.

Gamma-ray bursts allow astronomers to learn about the history of the universe, including how the birth rate and the mass of the stars change over time.

Types of gamma-ray bursts

Astronomers now know that there are basically two kinds of gamma-ray bursts – long and short. They are classified by how long their pulses last. The long gamma-ray bursts have pulses longer than two seconds, and at least some of these events are related to supernovae – exploding stars.

When a massive star, one at least eight times as massive as our Sun, runs out of fuel, it will explode as a supernova and collapse into either a neutron star or a black hole.

Both neutron stars and black holes are extremely compact. If you shrank the entire Sun into a diameter of about 12 miles, or the size of Manhattan, it would be as dense as a neutron star.
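That Manhattan-sized-Sun comparison is easy to check with a few lines of arithmetic (a rough estimate with rounded constants):

```python
import math

# Squeeze the Sun's mass into a sphere about 12 miles across
# and compute the resulting density.
M_SUN = 1.989e30            # solar mass, kg
RADIUS = 12 * 1609.34 / 2   # 12-mile diameter -> radius in meters

volume = (4 / 3) * math.pi * RADIUS**3
density = M_SUN / volume    # kg per cubic meter

print(f"{density:.1e} kg/m^3")  # roughly 5e17 kg/m^3
```

The result, around 5 × 10¹⁷ kg/m³, is indeed in the range quoted for neutron star matter: a teaspoon of it would weigh billions of tons.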

Some particularly massive stars can also launch jets of light when they explode. These jets are concentrated beams of light powered by structured magnetic fields and charged particles. When these jets are pointed toward Earth, telescopes like Swift will detect a gamma-ray burst.

Gamma-ray burst emission.

On the other hand, short gamma-ray bursts have pulses shorter than two seconds. Astronomers suspect that most of these short bursts happen when either two neutron stars or a neutron star and a black hole merge.
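The conventional boundary between the two classes is just a duration cut, which can be written in one line. (Real classifications use a measured duration statistic and, as discussed below, duration alone can be misleading; this sketch only encodes the two-second rule of thumb.)

```python
def classify_grb(duration_s: float) -> str:
    """Classify a gamma-ray burst by pulse duration using the
    conventional two-second boundary: shorter bursts are 'short',
    longer ones are 'long'."""
    return "short" if duration_s < 2.0 else "long"
```

For example, `classify_grb(0.3)` returns `"short"`, while `classify_grb(30.0)` returns `"long"`.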

When a neutron star gets too close to another neutron star or a black hole, the two objects will orbit around each other, creeping closer and closer as they lose some of their energy through gravitational waves.

These objects eventually merge and emit short jets. When the short jets are pointed toward Earth, space telescopes can detect them as short gamma-ray bursts.

Neutron star mergers emit gamma-ray bursts.

Classifying gamma-ray bursts

Classifying bursts as short or long isn’t always that simple. In the past few years, astronomers have discovered some peculiar short gamma-ray bursts associated with supernovae instead of the expected mergers. And they’ve found some long gamma-ray bursts related to mergers instead of supernovae.

These confusing cases show that astronomers do not fully understand how gamma-ray bursts are created. They suggest that astronomers need a better understanding of gamma-ray pulse shapes to better connect the pulses to their origins.

But pulse shape, which is distinct from pulse duration, is hard to classify systematically. Pulse shapes can be extremely diverse and complex. So far, even machine learning algorithms haven’t been able to correctly recognize all the detailed pulse structures that astronomers are interested in.

Community science

My colleagues and I have enlisted the help of volunteers through NASA to identify pulse structures. Volunteers learn to identify the pulse structures, then they look at images on their own computers and classify them.

Our preliminary results suggest that these volunteers – also referred to as citizen scientists – can quickly learn and recognize gamma-ray pulses’ complex structures. Analyzing this data will help astronomers better understand how these mysterious bursts are created.

Our team hopes to learn whether more gamma-ray bursts in the sample challenge the traditional short-long classification. We’ll use the data to more accurately probe the history of the universe through gamma-ray burst observations.

This citizen science project, called Burst Chaser, has grown since our preliminary results, and we’re actively recruiting new volunteers to join our quest to study the mysterious origins behind these bursts.

Amy Lien, Assistant Professor of Physics, University of Tampa

This article is republished from The Conversation under a Creative Commons license. 


Sunday 31 2024

Lessons from sports psychology research


Scientists are probing the head games that influence athletic performance, from coaching to coping with pressure

Since the early years of this century, it has been commonplace for computerized analyses of athletic statistics to guide a baseball manager’s choice of pinch hitter, a football coach’s decision to punt or pass, or a basketball team’s debate over whether to trade a star player for a draft pick.

But many sports experts who actually watch the games know that the secret to success is not solely in computer databases, but also inside the players’ heads. So perhaps psychologists can offer as much insight into athletic achievement as statistics gurus do.

Sports psychology has, after all, been around a lot longer than computer analytics. Psychological studies of sports appeared as early as the late 19th century. During the 1970s and ’80s, sports psychology became a fertile research field. And within the last decade or so, sports psychology research has exploded, as scientists have explored the nuances of everything from the pursuit of perfection to the harms of abusive coaching.

“Sport pervades cultures, continents, and indeed many facets of daily life,” write Mark Beauchamp, Alan Kingstone and Nikos Ntoumanis, authors of an overview of sports psychology research in the 2023 Annual Review of Psychology.

Their review surveys findings from nearly 150 papers investigating various psychological influences on athletic performance and success. “This body of work sheds light on the diverse ways in which psychological processes contribute to athletic strivings,” the authors write. Such research has the potential not only to enhance athletic performance, they say, but also to provide insights into psychological influences on success in other realms, from education to the military. Psychological knowledge can aid competitive performance under pressure, help evaluate the benefit of pursuing perfection and assess the pluses and minuses of high self-confidence.

Confidence and choking

In sports, high self-confidence (technical term: elevated self-efficacy belief) is generally considered to be a plus. As baseball pitcher Nolan Ryan once said, “You have to have a lot of confidence to be successful in this game.” Many a baseball manager would agree that a batter who lacks confidence against a given pitcher is unlikely to get to first base.


And in fact, a lot of psychological research actually supports that view, suggesting that encouraging self-confidence is a beneficial strategy. Yet while confident athletes do seem to perform better than those afflicted with self-doubt, some studies hint that for a given player, excessive confidence can be detrimental. Artificially inflated confidence, unchecked by honest feedback, may cause players to “fail to allocate sufficient resources based on their overestimated sense of their capabilities,” Beauchamp and colleagues write. In other words, overconfidence may result in underachievement.

Other work shows that high confidence is usually most useful in the most challenging situations (such as attempting a 60-yard field goal), while not helping as much for simpler tasks (like kicking an extra point).

Of course, the ease of kicking either a long field goal or an extra point depends a lot on the stress of the situation. With time running out and the game on the line, a routine play can become an anxiety-inducing trial by fire. Psychological research, Beauchamp and coauthors report, has clearly established that athletes often exhibit “impaired performance under pressure-invoking situations” (technical term: “choking”).

In general, stress impairs not only the guidance of movements but also perceptual ability and decision-making. On the other hand, it’s also true that certain elite athletes perform best under high stress. “There is also insightful evidence that some of the most successful performers actually seek out, and thrive on, anxiety-invoking contexts offered by high-pressure sport,” the authors note. Just ask Michael Jordan or LeBron James.

Many studies have investigated the psychological coping strategies that athletes use to maintain focus and ignore distractions in high-pressure situations. One popular method is a technique known as the “quiet eye.” A basketball player attempting a free throw is typically more likely to make it by maintaining “a longer and steadier gaze” at the basket before shooting, studies have demonstrated.

“In a recent systematic review of interventions designed to alleviate so-called choking, quiet-eye training was identified as being among the most effective approaches,” Beauchamp and coauthors write.

Another common stress-coping method is “self-talk,” in which players utter instructional or motivational phrases to themselves in order to boost performance. Saying “I can do it” or “I feel good” can self-motivate a marathon runner, for example. Saying “eye on the ball” might help a baseball batter get a hit.

Researchers have found moderate benefits of self-talk strategies for both novices and experienced athletes, Beauchamp and colleagues report. Various studies suggest that self-talk can increase confidence, enhance focus, control emotions and initiate effective actions.

Moderate performance benefits have also been reported for other techniques for countering stress, such as biofeedback, and possibly meditation and relaxation training.

“It appears that stress regulation interventions represent a promising means of supporting athletes when confronted with performance-related stressors,” Beauchamp and coauthors conclude.

Pursuing athletic perfection

Of course, sports psychology encompasses many other issues besides influencing confidence and coping with pressure. Many athletes set a goal of attaining perfection, for example, but such striving can induce detrimental psychological pressures. One analysis found that athletes pursuing purely personal high standards generally achieved superior performance. But when perfectionism was motivated by fear of criticism from others, performance suffered.

Similarly, while some coaching strategies can aid a player’s performance, several studies have shown that abusive coaching can detract from performance, even for the rest of an athlete’s career.

Beauchamp and his collaborators conclude that a large suite of psychological factors and strategies can aid athletic success. And these factors may well be applicable to other areas of human endeavor where choking can impair performance (say, while performing brain surgery or flying a fighter jet).

But the authors also point out that researchers must account for the adversarial nature of competition in sports. A pitcher’s psychological strategies that work against most hitters might not fare so well against Shohei Ohtani, for instance.

Besides that, sports psychology studies (much like computer-based analytics) rely on statistics. As Adolphe Quetelet, a pioneer of social statistics, emphasized in the 19th century, statistics do not define any individual — average life expectancy cannot tell you when any given person will die. On the other hand, he noted, no single exceptional case invalidates the general conclusions from sound statistical analysis.

Sports are, in fact, all about the quest of the individual (or a team) to defeat the opposition. Success often requires defying the odds — which is why gambling on athletic events is such a big business. Sports consist of contests between the averages and the exceptions, and neither computer analytics nor psychological science can tell you in advance who is going to win. That’s why they play the games.

Knowable 

Monday 25 2024

An eclipse for everyone – how visually impaired students can ‘get a feel for’ eclipses


A solar eclipse approaching totality. AP Photo/Richard Vogel, File
Cassandra Runyon, College of Charleston and David Hurd, Pennsylvania Western University

Many people in the U.S. will have an opportunity to witness nearly four minutes of a total solar eclipse on Monday, April 8, 2024, as it moves from southern Texas to Maine. But in the U.S., over 7 million people are blind or visually impaired and may not be able to experience an eclipse the traditional way.

Of course they, like those with sight, will feel colder as the Sun’s light is shaded, and will hear the songs and sounds of birds and insects change as the light dims and brightens. But much of an eclipse is visual.

We are a planetary scientist and an astronomer who, with funding and support from NASA’s Solar System Exploration Research Virtual Institute, have created and published a set of tactile graphics, or graphics with raised and textured elements, on the 2024 total solar eclipse.

The guide, called “Getting a Feel for Eclipses,” illustrates the paths of the 2017 total, 2023 annular and 2024 total solar eclipses. In a total eclipse, the Moon fully blocks the Sun from Earth view, while during an annular eclipse, a narrow ring of sunlight can be seen encircling the Moon.

The tactile graphics and associated online content detail the specific alignment of the Earth, Moon and Sun under which eclipses occur.

To date, we have distributed almost 11,000 copies of this book to schools for the blind, state and local libraries, the Library of Congress and more.

A map of the US with three curved lines stretching across, indicating the eclipses of 2024, 2023 and 2017.
The ‘Getting a Feel for Eclipses’ guide helps blind and visually impaired people learn about the eclipse. NASA SSERVI

Why publish a tactile book on eclipses?

NASA has lots of explanatory material that helps people visualize and understand rare phenomena like eclipses. But for people with visual impairments, maps and images don’t help. For tactile readers, their sense of touch is their vision. That’s where this guide and our other tactile books come in.

Over 65,000 students in the U.S. are blind or visually impaired. After working with several of our students who are totally blind, we wanted to find out how to make events like eclipses as powerful for these students as they are for us. We also wanted to help our students visualize and understand the concept of an eclipse.

These aims resulted in the three tactile graphics, which are physical sheets with textures and raised surfaces that can be interpreted through touch, as well as online content.

The first tactile graphic models the alignment of the Earth, Moon and Sun. The second illustrates the phases of an eclipse as the Moon moves in between the Earth and Sun to full totality, and then out of the way. The third includes a map of the continental U.S. that illustrates the paths of three eclipses: the Aug. 21, 2017, total eclipse, the Oct. 14, 2023, annular eclipse and the Apr. 8, 2024, total eclipse. We used different textures to illustrate these concepts.

Each book includes a QR code on the front cover, outlined by a raised square boundary. The code links to an online guide that leads the user through the content behind the graphics while also providing background information. With the online content, users may opt to print the information in large font or have it read to them by a device.

Although initially created to assist visually impaired audiences, these books are still helpful resources for those with sight. Some students can see but might learn better when able to explore the tactile parts of the guide while listening to the audio. Often it’s helpful for students to get the same information presented in different styles, with options to read or have the content information read to them.

A sheet of paper with raised textures labeled Sun, Umbra, Moon and Totality, with three students touching the textures.
Students at Florida School for the Deaf and Blind in St. Augustine explore tactiles 1 and 2. Florida School for the Deaf and Blind

How are the books made?

We hand-make each book, starting by identifying which science concepts the user will likely want to know and which illustrations can support those concepts.

Once identified, the next step is to create a tactile master, or model, which has one or more raised textures that help to define the science concepts. We pick a set of unique textures to use on the master to signify different items, so the Sun feels different than the Earth. This way, the textures of the graphics become part of the story being shared.

For example, in a model of the Sun’s surface, we use Spanish moss to create the dynamic texture of the Sun. In past projects, we’ve used textures like doll hair, sand and differently textured cardboard to illustrate planet features, instruments on spacecraft, fine surface features and more. Then, we add Braille labels for figure titles, key features and specific notes.

A circle filled with moss.
The tactile master – Spanish moss – used for the Sun. Cassandra Runyon

Once we’ve finished making the masters and laying out each page, a small family print shop – McCarty Printing in Erie, Pennsylvania – prints the page titles and key feature labels on Brailon, a type of plastic paper.

Once printed, we place the masters and the Brailon sheets on a thermoform machine, which heats the sheets and creates a vacuum that forms the final tactile graphics. Then, we return the pages to McCarty Printing for binding.

Viewing and experiencing the eclipse

Like fully sighted people, people with partial vision should avoid looking directly at the Sun. Instead, everyone should use eclipse glasses. If you don’t have eclipse glasses, you can use an indirect viewing method such as a colander or pinhole projector.

As the eclipse approaches totality, take time to enjoy your surroundings, feel the changes in temperature and light, and note how the animals around you react to the remarkable event using another of your senses – sound.

Cassandra Runyon, Professor of Geology & Environmental Geosciences, College of Charleston and David Hurd, Professor of Geosciences, Pennsylvania Western University

This article is republished from The Conversation under a Creative Commons license.


Wednesday 13 2024

Total solar eclipses, while stunning, can damage your eyes if viewed without the right protection

Solar eclipses don’t come around often, but make sure to view these rare events with eclipse glasses to protect your vision. AP Photo/Charlie Riedel
Geoffrey Bradford, West Virginia University

On April 8, 2024, and for the second time in the past decade, people in the U.S. will have an opportunity to view a total solar eclipse. But to do so safely, you’ll need to wear proper protection, or risk eye damage.

Earth is the only planet in our solar system where the Moon appears just large enough to completely cover the Sun, making total solar eclipses possible. During these celestial events, the Moon passes between our planet and the Sun, blocking its light and casting a shadow over the Earth. Total eclipses rarely happen multiple times in the same region of a country during one’s lifetime.

The path of totality for this spring’s eclipse, where you can view the total eclipse, will follow a roughly 100-mile-wide track crossing Mexico, Texas, New England and eastern Canada.

Those in the path of totality will have the opportunity to see a total solar eclipse this April.

As excitement for the celestial show grows across the country, hotels in the path of totality have been booked up by eclipse enthusiasts. Museums and schools have planned viewing events, and researchers have developed technology for the visually impaired and those with hearing loss so more people have the opportunity to experience the eclipse.

Seeing an eclipse is a rare and special opportunity, but as an ophthalmologist, I know that looking directly at the Sun, even for a few moments, can severely damage your eyes. With a few easy precautions, eclipse viewers can protect themselves from severe and irreparable eye damage and vision loss.

Safe eclipse viewing

This year’s eclipse will unfold over a 75-minute period, from the moment the Moon starts to partially block the Sun until it completely moves away from it again.

During the partial eclipse period, when the Moon is partly blocking the Sun, you should never look directly at the Sun nor through binoculars, cameras or cellphones. Sunglasses, photographic filters, exposed color film and welding glasses will dim the sunlight, but these items do not prevent eye damage from the Sun’s very intense light rays.

Only solar eclipse glasses with filters designed specifically for observing the partial eclipse are safe to use. They are easily available from a variety of sources, and you can wear them by themselves or over your glasses or contact lenses.

Keep in mind that these safety filters will permit you to view only the eclipse itself, as they black out everything but the Sun. Before purchasing a pair, make sure your eclipse glasses meet the ISO 12312-2 international standard.

Only during its period of totality, the time when the Sun is fully behind the Moon, is it safe to remove your filtered glasses – and then only with caution.

This year, totality will last an unusually long four and a half minutes. If you leave your eclipse glasses on, you will miss seeing the Sun’s bright ring, or corona, behind the Moon. But then, as the Moon moves on, the sky will brighten and you’ll need to put the eclipse glasses back on.

Eyes and light

While the pupils of our eyes naturally constrict to limit bright light, and our eyes have pigments to absorb light, direct sunlight overwhelms these functions. Even viewing the Sun for a few brief moments can cause permanent vision loss.

The Sun emits intense ultraviolet and infrared light, which, while not visible to the human eye, can burn sensitive ocular tissues, such as the cornea and retina.

A diagram of an eye as viewed from the side.
The cornea is the clear front surface of the eye, which lets light in. The retina is the inner lining of the back part of the eye, which sends signals to your brain, allowing you to see. American Association for Pediatric Ophthalmology and Strabismus

Corneal damage from sunlight, called photokeratitis, or solar keratitis, can blur vision and be quite painful. While the cornea can heal itself, recovery may take several days and lead to lost time at work or school.

Retinal damage, called solar retinopathy, occurs inside the eye. While it isn’t painful, it can be more severe than corneal damage and can dramatically impair vision. Solar retinopathy symptoms include a blind spot in one’s central vision, visual distortions and altered color vision.

In mild cases, these symptoms may go away, but in more severe cases, and even with treatment, they may become permanent.

To both enjoy the eclipse and prevent eye damage, make sure you and your loved ones view the event with proper precautions.

Geoffrey Bradford, Professor of Pediatrics and Ophthalmology, West Virginia University

This article is republished from The Conversation under a Creative Commons license. 

Sunday 25 2024

Making sense of many universes

The idea of a multiverse — multiple realms of space differing in basic properties of physics — bugs some scientists. Others find it a real possibility that should not be ignored.

Almost anybody who has ever thought deeply about the universe sooner or later wonders if there is more than one of them. Whether a multiplicity of universes — known as a multiverse — actually exists has been a contentious issue since ancient times. Greek philosophers who believed in atoms, such as Democritus, proposed the existence of an infinite number of universes. But Aristotle disagreed, insisting that there could be only one.

Today a similar debate rages over whether multiple universes exist. In recent decades, advances in cosmology have implied (but not proved) the existence of a multiverse. In particular, a theory called inflation suggests that in the instant after the Big Bang, space inflated rapidly for a brief time and then expanded more slowly, creating the vast bubble of space in which the Earth, sun, Milky Way galaxy and billions of other galaxies reside today. If this inflationary cosmology theory is correct, similar big bangs occurred many times, creating numerous other bubbles of space like our universe.

Properties such as the mass of basic particles and the strength of fundamental forces may differ from bubble to bubble. In that case, the popular goal pursued by many physicists of finding a single theory that prescribes all of nature’s properties may be in vain. Instead, a multiverse may offer various locales, some more hospitable to life than others. Our universe must be a bubble with the right combination of features to create an environment suitable for life, a requirement known as the anthropic principle.

But many scientists object to the idea of the multiverse and the anthropic reasoning it enables. Some even contend that studying the multiverse doesn’t count as science. One physicist who affirms that the multiverse is a proper subject for scientific investigation is John Donoghue of the University of Massachusetts, Amherst.

As Donoghue points out in the 2016 Annual Review of Nuclear and Particle Science, the Standard Model of Particle Physics — the theory describing the behavior of all of nature’s basic particles and forces — does not specify all of the universe’s properties. Many important features of nature, such as the masses of the particles and strengths of the forces, cannot be calculated from the theory’s equations. Instead they must be measured. It’s possible that in other bubbles, or even in distant realms within our bubble but beyond the reach of our telescopes, those properties might be different.

Maybe some future theory will show why nature is the way it is, Donoghue says, but maybe reality does encompass multiple possibilities. The true theory describing nature might permit many stable “ground states,” corresponding to the different cosmic bubbles or distant realms of space with different physical features. A multiverse of realms with different ground states would support the view that the universe’s habitability can be explained by the anthropic principle — we live in the realm where conditions are suitable — and not by a single theory that specifies the same properties everywhere.

Knowable Magazine quizzed Donoghue about the meaning of the multiverse, the issues surrounding anthropic reasoning and the argument that the idea of a multiverse is not scientific. His answers have been edited for brevity and clarity.

Can you explain just what you mean by multiverse?

For me, at least, the multiverse is the idea that physically out there, beyond where we can see, there are portions of the universe that have different properties than we see locally. We know the universe is bigger than we can see. We don’t know how much bigger. So the question is, is it the same everywhere as you go out or is it different?

If there is a multiverse, is the key point not just the existence of different realms, but that they differ in their properties in important ways?

If it’s just the same all the way out, then the multiverse is not relevant. The standard expectation is that aside from random details — like here’s a galaxy, there’s a galaxy, here’s empty space — that it’s more or less uniform everywhere in the greater universe. And that would happen if you have a theory like the Standard Model where there’s basically just one possible way that the model looks. It looks the same everywhere. It couldn’t be different.

Isn’t that what most physicists would hope for?

Probably literally everyone’s hope is that we would someday find a theory and all of a sudden everything would become clear — there would be one unique possibility, it would be tied up, there would be no choice but this was the theory. Everyone would love that.

But the Standard Model does not actually specify all the numbers describing the properties of nature, right?

The structure of the Standard Model is fixed by a symmetry principle. That’s the beautiful part. But within that structure there’s freedom to choose various quantities like the masses of the particles and the charges, and these are the parameters of the theory. These are numbers that are not predicted by the theory. We’ve gone out and we’ve measured them. We would like eventually that those are predicted by some other theory. But that’s the question, whether they are predicted or whether they are in some sense random choices in a multiverse.

The example I use in the paper is the distance from the Earth to the sun. If you were studying the solar system, you’d see various regularities and a symmetry, a spherically symmetrical force. The fact that the force goes like 1 over the radius squared is a consequence of the underlying theory. So you might say, well, I want to predict the radius of Earth’s orbit. And Kepler tried to do this and came up with a very nice geometric construction, which almost worked. But now we know that this is not something fundamental — it’s an accident of the history. The same laws that give our solar system with one Earth-to-sun distance will somewhere else give a different solar system with a different distance for the planets. They’re not predictable. So the physics question for us then is, are the parameters like the mass of the electron something that’s fundamentally predictable from some more fundamental theory, or is it the accident of history in our patch of the universe?

How does the possibility of a multiverse affect how we interpret the numbers in the Standard Model?

We’ve come to understand how the Standard Model produces the world. So then you could actually ask the scientific question: What if the numbers in the Standard Model were slightly different? Like the mass of the electron or the charge on the electron. One of the surprises is, if you make very modest changes in these parameters, then the world changes dramatically. Why does the electron have the mass it does? We don’t know. If you make it three times bigger, then all the atoms disappear, so the world is a very, very different place. The electrons get captured onto protons and the protons turn into neutrons, and so you end up with a very strange universe that’s very different from ours. You would not have any chance of having life in such a universe.
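The instability he describes can be checked with a back-of-the-envelope estimate. The inputs below are the measured neutron-proton mass difference and electron rest energy; the arithmetic is my illustration, not a calculation taken from the interview:

```latex
% Measured rest energies:
m_n - m_p \approx 1.293\ \mathrm{MeV}, \qquad
m_e \approx 0.511\ \mathrm{MeV}
% Today m_e < m_n - m_p, so hydrogen is stable and it is the
% free neutron that decays: n \to p + e^- + \bar{\nu}_e.
% With the electron three times heavier:
3\, m_e \approx 1.533\ \mathrm{MeV} > m_n - m_p
% electron capture becomes energetically favorable,
p + e^- \to n + \nu_e,
% releasing roughly 1.533 - 1.293 \approx 0.24\ \mathrm{MeV}
% per hydrogen atom, so atoms are no longer stable.
```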

Are there other changes in the Standard Model numbers that would have such dramatic effects?

My own contribution here is about the Higgs field [the field that is responsible for the Higgs boson]. Its value is much smaller than the range you would expect within the Standard Model. But if you change it by even a bit, then atoms don’t form and nuclei don’t form — again, the world changes dramatically. My collaborators and I were the ones who pointed that out.

There are maybe six or seven of these constraints — parameters of the Standard Model that have to be just so in order to satisfy the need for atoms, the need for stars, planets, et cetera. So about six combinations of the parameters are constrained anthropically.

By “anthropically,” you mean that these parameters are constrained to narrow values in order to have a universe where life can exist. That is an old idea known as the anthropic principle, which has historically been unpopular with many physicists.

Yes, I think almost anybody would prefer to have a well-developed theory that doesn’t have to invoke any anthropic reasoning. But nevertheless it’s possible that these types of theories occur. To not consider them would also be unscientific. So you’re forced into looking at them because we have examples where it would occur.

Historically there’s a lot of resistance to anthropic reasoning, because at least the popular explanations of it seem to get causality backwards. It was sort of saying that we [our existence] determine the parameters of the universe, and that didn’t feel right. The modern version of it, with the multiverse, is more physical in the sense that if you do have these differing domains with different parameters, we would only find ourselves in one that allows atoms and nuclei. So the causality is right. The parameters are such that we can be here. The modern view is more physical.

If there is a multiverse, then doesn’t that change some of the goals of physics, such as the search for a unified theory of everything, and require some sort of anthropic reasoning?

What we can know may depend on things that may end up being out of our reach to explore. The idea that we should be searching for a unified theory that explains all of nature may in fact be the wrong motivation. It’s certainly true that multiverse theories raise the possibility that we will never be able to answer these questions. And that’s disturbing.

Does that mean the multiverse changes some of the questions that physicists should be asking?

We certainly still should be trying to answer “how” questions, such as how the W boson or the Higgs boson decays, to try to get our best description of nature. And we have to realize we may not be able to get the ultimate theory, because we may not be able to probe enough of the universe to answer certain questions. That’s a discouraging feature. I have to admit that when I first heard of anthropic reasoning in physics, my stomach sank. It kills some of the things that you’d like to do.

Don’t some people even argue that though a multiverse would seem to justify anthropic reasoning, that approach should still be regarded as not scientific?

It’s one of the things that bothers me about the discussion. Just because you feel bad about the multiverse, and just because some aspects of it are beyond reach for testing, doesn’t mean that it’s wrong. So it’s worth considering, and worth looking within the class of multiverse theories to see what it is that we could know. How does it change our motivations? How does it change the questions that we ask? And to say that the multiverse is not science is itself not science. You’re throwing out a particular type of possible physical theory on nonscientific grounds. But it does raise long-term issues about how much we could understand about the ultimate theory when we can only look locally. It’s science, sometimes a frustrating bit of science, but we have to see what ideas become fruitful and what happens.

An important part of investigating the multiverse is finding a theory that includes multiple “ground states.” What does that mean?

The ground state is the state that you get when you take all the energy out of a system. Normally if you take away all the particles, what’s left is your ground state: all the background fields, the things that permeate space. Our ground state is described by the Standard Model. The ground state tells you exactly what particles will look like when you put them back in; they will have certain masses and certain charges.

You could imagine that there are theories which have more than one ground state, and if you put particles in this state they look one way and if you put particles in another state they look another way — they might have different masses. The multiverse corresponds to the hypothesis that there are very many ground states, lots and lots of them, and in the bigger universe they are realized in different parts of the universe.

Even if a theory of particles and forces can accommodate multiple ground states, don’t you need a method of creating those ground states?

Two features have to happen. You have to have the possibility of multiple ground states, and then you have to have a mechanism to produce them. In our present theories, producing them is the easier part, because inflationary cosmology has the ability to do this. Finding theories that have enough ground states is the more difficult requirement. But that’s a science question. Is there one, are there two, are there many?

Superstring theory encompasses multiple ground states, described as the “string landscape.” Is that an example of the kind of theory that might imply a multiverse?

The string landscape is one of the ways we know that this [multiple ground states] is a physical possibility. You can start counting the number of states in string theory, and you get an enormous number, 10 to the 500. So we have at least one theory that has this property of having a very large number of ground states. And there could be more. People have tried cooking up other theories that have that possibility as well. So it is a physical possibility.

Don’t critics say that neither string theory nor inflationary cosmology has been definitively established?

That’s true of all theories beyond the Standard Model. None of them are established yet. So we can’t really say with any confidence that there is a multiverse. It’s a physical possibility. It may be wrong. But it still may be right.