The Conversation

Evidence of ancient Atlantic hurricanes on the ocean floor foretells bad news for the future

If you look back at the history of Atlantic hurricanes since the late 1800s, it might seem hurricane frequency is on the rise.

The year 2020 had the most tropical cyclones on record in the Atlantic, with 31, and 2021 had the third-highest count, behind 2020 and 2005. The past decade saw five of the six most destructive Atlantic hurricanes in modern history.

Then a year like 2022 comes along, with no major hurricane landfalls until Fiona and Ian struck in late September. The Atlantic hurricane season, which ends Nov. 30, has had eight hurricanes and 14 named storms. It’s a reminder that small sample sizes can be misleading when assessing trends in hurricane behavior. There is so much natural variability from year to year and even decade to decade that we need to look much further back in time for the real trends to become clear.

Fortunately, hurricanes leave behind telltale evidence that goes back millennia.

Two thousand years of this evidence indicates that the Atlantic has experienced even stormier periods in the past than we’ve seen in recent years. That’s not good news. It tells coastal oceanographers like me that we may be significantly underestimating the threat hurricanes pose to Caribbean islands and the North American coast in the future.

The natural records hurricanes leave behind

When a hurricane nears land, its winds whip up powerful waves and currents that can sweep coarse sands and gravel into marshes and deep coastal ponds, sinkholes and lagoons.

Under normal conditions, fine sand and organic matter like leaves and seeds fall into these areas and settle to the bottom. So when coarse sand and gravel wash in, a distinct layer is left behind.

Imagine cutting through a layer cake – you can see each layer of frosting. Scientists can see the same effect by plunging a long tube into the bottom of these coastal marshes and ponds and pulling up several meters of sediment in what’s known as a sediment core. By studying the layers in sediment, we can see when coarse sand appeared, suggesting an extreme coastal flood from a hurricane.

With these sediment cores, we have been able to document evidence of Atlantic hurricane activity over thousands of years.

Image: A sediment core, with a photo of one section showing a sand layer. Red dots indicate large sand deposits going back about 1,060 years; yellow dots are estimated dates from radiocarbon dating of small shells. (Tyler Winkler)

We now have dozens of chronologies of hurricane activity at different locations – including New England, the Florida Gulf Coast, the Florida Keys and Belize – that reveal decade- to century-scale patterns in hurricane frequency.

Other records – including those from Atlantic Canada, North Carolina, northwestern Florida, Mississippi and Puerto Rico – are lower-resolution, meaning it is nearly impossible to discern individual hurricane layers deposited within decades of one another. But they can be highly informative for determining the timing of the most intense hurricanes, which can have significant impacts on coastal ecosystems.

It’s the records from the Bahamas, however, with nearly annual resolution, that are crucial for seeing the long-term picture for the Atlantic Basin.

Why the Bahamas are so important

The Bahamas are exceptionally vulnerable to the impacts of major hurricanes because of their geographic location.

In the North Atlantic, 85% of all major hurricanes form in what is known as the Main Development Region, off western Africa. Looking just at observed hurricane tracks from the past 170 years, my analysis shows that about 86% of major hurricanes that affect the Bahamas also form in that region, suggesting the frequency variability in the Bahamas may be representative of the basin.

Image: Atlantic hurricane tracks from 1851 to 2012 – most storms form off Africa, head west and then curve northward. (Nilfanion/Wikimedia)

A substantial percentage of North Atlantic storms also pass over or near these islands, so these records appear to reflect changes in overall North Atlantic hurricane frequency through time.

By coupling coastal sediment records from the Bahamas with records from sites farther north, we can explore how changes in ocean surface temperatures, ocean currents, global-scale wind patterns and atmospheric pressure gradients affect regional hurricane frequency.

As sea surface temperatures rise, warmer water provides more energy that can fuel more powerful and destructive hurricanes. However, the frequency of hurricanes – how often they form – isn’t necessarily affected in the same way.

Image: Hurricane Dorian sat over the Bahamas as a powerful Category 5 storm in 2019. (Laura Dauphin/NASA Earth Observatory)

The secrets hidden in blue holes

Some of the best locations for studying past hurricane activity are large, near-shore sinkholes known as blue holes.

Blue holes get their name from their deep blue color. They formed when carbonate rock dissolved to form underwater caves. Eventually, the ceilings collapsed, leaving behind sinkholes. The Bahamas has thousands of blue holes, some as wide as a third of a mile and as deep as a 60-story building.

They tend to have deep vertical walls that can trap sediments – including sand transported by strong hurricanes. Fortuitously, deep blue holes often have little oxygen at the bottom, which slows decay, helping to preserve organic matter in the sediment through time.

Image: Hine’s Blue Hole in the Bahamas is about 330 feet (100 meters) deep; seismic imaging shows about 200 feet (60-plus meters) of accumulated sediment. (Pete van Hengstum; Tyler Winkler)

Cracking open a sediment core

When we bring up a sediment core, the coarse sand layers are often evident to the naked eye. But closer examination can tell us much more about these hurricanes of the past.

I use X-rays to measure changes in the density of sediment, X-ray fluorescence to examine elemental changes that can reveal if sediment came from land or sea, and sediment textural analysis that examines the grain size.

To figure out the age of each layer, we typically use radiocarbon dating. By measuring the amount of carbon-14, a radioactive isotope, in shells or other organic material found at various points in the core, I can create a statistical model that predicts the age of sediments throughout the core.
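
As a simplified illustration of the idea behind that step – not the calibrated, statistical age-depth model actually used in these studies – a sample’s age can be estimated from how much of its original carbon-14 remains, because carbon-14 decays with a half-life of about 5,730 years:

t \approx \frac{t_{1/2}}{\ln 2}\,\ln\!\left(\frac{N_0}{N}\right), \qquad t_{1/2} \approx 5{,}730 \text{ years}

Here N_0 is the carbon-14 content expected in living material and N is the amount measured in the shell, so a sample retaining half of its carbon-14 is roughly 5,730 years old and one retaining a quarter is roughly 11,460 years old. In practice, these raw ages are calibrated against known atmospheric carbon-14 records and combined across many depths to date every layer in the core.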

So far, my colleagues and I have published five paleohurricane records with nearly annual detail from blue holes on islands across the Bahamas.

Each record shows periods of significant increase in storm frequency lasting decades and sometimes centuries.

Image: Map of hurricane frequency (Category 2 or above) from 1850 to 2019, with parts of Florida, Louisiana and North Carolina recording nine to 10 storms; red dots mark the sites of high-resolution paleohurricane records. (Tyler Winkler)

The records vary, showing that a single location might not reflect broader regional trends.

For example, Thatchpoint Blue Hole on Great Abaco Island in the northern Bahamas includes evidence of at least 13 hurricanes per century that were Category 2 or above between the years 1500 and 1670. That significantly exceeds the rate of nine per century documented since 1850. During the same period, 1500 to 1670, blue holes at Andros Island, just 186 miles (300 kilometers) south of Abaco, documented the lowest levels of local hurricane activity observed in this region during the past 1,500 years.

Spotting patterns across the Atlantic Basin

Together, however, these records offer a glimpse of broad regional patterns. They’re also giving us new insight into the ways ocean and atmospheric changes can influence hurricane frequency.

While rising sea surface temperatures provide more energy that can fuel more powerful and destructive hurricanes, their frequency – how often they form – isn’t necessarily affected in the same way. Some studies have predicted the total number of hurricanes will actually decrease in the future.

Image: Eight stacked paleohurricane chronologies showing corresponding periods of higher hurricane frequency. The highlighted periods cover the Little Ice Age, a time of cooler conditions in the North Atlantic from about 1300 to 1850, and the Medieval Warm Period, from about 900 to 1250. (Tyler Winkler)

The compiled Bahamian records document substantially higher hurricane frequency in the northern Caribbean during the Little Ice Age, around 1300 to 1850, than in the past 100 years.

That was a time when North Atlantic surface ocean temperatures were generally cooler than they are today. But it also coincided with an intensified West African monsoon. The monsoon could have produced more thunderstorms off the western coast of Africa, which act as low-pressure seeds for hurricanes.

Steering winds and vertical wind shear likely also affect a region’s hurricane frequency over time. The Little Ice Age active interval observed in most Bahamian records coincides with increased hurricane strikes along the U.S. Eastern Seaboard from 1500 to 1670, but at the same time it was a quieter period in the Gulf of Mexico, central Bahamas and southern Caribbean.

Records from sites farther north tell us more about regional climate influences. That’s because changes in ocean temperature and climate conditions are likely far more important in controlling regional impacts in areas such as the Northeastern U.S. and Atlantic Canada, where cooler conditions are often unfavorable for storms.

A warning for the islands

I am currently developing records of coastal storminess in locations including Newfoundland and Mexico. With those records, we can better anticipate the impacts of future climate change on storm activity and coastal flooding.

In the Bahamas, meanwhile, sea level rise is putting the islands at increasing risk, so even weaker hurricanes can produce damaging flooding. Given that storms are expected to be more intense, any increase in storm frequency could have devastating impacts.

Tyler Winkler, Postdoctoral Researcher in Oceanography, Woods Hole Oceanographic Institution

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Do regulations on animal research ensure it is done ethically? What does that even mean?

A proposed measure in Switzerland would have made that country the first to ban medical and scientific experimentation on animals. It failed to pass in February 2022, with only 21% of voters in favor. Yet globally, including in the United States, there is concern about whether animal research is ethical.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

We are scientists who support ethical animal research that reduces suffering of humans and animals alike by helping researchers discover the causes of disease and how to treat it. One of us is a neuroscientist who studies behavioral treatments and medications for people with post-traumatic stress disorder – treatments made possible by research with dogs and rodents. The other is a veterinarian who cares for laboratory animals in research studies and trains researchers on how to interact with their subjects.

We both place high importance on ensuring that animal research is conducted ethically and humanely. But what counts as “ethical” animal research in the first place?

The 4 R’s of animal research

There is no single standard definition of ethical animal research. However, it broadly means the humane care of research animals – from their acquisition and housing to the study experience itself.

Federal research agencies follow guiding principles in evaluating the use and care of animals in research. One is that the research must increase knowledge and, either directly or indirectly, have the potential to benefit the health and welfare of humans and other animals. Another is that only the minimum number of animals required to obtain valid results should be included. Researchers must use procedures that minimize pain and distress and maximize the animals’ welfare. They are also asked to consider whether they could use nonanimal alternatives instead, such as mathematical models or computer simulations.

These principles are summarized by the “3 R’s” of animal research: reduction, refinement and replacement. The 3 R’s encourage scientists to develop new techniques that allow them to replace animals with appropriate alternatives.

Image: L'Oreal Brazil CEO Marcelo Zimet looks at microscope samples at the Episkin laboratory, which has developed alternative methods to animal testing. (Mauro Pimentel/AFP via Getty Images)

Since these guidelines were first disseminated in the early 1960s, new tools have helped to significantly decrease animal research. In fact, since 1985, the number of animals in research has been reduced by half.

A fourth “R” was formalized in the late 1990s: rehabilitation, referring to care for animals after their role in research is complete.

These guidelines are designed to ensure that researchers and regulators consider the costs and benefits of using animals in research, focused on the good it could provide for many more animals and humans. These guidelines also ensure protection of a group – animals – that cannot consent to its own participation in research. There are a number of human groups that cannot consent to research, either, such as infants and young children, but for whom regulated research is still permitted, so that they can gain the potential benefits from discoveries.

Enforcing ethics

Specific guidelines for ethical animal research are typically established by national governments. Independent organizations also provide research standards.

In the U.S., the Animal Welfare Act protects all warm-blooded animals except rats, mice and birds bred for research. Rats, mice and birds are protected – along with fish, reptiles and all other vertebrates – by the Public Health Service Policy.

Each institution that conducts animal research has an entity called the Institutional Animal Care and Use Committee, or IACUC. The IACUC is composed of veterinarians, scientists, nonscientists and members of the public. Before researchers are allowed to start their studies, the IACUC reviews their research protocols to ensure they follow national standards. The IACUC also oversees studies after approval to continually enforce ethical research practices and animal care. It, along with the U.S. Department of Agriculture, accreditation agencies and funding entities, may conduct unannounced inspections.

Laboratories that violate standards may be fined, forced to stop their studies, excluded from research funding, ordered to cease and desist, and have their licenses suspended or revoked. Allegations of misconduct are also investigated by the National Institutes of Health’s Office of Laboratory Animal Welfare.

Above and beyond the basic national standards for humane treatment, research institutions across 47 countries, including the U.S., may seek voluntary accreditation by a nonprofit called the Association for Assessment and Accreditation of Laboratory Animal Care, or AAALAC International. AAALAC accreditation recognizes the maintenance of high standards of animal care and use. It can also help recruit scientists to accredited institutes, promote scientific validity and demonstrate accountability.

Principles in practice

So what impact do these guidelines actually have on research and animals?

First, they have made sure that scientists create protocols that describe the purpose of their research and why animals are necessary to answer a meaningful question that could benefit health or medical care. While computer models and cell cultures can play an important role in some research, other studies, like those on Alzheimer’s disease, need animal models to better capture the complexities of living organisms. The protocol must outline how animals will be housed and cared for, and who will care for and work with the animals, to ensure that those people are trained to treat animals humanely.

During continual study oversight, inspectors look for whether animals are provided with housing specifically designed for their species’ behavioral and social needs. For example, mice are given nesting materials to create a comfortable environment for living and raising pups. When animals don’t have environmental stimulation, it can alter their brain function – harming not only the animal, but also the science.

Monitoring agencies also consider animals’ distress. If something is known to be painful in humans, it is assumed to be painful in animals as well. Sedation, painkillers or anesthesia must be provided when animals experience more than momentary or slight pain.

For some research that requires assessing organs and tissues, such as the study of heart disease, animals must be euthanized. Veterinary professionals perform or oversee the euthanasia process. Methods must be in compliance with guidelines from the American Veterinary Medical Association, which requires rapid and painless techniques in distress-free conditions.

Fortunately, following their time in research, some animals can be adopted into loving homes, and others may be retired to havens and sanctuaries equipped with veterinary care, nutrition and enrichment.

Continuing the conversation

Animal research benefits both humans and animals. Numerous medical advances exist because they were initially studied in animals – from treatments for cancer and neurodegenerative disease to new techniques for surgery, organ transplants and noninvasive imaging and diagnostics.

These advances also benefit zoo animals, wildlife and endangered species. Animal research has allowed for the eradication of certain diseases in cattle, for example, leading not only to reduced farm cattle deaths and human famine, but also to improved health for wild cattle. Health care advances for pets – including cancer treatments, effective vaccines, nutritional prescription diets and flea and tick treatments – are also available thanks to animal research.

People who work with animals in research have attempted to increase public awareness of research standards and the positive effects animal research has had on daily life. However, some have faced harassment and violence from anti-animal research activists. Some of our own colleagues have received death threats.

Those who work in animal research share a deep appreciation for the creatures who make this work possible. For future strides in biomedical care to be possible, we believe that research using animals must be protected, and that animal health and safety must always remain the top priority.

Editor’s note: One photo depicting a species that is highly restricted for use in biomedical research has been removed from the article.

Lana Ruvolo Grasser, Postdoctoral Research Fellow in Neuroscience, National Institutes of Health and Rachelle Stammen, Clinical Veterinarian, Emory National Primate Research Center, Emory University

Keeping global warming under 1.5 degrees Celsius is now practically impossible. But all is not yet lost

The world could still, theoretically, meet its goal of keeping global warming under 1.5 degrees Celsius, a level many scientists consider a dangerous threshold. Realistically, that’s unlikely to happen.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Part of the problem was evident at COP27, the United Nations climate conference in Egypt.

While nations’ climate negotiators were successfully fighting to “keep 1.5 alive” as the global goal in the official agreement, reached Nov. 20, 2022, some of their countries were negotiating new fossil fuel deals, driven in part by the global energy crisis. Any expansion of fossil fuels – the primary driver of climate change – makes keeping warming under 1.5 C (2.7 Fahrenheit) compared to pre-industrial times much harder.

Attempts at the climate talks to get all countries to agree to phase out coal, oil, natural gas and all fossil fuel subsidies failed. And countries have done little to strengthen their commitments to cut greenhouse gas emissions in the past year.

There have been positive moves, including advances in technology, falling prices for renewable energy and countries committing to cut their methane emissions.

But all signs now point toward a scenario in which the world will overshoot the 1.5 C limit, likely by a large amount. The World Meteorological Organization estimates global temperatures have a 50-50 chance of reaching 1.5 C of warming, at least temporarily, in the next five years.

That doesn’t mean humanity can just give up.

Why 1.5 degrees?

During the last quarter of the 20th century, climate change due to human activities became an issue of survival for the future of life on the planet. Since at least the 1980s, scientific evidence for global warming has been increasingly firm, and scientists have established limits of global warming that cannot be exceeded to avoid moving from a global climate crisis to a planetary-scale climate catastrophe.

There is consensus among climate scientists, myself included, that 1.5 C of global warming is a threshold beyond which humankind would dangerously interfere with the climate system.

We know from the reconstruction of historical climate records that, over the past 12,000 years, life was able to thrive on Earth at a global annual average temperature of around 14 C (57 F). As one would expect from the behavior of a complex system, the temperatures varied, but they never warmed by more than about 1.5 C during this relatively stable climate regime.

Today, with the world 1.2 C warmer than pre-industrial times, people are already experiencing the effects of climate change in more locations, more forms and at higher frequencies and amplitudes.

Climate model projections clearly show that warming beyond 1.5 C will dramatically increase the risk of extreme weather events, more frequent wildfires with higher intensity, sea level rise, and changes in flood and drought patterns with implications for food systems collapse, among other adverse impacts. And there can be abrupt transitions, the impacts of which will result in major challenges on local to global scales.

Tipping points: Warmer ocean water is contributing to the collapse of the Thwaites Glacier, a major contributor to sea level rise with global consequences.

Steep reductions and negative emissions

Meeting the 1.5 C goal at this point will require steep reductions in carbon dioxide emissions, but that alone isn’t enough. It will also require “negative emissions” to reduce the concentration of carbon dioxide that human activities have already put into the atmosphere.

Carbon dioxide lingers in the atmosphere for decades to centuries, so just stopping emissions doesn’t stop its warming effect. Technology exists that can pull carbon dioxide out of the air and lock it away. It’s still only operating at a very small scale, but corporate agreements like Microsoft’s 10-year commitment to pay for carbon removed could help scale it up.

A report in 2018 by the Intergovernmental Panel on Climate Change determined that meeting the 1.5 C goal would require cutting carbon dioxide emissions by 50% globally by 2030 – plus significant negative emissions from both technology and natural sources by 2050, up to about half of present-day emissions.

Image: A direct air capture project in Iceland stores captured carbon dioxide underground in basalt formations, where chemical reactions mineralize it. (Climeworks)

Can we still hold warming to 1.5 C?

Since the Paris climate agreement was signed in 2015, countries have made some progress in their pledges to reduce emissions, but at a pace that is way too slow to keep warming below 1.5 C. Carbon dioxide emissions are still rising, as are carbon dioxide concentrations in the atmosphere.

A recent report by the United Nations Environment Program highlights the shortfalls. The world is on track to produce 58 gigatons of carbon dioxide-equivalent greenhouse gas emissions in 2030 – more than twice where it should be for the path to 1.5 C. The result would be an average global temperature increase of 2.7 C (4.9 F) in this century, nearly double the 1.5 C target.

Given the gap between countries’ actual commitments and the emissions cuts required to keep temperatures to 1.5 C, it appears practically impossible to stay within the 1.5 C goal.

Global emissions aren’t close to plateauing, and with the amount of carbon dioxide already in the atmosphere, it is very likely that the world will reach the 1.5 C warming level within the next five to 10 years.

Image: With current policies and pledges, the world will far exceed the 1.5 C goal. (Climate Action Tracker)

How large the overshoot will be and for how long it will exist critically hinges on accelerating emissions cuts and scaling up negative emissions solutions, including carbon capture technology.

At this point, nothing short of an extraordinary and unprecedented effort to cut emissions will save the 1.5 C goal. We know what can be done – the question is whether people are ready for a radical and immediate change of the actions that lead to climate change, primarily a transformation away from a fossil fuel-based energy system.

Peter Schlosser, Vice President and Vice Provost of the Julie Ann Wrigley Global Futures Laboratory, Arizona State University

Why Americans of all races, classes and genders looked to the ancient Mediterranean for inspiration

The ancient world of the Mediterranean has long permeated American society, in everything from museum collections to home furnishings. The design of the nation’s public monuments, buildings and universities, as well as its legal system and form of government, show the enduring influence of Mediterranean antiquity on American culture.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Until the late 19th century, Americans encountered the ancient world almost exclusively through reproductions – in books, artwork and even popular plays. Very few could afford to travel abroad to encounter Mediterranean artifacts firsthand.

Yet despite barriers to access, many Americans forged personal connections with the cultures of the ancient Mediterranean – not only the Greeks and Romans, but also the Egyptians and Israelites. Perhaps the newness of American culture inspired this deep interest in the ancient past.

One of the most fascinating aspects of Mediterranean antiquity’s influence on America, even before it officially became a country, is how it cut across cultural lines of race, class and gender. Far from being the preserve of a privileged few, the art and literature of the ancients was often embraced by Americans of all stripes – including the enslaved Black poet Phillis Wheatley (circa 1753-1784) and Black and Native American sculptor Edmonia Lewis (1844-1907). But the circumstances of these encounters and the way individual Americans thought about antiquity varied greatly.

I’m an art historian specializing in ancient Mediterranean art and culture. I am particularly fascinated by the way Americans, from the earliest days, made creative connections between past and present, despite being separated by thousands of miles and millennia of history.

In researching and selecting works of art for the exhibit “Antiquity and America,” on view at the Bowdoin College Museum of Art, I was excited to show an exceptionally diverse range of American encounters with the ancient world, especially in portrait painting.

Marker of education

Take, for example, Samson Occom (1723-1792), a member of the Mohegan nation, Presbyterian minister and one of the first Native Americans to pen an autobiography in English.

Image: Nathaniel Smibert’s portrait of Samson Occom includes symbols of both the sitter’s Indigenous identity and his connections to Mediterranean antiquity. (Courtesy of Bowdoin College Museum of Art)

His unfinished portrait, painted by Nathaniel Smibert (1735-1756) in the mid-18th century, alluded to Occom’s Indigenous identity in the coloring of his skin and the styling of his hair. Simultaneously, it also referenced his training in classical literature and oratory, acquired by studying with Eleazar Wheelock (1711-1779), a Connecticut Congregational minister.

Occom’s pose and draped cloak recall those found on ancient statues of Roman senators – a portrait convention familiar in early America from prints circulating at the time – and one that would later become quite popular in American society.

While his learning in Greek and Latin was undoubtedly a source of great pride for Occom – and a way for him to level the playing field with the European colonists – it was used by others to demonstrate the “civilizing” effect of European culture and education in the British Colonies.

In 1766, Eleazar Wheelock sent his former pupil Occom to Great Britain to raise money for a Native American school – funds that were ultimately repurposed for the founding of Dartmouth College. Occom would later charge Wheelock with using him as a “gazing stock” in Europe while planning all the while to use the funds for the benefit of white settlers.

Shaping public opinion

A portrait of Sengbe Pieh, also known as Cinqué, who led the 1839 Amistad slave ship revolt, is an example of Black Americans’ use of the classical world for political purposes.

Image: Portraying Sengbe Pieh, who led the revolt on the slave ship Amistad, in the pose and garb of an ancient Roman senator was an intentional way to influence public opinion. (Courtesy of Bowdoin College Museum of Art)

Commissioned by Robert Purvis (1810-1898), a Black Philadelphian and prominent abolitionist, this striking portrait by John Sartain (1808-1897) was intended to shape the popular image of Pieh and his fellow Africans during their Supreme Court trial for mutiny and murder in 1840-1841.

Pieh’s African identity is made evident not only in the tone of his skin, but in the bamboo staff he holds and the landscape in the background depicting his homeland. The white cloak draped over his shoulder would have called to mind the white robes worn by Roman senators and, by extension, the Roman virtues of honor and dignity.

Pieh and his fellow Africans were ultimately acquitted and returned to the Sierra Leone Colony in 1842.

Feminist icon

Image: A portrait of an American woman as the Greek poet Sappho, painted by Jean-Léon Gérôme in 1899, connected the sitter to themes in the ancient poet’s work. (Courtesy of Bowdoin College Museum of Art)

Caroline Sanders Truax (1870–1940), one of the first women admitted to the New York state bar, was so enamored of the ancient past that she was portrayed as the Greek lyric poet Sappho by the painter Jean-Léon Gérôme (1824–1904).

This was a bold choice for a representation of an American woman in 1899. Sappho, whose writing is among the few surviving sources of female authorship from antiquity, was already an icon of the first-wave feminist movement, and the homoerotic themes of her poetry were well understood. Was the choice the artist’s – or the sitter’s? The most likely answer is that it was by mutual agreement, perhaps inspired by Truax’s knowledge of classical language and literature – and her own interest in composing lyric poetry.

The portrait was a sensation in New York society when it arrived from the artist’s studio in Paris. It was featured in several portrait exhibitions and newspaper articles – and was hung with pride by Truax and her husband in their home.

Image: American poet Henry Wadsworth Longfellow (1807–1882) walks with his daughter under the Arch of Titus in Rome, with the Colosseum in the background; painted by George Healy. (Courtesy of Bowdoin College Museum of Art)

For generations of Americans, the history and literature of Mediterranean antiquity was fertile ground for contemporary comparisons. It was universal enough to be brought into debates about the Constitution and founding principles of democracy, slavery and abolition, and women’s rights and suffrage. It was also of great individual significance for Americans of many different backgrounds – a past they were on intimate terms with, despite the millennia and miles separating the United States from the ancient Mediterranean.

Sean P. Burrus, Post-Doctoral Curatorial Fellow, Bowdoin College

'Similarity' and 'contagion': Anthropologist traces the psychological roots of magical thinking

Growing up in Greece, I spent my summers at my grandparents’ home in a small coastal village in the region of Chalkidiki. It was warm and sunny, and I passed most of my time playing in the streets with my cousins. But occasionally, the summer storms brought torrential rain. You could see them coming from far away, with black clouds looming over the horizon, lit up by lightning.

As I rushed home, I was intrigued to see my grandparents prepare for the thunderstorm. Grandma would cover a large mirror on the living room wall with a dark cloth and throw a blanket over the TV. Meanwhile, Grandpa would climb a ladder to remove the light bulb over the patio door. Then they switched off all the lights in the house and waited the storm out.

I never understood why they did all this. When I asked, they said that light attracts lightning. At least that was what people said, so better to be on the safe side.

Where do these kinds of beliefs come from?

My fascination with seemingly bizarre cultural beliefs and practices eventually led me to become an anthropologist. I have come across similar superstitions around the world, and although one may marvel at their variety, they share some common features.

The principles of magical thinking

At the core of most superstitions are certain intuitive notions about how the world works. Early anthropologists described these intuitions in terms of principles such as “similarity” and “contagion.”

According to the principle of similarity, things that look alike may share some deeper connection, just as the members of a family tend to resemble each other both in appearance and in other traits. Of course, this is not always the case. But this inference feels natural, so we often over-apply it.

Case in point: The light reflected on the surface of a mirror is not related to the light resulting from the electrical discharges produced during a thunderstorm. But because they both seem to give off light, a connection between the two was plausible enough to become folk wisdom in many parts of the world. Likewise, because our reflection in a mirror closely resembles our own image, many cultures hold that breaking a mirror brings bad luck, as if damage to that reflection would also mean damage to ourselves.

The principle of contagion is based on the idea that things have internal properties that can be transmitted through contact. The heat of a fire is transferred to anything it touches, and some illnesses can spread from one organism to another. Whether consciously or unconsciously, people in all cultures often expect that other kinds of essences can also be transferred through contact.

For example, people often believe that certain essences can “rub off” on someone, which is why casino players sometimes touch someone who is on a winning streak. It is also why, in 2014, a statue of Juliet, the Shakespearean character who fell madly in love with Romeo, had to be replaced due to excessive wear caused by visitors touching it to find love.

A search for patterns

These kinds of superstitions betray something more general about the way people think. To make sense of our world, we look for patterns in nature. When two things occur at around the same time, they may be related. For instance, black clouds are associated with rain.

But the world is far too complex. Most of the time, correlation does not mean causation, although it may feel like it does.

If you wear a new shirt to the stadium and your team wins, you might wear it again. If another victory comes, you begin to see a pattern. This now becomes your lucky shirt. In reality, myriad other things have changed since the last game, but you do not have access to all those things. What you know for sure is that you wore the lucky shirt, and the result was favorable.

Superstitions are comforting

People really want their lucky charms to work. So when the charms fail, people are less motivated to remember it, or they may attribute their luck to some other factor. If their team loses, they might blame the referee. But when their team wins, they are more likely to notice the lucky shirt, and more likely to declare to others that it worked, which helps spread the idea.

As a social species, we get much of what we know about the world from common wisdom. It would therefore seem safe to assume that if other people believe in the utility of a particular action, there might be something to it. If people around you say you should not eat those mushrooms, it’s probably a good idea to avoid them.

This “better safe than sorry” strategy is one of the main reasons superstitions are so widespread. Another reason is that they simply feel good.

Research shows that rituals and superstitions spike during times of uncertainty, and performing them can help reduce anxiety and boost performance. When people feel powerless, turning to familiar actions provides a sense of control, which, even if illusory, can still be comforting.

Thanks to these psychological effects, superstitions have been around for ages, and will likely be around for ages to come.

Dimitris Xygalatas, Associate Professor of Anthropology and Psychological Sciences, University of Connecticut

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Avian flu is back: Millions of poultry birds culled ahead of Thanksgiving

An outbreak of highly pathogenic avian influenza has spread through chicken and turkey flocks in 46 states since it was first detected in Indiana on Feb. 8, 2022. The outbreak is also taking a heavy toll in Canada and Europe.

Better known as bird flu, avian influenza is a family of highly contagious viruses that are not harmful to the wild birds that transmit them but are deadly to domesticated birds. The virus spreads quickly through poultry flocks and almost always causes severe disease or death, so when it is detected, officials quarantine the site and cull all the birds in the infected flock.

As of early November, this outbreak had led to the culling of over 50 million birds from Maine to Oregon, driving up prices for eggs and poultry – including holiday turkeys. This matches the toll from a 2014-2015 bird flu outbreak that previously was considered the most significant animal disease event in U.S. history. Yuko Sato, an associate professor of veterinary medicine who works with poultry producers, explains why so many birds are getting sick and whether the outbreak threatens human health.

Why is avian influenza so deadly for domesticated birds but not for wild birds that carry it?

Avian influenza (AI) is a contagious virus that affects all birds. There are two groups of avian influenza viruses that cause disease in chickens: highly pathogenic AI (HPAI) and low pathogenic AI (LPAI).

HPAI viruses cause high mortality in poultry, and occasionally in some wild birds. LPAI can cause mild to moderate disease in poultry, and usually little to no clinical signs of illness in wild birds.

The primary natural hosts and reservoir of AI viruses are wild waterfowl, such as ducks and geese. This means that the virus is well adapted to them, and these birds do not typically get sick when they are infected with it.

But when domesticated poultry, such as chickens and turkeys, come in direct or indirect contact with feces of infected wild birds, they become infected and start to show symptoms such as lethargy, coughing, sneezing and sudden death.

Image: Map of the U.S. and Canada showing avian influenza detections among commercial, backyard and wild bird flocks. Migrating wild birds, most of which are not harmed by avian influenza, are known to spread the disease to commercial and backyard flocks. (USGS)

There are multiple strains of avian influenza. What type is this outbreak, and is it dangerous to humans?

The virus of concern in this outbreak is a Eurasian H5N1 HPAI virus that causes high mortality and severe clinical signs in domesticated poultry. Scientists who monitor wild bird flocks have also detected a reassortant virus that contains genes from both the Eurasian H5 and low pathogenic North American viruses. Reassortment happens when multiple strains of the virus circulating in the bird population exchange genes to create a new strain of the virus, much as new strains of COVID-19 like omicron and delta have emerged during the ongoing pandemic.

According to the U.S. Centers for Disease Control and Prevention, the risk to public health from this outbreak is low. No human illnesses were associated with the 2014-2015 H5N1 outbreak in the U.S.

The only known human case in the U.S. during the current outbreak was found in a man in Colorado who had contact with infected birds. The man tested positive once, then negative on follow-up tests, and reported only mild symptoms, so health experts theorized that the virus may have been present in his nose without actually causing an infection.

Health officials recommend avoiding direct contact with wild birds to avoid spreading avian flu.

Are these outbreaks connected to wild bird migration?

Yes, wild bird migration has been an important factor in this outbreak. Scientists have detected the same H5N1 virus that is infecting poultry in more than 3,000 wild birds during this outbreak, compared with 75 detections during the 2014-2015 outbreak. This tells us that the virus is highly prevalent in wild bird populations. While most detections occur in ducks and geese, the virus has also been found in other bird species, including raptors, such as eagles and vultures, and other waterfowl, such as swans and pelicans.

The U.S. Department of Agriculture’s Animal and Plant Health Inspection Service conducts targeted sampling to test wild birds in fall and early winter, which correlates with migration season. This helps scientists and wildlife managers understand where avian flu viruses may be introduced to domestic flocks, track their spread and monitor for any reassortment.

Because there are high amounts of virus circulating, wildlife agencies advise against handling or eating game birds that appear sick. Waterfowl can also be infected, with no signs of illness, so hunters need to be especially careful not to handle or eat game birds without properly cleaning their clothing and equipment afterward and ensuring the birds are cooked to an internal temperature of 165 degrees F (74 C) before consuming them.

Hunters and other members of the public are advised not to approach any wild animals that are acting strange and to report any such sightings to officials. In some cases, avian flu viruses have spilled over to other wild animals, such as red foxes, raccoons, skunks, opossums and bobcats. We did not see this trend in 2014-15.

HPAI is a transboundary disease, which means it is highly contagious and spreads rapidly across national borders. Some research indicates that detection of HPAI viruses in wild birds has become more common.

Detection of HPAI in wild birds is seasonal, with a peak in February and a low point in September. Many migratory bird species travel thousands of miles between continents, posing a continuing risk of AI virus transmission.

On the positive side, we have better diagnostic tests for much more rapid and improved detection of avian influenza compared to 20 to 30 years ago, and can use molecular diagnostics such as polymerase chain reaction (PCR) tests – the same method labs use to detect COVID-19 infections.

How are poultry farmers affected when HPAI is detected in their flocks?

To detect AI, the U.S. Department of Agriculture oversees routine testing of flocks by farmers and carries out federal inspection programs to ensure that eggs and birds are safe and free of virus. When H5N1 is diagnosed on a farm or in a backyard flock, state and federal officials will quarantine the site and cull and dispose of all the birds in the infected flock. Then the site is cleaned and decontaminated, a process that includes removing organic materials like manure and chicken feed that can harbor virus particles.

After several weeks without new virus detections, the area is required to test negative in order to be deemed free of infection. We call this process the four D’s of outbreak control: diagnosis, depopulation, disposal and decontamination.

Image: Live birds are banned at agricultural fairs during bird flu outbreaks to avoid spreading infections; these fake chickens were on display at the Cabarrus County, N.C., fair in 2015, a previous H5N1 outbreak year. (Elizabeth W. Kearley via Getty Images)

Flock owners are eligible for federal indemnity payments for birds and eggs that have to be destroyed because of avian influenza, as well as for the costs of removing birds and cleaning and disinfecting their farms. This support is designed to help producers move past an outbreak, get their farms back in condition for restocking and get back into business as soon as possible.

But these payments almost never cover all of farmers’ expenses. Poultry farms can’t always recover financially from major bird flu outbreaks. That makes it especially important to focus on prevention strategies to keep the virus out.

This is an updated version of an article originally published on April 7, 2022.

Yuko Sato, Associate Professor of Veterinary Medicine, Iowa State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why some people think fascism is the greatest expression of democracy ever invented: political philosopher

Warnings that leaders like Donald Trump hold a dagger at the throat of democracy have evoked a sense of befuddlement among moderates. How can so many Republicans – voters, once reasonable-sounding officeholders and the new breed of activists who claim to be superpatriots committed to democracy – be acting like willing enablers of democracy’s destruction?

As a political philosopher, I spend a lot of time studying those who believe in authoritarian, totalitarian and other repressive forms of government, on both the right and the left. Some of these figures don’t technically identify themselves as fascists, but they share important similarities in their ways of thinking.

One of the most articulate thinkers in this group was the early-20th-century philosopher Giovanni Gentile, whom Italian dictator Benito Mussolini called “the philosopher of fascism.” And many fascists, like Gentile, claim they are not opposed to democracy. On the contrary, they think of themselves as advocating a more pure version of it.

Unity of leader, nation-state and people

The idea that forms the bedrock of fascism is that there is a unity between the leader, the nation-state and the people.

For instance, Mussolini famously claimed that “everything is in the state, and nothing human or spiritual exists, much less has value, outside the state.” But this is not an end to be achieved. It is the point from which things begin.

This is how Trump, according to those around him, can believe “I am the state” and hold that what is good for him is by definition also good for the country. For while this view may seem inconsistent with democracy, that is true only if society is viewed as a collection of individuals with conflicting attitudes, preferences and desires.

But fascists have a different view. For example, Othmar Spann, whose thought was highly influential during the rise of fascism in Austria in the 1920s and 1930s, argued that society is not “the summation of independent individuals,” for this would make society a community only in a “mechanical” and therefore trivial sense.

On the contrary, for Spann and others, society is a group whose members share the same attitudes, beliefs, desires, view of history, religion, language and so on. It is not a collective; it is more like what Spann describes as a “super-individual.” And ordinary individuals are more like cells in a single large biological organism, not competing independent organisms important in themselves.

This sort of society could indeed be democratic. Democracy is intended to give effect to the will of the people, but it doesn’t require that society be diverse and pluralistic. It does not tell us who “the people” are.

Who are the people?

According to fascists, only those who share the correct attributes can be part of “the people” and therefore true members of society. Others are outsiders, perhaps tolerated as guests if they respect their place and society feels generous. But outsiders have no right to be part of the democratic order: Their votes should not count.

This helps explain why Tucker Carlson claims “our democracy is no longer functioning,” because so many nonwhites have the vote. It also helps explain why Carlson and others so vigorously promote the “great replacement theory,” the idea that liberals are encouraging immigrants to come to the U.S. with the specific purpose of diluting the political power of “true” Americans.

The importance of seeing the people as an exclusive, privileged group, one that actually includes rather than is represented by the leader, is also at work when Trump denigrates Republicans who defy him, even in the smallest ways, as “Republicans in Name Only.” The same is also true when other Republicans call for these “in-house” critics to be cast out of the party, for to them any disloyalty is equivalent to defying the will of the people.

How representative democracy is undemocratic

Ironically, it is all the checks and balances and the endless intermediate levels of representative government that fascists view as undemocratic. For all these do is interfere with the ability of the leader to give direct effect to the will of the people as they see it.

Here is Libyan dictator and Arab nationalist Moammar Gadhafi on this issue in 1975:

“Parliament is a misrepresentation of the people, and parliamentary systems are a false solution to the problem of democracy. … A parliament is … in itself … undemocratic as democracy means the authority of the people and not an authority acting on their behalf.”

In other words, to be democratic, a state does not need a legislature. All it needs is a leader.

How is the leader identified?

For the fascist, the leader is certainly not identified through elections. Elections are simply spectacles meant to announce the leader’s embodiment of the will of the people to the world.

But the leader is supposed to be an extraordinary figure, larger than life. Such a person cannot be selected through something as pedestrian as an election. Instead, the leader’s identity must be gradually and naturally “revealed,” like the unveiling of a religious miracle, says Nazi theorist Carl Schmitt.

For Schmitt and others like him, then, these are the true hallmarks of a leader, one who embodies the will of the people: intense feeling expressed by supporters, large rallies, loyal followers, the consistent ability to demonstrate freedom from the norms that govern ordinary people, and decisiveness.

So when Trump claims “I am your voice” to howls of adoration, as happened at the 2016 Republican National Convention, this is supposed to be a sign that he is exceptional, part of the unity of nation-state and leader, and that he alone meets the above criteria for leadership. The same was true when Trump announced in 2020 that the nation is broken, saying “I alone can fix it.” To some, this even suggests he is sent by God.

If people accept the above criteria for what identifies a true leader, they can also understand why Trump claims he attracted bigger crowds than President Joe Biden when explaining why he could not have lost the 2020 presidential election. For, as Spann wrote a century earlier, “one should not count votes, but weigh them such that the best, not the majority prevails.”

Besides, why should the mild preference of 51% prevail over the intense preference of the rest? Is not the latter more representative of the will of the people? These questions certainly sound like something Trump might ask, even though they are actually taken from Gadhafi again.

The duty of the individual

In a true fascist democracy, then, everyone is of one mind about everything of importance. Accordingly, everyone intuitively knows what the leader wants them to do.

It is therefore each person’s responsibility, citizen or official, to “work towards the leader” without needing specific orders. Those who make mistakes will soon learn of it. But those who get it right will be rewarded many times over.

So argued Nazi politician Werner Willikens. And so, it appears, thought Trump when he demanded absolute loyalty and obedience from his administration officials.

But most importantly, according to their own words, so thought many of the insurrectionists on Jan. 6, 2021, when they tried to prevent the confirmation of Biden’s election. And so Trump signaled when he subsequently promised to pardon the rioters.

With that, the harmonization of democracy and fascism is complete.

Mark R Reiff, Research Affiliate in Legal and Political Philosophy, University of California, Davis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Fetterman’s struggles with language highlight the challenges after a stroke: A vascular neurologist explains

John Fetterman, the Democratic nominee for a hotly contested U.S. Senate seat in Pennsylvania, has been drawing scrutiny for his performance in his first post-stroke broadcast interview and, most recently, his Oct. 25, 2022, Senate debate against Republican Mehmet Oz.

Fetterman suffered a stroke on the way to a campaign event in May 2022. His apparent post-stroke neurological effects – including auditory processing and speech issues – have caused some to question his fitness for the role and have become a central factor in the Senate race. The Conversation asked Andrew Southerland, a vascular neurologist specializing in stroke and cerebrovascular disease who sees many patients like Fetterman, to explain what Fetterman’s case can teach us about stroke recovery.

What does the public know about Fetterman’s stroke?

Fetterman has chosen not to release his full medical record, so it’s not possible to draw conclusions about the exact location or extent of brain injury resulting from his stroke. He and his team have confirmed that his initial symptoms began with feeling fatigued and slurring his speech, which his wife immediately identified as a possible stroke.

Because of her early recognition of his symptoms and rapid transport to a nearby facility, Lancaster General Hospital in Pennsylvania, he had the opportunity to receive a clot-busting drug called a thrombolytic and underwent a catheter-based procedure to remove the blood clot from an artery in the brain.

Based on this information, experts know that Fetterman suffered an ischemic stroke caused by a blockage of blood flow and oxygen to a certain part of the brain. Ischemic stroke accounts for roughly 85% of the 800,000 new cases of stroke occurring each year in the United States. The remainder are hemorrhagic strokes caused by bleeding in or around the brain.

Ischemic stroke often results in a collage of symptoms including facial droop, speech changes and limb weakness, numbness or lack of coordination on one side of the body. These symptoms help bystanders recognize the signs of stroke. When treating ischemic stroke, we in the stroke community use the motto, “Time Is Brain,” because the sooner we can restore blood flow to the brain after a stroke begins the better chance the patient has of making a good recovery.

Strokes can occur in people of all ages, and it’s important to recognize the warning signs.

Fetterman has said publicly that his stroke occurred due to an abnormal rhythm of the heart called atrial fibrillation. This is a common cause of ischemic stroke, which happens when blood clots form in the heart and travel – or embolize – to the brain. This is the origin of the term “thromboembolism,” which basically means blood clot traveling from one location to another. In the case of atrial fibrillation causing stroke, it refers to a blood clot traveling through arteries from the heart into the brain.

Fortunately, these types of stroke are highly preventable simply by taking a daily anticoagulant to prevent the clots from forming. Atrial fibrillation may cause symptoms such as a fast heartbeat or shortness of breath. But often, it is silent, coming and going in short episodes. This makes it more challenging to diagnose and treat. Current guidelines recommend starting an anticoagulant for stroke prevention in high-risk patients with atrial fibrillation.

Why can stroke lead to auditory processing issues?

As with any other organ or tissue in the body, normal function in the brain depends on steady blood flow and oxygen. Interruptions in this blood flow – as is the case in ischemic stroke – can lead to permanent injury called infarction. The location and extent of infarction after a stroke determine what deficits a patient suffers.

In the case of an auditory processing issue, the injury occurs in a part of the brain called the temporal lobe affecting the connection between areas where auditory and language processing occur. In other words, a stroke can disrupt how we hear and process words.

Recovery from stroke depends on a number of variables, including a patient’s age and other medical problems, but largely on the extent of the injury and where it occurs in the brain.

How do auditory processing issues relate to cognition?

Auditory processing disorders fall under a larger family of stroke deficits termed aphasia, which have to do with one’s ability to produce or comprehend various forms of language. Aphasia is often categorized as expressive, related to difficulty producing language, or receptive, meaning a difficulty understanding language.

The types of things that aphasia can affect include word finding, grammar, naming, reading and writing. Patients with aphasia can also struggle with paraphasic errors – in other words, saying an incorrect word that sounds like the intended word they are trying to say.

Fetterman identified this specific challenge during his NBC News interview, pointing to the example of his saying “emphetic” in place of the word “empathetic.” These issues often get worse during high-pressure situations like debates. What’s unique in Fetterman’s situation is that reading words seems to be easier than hearing them, hence the use of closed captioning during his NBC News interview and his debate.

Aphasia is a common symptom of stroke but can also occur in other neurological conditions including various types of dementia.

Most importantly, aphasias and auditory processing disorders do not necessarily imply other cognitive impairments. In other words, they typically do not alter one’s intelligence, behaviors or executive abilities – neurological functions that are orchestrated by the frontal lobes of the brain.

Quick response times are critical in the moments before and after a stroke.

What is the typical path of recovery following stroke?

Fetterman now joins the ranks of more than 7 million Americans and many more around the world who have suffered a stroke, a significant portion of whom remain disabled as a result. Yet advances in life-saving treatments – like the ones Fetterman received – give stroke patients who once would have been destined for permanent disability hope of walking out of the hospital and returning to independent, high-functioning lives.

Typically, recovery from stroke occurs along a continuum, from the early hospitalization to a prolonged period of rehabilitation over weeks to months. Depending on the severity of the stroke and resulting deficits, this may require a period of time in an inpatient rehabilitation facility and possibly working with physical, occupational and speech therapists in an outpatient setting. In either case, stroke rehabilitation and recovery is a team sport, requiring collaboration from a multi-disciplinary group of providers along with the support of patient caregivers.

In the field of stroke recovery, patients gain the most ground in the first few months following a stroke event. However, recovery experts know that patients can continue to see gradual improvements well into the first year and beyond.

One thing that’s certain is that stroke survivors like Fetterman are a testament to the advances in clinical research and practice that paved the way for the life-saving treatments like the ones he received. And there’s nothing debatable about that.

Andrew M. Southerland, Professor of Neurology and Public Health Sciences, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why the GOP’s battle for the soul of 'character conservatives' may center on Utah: US politics expert

U.S. Sen. Mike Lee is seeking reelection in Utah – a typically uneventful undertaking for an incumbent Republican in a state that hasn’t had a Democratic senator since 1977. But he faces a unique challenger: Evan McMullin.


The former CIA operative, investment banker and Republican policy adviser left the GOP in 2016 because of Donald Trump. McMullin then ran for president as an independent, styling himself as a principled conservative, and won 21% of Utahans’ votes.

Lee himself voted for McMullin in 2016, saying Trump was “wildly unpopular” in Utah because of “religiously intolerant” statements about Muslims. Some 62% of the state’s residents belong to the Church of Jesus Christ of Latter-day Saints, which has its own history of suffering persecution. Yet Lee embraced Trump after his election, and now McMullin is trying to upend him.

Both men are devoted members of the Church of Jesus Christ of Latter-day Saints, often known as the Mormon church or the LDS church. As a scholar of U.S. elections and author of two books on LDS politics, I see their November face-off as part of a larger fight over what it means to be a “character conservative.” This battle has been raging around the country, not only in Utah, but LDS voters have become an especially interesting example since Trump’s rise.

Road to acceptance

Over two centuries, Latter-day Saints have transformed themselves from among the most persecuted religious groups in U.S. history to a global religion of almost 17 million members, by their own count, with an estimated US$100 billion in resources.

Politics has always been woven into this history. Early Latter-day Saints were forced gradually westward from state to state because of neighbors’ distrust, mob justice and government oppression – most notably, an extermination order was issued by the state of Missouri in 1838. The church ultimately fled the U.S. after founder Joseph Smith was killed and settled around Salt Lake, which was a Mexican territory when church members first arrived.

Utah was granted statehood in 1896, and the Senate provided a building block for increased LDS immersion into American culture – though it didn’t look that way at first. In the early 1900s, the church was so widely reviled that Sen.-elect Reed Smoot was blocked from taking his seat over accusations that his role in the church made him inherently hostile to the government.

Yet Smoot was exonerated, and his three-decade tenure significantly enhanced the church’s acceptance in national politics. The soft-spoken senator became a leading voice of conservative morality and embodiment of Mormonism in wider American culture, replacing Brigham Young, the bearded patriarch with multiple wives.

LDS ascendance throughout the 20th century culminated in Mitt Romney’s 2012 presidential nomination and wider cultural attention dubbed “the Mormon moment.” Some LDS beliefs and practices – such as the teaching that Smith discovered scripture on golden plates buried in upstate New York – have long generated curiosity, if not derision, from other Americans. Many Latter-day Saints and observers felt Romney’s nomination suggested greater acceptance of the religion.

In particular, LDS conservatives have become political allies with white evangelicals when it comes to social issues such as opposing gay marriage. In popular culture, Latter-day Saints are often seen as the embodiment of 1950s conservative Americana. LDS cultural norms such as patriotism, abstinence from tobacco and alcohol and prioritizing child rearing, family life and devotion to service have forged a conception of character widely embraced by conservatives.

This all helped position Latter-day Saints as a small but influential group within the Christian right.

And then Trump decided to run for president.

An inconvenient candidate

Trump galvanized parts of the Republican Party. Yet conservatives were divided over the candidate’s character – especially his unorthodox attacks on primary rivals and former GOP presidential candidates, the “Access Hollywood” video in which he bragged about groping women, and numerous allegations of sexual assault.

Latter-day Saints are the most Republican religious group in the country, making them a particularly interesting case study of character conservatism. Trump’s overlap with the LDS community “starts and stops” with his GOP affiliation, as Brigham Young University political scientist Quin Monson told the Los Angeles Times in 2016.

Romney thoroughly criticized Trump and encouraged Republicans to vote for any other primary candidate. Grounded in his LDS faith, which prioritizes family on Earth and for eternity, Romney urged Utahans: “Think of Donald Trump’s personal qualities. The bullying, the greed, the showing off, the misogyny, the absurd third-grade theatrics. … Imagine your children and your grandchildren acting the way he does.”

Deseret News, the church-owned newspaper in Salt Lake, opposed Trump for not upholding “the ideals and values of this community.” Just 16% of Latter-day Saints thought he was a moral person.

When McMullin ran in 2016, Trump still won Utah, but with 45% – the lowest for a Republican nominee there since 1992. Nationwide, just over 50% of Latter-day Saints voted for Trump in 2016, almost 30 percentage points lower than white evangelicals. The second time around, he won over 60% of the LDS vote, but most church members who are people of color or are under 40 did not vote for him.

GOP soul-searching

Jan. 6, 2021, was a pivotal moment for the Trump presidency and character conservatives. Half of Republicans believed Trump bore at least some responsibility for what happened. Voters’ disapproval was compounded by further activities, such as Trump’s trying to overturn the 2020 election and taking highly classified documents. Still, GOP candidates face strategic pressure to pledge allegiance to Trump: The Republican National Committee, for example, has directed millions of dollars to his legal defense.


Character conservatives are reckoning with two different impulses. Trump is not a role model, but he has demonstrated willingness to fight for some religious-conservative values, such as reconfiguring the Supreme Court to enable the overturning of Roe v. Wade. Some character conservatives support Trump, believing the ends justify the means. Others reject Trump’s behavior as immoral and unacceptable for democracy – and the majority are probably somewhere in the middle.

The Utah Senate contest will provide some clarity to these countervailing trends. Lee has previously compared Trump with Captain Moroni, a hero from LDS scripture. McMullin, meanwhile, contends that Lee’s efforts to overturn the 2020 election results were “brazen treachery.”

Independent polling has Lee and McMullin in a virtual tie. Incumbency advantage is powerful, but Utah’s Democratic Party has uncharacteristically decided to support McMullin rather than field its own candidate.

The character divide between Trump-supporting candidates and McMullin raises the question of how far LDS values and the carefully crafted public identity of the church can be disentangled from the modern Republican Party. Lee remains the favorite, but the fact that this is a competitive race at all speaks to the concerns that continue to trouble the former president’s party, even in deeply red Utah.

Luke Perry, Professor of Political Science, Utica University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Criminology scholar dismantles GOP 'Hail Mary' plot to blame Democrats for an increase in crime

In the lead-up to the 2022 midterm elections, Republican candidates across the nation are blaming Democrats for an increase in crime.

But as a scholar of criminology and criminal justice, I believe it’s important to note that, despite the apparently confident assertions of politicians, it’s not so easy to make sense of fluctuations in the crime rate. And whether it’s going up or down depends on a few key questions:
  • What you mean by “crime,”
  • What the “up” or “down” comparisons are in reference to, and
  • The location or area being examined.

Here’s an explanation of those elements – and why there is no one answer to whether crime has increased in the past year, or over the past decade.

What is ‘crime,’ anyway?

An email message reads: “Three fires in residential neighborhoods in ONE WEEK! Three homeless encampment evictions in that same week! Multiple vehicles broken into in just one neighborhood! A homecoming game interrupted by youth with unmarked guns!”

Cicely Davis campaign email

Usually when politicians, public officials and scholars talk about crime statistics, they’re referring to the most serious crimes, which the FBI officially calls “index” or “Part 1” offenses: criminal homicide, rape, robbery, aggravated assault, burglary, larceny, motor vehicle theft and arson.

Because these crimes vary a great deal in terms of seriousness, experts break this list up into “violent” and “property” offenses, so as not to confuse a surge in thefts with an increase in killings.

Each month, state and local police departments tally up the crimes they have handled and send the data to the FBI for inclusion in the nation’s annual Uniform Crime Report.

But that system has limitations. According to the U.S. Bureau of Justice Statistics, fewer than half of all events that could count as crimes actually get reported to police in the first place. And police departments are not required to send information about known crimes to the FBI. So each year, what is presented as national crime statistics is actually derived from whichever of the roughly 17,000 police departments across the country decide to send in their data.

In 2021, the optional nature of reporting crime statistics was a particular problem, because the FBI asked for more detailed information than it had in the past. Historically, the bureau received data from police departments covering about 90% of the U.S. population. But fewer agencies supplied the more detailed data requested in 2021. That data covered only 66% of the nation’s population. And the patchwork wasn’t even: In some states, such as Texas, Ohio and South Carolina, nearly all agencies reported. But in other states, such as Florida, California and New York, participation was abysmal.

With those caveats in mind, the 2021 data indicate that criminal homicide rose about 4% nationally from 2020 levels. Robberies were down 9%, and aggravated assaults remained relatively unchanged.

Rapes are notoriously underreported to police, but the 2021 National Crime Victimization Survey suggests there was no significant change from 2020.

What’s the benchmark?

Those comparisons look at the prior year to assess whether certain types of crime are up or down. Such comparisons may seem straightforward, but violent crime, particularly homicide, is statistically rare enough that a rise or fall from one year to the next doesn’t necessarily mean there is reason to panic or celebrate.

Another way to assess trends is to look at as much data as possible. Over the past 36 years, clear trends have emerged. The national homicide rate in 2021 wasn’t as high as it was in the early 1990s, but 2021’s figure is the highest in nearly 25 years.

Meanwhile, robberies have been trending steadily downward for the better part of 30 years. And though the aggravated assault rate didn’t change much from 2020 to 2021, it is clearly higher now than at any time during the 2010s.

Crime is highly localized

These figures are imperfect in other ways, too. The data being used in today’s assertions about crime rates is more than 10 months old and presents national figures that mask a substantial amount of local variation. The FBI won’t release 2022 crime data until the fall of 2023.

But there is more current data available: The consulting firm AH Datalytics has a free dashboard that compiles more up-to-date murder data from 99 big cities.

As of October 2022, it indicates that murder in big cities is down about 5% in 2022 when compared with the first 10 months of 2021. But this aggregate change masks the fact that murder is up 85% in Colorado Springs, Colo.; 33% in Birmingham, Ala.; 28% in New Orleans; and 27% in Charlotte, N.C. Meanwhile, murder is down 38% in Columbus, Ohio; 29% in Richmond, Va.; and 18% in Chicago.

Even these city-level statistics don’t tell the whole story. It is now well established that crime is not randomly distributed across communities. Instead, it clusters in small areas that criminologists and police departments often refer to as “hot spots.” What this means is that regardless of whether crime is up or down in cities, a handful of neighborhoods in those cities are likely still significantly and disproportionately affected by violence.

Justin Nix, Associate Professor of Criminology and Criminal Justice, University of Nebraska Omaha

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Bob Dylan used the ancient practice of 'imitatio' to craft some of the most original songs of his time

Over the course of six decades, Bob Dylan steadily brought together popular music and poetic excellence. Yet the guardians of literary culture have only rarely accepted Dylan’s legitimacy.

His 2016 Nobel Prize in Literature undermined his outsider status, challenging scholars, fans and critics to think of Dylan as an integral part of international literary heritage. My new book, “No One to Meet: Imitation and Originality in the Songs of Bob Dylan,” takes this challenge seriously and places Dylan within a literary tradition that extends all the way back to the ancients.

I am a professor of early modern literature, with a special interest in the Renaissance. But I am also a longtime Dylan enthusiast and the co-editor of the open-access Dylan Review, the only scholarly journal on Bob Dylan.

After teaching and writing about early modern poetry for 30 years, I couldn’t help but recognize a similarity between the way Dylan composes his songs and the ancient practice known as “imitatio.”

Poetic honey-making

Although the Latin word imitatio would translate to “imitation” in English, it doesn’t mean simply producing a mirror image of something. The term instead describes a practice or a methodology of composing poetry.

The classical author Seneca used bees as a metaphor for writing poetry using imitatio. Just as a bee samples and digests the nectar from a whole field of flowers to produce a new kind of honey – which is part flower and part bee – a poet produces a poem by sampling and digesting the best authors of the past.

Dylan’s imitations follow this pattern: His best work is always part flower, part Dylan.

Consider a song like “A Hard Rain’s A-Gonna Fall.” To write it, Dylan repurposed the familiar Old English ballad “Lord Randal,” retaining the call-and-response framework. In the original, a worried mother asks, “O where ha’ you been, Lord Randal, my son? / And where ha’ you been, my handsome young man?” and her son tells of being poisoned by his true love.

In Dylan’s version, the nominal son responds to the same questions with a brilliant mixture of public and private experiences, conjuring violent images such as a newborn baby surrounded by wolves, black branches dripping blood, the broken tongues of a thousand talkers and pellets poisoning the water. At the end, a young girl hands the speaker – a son in name only – a rainbow, and he promises to know his song well before he’ll stand on the mountain to sing it.

“A Hard Rain’s A-Gonna Fall” resounds with the original Old English ballad, which would have been very familiar to Dylan’s original audiences of Greenwich Village folk singers. He first sang the song in 1962 at the Gaslight Cafe on MacDougal Street, a hangout of folk revival stalwarts. To their ears, Dylan’s indictment of American culture – its racism, militarism and reckless destruction of the environment – would have echoed that poisoning in the earlier poem and added force to the repurposed lyrics.

Drawing from the source

Because Dylan “samples and digests” songs from the past, he has been accused of plagiarism.

This charge underestimates Dylan’s complex creative process, which closely resembles that of early modern poets who had a different concept of originality – a concept Dylan intuitively understands.

For Renaissance authors, “originality” meant not creating something out of nothing, but going back to what had come before. They literally returned to the “origin.” Writers first searched outside themselves to find models to imitate, and then they transformed what they imitated – that is, what they found, sampled and digested – into something new. Achieving originality depended on the successful imitation and repurposing of an admired author from a much earlier era. They did not imitate each other, or contemporary authors from a different national tradition. Instead, they found their models among authors and works from earlier centuries.

In his book “The Light in Troy,” literary scholar Thomas Greene points to a 1513 letter written by poet Pietro Bembo to Giovanfrancesco Pico della Mirandola.

“Imitation,” Bembo writes, “since it is wholly concerned with a model, must be drawn from the model … the activity of imitating is nothing other than translating the likeness of some other’s style into one’s own writings.” The act of translation was largely stylistic and involved a transformation of the model.

Romantics devise a new definition of originality

However, the Romantics of the late 18th century wished to change, and supersede, that understanding of poetic originality. For them, and the writers who came after them, creative originality meant going inside oneself to find a connection to nature.

As scholar of Romantic literature M.H. Abrams explains in his renowned study “Natural Supernaturalism,” “the poet will proclaim how exquisitely an individual mind … is fitted to the external world, and the external world to the mind, and how the two in union are able to beget a new world.”

Instead of the world wrought by imitating the ancients, the new Romantic theories envisioned the union of nature and the mind as the ideal creative process. Abrams quotes the 18th-century German Romantic Novalis: “The higher philosophy is concerned with the marriage of Nature and Mind.”

The Romantics believed that through this connection of nature and mind, poets would discover something new and produce an original creation. To borrow from past “original” models, rather than producing a supposedly new work or “new world,” could seem like theft, despite the fact, obvious to anyone paging through an anthology, that poets have always responded to one another and to earlier works.

Unfortunately – as Dylan’s critics too often demonstrate – this bias favoring supposedly “natural” originality over imitation continues to color views of the creative process today.

For six decades now, Dylan has turned that Romantic idea of originality on its head. With his own idiosyncratic method of composing songs and his creative reinvention of the Renaissance practice of imitatio, he has written and performed – yes, imitation functions in performance too – over 600 songs, many of which are the most significant and most significantly original songs of his time.

To me, there is a firm historical and theoretical rationale for what audiences have long known – and the Nobel Prize committee made official in 2016 – that Bob Dylan is both a modern voice entirely unique and, at the same time, the product of ancient, time-honored ways of practicing and thinking about creativity.

Raphael Falco, Professor of English, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Clarence Thomas’ conservative activism defies 'a fundamental principle' of US democracy: political scholar

Neil Roberts, University of Toronto

With the opening of the U.S. Supreme Court’s new session on Oct. 3, 2022, Clarence Thomas is arguably the most powerful justice on the nation’s highest court.

In 1991, when Thomas became an associate justice and only the second African American to serve on the court, his rise to such power seemed improbable to almost everyone except him and his wife, Virginia “Ginni” Thomas.

He received U.S. Senate confirmation despite lawyer Anita Hill’s explosive workplace sexual harassment allegations against him.

Today, Thomas rarely speaks during oral arguments, yet he communicates substantively through his prolific written opinions that reflect a complicated mix of self-help, racial pride and the original intent of America’s Founding Fathers.

He isn’t chief justice. John Roberts Jr. is.

But with Thomas’ nearly 31 years of service, he’s the longest-serving sitting justice and on track to have the lengthiest court tenure ever.

June Jordan, pioneering poet and cultural commentator, observed in 1991 when President George H.W. Bush nominated Thomas that people “focused upon who the candidate was rather than what he has done and will do.”

As a scholar of political theory and Black politics, I contend we haven’t learned from this vital insight.

Conservative activism

Thomas’ service is under increasing scrutiny as his wife, a conservative activist, testified on Sept. 27, 2022, before the House committee investigating the Jan. 6 attack on the U.S. Capitol that she still believes false claims that the 2020 election was rigged against Donald Trump.

According to documents obtained by that committee, Ginni Thomas was instrumental in coordinating efforts to keep former President Donald Trump in office. Her efforts included sending emails to not only former White House Chief of Staff Mark Meadows but also state officials in Arizona and Wisconsin.

Of particular concern to the Jan. 6 committee is testimony from Ginni Thomas on her email correspondence with John Eastman, her husband’s former law clerk, who is considered to be the legal architect of Trump’s last-ditch bid to subvert the 2020 election.

In my view, Clarence and Ginni Thomas’ intertwined lives highlight a distressing underside to their personal union: the blurring of their professional and personal lives, which has had the appearance of fracturing the independence of the executive and judicial branches of government.

In this light, Thomas’ sole dissent in the case involving Trump’s turning over documents to the Jan. 6 committee is all the more alarming.

‘What he has done and will do’

Clarence Thomas has cultivated a distinct judicial philosophy and vision of the world – and a view of his place in it.

From what can be gleaned from his own writings and speeches, his vision has been derived from Black nationalism, capitalism, conservatism, originalism and his own interpretations of the law.

Since Thomas’ confirmation, his ideas and rulings have attracted many critics.

But his interpretations of the law are now at the center of the high court’s jurisprudence.

In his concurring opinion of the court’s decision to overturn Roe v. Wade, Thomas argued that the court should reconsider reversing other related landmark rulings, including access to contraception in Griswold v. Connecticut, LGBTQ+ sexual behavior and sodomy laws in Lawrence v. Texas and same-sex marriage in Obergefell v. Hodges.

In short, Thomas’ sentiments reveal a broader ultraconservative agenda to roll back the social and political gains that marginalized communities have won since the 1960s.

The rulings in those cases, Thomas wrote, relied on the due process clause of the 14th Amendment and “were demonstrably erroneous decisions.”

“In future cases,” Thomas explained, “we should reconsider all of this Court’s substantive due process precedents, including Griswold, Lawrence, and Obergefell … we have a duty to ‘correct the error’ established in those precedents.”

Other recent Supreme Court rulings, on Second Amendment rights, Miranda rights, campaign finance regulations and tribal sovereignty, are also evidence of Thomas’ impact on the nation’s highest court.

The long game

In his memoir and public speeches, Thomas identifies as a self-made man.

Though he has benefited from affirmative action programs – and the color of his skin played a role in his Supreme Court nomination – Thomas has staunchly opposed such efforts to remedy past racial discrimination. Like other notable Black conservatives, Thomas argues that group-based preferences reward those who seek government largesse rather than individual initiative.

With the exception of the guidance of Catholic Church institutions and of his grandfather, Myers Anderson, Thomas claims he earned his accomplishments through effort, hard work and his own initiative.

In a 1998 speech, Thomas foreshadowed his judicial independence and made clear that his attendance before the National Bar Association, the nation’s largest Black legal association, was not to defend his conservative views – or further anger his critics.

“But rather,” he explained, “to assert my right to think for myself, to refuse to have my ideas assigned to me as though I was an intellectual slave because I’m black.”

“I come to state that I’m a man, free to think for myself and do as I please,” Thomas went on. “I’ve come to assert that I am a judge and I will not be consigned the unquestioned opinions of others. But even more than that, I have come to say that, isn’t it time to move on?”

But like many of Thomas’ complexities, his own self-made narrative distorts the ideas of the first prominent Black Republican, who remains one of his intellectual heroes – Frederick Douglass, the statesman, abolitionist and fugitive ex-slave whose portrait has hung on the wall of Thomas’ office.

But in “Self-Made Men,” a speech he first delivered in 1859, Douglass disagreed with the idea that accomplishments result solely from individual uplift.

“Properly speaking,” Douglass wrote, “there are in the world no such men as self-made men. That term implies an individual independence of the past and present which can never exist.”

Law against the people

Thomas’ view of the law is rooted in the originalism doctrine of an immutable rather than living U.S. Constitution.

For Thomas, America since the Declaration of Independence in 1776 has been predominantly a republic, where laws are made for the people through their elected representatives. Unlike in a pure democracy, where the people vote directly and the majority rules, in a republic the rights of the minority are protected.

Dating back to ancient Rome, the history of republicanism is a story of denouncing domination, rejecting slavery and championing freedom.

Yet in my view, American republicanism has an underside: its long-standing basis in inequality that never intended its core ideals to apply beyond a small few.

Thomas claims consistency with America’s original founding.

In my view, Thomas’ perilous conservative activism works against a fundamental principle of the U.S. Constitution – “to form a more perfect union.”

Thomas’ rulings reveal a broader ultraconservative agenda to roll back the social and political gains that marginalized communities have won since the 1960s.

Neil Roberts, Professor of Political Science, University of Toronto

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How gonorrhea became more of a drug-resistant 'superbug' during the COVID-19 pandemic

COVID-19 has rightfully dominated infectious disease news since 2020. However, that doesn’t mean other infectious diseases took a break. In fact, U.S. rates of infection by gonorrhea have risen during the pandemic.

Unlike COVID-19, which is a new virus, gonorrhea is an ancient disease. The first known reports of gonorrhea date from China in 2600 BC, and the disease has plagued humans ever since. Gonorrhea has long been one of the most commonly reported bacterial infections in the U.S. It is caused by the bacterium Neisseria gonorrhoeae, which can infect mucous membranes in the genitals, rectum, throat and eyes.

Gonorrhea is typically transmitted by sexual contact. It is sometimes referred to as “the clap.”

Prior to the pandemic, there were around 1.6 million new gonorrhea infections each year. Over 50% of those cases involved strains of gonorrhea that had become unresponsive to treatment with at least one antibiotic.

In 2020, gonorrhea infections initially went down 30%, most likely due to pandemic lockdowns and social distancing. However, by the end of 2020 – the last year for which data from the Centers for Disease Control and Prevention is available – reported infections were up 10% from 2019.

It is unclear why infections went up even though some social distancing measures were still in place. But the CDC notes that reduced access to health care may have led to longer infections and more opportunity to spread the disease, and sexual activity may have increased when initial shelter-in-place orders were lifted.

As a molecular biologist, I have been studying bacteria and working to develop new antibiotics to treat drug-resistant infections for 20 years. Over that time, I’ve seen the problem of antibiotic resistance take on new urgency.

Gonorrhea, in particular, is a major public health concern, but there are concrete steps that people can take to prevent it from getting worse, and new antibiotics and vaccines may improve care in the future.

How to recognize gonorrhea

Around half of gonorrhea infections are asymptomatic and can only be detected through screening. Infected people without symptoms can unknowingly spread gonorrhea to others.

Typical early signs of symptomatic gonorrhea include a painful or burning sensation when peeing, vaginal or penile discharge, or anal itching, bleeding or discharge. Left untreated, gonorrhea can cause blindness and infertility. Antibiotic treatment can cure most cases of gonorrhea as long as the infection is susceptible to at least one antibiotic.

There is currently only one recommended treatment for gonorrhea in the U.S. – an antibiotic called ceftriaxone – because the bacteria have become resistant to other antibiotics that were formerly effective against it. Seven different families of antibiotics have been used to treat gonorrhea in the past, but many strains are now resistant to one or more of these drugs.

The CDC tracks the emergence and spread of drug-resistant gonorrhea strains.

Why gonorrhea is on the rise

A few factors have contributed to the increase in infections during the COVID-19 pandemic.

Early in the pandemic, most U.S. labs capable of testing for gonorrhea switched to testing for COVID-19. These labs have also been contending with the same shortages of staff and supplies that affect medical facilities across the country.

Many people have avoided clinics and hospitals during the pandemic, which has decreased opportunities to identify and treat gonorrhea infections before they spread. In fact, because of decreased screening over the past two and a half years, health care experts don’t know exactly how much antibiotic-resistant gonorrhea has spread.

Also, early in the pandemic, many doctors prescribed antibiotics to COVID-19 patients even though antibiotics do not work on viruses like SARS-CoV-2, the virus that causes COVID-19. Improper use of antibiotics can contribute to greater drug resistance, so it is reasonable to suspect that this has happened with gonorrhea.

Overuse of antibiotics

Even prior to the pandemic, resistance to antibiotic treatment for bacterial infections was a growing problem. In the U.S., antibiotic-resistant gonorrhea infections increased by over 70% from 2017 to 2019.

Neisseria gonorrhoeae is a specialist at picking up new genes from other pathogens and from “commensal,” or helpful, bacteria. These helpful bacteria can also become antibiotic-resistant, providing more opportunities for the gonorrhea bacterium to acquire resistance genes.

Strains resistant to ceftriaxone have been observed in other countries, including Japan, Thailand, Australia and the U.K., raising the possibility that some gonorrhea infections may soon be completely untreatable.

Steps toward prevention

Currently, changes in behavior are among the best ways to limit overall gonorrhea infections – particularly safer sexual behavior and condom use.

However, additional efforts are needed to delay or prevent an era of untreatable gonorrhea.

Scientists can create new antibiotics that are effective against resistant strains; however, decreased investment in this research and development over the past 30 years has slowed the introduction of new antibiotics to a trickle. No new drugs to treat gonorrhea have been introduced since 2019, although two are in the final stage of clinical trials.

Vaccination against gonorrhea isn’t currently possible, but it could be in the future. Vaccines effective against the meningitis bacterium, a close relative of the gonorrhea bacterium, can sometimes also provide protection against gonorrhea. This suggests that a gonorrhea vaccine should be achievable.

The World Health Organization has begun an initiative to reduce gonorrhea worldwide by 90% before 2030. This initiative aims to promote safe sexual practices, increase access to high-quality health care for sexually transmitted diseases and expand testing so that asymptomatic infections can be treated before they spread. The initiative is also advocating for increased research into vaccines and new antibiotics to treat gonorrhea.

Setbacks in fighting drug-resistant gonorrhea during the COVID-19 pandemic make these actions even more urgent.

Kenneth Keiler, Professor of Biochemistry and Molecular Biology, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Supreme Court to hear arguments challenging a California law requiring humane animal treatment

Should Californians be able to require higher welfare standards for farm animals that are raised in other states if products from those animals are to be sold in California? The U.S. Supreme Court will confront that question when it hears oral argument in National Pork Producers Council v. Ross on Oct. 11, 2022.

Pork producers are challenging a law that California voters adopted in 2018 via ballot initiative with over 63% approval. It set new conditions for raising hogs, veal calves and egg-laying chickens whose meat or eggs are sold in California. The state represents about 15% of the U.S. pork market.

At most commercial hog farms, pregnant sows are kept in “gestation crates” that measure 2 feet by 7 feet – enough room for the animals to sit, stand and lie down, but not enough to turn around. California’s law requires that each sow must have at least 24 square feet of floor space – nearly double the amount that most now get. It does not require farmers to raise free-range pigs, just to provide more square feet when they keep hogs in buildings.

Pork farmers say gestation crates keep pregnant sows from fighting, but animal welfare advocates call the devices inhumane.

The National Pork Producers Council argues that this requirement imposes heavy compliance costs on farmers across the U.S., since large hog farms may house thousands of sows, and that it restricts interstate commerce. The Constitution’s commerce clause delegates authority to regulate interstate commerce to the federal government. In a series of cases over the past 50 years, the Supreme Court has made clear that it will strike down any state law that seeks to control commerce in another state or give preference to in-state commerce.

Farmers and animal welfare advocates understand that if California wins, states with the most progressive animal welfare policies – primarily West Coast and Northeast states – will be able to effectively set national standards for the well-being of many agricultural animals, including chickens, dairy cows and beef cattle. Conceivably, California might also be able to require basic conditions for human labor, such as minimum wage standards, associated with products sold in California.

Nine other states have already adopted laws requiring pork producers to phase out gestation crates. Massachusetts’s law would also apply to retail sales of pork raised elsewhere, like California’s, but its enforcement is on hold pending the Supreme Court’s ruling in the California case.

States control farm animal welfare

The main federal law that regulates living conditions for animals is the Animal Welfare Act, which was signed into law in 1966. Among other things, it requires the Department of Agriculture to adopt humane regulations for the keeping of animals that are exhibited in zoos and circuses or sold as pets. However, farm animals are explicitly exempted from the definition of “animal.”

While the federal government is mute on farm animal welfare, each state clearly has the power to regulate this issue within its borders. For example, in recent years, nine states have outlawed housing egg-laying chickens in “battery cages” that have been the industry standard for decades. These wire enclosures are so small that the birds cannot spread their wings.

Since many states still permit battery cages, egg-laying chickens’ quality of life depends on the state in which they reside.

Shelves lined with small wire cages, each holding multiple chickens.

Chickens in battery cages on an Iowa poultry farm.

AP Photo/Charlie Neibergall

It is also clear that the state of California has no power to adopt laws that are binding on the farmers of other states. This case falls between those two points – here’s how:

California’s market power

The California law says that if producers want to sell pork in California, they must raise pigs under conditions that comply with the state’s regulations. Farmers do not have to meet these standards unless they want to sell in California. The same requirement is applied to producers located in California and those based elsewhere, so the law does not directly discriminate between states in a way that would constitute a clear commerce clause violation.

Producers of eggs and veal who sell in California are on track to implement new space requirements for their animals under the law. In my view, however, much of the pork industry appears to be in denial. Instead of working out how to comply, the National Pork Producers Council wants the courts to set the California law aside.

Even as this case moves forward, however, major producers including Hormel and Tyson have said they will be able to comply with the California standard. Niman Ranch, a network of family farmers and ranchers who raise livestock humanely and sustainably, has filed an amicus brief with the Supreme Court supporting California.

Admittedly, pork farmers have invested millions of dollars in their existing facilities, and the system efficiently produces huge quantities of cheap pork. But Californians have taken the position that this output comes at an ethically unacceptable cost to animals in the system.

Weighing ethics against compliance costs

In considering this case, the Supreme Court will confront two questions. First, does California’s requirement constitute a burden on interstate commerce? A U.S. District Court in California held that the answer was no, and the U.S. 9th Circuit Court of Appeals affirmed this ruling.

There is no magical formula for what constitutes a burden on interstate commerce, so it is impossible to know in advance what the Supreme Court will say about this point of the case. The present court has not addressed this issue.

If the court should decide that the California law does restrict interstate commerce, it then must consider whether the measure meets the “Pike test,” which was set forth in the 1970 ruling Pike v. Bruce Church, Inc. In that case, the court held that a state law that “regulates even-handedly” must be upheld unless the burden that the law imposes on interstate commerce “is clearly excessive in relation to the putative local benefits.” Put another way, is Californians’ social interest in better welfare for pigs substantially outweighed by the economic cost to producers?

In a 2010 ruling, United States v. Stevens, the court acknowledged that “the prohibition of animal cruelty itself has a long history in American law, starting with the early settlement of the Colonies.” However, the court concluded that depictions of animal cruelty – the plaintiff had been convicted for producing and distributing dogfighting videos – qualified as protected speech under the First Amendment and that this protection outweighed society’s interest in promoting animal welfare.

This video from the Rodale Institute, a nonprofit that conducts research, training and consumer education on organic agriculture, compares raising pigs on pasture to the large-scale confined model that dominates the pork industry.

Is a national standard in the cards?

Many animal welfare questions involve striking this kind of balance between ethical positions and economic consequences in a political context. It is like mixing oil and water, which makes predictions difficult.

The biggest unknown is what views the newest Supreme Court justices will bring to this case. Only four current justices – John Roberts, Clarence Thomas, Samuel Alito and Sonia Sotomayor – were members of the court when it ruled on the Stevens case in 2010. Will today’s court support California’s right to regulate products sold within its borders, or meat corporations’ economic arguments? How many justices will see farm animal welfare as an important public concern?

I expect that the court will uphold the California law – and that if this happens, within five years livestock producers will be proposing national legislation setting uniform welfare standards for farm animals. It is impossible to predict now whether a national law would improve animal welfare or adopt existing poor welfare practices.

David Favre, Professor of Law at Michigan State University College of Law, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Hurricane Ian capped two weeks of extreme storms: How climate change fuels tropical cyclones

When Hurricane Ian hit Florida, it was one of the United States’ most powerful hurricanes on record, and it followed a two-week string of massive, devastating storms around the world.

A few days earlier in the Philippines, Typhoon Noru gave new meaning to rapid intensification when it blew up from a tropical storm with 50 mph winds to a Category 5 monster with 155 mph winds the next day. Hurricane Fiona flooded Puerto Rico, then became Canada’s most intense storm on record. Typhoon Merbok gained strength over a warm Pacific Ocean and tore up over 1,000 miles of the Alaska coast.

Major storms hit from the Philippines in the western Pacific to the Canary Islands in the eastern Atlantic, to Japan and Florida in the middle latitudes and western Alaska and the Canadian Maritimes in the high latitudes.

A lot of people are asking about the role rising global temperatures play in storms like these. It’s not always a simple answer.

Record-setting cyclones in late September 2022.

Mathew Barlow

It is clear that climate change increases the upper limit on hurricane strength and rain rate and that it also raises the average sea level and therefore storm surge. The influence on the total number of hurricanes is currently uncertain, as are other aspects. But, as hurricanes occur, we expect more of them to be major storms. Hurricane Ian and other recent storms, including the 2020 Atlantic season, provide a picture of what that can look like.

Our research has focused on hurricanes, climate change and the water cycle for years. Here’s what scientists know so far.

Rainfall: Temperature has a clear influence

The temperatures of both the ocean and the atmosphere are critical to hurricane development.

Hurricanes are powered by the release of heat when water that evaporates from the ocean’s surface condenses into the storm’s rain.

A warmer ocean produces more evaporation, which means more water is available to the atmosphere. A warmer atmosphere can hold more water, which allows more rain. More rain means more heat is released, and more heat released means stronger winds.

Simplified cross section of a hurricane.

Mathew Barlow

These are basic physical properties of the climate system, and this simplicity lends a great deal of confidence to scientists’ expectations for storm conditions as the planet warms. The potential for greater evaporation and higher rain rates is true in general for all types of storms, on land or sea.

That basic physical understanding, confirmed in computer simulations of these storms in current and future climates, as well as recent events, leads to high confidence that rainfall rates in hurricanes increase by at least 7% per degree of warming.
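
For context, here is a minimal back-of-the-envelope sketch of where a figure of that size comes from; it is my own illustration using standard textbook values, not a calculation from the article. The roughly 7%-per-degree number tracks the Clausius–Clapeyron relation, which sets how quickly the saturation vapor pressure of water – and hence the moisture available for rain – grows with temperature:

$$\frac{1}{e_s}\frac{de_s}{dT} \approx \frac{L_v}{R_v T^2} \approx \frac{2.5\times 10^{6}\ \mathrm{J\,kg^{-1}}}{(461\ \mathrm{J\,kg^{-1}\,K^{-1}})\times(288\ \mathrm{K})^2} \approx 0.065\ \mathrm{K^{-1}},$$

that is, roughly 6–7% more water vapor capacity for each degree Celsius of warming, which is why hurricane rain rates are expected to climb at least at that pace.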

Storm strength and rapid intensification

Scientists also have high confidence that wind speeds will increase in a warming climate and that the proportion of storms that intensify into powerful Category 4 or 5 storms will increase. Similar to rainfall rates, increases in intensity are based on the physics of extreme rainfall events.

Damage rises steeply with wind speed, so more intense storms can have a much bigger impact on lives and economies. The damage potential from a Category 4 storm with 150 mph winds, like Ian at landfall, is roughly 256 times that of a Category 1 storm with 75 mph winds.
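
As a rough check on that comparison – my own arithmetic from the two figures quoted above, not the authors' calculation – suppose damage scales as a power of wind speed, $D \propto v^{k}$. Going from 75 mph to 150 mph doubles the wind speed, so a 256-fold increase in damage potential implies

$$\left(\frac{150}{75}\right)^{k} = 2^{k} = 256 \quad\Longrightarrow\quad k = 8,$$

i.e., the quoted numbers correspond to damage growing roughly as the eighth power of wind speed. Published estimates of this exponent vary, so treat the value as illustrative of the comparison in the text rather than a settled constant.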

Two women stand in a wind-damaged kitchen looking up at the sky through a missing section of roof.

Hurricane Ian tore up roofs on homes, businesses and at least one hospital.

Bryan R. Smith / AFP via Getty Images

Whether warming causes storms to intensify more rapidly is an active area of research, with some models offering evidence that this will probably happen. One of the challenges is that the world has limited reliable historical data for detecting long-term trends. Atlantic hurricane observations go back to the 1800s, but they’re only considered reliable globally since the 1980s, with satellite coverage.

That said, there is already some evidence that an increase in rapid intensification is distinguishable in the Atlantic.

Within the last two weeks of September 2022, both Noru and Ian exhibited rapid intensification. In the case of Ian, successful forecasts of rapid intensification were issued several days in advance, when the storm was still a tropical depression. They exemplify the significant progress in intensity forecasts in the past few years, although improvements are not uniform.

There is some indication that, on average, the location where storms reach their maximum intensity is moving poleward. This would have important implications for the location of the storms’ main impacts. However, it is still not clear that this trend will continue in the future.

Storm surge: Two important influences

Storm surge – the rise in water at a coast caused by a storm – is related to a number of factors including storm speed, storm size, wind direction and coastal sea bottom topography. Climate change could have at least two important influences.

Homes across entire neighborhoods seen from a helicopter are surrounded by floodwater.

The day after Hurricane Ian made landfall, homes were surrounded by water in Fort Myers, Fla.

AP Photo/Marta Lavandier

Stronger storms increase the potential for higher surge, and rising temperatures are causing sea level to rise, which increases the water height, so the storm surge is now higher than before in relation to the land. As a result, there is high confidence for an increase in the potential for higher storm surges.

Speed of movement and potential for stalling

The speed of the storm can be an important factor in total rainfall amounts at a given location: A slower-moving storm, like Hurricane Harvey in 2017, provides a longer period of time for rain to accumulate.

There are indications of a global slowdown in hurricane speed, but the quality of historical data limits understanding at this point, and the possible mechanisms are not yet understood.

Frequency of storms in the future is less clear

How the number of hurricanes that form each year may change is another major question that is not well understood.

There is no definitive theory explaining the number of storms in the current climate, or how it will change in the future.

Besides having the right environmental conditions to fuel a storm, the storm has to form from a disturbance in the atmosphere. There is currently a debate in the scientific community about the role of these pre-storm disturbances in determining the number of storms in the current and future climates.

Natural climate variations, such as El Niño and La Niña, also have a substantial impact on whether and where hurricanes develop. How they and other natural variations will change in the future and influence future hurricane activity is a topic of active research.

How much did climate change influence Ian?

Scientists conduct attribution studies on individual storms to gauge how much global warming likely affected them, and those studies are currently underway for Ian.

However, individual attribution studies are not needed to be certain that the storm occurred in an environment that human-caused climate change made more favorable for a stronger, rainier and higher-surge disaster. Human activities will continue to increase the odds for even worse storms, year over year, unless rapid and dramatic reductions in greenhouse gas emissions are undertaken.

Mathew Barlow, Professor of Climate Science, UMass Lowell and Suzana J. Camargo, Lamont Research Professor of Ocean and Climate Physics, Columbia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tactical nuclear weapons: Security expert assesses what they mean for the war in Ukraine

Tactical nuclear weapons have burst onto the international stage as Russian President Vladimir Putin, facing battlefield losses in eastern Ukraine, has threatened that Russia will “make use of all weapon systems available to us” if Russia’s territorial integrity is threatened. Putin has characterized the war in Ukraine as an existential battle against the West, which he said wants to weaken, divide and destroy Russia.

U.S. President Joe Biden criticized Putin’s overt nuclear threats against Europe. Meanwhile, NATO Secretary-General Jens Stoltenberg downplayed the threat, saying Putin “knows very well that a nuclear war should never be fought and cannot be won.” This is not the first time Putin has invoked nuclear weapons in an attempt to deter NATO.

I am an international security scholar who has worked on and researched nuclear restraint, nonproliferation and costly signaling theory applied to international relations for two decades. Russia’s large arsenal of tactical nuclear weapons, which are not governed by international treaties, and Putin’s doctrine of threatening their use have raised tensions, but tactical nuclear weapons are not simply another type of battlefield weapon.

Tactical by the numbers

Tactical nuclear weapons, sometimes called battlefield or nonstrategic nuclear weapons, were designed to be used on the battlefield – for example, to counter overwhelming conventional forces like large formations of infantry and armor. They are smaller than strategic nuclear weapons like the warheads carried on intercontinental ballistic missiles.

While experts disagree about precise definitions of tactical nuclear weapons, lower explosive yields, measured in kilotons, and shorter-range delivery vehicles are commonly identified characteristics. Tactical nuclear weapons vary in yields from fractions of 1 kiloton to about 50 kilotons, compared with strategic nuclear weapons, which have yields that range from about 100 kilotons to over a megaton, though much more powerful warheads were developed during the Cold War.

For reference, the atomic bomb dropped on Hiroshima was 15 kilotons, so some tactical nuclear weapons are capable of causing widespread destruction. The largest conventional bomb the U.S. has dropped, the Mother of All Bombs or MOAB, has a yield of 0.011 kilotons.

Delivery systems for tactical nuclear weapons also tend to have shorter ranges, typically under 310 miles (500 kilometers) compared with strategic nuclear weapons, which are typically designed to cross continents.

Because low-yield nuclear weapons’ explosive force is not much greater than that of increasingly powerful conventional weapons, the U.S. military has reduced its reliance on them. Most of its remaining stockpile, about 150 B61 gravity bombs, is deployed in Europe. The U.K. and France have completely eliminated their tactical stockpiles. Pakistan, China, India, Israel and North Korea all have several types of tactical nuclear weaponry.

Russia has retained more tactical nuclear weapons, estimated to be around 2,000, and relied more heavily on them in its nuclear strategy than the U.S. has, mostly due to Russia’s less advanced conventional weaponry and capabilities.

Russia’s tactical nuclear weapons can be deployed by ships, planes and ground forces. Most take the form of air-to-surface missiles, short-range ballistic missiles, gravity bombs and depth charges delivered by medium-range and tactical bombers, or naval anti-ship and anti-submarine torpedoes. These warheads are mostly held in reserve in central depots in Russia.

Russia has updated its delivery systems so that they can carry either nuclear or conventional warheads. There is heightened concern over these dual-capability delivery systems because Russia has used many of these short-range missile systems, particularly the Iskander-M, to bombard Ukraine.

Russia’s Iskander-M mobile short-range ballistic missile can carry conventional or nuclear warheads. Russia has used the missile with conventional warheads in the war in Ukraine.

Tactical nuclear weapons are substantially more destructive than their conventional counterparts even at the same explosive energy. Nuclear explosions are more powerful by factors of 10 million to 100 million than chemical explosions, and leave deadly radiation fallout that would contaminate air, soil, water and food supplies, similar to the disastrous Chernobyl nuclear reactor meltdown in 1986. The interactive simulation site NUKEMAP by Alex Wellerstein depicts the multiple effects of nuclear explosions at various yields.

Can any nuke be tactical?

Unlike strategic nuclear weapons, tactical weapons are not focused on mutually assured destruction through overwhelming retaliation or nuclear umbrella deterrence to protect allies. While tactical nuclear weapons have not been included in arms control agreements, medium-range weapons were covered by the now-defunct Intermediate-Range Nuclear Forces Treaty (1987-2018), which reduced nuclear weapons in Europe.

Both the U.S. and Russia reduced their total nuclear arsenals from about 19,000 and 35,000 respectively at the end of the Cold War to about 3,700 and 4,480 as of January 2022. Russia’s reluctance to negotiate over its nonstrategic nuclear weapons has stymied further nuclear arms control efforts.

The fundamental question is whether tactical nuclear weapons are more “useable” and therefore could potentially trigger a full-scale nuclear war. Their development was part of an effort to overcome concerns that because large-scale nuclear attacks were widely seen as unthinkable, strategic nuclear weapons were losing their value as a deterrent to war between the superpowers. The nuclear powers would be more likely to use tactical nuclear weapons, in theory, and so the weapons would bolster a nation’s nuclear deterrence.

Yet, any use of tactical nuclear weapons would invoke defensive nuclear strategies. In fact, then-Secretary of Defense James Mattis notably stated in 2018: “I do not think there is any such thing as a tactical nuclear weapon. Any nuclear weapon use any time is a strategic game changer.”

This documentary explores how the risk of nuclear war has changed – and possibly increased – since the end of the Cold War.

The U.S. has criticized Russia’s nuclear strategy of “escalate to de-escalate,” in which tactical nuclear weapons could be used to deter a widening of the war to include NATO.

While there is disagreement among experts, Russian and U.S. nuclear strategies focus on deterrence, and so involve large-scale retaliatory nuclear attacks in response to any first use of nuclear weapons. This means that Russia’s threat to use nuclear weapons as a deterrent to conventional war is threatening an action that would, under nuclear warfare doctrine, invite a retaliatory nuclear strike if aimed at the U.S. or NATO.

Nukes and Ukraine

I believe Russian use of tactical nuclear weapons in Ukraine would not achieve any military goal. It would contaminate the territory that Russia claims as part of its historic empire and possibly drift into Russia itself. It would increase the likelihood of direct NATO intervention and destroy Russia’s image in the world.

Putin aims to deter Ukraine’s continued successes in regaining territory by preemptively annexing regions in the east of the country after holding staged referendums. He could then declare that Russia would use nuclear weapons to defend the new territory as though the existence of the Russian state were threatened. But I believe this claim stretches Russia’s nuclear strategy beyond belief.

Putin has explicitly claimed that his threat to use tactical nuclear weapons is not a bluff precisely because, from a strategic standpoint, using them is not credible. In other words, under any reasonable strategy, using the weapons is unthinkable and so threatening their use is by definition a bluff.

Nina Srinivasan Rathbun, Professor of International Relations, USC Dornsife College of Letters, Arts and Sciences

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Ron DeSantis and Greg Abbott channel segregationists with their anti-immigration stunts

As a historian of racism and white supremacy in the United States, I’ve become accustomed to callous actions like those of Republican governors who organized transportation for Latin American migrants to states run by their political opponents.

Governors Greg Abbott in Texas and Ron DeSantis in Florida are following the playbook of segregationists who provided one-way bus tickets to Northern cities for Black Southerners in the 1960s. At that time, the fight for racial equality was attracting national attention and support from many white Americans, inspiring some to join interracial Freedom Rides organized by civil rights groups to challenge segregation on interstate bus lines.

Then, as now, the message Southern racists aimed to send with their “reverse freedom rides” was, “Here, you love them so much, you take care of them.”

But these acts were more than just political stunts designed to embarrass Northern political leaders who sympathized with the civil rights movement. They were part of a broader effort by white supremacists to remove Black Americans from their communities and avoid dealing with the social consequences of centuries of racial discrimination.

Slavery, sharecropping and displacement

In the slavery and Jim Crow eras, racist policies backed by extreme violence limited access to education and economic opportunities for Black people to ensure that they had few options other than working for white employers.

Black sharecropping families in the early 20th century depended on their landlords to provide food, clothing and housing throughout the year until harvest time, when the costs of these goods were deducted from their share of the money made from sales of the crop. Plantation owners controlled the process, frequently using it to cheat workers out of their earnings and keep them perpetually in debt.

By the 1960s, however, most of these workers were no longer needed. Mechanization eliminated millions of agricultural jobs and generated massive unemployment in rural Southern communities. Rather than invest in job training programs or other initiatives to help displaced farm laborers, political leaders enacted policies designed to drive poor people out.

Strict eligibility requirements and arbitrary administration of state public assistance programs excluded many Black families from receiving aid. State legislators were slow to take advantage of federal funds that were available to expand anti-poverty programs, arguing that these were ploys to force integration on the South.

Government inaction left thousands of people without homes or income and exacerbated the suffering of the unemployed.

Segregationists’ ‘final solution’

Civil rights workers who came to the South to help local Black activists with desegregation and voter registration efforts were shocked by the economic deprivation that existed in the communities they visited. They reported seeing widespread hunger, dilapidated housing, unsanitary conditions, high infant mortality rates and other adverse health effects.

Raymond Wheeler, a doctor who visited Mississippi in 1967, described the state as “a vast concentration camp, in which live a great group of poor uneducated, semi-starving people, from whom all but token public support has been withdrawn.”

Others took the analogy to Nazi Germany further, arguing that this was white supremacists’ “final solution to the race question.” By denying Black Americans access to the basic means of survival, they left them with no options but to migrate away.

Political and economic motivations

The motivations behind these policies were both political and economic. White racists understood that providing assistance to displaced workers would encourage Black people to stay in the South. That posed a threat to their power, especially after passage of the Voting Rights Act in 1965 enabled more Black people to register to vote, participate in elections and run for office.

Moreover, the candidates Black Southerners supported ran on platforms that advocated policies to ensure racial and economic justice: investment in schools and other public services, enhanced assistance for unemployed people, more affordable health care and a stronger social safety net for those who were unable to work.

These proposals were anathema to wealthy white people who would face higher tax rates to pay for them. Warning of the consequences should Black Southerners be allowed to vote, Mississippi Citizens’ Council leader Ellett Lawrence asserted that property owners could see tax increases of “100%, 200% or more” if Black people were elected to office.

In a study of Wilcox County, Alabama, the National Education Association found that many landowners were afraid “the Negro majority will obtain control and raise land taxes to finance education and other services.” It concluded that this group showed “little taste for the anti-poverty programs of the sixties because it is more anxious to solve its problems through outmigration than it is to improve all of its people.”

Black and white photograph of people standing and sitting outside of a burning bus.

A group of Freedom Riders outside a bus that was set aflame by a group of white people in Alabama.

Underwood Archives/Getty Images

White supremacy then and now

In many ways, Republicans like Abbott and DeSantis are the political descendants of Southern segregationists whose cruelty horrified other Americans in the 1960s.

Immigration scholars have noted how U.S. foreign policies contributed to the poverty and violence in Central and South America that migrants are fleeing. Yet rather than acknowledge this – along with assuming the moral responsibilities it entails – some GOP leaders denigrate and dehumanize refugees to win support from voters drawn to xenophobic messaging.

Watching this resurgent nativism, racism and disregard for human rights gain strength in the 21st century is ominous for anyone familiar with where these ideas have led in the past.

Greta de Jong, Professor of History, University of Nevada, Reno

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Alaska pounded by rare fall typhoon fueled by unusually warm Pacific Ocean waters

The powerful remnants of Typhoon Merbok pounded Alaska’s western coast on Sept. 17, 2022, pushing homes off their foundations and tearing apart protective berms as water flooded communities.

Storms aren’t unusual here, but Merbok built up over unusually warm water. Its waves reached 50 feet over the Bering Sea, and its storm surge sent water levels into communities at near record highs along with near hurricane-force winds.

Merbok also hit during the fall subsistence harvest season, when the region’s Indigenous communities are stocking up food for the winter. Rick Thoman, a climate scientist at the University of Alaska Fairbanks, explained why the storm was unusual and the impact it’s having on coastal Alaskans.

What stands out the most about this storm?

It isn’t unusual for typhoons to affect some portion of Alaska, typically in the fall, but Merbok was different.

It formed in a part of the Pacific, far east of Japan, where historically few typhoons form. The water there is typically too cold to support a typhoon, but right now, we have extremely warm water in the north-central Pacific. Merbok traveled right over waters that are the warmest on record going back about 100 years.

Map shows warm waters off Japan and Russia's Kamchatka region.

Sea surface temperatures show unusually warm water over the north-central Pacific Ocean, where Typhoon Merbok passed through.

Alaska Center for Climate Assessment

The Western Bering Sea, closer to Russia, has been running above normal sea surface temperature since last winter. The Eastern Bering Sea – the Alaska part – has been normal to slightly cooler than normal since spring. That temperature difference in the Bering Sea helped to feed the storm and was probably part of the reason the storm intensified to the level it did.

When Merbok moved into the Bering Sea, it wound up being by far the strongest storm this early in the autumn. We’ve had stronger storms, but they typically occur in October and November.

Did climate change have a bearing on the storm?

There’s a strong likelihood that Merbok was able to form where it did because of the warming ocean.

With warm ocean water, there’s more evaporation going into the atmosphere. Because all the atmospheric ingredients came together, Merbok was able to bring that very warm, moist air along with it. Had the ocean been at a temperature more typical of 1960, there wouldn’t have been as much moisture in the storm.

Bar chart showing temperatures rising

Global ocean temperatures have been rising. The bars show how annual temperatures departed from the 20th century average.

NOAA

How extreme was the flooding compared to past storms?

The most striking feature in terms of impact is the tremendous area that was damaged. All coastal regions from north of Bristol Bay to just beyond the Bering Strait – hundreds of miles of coastline – felt some impact.

At Nome – one of the very few places in western Alaska where we have long-term ocean level information – the ocean was 10.5 feet (3.2 meters) above the low-tide line on Sept. 17, 2022. That’s the highest there in nearly half a century, since the historic storm of November 1974.

In Golovin and Newtok, multiple houses floated off their foundations and are no longer habitable.

Shaktoolik lost its protective berm, which is very bad news. Before the berm was built, the community’s freshwater supply was easily inundated with saltwater. The community is now at greater risk of flooding, and even a moderate storm could inundate its freshwater supply. Residents can rebuild the berm, but how quickly depends on money and resources.

Another important impact is to hunting and fishing camps along the coasts. Because of the region’s subsistence economy, those camps are crucial, and they are expensive to rebuild.

There are no roads into these coastal communities, and getting lumber for rebuilding homes and these camps is difficult. And we’re moving into what is typically the stormiest time of year, which makes recovery harder because planes often can’t land.

Lots of places also lost power and cell phone communication. The power in these remote areas is generated in the community – if that goes out, there is no alternative. People lose power to their freezers, which they’re stocking up for the winter. Towns might have one grocery store, and if it can’t open or loses power, there is no other option.

Winter is coming, and the time when it’s feasible to make repairs is running short. This is also the middle of hunting season, which in western Alaska is not recreation – it’s how you feed your family. These are predominantly or almost exclusively Indigenous communities. Repairs are going to take time away from subsistence hunting, so all of these things are coming together at once.

Does the lack of sea ice as a buffer make a difference for erosion?

Historically, with storms later in the season, even a small bit of sea ice can offer protection to dampen the waves. But there’s no ice in the Bering Sea at all this time of year. The full wave action pounds right to the beach.

As sea ice declines with warming global temperatures, communities will see more damage from storms later in the year, too.

Are there lessons from this storm for Alaska?

As bad as this storm was, and it was very bad, others will be coming. This is a stormy part of the world, and state and federal governments need to do a better job of communicating risks and helping communities and tribes ahead of time.

That might mean evacuating vulnerable people. Because if you wait until it’s certain that there’s a problem, it’s too late. Almost all of these communities are isolated.

I would say this is a classic case of large-scale weather models showing a general idea of the risk far in advance, but it takes longer to respond for isolated communities like those in rural Alaska. By Sept. 12, Merbok’s storm track was clear, but if communities aren’t briefed until two to three days before the storm, there isn’t enough time for them to fully prepare.

Rick Thoman, Alaska Climate Specialist, University of Alaska Fairbanks

This article is republished from The Conversation under a Creative Commons license. Read the original article.

9/11 survivors’ exposure to toxic dust and the chronic health conditions that followed are ongoing failures

The 9/11 terrorist attack on the World Trade Center in New York resulted in the loss of 2,753 people in the Twin Towers and surrounding area. After the attack, more than 100,000 responders and recovery workers from every U.S. state – along with some 400,000 residents and other workers around ground zero – were exposed to a toxic cloud of dust that fell as a ghostly, thick layer of ash and then hung in the air for more than three months.

The World Trade Center dust plume, or WTC dust, consisted of a dangerous mixture of cement dust and particles, asbestos and a class of chemicals called persistent organic pollutants. These include cancer-causing dioxins and polyaromatic hydrocarbons, or PAHs, which are byproducts of fuel combustion.

The dust also contained heavy metals that are known to be poisonous to the human body and brain, such as lead – used in the manufacturing of flexible electrical cables – and mercury, which is found in float valves, switches and fluorescent lamps. The dust also contained cadmium, a carcinogen toxic to the kidneys that is used in the manufacturing of electric batteries and pigments for paints.

Smoke pours from the Twin Towers of the World Trade Center in New York City on September 11, 2001.

One of the haunting images from 9/11: Smoke pours from the twin towers of the World Trade Center in New York after they were hit by two hijacked airliners.

Robert Giroux via Getty Images

Polychlorinated biphenyls, human-made chemicals used in electrical transformers, were also part of the toxic stew. PCBs are known to be carcinogenic, toxic to the nervous system and disruptive to the reproductive system. But they became even more harmful when incinerated at high heat from the jets’ fuel combustion and then carried by very fine particles.

WTC dust was made up of both “large” particulate matter and very small, fine and ultrafine particles. These particularly small particles are known to be highly toxic, especially to the nervous system, since they can travel directly through the nasal cavity to the brain.

Many first responders and others who were directly exposed to the dust developed a severe and persistent cough that lasted for a month, on average. They were treated at Mount Sinai Hospital and received care at the Clinic of Occupational Medicine, a well-known center for work-related diseases.

I am a physician specializing in occupational medicine who began working directly with 9/11 survivors in my role as director of the WTC Health Program Data Center at Mount Sinai beginning in 2012. That program collects data, as well as monitors and oversees the public health of WTC rescue and recovery workers. After eight years in that role, I moved to Florida International University in Miami, where I am planning to continue working with 9/11 responders who are moving to Florida as they reach retirement age.

In lower Manhattan near Ground Zero, people run away as the North Tower of the World Trade Center collapses.

Remembering 9/11: As the north tower of the World Trade Center collapses, a cloud of toxic gas chases terrified residents and tourists.

Jose Jimenez/Primera Hora via Getty Images

From acute to chronic conditions

After the initial “acute” health problems that 9/11 responders faced, they soon began experiencing a wave of chronic diseases that continue to affect them 20 years later. The persistent cough gave way to respiratory diseases such as asthma, chronic obstructive pulmonary disease (COPD) and upper airway diseases such as chronic rhinosinusitis, laryngitis and nasopharyngitis.

The litany of respiratory diseases also put many of them at risk for gastroesophageal reflux disease (GERD), which occurs at a higher rate in WTC survivors than in the general population. This condition occurs when stomach acids reenter the esophagus, or food pipe, that connects the stomach to the throat. As a consequence of either the airway or the digestive disorders, many of these survivors also struggle with sleep apnea, which requires additional treatments.

Further compounding the tragedy, about eight years after the attacks, cancers began to turn up in 9/11 survivors. These include tumors of the blood and lymphoid tissues such as lymphoma, myeloma and leukemia, which are well known to affect workers exposed to carcinogens in the workplace. But survivors also suffer from other cancers, including breast, head and neck, prostate, lung and thyroid cancers.

Some have also developed mesothelioma, an aggressive form of cancer related to exposure to asbestos. Asbestos was used in the early construction of the north tower until public advocacy and broader awareness of its health dangers brought its use to a halt.

And the psychological trauma that 9/11 survivors experienced has left many suffering from persistent mental health challenges. One study published in 2020 found that of more than 16,000 WTC responders for whom data was collected, nearly half reported a need for mental health care, and 20% of those who were directly affected developed post-traumatic stress disorder.

Many have told me that the contact they had with parts of human bodies or with the deadly scene and the tragic days afterward left a permanent mark on their lives. They are unable to forget the images, and many of them suffer from mood disorders as well as cognitive impairments and other behavioral issues, including substance use disorder.

On 9/11, shortly after the terrorist attack in New York City, a distraught survivor sits outside the World Trade Center.

Remembering 9/11: A distraught survivor sits outside the World Trade Center after the terrorist attack.

Jose Jimenez/Primera Hora via Getty Images

An aging generation of survivors

Now, 20 years on, these survivors face a new challenge as they age and move toward retirement – a difficult life transition that can sometimes lead to mental health decline. Prior to retirement, the daily drumbeat of work activity and a steady schedule often help keep the mind busy. But retirement can sometimes leave a void – one that for 9/11 survivors is too often filled with unwanted memories of the noises, smells, fear and despair of that terrible day and the days that followed. Many survivors have told me they do not want to return to Manhattan and certainly not to the WTC.

Aging can also bring with it forgetfulness and other cognitive challenges. But studies show that these natural processes are accelerated and more severe in 9/11 survivors, similar to the experience of veterans from war zones. This is a concerning trend, but all the more so because a growing body of research, including our own preliminary study, is finding links between cognitive impairment in 9/11 responders and dementia. A recent Washington Post piece detailed how 9/11 survivors are experiencing these dementia-like conditions in their 50s – far earlier than is typical.

The COVID-19 pandemic, too, has taken a toll on those who have already suffered from 9/11. People with preexisting conditions have been at far higher risk during the pandemic. Not surprisingly, a recent study found a higher incidence of COVID-19 in WTC responders from January through August 2020.

Honoring the 9/11 survivors

The health risks posed by direct exposure to the acrid dust were underestimated at the time, and poorly understood. Appropriate personal protective equipment, such as P100 half-face respirators, was not available.

But now, over 20 years on, we know much more about the risks – and we have much greater access to protective equipment that can keep responders and recovery workers safe following disasters. Yet, too often, I see that we have not learned and applied these lessons.

For instance, in the immediate aftermath of the condominium collapse near Miami Beach last June, it took days before P100 half-face respirators were fully available and made mandatory for the responders. Other examples around the world are even worse: One year after the Beirut explosion in August 2020, very little action had been taken to investigate and manage the physical and mental health consequences among responders and the impacted community.

Applying the lessons learned from 9/11 is a critically important way to honor the victims and the brave men and women who took part in the desperate rescue and recovery efforts back on those terrible days.

Roberto Lucchini, Professor of Occupational and Environmental Health Sciences, Florida International University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Frank Drake has passed away but his equation for alien intelligence is more important than ever

How many intelligent civilizations should there be in our galaxy right now? In 1961, the US astrophysicist Frank Drake, who passed away on September 2 at the age of 92, came up with an equation to estimate this. The Drake equation, dating from a stage in his career when he was “too naive to be nervous” (as he later put it), has become famous and bears his name.

This places Drake in the company of towering physicists with equations named after them including James Clerk Maxwell and Erwin Schrödinger. Unlike those, Drake’s equation does not encapsulate a law of nature. Instead it combines some poorly known probabilities into an informed estimate.

Whatever reasonable values you feed into the equation (see image below), it is hard to avoid the conclusion that we shouldn’t be alone in the galaxy. Drake remained a proponent of the search for extraterrestrial life throughout his days, but has his equation really taught us anything?

N = R* · f_p · n_e · f_l · f_i · f_c · L

The expanded Drake equation.

Author provided

Drake’s equation may look complicated, but its principles are really rather simple. It states that, in a galaxy as old as ours, the number of civilizations that are detectable by virtue of them broadcasting their presence must equate to the rate at which they arise, multiplied by their average lifetime.

Putting a value on the rate at which civilizations occur might seem to be guesswork, but Drake realized that it can be broken down into more tractable components.

He stated that the total rate is equal to the rate at which suitable stars are formed, multiplied by the fraction of those stars that have planets. This is then multiplied by the number of planets that are capable of bearing life per system, times the fraction of those planets where life gets started, multiplied by the fraction of those where life becomes intelligent, times the fraction of those that broadcast their presence.

Tricky values

Frank Drake.

Frank Drake. wikipedia, CC BY-SA

When Drake first formulated his equation, the only term that was known with any confidence was the rate of star formation – about 30 per year.

As for the next term, back in the 1960s, we had no evidence that any other stars have planets, and one in ten may have seemed like an optimistic guess. However, observational discoveries of exoplanets (planets orbiting other stars), which began in the 1990s and have blossomed this century, now make us confident that most stars have planets.

Common sense suggests that most systems of multiple planets would include one at the right distance from its star to be capable of supporting life. Earth is that planet in our solar system. In addition, Mars may have been suitable for abundant life in the past – and it could still be clinging on.

Today we also realise that planets don’t need to be warm enough for liquid water to exist at the surface to support life. It can occur in the internal ocean of an ice-covered body, supported by heat generated either by radioactivity or tides rather than sunlight.

There are several likely candidates among the moons of Jupiter and Saturn, for example. In fact, when we add moons as being capable of hosting life, the average number of habitable bodies per planetary system could easily exceed one.

The values of the terms towards the right-hand side of the equation, however, remain more open to challenge. Some would hold that, given a few million years to play with, life will get started anywhere that is suitable.

That would mean that the fraction of suitable bodies where life actually gets going is pretty much equal to one. Others say that we have as yet no proof of life starting anywhere other than Earth, and that the origin of life could actually be an exceedingly rare event.

Will life, once started, eventually evolve intelligence? It probably has to get past the microbial stage and become multicellular first.

There is evidence that multicellular life started more than once on Earth, so becoming multicellular may not be a barrier. Others, however, point out that on Earth the “right kind” of multicellular life, which continued to evolve, appeared only once and could be rare on the galactic scale.

Intelligence may confer a competitive advantage over other species, meaning its evolution could be rather likely. But we don’t know for sure.

And will intelligent life develop technology to the stage where it (accidentally or deliberately) broadcasts its existence across space? Perhaps for surface-dwellers such as ourselves, but it might be rare for inhabitants of internal oceans of frozen worlds with no atmosphere.

How long do civilizations last?

What about the average lifetime of a detectable civilization, L? Our TV transmissions began to make Earth detectable from afar in the 1950s, giving a minimum value for L of about 70 years in our own case.

In general though, L may be limited by the collapse of civilization (what are the odds of our own lasting a further 100 years?) or by the near total demise of radio broadcasting in favor of the internet, or by a deliberate choice to “go quiet” for fear of hostile galactic inhabitants.

Play with the numbers yourself – it’s fun! You’ll find that if L is more than 1,000 years, N (the number of detectable civilizations) is likely to be greater than a hundred. In an interview recorded in 2010, Drake said his best guess at N was about 10,000.
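If you want to experiment, a minimal Python sketch of the calculation is below. The equation is simply the product of the seven terms described above; every value plugged in here is an illustrative guess, not a figure from Drake or from this article.

```python
# Drake equation: N is the product of the seven terms.
# All input values below are illustrative guesses only -- swap in your own.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of currently detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(
    r_star=10,      # rate of suitable star formation, stars per year (assumed)
    f_p=0.9,        # fraction of stars with planets (assumed)
    n_e=1,          # habitable bodies per planetary system (assumed)
    f_l=0.5,        # fraction of those where life gets started (assumed)
    f_i=0.1,        # fraction of those where life becomes intelligent (assumed)
    f_c=0.1,        # fraction of those that broadcast their presence (assumed)
    lifetime=1000,  # average detectable lifetime L, in years (assumed)
)
print(f"Estimated detectable civilizations: N = {n:.0f}")  # 45 with these guesses
```

Because the equation is a simple product, N scales linearly with L, which is why the assumed lifetime of civilizations dominates the answer.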

We are learning more about exoplanets every year, and are entering an era when measuring their atmospheric composition to reveal evidence of life is becoming increasingly feasible. Within the next decade or two, we can hope for a much more soundly based estimate of the fraction of Earth-like planets where life gets started.

This won’t tell us about life in the internal oceans, but we can hope for insights into that from missions to the icy moons of Jupiter, Saturn and Uranus. And we could, of course, detect actual signals from extraterrestrial intelligence.

Either way, Frank Drake’s equation, which has stimulated so many lines of research, will continue to give us a thought-provoking sense of perspective. For that we should be grateful.

David Rothery, Professor of Planetary Geosciences, The Open University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Most human embryos naturally die after conception. Abortion bans conveniently ignore it

Many state legislatures are seriously considering granting legal personhood to human embryos at the earliest stages of development. Total abortion bans that consider humans to have full rights from the moment of conception have created a confusing legal domain that affects a wide range of areas, including assisted reproductive technologies, contraception, essential medical care and parental rights, among others.

However, an important biological feature of human embryos has been left out of much of the ethical and even scientific discussion informing reproductive policy – most human embryos die before anyone, including doctors, even knows they exist. This embryo loss typically occurs in the first two months after fertilization, before the clump of cells has developed into a fetus with immature forms of the body’s major organs. Total abortion bans that define personhood at conception mean that full legal rights exist for a 5-day-old blastocyst, a hollow ball of cells roughly 0.008 inches (0.2 millimeters) across with a high likelihood of disintegrating within a few days.

As an evolutionary biologist whose career has focused on how embryos develop in a wide variety of species over the course of evolution, I was struck by the extraordinarily high likelihood that most human embryos die due to random genetic errors. Around 60% of embryos disintegrate before people may even be aware that they are pregnant. Another 10% of pregnancies end in miscarriage, after the person knows they’re pregnant. These losses make clear that the vast majority of human embryos don’t survive to birth.

The emerging scientific consensus is that a high rate of early embryo loss is a common and normal occurrence in people. Research on the causes and evolutionary reasons for early embryo loss provides insight into this fundamental feature of human biology and its implications for reproductive health decisions.

Intrinsic embryo loss is common in mammals

Intrinsic embryo loss, or embryo death due to internal factors like genetics, is common in many mammals, such as cows and sheep. This persistent “reproductive wastage” has frustrated breeders attempting to increase livestock production but who are unable to eliminate high embryonic mortality.

In contrast, most embryo loss in animals that lay eggs like fish and frogs is due to external factors, such as predators, disease or other environmental threats. These lost embryos are effectively “recycled” in the ecosystem as food. These egg-laying animals have little to no intrinsic embryo loss.

Each square shows the first 24 hours of embryo development in a different animal species. From left to right: 1. zebrafish (Danio rerio), 2. sea urchin (Lytechinus variegatus), 3. black widow spider (Latrodectus), 4. tardigrade (Hypsibius dujardini), 5. sea squirt (Ciona intestinalis), 6. comb jelly (Ctenophore, Mnemiopsis leidyi), 7. parchment tube worm (Chaetopterus variopedatus), 8. roundworm (Caenorhabditis elegans), and 9. slipper snail (Crepidula fornicata).

In people, the most common outcome of reproduction by far is embryo loss due to random genetic errors. An estimated 70% to 75% of human conceptions fail to survive to birth. That number includes both embryos that are reabsorbed into the parent’s body before anyone knows an egg has been fertilized and miscarriages that happen later in the pregnancy.

An evolutionary drive for embryo loss

In humans, an evolutionary force called meiotic drive plays a role in early embryo loss. Meiotic drive is a type of competition within the genome of unfertilized eggs, where variations of different genes can manipulate the cell division process to favor their own transmission to the offspring over other variations.

Statistical models attempting to explain why most human embryos fail to develop usually start by observing that a massive number of random genetic errors occur in the mother’s eggs even before fertilization.

When sperm fertilize eggs, the resulting embryo’s DNA is packaged into 46 chromosomes – 23 from each parent. This genetic information guides the embryo through the development process as its cells divide and grow. When random mistakes occur during chromosome replication and division, the fertilized egg can inherit these errors, resulting in a condition called aneuploidy, which essentially means “the wrong number of chromosomes.” With the instructions for development now disorganized due to mixed-up chromosomes, embryos with aneuploidy are usually doomed.

Microscopy image of four early human embryos

As many as three out of four human embryos naturally die in the development process.

Red Hayabusa/iStock via Getty Images Plus

Because human and other mammal embryos are highly protected from environmental threats – unlike animals that lay eggs outside their bodies – researchers have theorized that these early losses have little effect on the reproductive success of the parent. This may allow humans and other mammals to tolerate meiotic drive over evolutionary time.

Counterintuitively, there may even be benefits to the high rates of genetic errors that result in embryo loss. Early loss of aneuploid embryos can direct maternal resources to healthier single newborns rather than twins or multiples. Also, in the deeper evolutionary history of a species, having a huge pool of genetic variants could occasionally provide a beneficial new adaptation that could aid in human survival in changing environments.

Spontaneous abortion is natural

Biological data on human embryos brings new questions to consider for abortion policies.

Although documentation is required in some states, early embryo loss is typically not recorded in the medical record. This is because it occurs before the person knows they are pregnant and often coincides with the next menstrual period. Until relatively recently, researchers were unaware of the extremely high rate of early embryo loss in people, and “conception” was an imagined moment estimated from the last menstruation.

How does naturally built-in, massive early embryo loss affect legal protections for human embryos?

Errors that occur during chromosomal replication are essentially random, which means development can be disrupted in different ways in different embryos. However, while both early embryos and late fetuses can become inviable due to genetic errors, early and late abortions are regulated very differently. Some states still require doctors to wait until the health of the pregnant person is endangered before allowing induced abortion of nonviable fetuses.

In the wake of anti-abortion laws, doctors have refused to treat patients experiencing miscarriages because the treatment uses the same procedures as an abortion.

Since so many pregnancies end naturally in their very earliest days, early embryo loss is exceedingly common, though most people won’t know they’ve experienced it. I believe that new laws ignoring this natural occurrence lead to a slippery slope that can put lives and livelihoods at risk.

Between 1973 and 2005, over 400 women were arrested for miscarriage in the U.S. With the current shift toward restrictive abortion policies, the continued criminalization of pregnancies that don’t result in birth, despite how common they are, is a growing concern.

I believe that acknowledging massive early embryo loss as a normal part of human life is one step forward in helping society make rational decisions about reproductive health policy.

Kathryn Kavanagh, Associate Professor of Biology, UMass Dartmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How modern warfare uses a 'staggering' amount of 'tactical' slavery: scholars

Some 40 million people are enslaved around the world today, though estimates vary. Modern slavery takes many different forms, including child soldiers, sex trafficking and forced labor, and no country is immune. From cases of family-controlled sex trafficking in the United States to the enslavement of fishermen in Southeast Asia’s seafood industry and forced labor in the global electronics supply chain, enslavement knows no bounds.

As scholars of modern slavery, we seek to understand how and why human beings are still bought, owned and sold in the 21st century, in hopes of shaping policies to eradicate these crimes.

Many of the answers trace back to causes like poverty, corruption and inequality. But they also stem from something less discussed: war.

In 2016, the United Nations Security Council named modern slavery a serious concern in areas affected by armed conflict. But researchers still know little about the specifics of how slavery and war are intertwined.

We recently published research analyzing data on armed conflicts around the world to better understand this relationship.

What we found was staggering: The vast majority of armed conflicts between 1989 and 2016 used some form of slavery.

Coding conflict

We used data from an established database about war, the Uppsala Conflict Data Program (UCDP), to look at how much, and in what ways, armed conflict intersects with different forms of contemporary slavery.

Our project was inspired by two leading scholars of sexual violence, Dara Kay Cohen and Ragnhild Nordås. These political scientists used that database to produce their own pioneering database about how rape is used as a weapon of war.

The Uppsala database breaks each conflict into two sides. Side A represents a nation-state, and Side B is typically one or more nonstate actors, such as rebel groups or insurgents.

Using that data, our research team examined instances of different forms of slavery, including sex trafficking and forced marriage, child soldiers, forced labor and general human trafficking. This analysis included information from 171 different armed conflicts. Because the use of slavery changes over time, we broke multiyear conflicts into separate “conflict-years” to study them one year at a time, for a total of 1,113 separate cases.

Coding each case to determine what forms of slavery were used, if any, was a challenge. We compared information from a variety of sources, including human rights organizations like Amnesty International and Human Rights Watch, scholarly accounts, journalists’ reporting and documents from governmental and intergovernmental organizations.
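The study’s underlying dataset and variable names are not reproduced in this article, so the short Python sketch below is only a hypothetical illustration of the “conflict-year” approach it describes: each multiyear conflict is expanded into one record per year so that the forms of slavery can be coded year by year.

```python
# Hypothetical illustration of splitting conflicts into "conflict-years".
# The conflict records and field names here are invented for demonstration;
# the real UCDP-based dataset is structured and coded differently.

conflicts = [
    {"conflict_id": 1, "start_year": 1992, "end_year": 1995},
    {"conflict_id": 2, "start_year": 2003, "end_year": 2003},
]

conflict_years = [
    {
        "conflict_id": c["conflict_id"],
        "year": year,
        # Filled in during coding from human rights reports and other sources:
        "child_soldiers": None,
        "sexual_exploitation": None,
        "forced_labor": None,
        "human_trafficking": None,
    }
    for c in conflicts
    for year in range(c["start_year"], c["end_year"] + 1)
]

print(len(conflict_years), "conflict-year cases")  # 5 cases from these two sample conflicts
```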

A woman in dark clothes sits, looking forlorn, over a crevice with rubble in it.

A Yazidi woman who was held captive by the Islamic State visits the mass grave where her husband is believed to be buried in Iraq.

AP Photo/Maya Alleruzzo

Alarming numbers

In our recently published analysis, we found that contemporary slavery is a regular feature of armed conflict. Among the 1,113 cases we analyzed, 87% involved child soldiers – meaning fighters age 15 and younger; 34% included sexual exploitation and forced marriage; about 24% included forced labor; and almost 17% included human trafficking.

A global heat map of the frequency of these armed conflicts over time paints a sobering picture. Most conflicts involving enslavement take place in low-income countries, often referred to as the Global South.

About 12% of the conflicts involving some form of enslavement took place in India, where there are several conflicts between the government and nonstate actors. Teen militants are involved in conflicts such as the insurgency in Kashmir and the separatist movement in Assam. About 8% of cases took place in Myanmar, 5% in Ethiopia, 5% in the Philippines and about 3% in Afghanistan, Sudan, Turkey, Colombia, Pakistan, Uganda, Algeria and Iraq.

This evidence of enslavement predominantly in the Global South may not be surprising, given how poverty and inequality can fuel instability and conflict. However, it helps us reflect upon how these countries’ historic, economic and geopolitical relationships to the Global North also fuel pressure and violence, a theme we hope slavery researchers can study in the future.

Strategic enslavement

Typically, when armed conflict involves slavery, it’s being used for tactical aims: building weapons, for example, or constructing roads and other infrastructure projects to fight a war. But sometimes, slavery is used strategically, as part of an overarching strategy. In the Holocaust, the Nazis used “strategic slavery” in what they called “extermination through labor.” Today, as in the past, strategic slavery is normally part of a larger strategy of genocide.

We found that “strategic enslavement” took place in about 17% of cases. In other words, enslavement was one of the primary objectives of about 17% of the conflicts we examined, and often served the goal of genocide. One example is the Islamic State’s enslavement of the Yazidi minority in the 2014 massacre in Sinjar, Iraq. In addition to killing Yazidis, the Islamic State sought to enslave and impregnate women for systematic ethnic cleansing, attempting to eliminate the ethnic identity of the Yazidi through forced rape.

The connections between slavery and conflict are vicious but still not well understood. Our next steps include coding historic cases of slavery and conflict going back to World War II, such as how Nazi Germany used forced labor and how Imperial Japan’s military used sexual enslavement. We have published a new data set, “Contemporary Slavery in Armed Conflict,” and hope other researchers will also use it to help better understand and prevent future violence.

Monti Datta, Associate Professor of Political Science, University of Richmond; Angharad Smith, Modern Slavery Programme Officer, United Nations University, and Kevin Bales, Prof. of Contemporary Slavery, Research Director - The Rights Lab, University of Nottingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Research shows why Mehmet Oz should be worried about losing

Pennsylvania’s U.S. Senate race between Democrat John Fetterman and Republican Mehmet Oz has garnered a lot of media attention recently, thanks to the Fetterman campaign’s relentless trolling of his opponent, mainly for being a resident of neighboring New Jersey rather than the state he’s running to represent.

Fetterman has run ad after ad using Oz’s own words to highlight his deep Jersey roots. His campaign started a petition to nominate Oz for the New Jersey Hall of Fame. Fetterman even enlisted very-Jersey celebrities like Snooki of “Jersey Shore” to draw attention to his charge that Oz is a carpetbagger in the Pennsylvania race: a candidate with no authentic connection to an area, who moved there for the sole purpose of political ambition.

Fetterman’s attacks against Oz may be entertaining, but they aren’t unprecedented. Such characterizations can be helpful in elections.

Sen. Jon Tester, a Democrat, won a tight race in Montana in 2018 in part by dubbing his out-of-town opponent “Maryland Matt.” Democrat Joe Manchin has held on to a Senate seat in a deep red state for so long by “play[ing] up his West Virginia roots.” Meanwhile, Maine Democrat (and native Rhode Islander) Sara Gideon got caught – and was derided for – sporting a Patagonia fleece in a state that is famously home to L.L. Bean. She lost to Maine native Susan Collins in the 2020 Senate race even as Joe Biden carried the state by nine points.

Given how heavily defined modern congressional elections are by partisanship and by the increasing focus on national rather than local issues, is this kind of messaging actually effective as a campaign strategy?

Do voters really still punish carpetbaggers and reward candidates with deep ties to their districts?

A large man in a blue shirt fistbumps a bunch of young men sitting on a flatbed truck at a rural gathering.

Sen. Jon Tester, a Democrat, talks with state basketball champions at the Crow Fair in Crow Agency, Montana, on Aug. 19, 2018.

Tom Williams/CQ Roll Call

Some politics is local

New research from my upcoming book, “Home Field Advantage,” shows that the answer is an emphatic “yes.”

In the book, I created a “Local Roots Index” for each modern member of the U.S. House of Representatives to measure how deeply rooted they are in the geography of the districts they represent. The index pulled from decades of geographic data about members’ pre-Congress lives, including whether they were born in their home district, went to school there or owned a local business.

High index scores meant members had most or all of these life experiences within the boundaries of their district; low scores meant they had little to no local life experience in their district.
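The book’s exact scoring method is not spelled out here, so the sketch below is only a rough, hypothetical illustration of how such an index could be built from yes/no life-experience indicators; the indicator list and equal weighting are assumptions for illustration, not the author’s actual methodology.

```python
# Rough, hypothetical sketch of a local-roots-style index: the share of a
# member's pre-Congress life experiences that took place inside their district.
# Indicator names and equal weighting are assumed for illustration only.

def local_roots_index(member: dict) -> float:
    indicators = [
        "born_in_district",
        "attended_school_in_district",
        "owned_local_business",
    ]
    return sum(bool(member.get(key)) for key in indicators) / len(indicators)

example_member = {
    "born_in_district": True,
    "attended_school_in_district": True,
    "owned_local_business": False,
}
print(round(local_roots_index(example_member), 2))  # 0.67 -- deeper roots score closer to 1
```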

I found that members of Congress with higher Local Roots Index scores perform far better in their elections than their more “carpetbagging” colleagues without local roots in their districts. Deeply rooted members are twice as likely to run unopposed in their primary elections, and they significantly outperform their party’s presidential nominees in their districts. They win more elections by bigger margins and don’t need to spend as much money to notch their victories.

Why do voters care about roots?

Why do voters respond positively to deeply rooted candidates and negatively to their carpetbagging counterparts?

One explanation is that deep roots offer candidates a number of practical campaign benefits. A deeply rooted candidate tends to have more intimate knowledge of the district, including its electorate, its economy and industries, its unique culture and its political climate. Deeply rooted candidates also enjoy naturally higher name recognition in the community, more extensive social and political networks and greater access to local donors and vendors for their campaigns.

Other work has theorized that local roots help candidates tap into a shared identity with their voters that is less tangible but meaningful. Scholars like Kal Munis have shown that when voters have strong psychological attachments to a particular place, it has major impacts on voting behavior. And in a recent survey I conducted with David Fontana, we found that voters consistently rated homegrown U.S. Senate candidates as more relatable and trustworthy, and cast votes for them at higher rates.

Just as you’d trust a true born-and-raised local to give you advice about where to eat in town over someone who just moved there, so too do voters trust deeply rooted candidates to represent them in Washington.

‘Intimate sympathy’ with the voters

A ruddy-cheeked older man with long white hair and a white shirt with a black cloak over it.

Founding father James Madison believed that political representatives should have an ‘intimate sympathy’ with the people.

DeAgostini/Getty Images

Political science tells us that voters care about candidates’ roots, and we know a bit about why. But should they? Deep ties to a place may create a sense of connection and familiarity that voters appreciate, but at what cost?

On the one hand, it’s natural to wonder whether the flood of media and campaign attention to Oz’s residency status is distracting from a discussion of more pressing issues like the economy, climate change and the state of American democracy. There’s also a reasonable concern that a healthy attachment to one’s home place could cross the line into outright nativism and unfair vilification of “outsiders” and immigrants.

On the other hand, the framers of the Constitution devised – for better or worse – a geographically focused system of elections and representation. Party is important, but places are different from each other even if they have similar partisan makeups – think San Francisco and New York City – and have different needs. That argues for having members of Congress who have lived in and understand the place they are elected to represent.

As a result, shared local ties could also serve as a line of defense against steadily declining levels of trust in government and politicians. Perhaps locally rooted representation can help imbue a sense of what James Madison and Alexander Hamilton called an “intimate sympathy” with the people – and reinvigorate faith in public officials and institutions.

Charles R. Hunt, Assistant Professor of Political Science, Boise State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Are scientists discovering how to eliminate toxic 'forever chemicals' from the environment fast enough?

How to destroy a ‘forever chemical’ – scientists are discovering ways to eliminate PFAS, but this growing global health problem isn’t going away soon.

PFAS chemicals seemed like a good idea at first. As Teflon, they made pots easier to clean starting in the 1940s. They made jackets waterproof and carpets stain-resistant. Food wrappers, firefighting foam, even makeup seemed better with perfluoroalkyl and polyfluoroalkyl substances.

Then tests started detecting PFAS in people’s blood.

Today, PFAS are pervasive in soil, dust and drinking water around the world. Studies suggest they’re in 98% of Americans’ bodies, where they’ve been associated with health problems including thyroid disease, liver damage and kidney and testicular cancer. There are now over 9,000 types of PFAS. They’re often referred to as “forever chemicals” because the same properties that make them so useful also ensure they don’t break down in nature.

Scientists are working on methods to capture these synthetic chemicals and destroy them, but it isn’t simple.

The latest breakthrough, published Aug. 18, 2022, in the journal Science, shows how one class of PFAS can be broken down into mostly harmless components using sodium hydroxide, or lye, an inexpensive compound used in soap. It isn’t an immediate solution to this vast problem, but it offers new insight.

Biochemist A. Daniel Jones and soil scientist Hui Li work on PFAS solutions at Michigan State University and explained the promising PFAS destruction techniques being tested today.

How do PFAS get from everyday products into water, soil and eventually humans?

There are two main exposure pathways for PFAS to get into humans – drinking water and food consumption.

PFAS can get into soil through land application of biosolids – that is, sludge from wastewater treatment – and they can leach out from landfills. If contaminated biosolids are applied to farm fields as fertilizer, PFAS can get into water and into crops and vegetables.

For example, livestock can consume PFAS through the crops they eat and water they drink. There have been cases reported in Michigan, Maine and New Mexico of elevated levels of PFAS in beef and in dairy cows. How big the potential risk is to humans is still largely unknown.

Two cows look over a wooden hay trough with a barn in the background.

Cows were found with high levels of PFAS at a farm in Maine.

Adam Glanzman/Bloomberg via Getty Images

Scientists in our group at Michigan State University are working on materials added to soil that could prevent plants from taking up PFAS, but it would leave PFAS in the soil.

The problem is that these chemicals are everywhere, and there is no natural process in water or soil that breaks them down. Many consumer products are loaded with PFAS, including makeup, dental floss, guitar strings and ski wax.

How are remediation projects removing PFAS contamination now?

Methods exist for filtering them out of water. The chemicals will stick to activated carbon, for example. But these methods are expensive for large-scale projects, and you still have to get rid of the chemicals.

For example, near a former military base near Sacramento, California, there is a huge activated carbon tank that takes in about 1,500 gallons of contaminated groundwater per minute, filters it and then pumps it underground. That remediation project has cost over $3 million, but it prevents PFAS from moving into drinking water the community uses.

Filtering is just one step. Once PFAS is captured, you then have to dispose of the PFAS-loaded activated carbon, and PFAS still moves around. If you bury contaminated materials in a landfill or elsewhere, PFAS will eventually leach out. That’s why finding ways to destroy it is essential.

What are the most promising methods scientists have found for breaking down PFAS?

The most common method of destroying PFAS is incineration, but most PFAS are remarkably resistant to being burned. That’s why they’re in firefighting foams.

PFAS have multiple fluorine atoms attached to a carbon atom, and the bond between carbon and fluorine is one of the strongest. Normally to burn something, you have to break the bond, but fluorine resists breaking off from carbon. Most PFAS will break down completely at incineration temperatures around 1,500 degrees Celsius (2,730 degrees Fahrenheit), but it’s energy intensive and suitable incinerators are scarce.

There are several other experimental techniques that are promising but haven’t been scaled up to treat large amounts of the chemicals.

A group at Battelle has developed supercritical water oxidation to destroy PFAS. High temperatures and pressures change the state of water, accelerating chemistry in a way that can destroy hazardous substances. However, scaling up remains a challenge.

Others are working with plasma reactors, which use water, electricity and argon gas to break down PFAS. They’re fast, but also not easy to scale up.

The method described in the new paper, led by scientists at Northwestern, is promising for what they’ve learned about how to break up PFAS. It won’t scale up to industrial treatment, and it uses dimethyl sulfoxide, or DMSO, but these findings will guide future discoveries about what might work.

What are we likely to see in the future?

A lot will depend on what we learn about where humans’ PFAS exposure is primarily coming from.

If the exposure is mostly from drinking water, there are more methods with potential. It’s possible it could eventually be destroyed at the household level with electrochemical methods, but there are also potential risks that remain to be understood, such as converting common substances like chloride into more toxic byproducts.

The big challenge of remediation is making sure we don’t make the problem worse by releasing other gases or creating harmful chemicals. Humans have a long history of trying to solve problems and making things worse. Refrigerators are a great example. Freon, a chlorofluorocarbon, was the solution to replace toxic and flammable ammonia in refrigerators, but then it caused stratospheric ozone depletion. It was replaced with hydrofluorocarbons, which now contribute to climate change.

If there’s a lesson to be learned, it’s that we need to think through the full life cycle of products. How long do we really need chemicals to last?

A. Daniel Jones, Professor of Biochemistry, Michigan State University and Hui Li, Professor of Environmental and Soil Chemistry, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

No spying needed: Why prosecuting Espionage Act violations is 'controversial and complicated'

The federal court-authorized search of former President Donald Trump’s Florida estate has brought renewed attention to the obscure but infamous law known as the Espionage Act of 1917. A section of the law was listed as one of three potential violations under Justice Department investigation.

The Espionage Act has historically been employed most often by law-and-order conservatives. But the biggest uptick in its use occurred during the Obama administration, which used it as the hammer of choice for national security leakers and whistleblowers. Regardless of whom it is used to prosecute, it unfailingly prompts consternation and outrage.

We are both attorneys who specialize in and teach national security law. While navigating the sound and fury over the Trump search, here are a few things to note about the Espionage Act.

Espionage Act seldom pertains to espionage

When you hear “espionage,” you may think spies and international intrigue. One portion of the act – 18 U.S.C. section 794 – does relate to spying for foreign governments, for which the maximum sentence is life imprisonment.

That aspect of the law is best exemplified by the convictions of Jonathan Pollard in 1987, for spying for and providing top-secret classified information to Israel; former Central Intelligence Agency officer Aldrich Ames in 1994, for being a double agent for the Russian KGB; and, in 2002, former FBI agent Robert Hanssen, who was caught selling U.S. secrets to the Soviet Union and Russia over a span of more than 20 years. All three received life sentences.

But spy cases are rare. More typically, as in the Trump investigation, the act applies to the unauthorized gathering, possessing or transmitting of certain sensitive government information.

Transmitting can mean moving materials from an authorized to an unauthorized location – many types of sensitive government information must be maintained in secure facilities. It can also apply to refusing a government demand for its return. All of these prohibited activities fall under the separate and more commonly applied section of the act – 18 U.S.C. section 793.

A man in a military uniform is escorted onto a vehicle by a man in a dark shirt and khakis.

Chelsea Manning, in uniform, after being sentenced on Aug. 21, 2013, to 35 years in prison after being found guilty of several counts under the Espionage Act.

Photo by Mark Wilson/Getty Images

A violation does not require an intention to aid a foreign power

Willful unauthorized possession of information that, if obtained by a foreign government, might harm U.S. interests is generally enough to trigger a possible sentence of 10 years.

Current claims by Trump supporters of the seemingly innocuous nature of the conduct at issue – simply possessing sensitive government documents – miss the point. The driver of the Department of Justice’s concern under Section 793 is the sensitive content and the connection to national defense information, known as “NDI.”

One of the most famous Espionage Act cases, known as “WikiLeaks,” in which Julian Assange was indicted for obtaining and publishing secret military and diplomatic documents in 2010, was not about leaks to help foreign governments. It concerned the unauthorized soliciting, obtaining, possessing and publishing of sensitive information that might be of help to a foreign nation if disclosed.

Two recent senior Democratic administration officials – Sandy Berger, national security adviser during the Clinton administration, and David Petraeus, CIA director during the Obama administration – each pleaded guilty to misdemeanors under the threat of Espionage Act prosecution.

Berger took home a classified document – in his sock – at the end of his tenure. Petraeus shared classified information with an unauthorized person for reasons having nothing to do with a foreign government.

The act is not just about classified information

Some of the documents the FBI sought and found in the Trump search were designated “top secret” or “top secret-sensitive compartmented information.”

Both classifications tip far to the serious end of the sensitivity spectrum.

Top secret-sensitive compartmented information is reserved for information that would truly be damaging to the U.S. if it fell into foreign hands.

One theory floated by Trump defenders is that by simply handling the materials as president, Trump could have effectively declassified them. It actually doesn’t work that way – presidential declassification requires an override of Executive Order 13526, must be in writing, and must have occurred while Trump was still president – not after. If they had been declassified, they should have been marked as such.

And even assuming the documents were declassified, which does not appear to be the case, Trump is still in the criminal soup. The Espionage Act applies to all national defense information, or NDI, of which classified materials are only a portion. This kind of information includes a vast array of sensitive information including military, energy, scientific, technological, infrastructure and national disaster risks. By law and regulation, NDI materials may not be publicly released and must be handled as sensitive.

A number of court documents, with the one on top saying prominently 'Search and seizure warrant' in bold type and all capital letters.

A judge unsealed a search warrant that shows that the FBI is investigating Donald Trump for a possible violation of the Espionage Act.

AP Photo/Jon Elswick

The public can’t judge a case based on classified information

Cases involving classified information or NDI are nearly impossible to referee from the cheap seats.

None of us will get to see the documents at issue, nor should we. Why?

Because they are classified.

Even if we did, we would not be able to make an informed judgment of their significance because what they relate to is likely itself classified – we’d be making judgments in a void.

And even if a judge in an Espionage Act case had access to all the information needed to evaluate the nature and risks of the materials, it wouldn’t matter. The fact that documents are classified or otherwise regulated as sensitive defense information is all that matters.

Historically, Espionage Act cases have been occasionally political and almost always politicized. Enacted at the beginning of U.S. involvement in World War I in 1917, the act was largely designed to make interference with the draft illegal and prevent Americans from supporting the enemy.

But it was immediately used to target immigrants, labor organizers and left-leaning radicals. It was a tool of Cold War anti-communist politicians like Sen. Joe McCarthy in the 1940s and 1950s. The case of Julius and Ethel Rosenberg, executed for passing atomic secrets to the Soviet Union, is the most prominent prosecution of that era.

In the 1960s and 1970s, the act was used against peace activists, including Pentagon Papers whistleblower Daniel Ellsberg. Since Sept. 11, 2001, officials have used the act against whistleblowers like Edward Snowden. Because of this history, the act is often assailed for chilling First Amendment political speech and activities.

The Espionage Act is serious and politically loaded business. Its breadth, the potential grave national security risks involved and the lengthy potential prison term have long sparked political conflict. These cases are controversial and complicated in ways that counsel patience and caution before reaching conclusions.

Joseph Ferguson, Co-Director, National Security and Civil Rights Program, Loyola University Chicago and Thomas A. Durkin, Distinguished Practitioner in Residence, Loyola University Chicago

Texas politicians seek to control classroom discussions about racism and slavery

Of all the subjects taught in the nation’s public schools, few have generated as much controversy of late as the subjects of racism and slavery in the United States.

The attention has come largely through a flood of legislative bills put forth primarily by Republicans over the past year and a half. Commonly referred to as anti-critical race theory legislation, these bills are meant to restrict how teachers discuss race and racism in their classrooms.

One of the more peculiar byproducts of this legislation came out of Texas, where, in June 2022, an advisory panel made up of nine educators recommended that slavery be referred to as “involuntary relocation.”

The measure ultimately failed.

As an educator who trains teachers on how to educate young students about the history of slavery in the United States, I see the Texas proposal as part of a disturbing trend of politicians seeking to hide the horrific and brutal nature of slavery – and to keep it divorced from the nation’s birth and development.

The Texas proposal, for instance, grew out of work done under a Texas law that says slavery and racism can’t be taught as part of the “true founding” of the United States. Rather, the law states, they must be taught as a “failure to live up to the authentic founding principles of the United States, which include liberty and equality.”

To better understand the nature of slavery and the role it played in America’s development, it helps to have some basic facts about how long slavery lasted in the territory now known as the United States and how many enslaved people it involved. I also believe in using authentic records to show students the reality of slavery.

Before the Mayflower

Slavery in what is now known as the United States is often traced back to the year 1619. That is when – as documented by Colonist John Rolfe – a ship named the White Lion delivered 20 or so enslaved Africans to Virginia.

As for the notion that slavery was not part of the founding of the United States, that is easily refuted by the U.S. Constitution itself. Specifically, Article 1, Section 9, Clause 1 prevented Congress from prohibiting the “importation” of slaves until 1808 – nearly 20 years after the Constitution was ratified – although it didn’t use the word “slaves.” Instead, the Constitution used the phrase “such Persons as any of the States now existing shall think proper to admit.”

Congress ultimately passed the “Act Prohibiting the Importation of Slaves,” which took effect in 1808. Although the act imposed heavy penalties on international traders, it ended neither slavery itself nor the domestic sale of slaves. Not only did it drive the trade underground, but many ships caught illegally trading were also brought into the United States and their “passengers” sold into slavery.

The last known slave ship – the Clotilda – arrived in Mobile, Alabama, in 1860, more than half a century after Congress outlawed the importation of enslaved individuals.

A map of Africa showing slave trade routes

An 1880 map shows where enslaved people originated from and in which directions they were forced out.

Hulton Archive/Stringer via Getty Images

According to the Trans-Atlantic Slave Trade database, which derives its numbers from shipping records from 1525 to 1866, approximately 12.5 million enslaved Africans were transported to the Americas. About 10.7 million survived the Middle Passage and arrived in North America, the Caribbean and South America. Of these, only a small portion – 388,000 – arrived in North America.

Most enslaved people in the United States, then, entered slavery not through importation or “involuntary relocation,” but by birth.

From the arrival of those first 20 or so enslaved Africans in 1619 until slavery was abolished in 1865, approximately 10 million slaves lived in the United States and contributed 410 billion hours of labor. This is why slavery is a “crucial building block” to understanding the U.S. economy from the nation’s founding up until the Civil War.

The value of historical records

As an educator who trains teachers on how to deal with the subject of slavery, I don’t see any value in politicians’ restricting what teachers can and can’t say about the role that slaveholders – at least 1,800 of whom were congressmen, not to mention the 12 who were U.S. presidents – played in the upholding of slavery in American society.

What I see value in is the use of historical records to educate schoolchildren about the harsh realities of slavery. There are three types of records that I recommend in particular.

1. Census records

Since enslaved people were counted in each census that took place from 1790 to 1860, census records enable students to learn a lot about who specifically owned slaves. Census records also enable students to see differences in slave ownership within states and throughout the nation.

The censuses also show the growth of the slave population over time – from 697,624 during the first census in 1790, shortly after the nation’s founding, to 3.95 million during the 1860 census, as the nation stood on the verge of civil war.

2. Ads for runaway slaves

An advertisement for two men who ran away from slavery

Advertisements for fugitive slaves offer a glimpse into their lives.

Few things speak to the horrors and harms of slavery like ads that slave owners took out for runaway slaves.

It’s not hard to find ads that describe fugitive slaves whose bodies were covered with various scars from beatings and marks from branding irons.

For instance, consider an ad taken out on July 3, 1823, in the Star, and North-Carolina State Gazette by Alford Green, who offers $25 for a fugitive slave named Ned, whom he described as follows:

… about 21 years old, his weight about 150, well made, spry and active tolorably fierce look, a little inclined to be yellow, his upper fore teeth a little defective, and, I expect, has some signs of the whip on his hips and thighs, as he was whipped in that way the day before he went off.

Advertisements for runaway slaves can be accessed via digital databases, such as Freedom on the Move, which contains more than 32,000 ads. Another database – the North Carolina Runaway Slave Notices project – contains 5,000 ads published in North Carolina newspapers from 1751 to 1865. The sheer number of these advertisements sheds light on how many enslaved Black people attempted to escape bondage.

3. Personal narratives from the enslaved

Though they are few in number, recordings of interviews with formerly enslaved people exist.

Some of the interviews are problematic for various reasons. For instance, some were heavily edited by interviewers or did not include complete, word-for-word transcripts.

Yet the interviews still provide a glimpse at the harshness of life in bondage. They also expose the fallacy of the argument that slaves – as one slave owner claimed in his memoir – “loved ‘old Marster’ better than anybody in the world, and would not have freedom if he offered it to them.”

For instance, when Fountain Hughes – a descendant of a slave owned by Thomas Jefferson who spent his boyhood in slavery in Charlottesville, Virginia – was asked if he would rather be free or enslaved, he told his interviewer:

You know what I’d rather do? If I thought, had any idea, that I’d ever be a slave again, I’d take a gun and just end it all right away, because you’re nothing but a dog. You’re not a thing but a dog. A night never come that you had nothing to do. Time to cut tobacco? If they want you to cut all night long out in the field, you cut. And if they want you to hang all night long, you hang tobacco. It didn’t matter about you’re tired, being tired. You’re afraid to say you’re tired.

It’s ironic, then, that when it comes to teaching America’s schoolchildren about the horrors of American slavery and how entrenched it was in America’s political establishment, some politicians would prefer to shackle educators with restrictive laws. What they could do instead is grant educators the ability to teach freely about the role slavery played in the forming of a nation that was founded – as the Texas law states – on principles of liberty and equality.

Raphael E. Rogers, Professor of Practice in Education, Clark University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

After Trump, Christian nationalist ideas are going mainstream – despite a history of violence

In the run-up to the U.S. midterm elections, some politicians continue to ride the wave of what's known as "Christian nationalism" in ways that are increasingly vocal and direct.

GOP Rep. Marjorie Taylor Greene, a far-right Donald Trump loyalist from Georgia, told an interviewer on July 23, 2022, that the Republican Party "need[s] to be the party of nationalism. And I'm a Christian, and I say it proudly, we should be Christian nationalists."

Similarly, Rep. Lauren Boebert, a Republican from Colorado, recently said, "The church is supposed to direct the government. The government is not supposed to direct the church." Boebert called the separation of church and state "junk."

Many Christian nationalists repeat conservative activist David Barton's argument that the Founding Fathers did not intend to keep religion out of government.

As a scholar of racism and communication who has written about white nationalism during the Trump presidency, I find the amplification of Christian nationalism unsurprising. Christian nationalism is prevalent among Trump supporters, as religion scholars Andrew Whitehead and Samuel L. Perry argue in their book "Taking Back America for God."

Perry and Whitehead describe the Christian nationalist movement as being "as ethnic and political as it is religious," noting that it relies on the assumption of white supremacy. Christian nationalism combines belief in a particular form of Christianity with nativist and populist political platforms. American Christian nationalism is a worldview based on the belief that America is superior to other countries, and that that superiority is divinely established. In this mindset, only Christians are true Americans.

Parts of the movement fit into a broader right-wing extremist history of violence, which has been on the rise over the past few decades and was particularly on display during the Capitol attack on Jan. 6, 2021.

The vast majority of Christian nationalists never engage in violence. Nonetheless, Christian nationalist thinking suggests that unless Christians control the state, the state will suppress Christianity.

From siege to militia buildup

Violence perpetrated by Christian nationalists has manifested in two primary ways in recent decades. The first is through their involvement in militia groups; the second is seen in attacks on abortion providers.

The catalyst for the growth of militia activity among contemporary Christian nationalists stems from two events: the 1992 Ruby Ridge standoff and the 1993 siege at Waco.

At Ruby Ridge, former Army Green Beret Randy Weaver engaged federal law enforcement in an 11-day standoff at his rural Idaho cabin over charges relating to the sale of sawed-off shotguns to an ATF informant investigating Aryan Nations white supremacist militia meetings.

Weaver subscribed to the Christian Identity movement, which emphasizes adherence to Old Testament laws and white supremacy. Christian Identity members believe in the application of the death penalty for adultery and LGBTQ relationships, in accordance with their reading of some biblical passages.

During the standoff, Weaver's wife and teenage son were shot and killed before he surrendered to federal authorities.

In the Waco siege a year later, cult leader David Koresh and his followers entered a standoff with federal law enforcement at the group's Texas compound, once again concerning weapons charges. After a 51-day standoff, federal law enforcement laid siege to the compound. A fire took hold at the compound in disputed circumstances, leading to the deaths of 76 people, including Koresh.

The two events spurred a nationwide militia buildup. As sociologist Erin Kania argues: "Ruby Ridge and Waco confrontations drove some citizens to strengthen their belief that the government was overstepping the parameters of its authority. … Because this view is one of the founding ideologies of the American Militia Movement, it makes sense that interest and membership in the movement would sharply increase following these standoffs between government and nonconformists."

Distrust of the government, blended with strains of Christian fundamentalism, has brought together two groups with formerly disparate goals.

Christian nationalism and violence

Christian fundamentalists and white supremacist militia groups both figured themselves as targeted by the government in the aftermath of the standoffs at Ruby Ridge and Waco. As scholar of religion Ann Burlein argues, "Both the Christian right and right-wing white supremacist groups aspire to overcome a culture they perceive as hostile to the white middle class, families, and heterosexuality."

Significantly, in 1995, Oklahoma City bomber Timothy McVeigh and accomplice Terry Nichols cited revenge for the Waco siege as a motive for the bombing of the Alfred P. Murrah Federal Building. The terrorist act killed 168 people and injured hundreds more.

Since 1993, at least 11 people have been murdered in attacks on abortion clinics in cities across the U.S., and there have been numerous other plots.

They have involved people like the Rev. Michael Bray, who attacked multiple abortion clinics. Bray was the spokesman for Paul Hill, a Christian Identity adherent who murdered physician John Britton and his bodyguard James Barrett in 1994 outside of a Florida abortion clinic.

In yet another case, Eric Rudolph bombed the 1996 Atlanta Olympics. In his confession, he cited his opposition to abortion and anti-LGBTQ views as motivation for bombing Centennial Olympic Park.

These men cited their involvement with the Christian Identity movement in their trials as motivation for engaging in violence.

Mainstreaming Christian nationalist ideas

The presence of Christian nationalist ideas in recent political campaigns is concerning, given its ties to violence and white supremacy.

Trump and his advisers helped to mainstream such rhetoric with events like his photo op with a Bible in Lafayette Square in Washington following the violent dispersal of protesters, and by making a show of pastors laying hands on him. But that legacy continues beyond his administration.

Candidates like Doug Mastriano, the Republican gubernatorial candidate in Pennsylvania who attended the Jan. 6 Trump rally, are now using the same messages.

In some states, such as Texas and Montana, hefty funding for far-right Christian candidates has helped put Christian nationalist ideas in the mainstream.

Blending politics and religion is not necessarily a recipe for Christian nationalism, nor is Christian nationalism a recipe for political violence. At times, however, Christian nationalist ideas can serve as a prelude.

Virologist tackles monkeypox vaccine questions

Monkeypox isn’t going to be the next COVID-19. But with the outbreak having bloomed to thousands of infections, with cases in nearly every state, on Aug. 4, 2022, the U.S. declared monkeypox a national public health emergency. One reason health experts did not expect monkeypox to become so widespread is that the U.S. had previously approved two vaccines for the virus. Maureen Ferran, a virologist at Rochester Institute of Technology, has been keeping tabs on the two vaccines that can protect against monkeypox.

1. What are the available monkeypox vaccines?

Two vaccines are currently approved in the U.S. that can provide protection against monkeypox, the Jynneos vaccine – known as Imvamune/Imvanex in Europe – and ACAM2000, an older smallpox vaccine.

The Jynneos vaccine is produced by Bavarian Nordic, a small company in Denmark. The vaccine is for the prevention of smallpox and monkeypox disease in adults ages 18 and older who are at high risk for infection with either virus. It was approved in Europe in 2013 and by the U.S. Food and Drug Administration in 2019.

The Jynneos vaccine is given in two doses four weeks apart and contains a live vaccinia virus. Vaccinia normally infects cattle and is a type of poxvirus, a family of viruses that includes smallpox and monkeypox. The virus in this vaccine has been crippled – or attenuated – so that it is no longer able to replicate in cells.

This vaccine is good at protecting those who are at high risk for monkeypox from getting infected before exposure and can also lessen the severity of disease post-infection. It is effective against smallpox as well as monkeypox. Until the recent monkeypox outbreak, this vaccine was primarily given to health care workers or people who have had confirmed or suspected monkeypox exposure.

A circular mass of squiggly lines.

Both the Jynneos and ACAM2000 vaccines use the vaccinia virus, shown here, to produce immunity to smallpox and monkeypox.

CDC/ Cynthia Goldsmith

The ACAM2000 vaccine was approved by the FDA in 2007 for protection against smallpox disease. This vaccine is also based on vaccinia virus; however, the version of the vaccinia virus in the ACAM2000 vaccine is able to replicate in a person’s cells. Because of this, the ACAM2000 vaccine can be associated with serious side effects. These can include severe skin infections as well as potentially life-threatening heart problems in vulnerable people. Another potential issue with the ACAM2000 vaccine is that it is more complicated to administer than a normal shot.

The U.S. government has over 200 million doses of ACAM2000 stockpiled in case of a biological weapon attack of smallpox. But despite the adequate supply of the vaccine, ACAM2000 is not being used to vaccinate against monkeypox because of the risk of serious adverse side effects. For now, only designated U.S. military personnel and laboratory researchers who work with certain poxviruses may receive this vaccine.

2. How effective are these vaccines?

According to the U.S. Centers for Disease Control, there is not yet any data available on the effectiveness of either vaccine in the current outbreak of monkeypox. But there is older data available from animal studies, clinical trials and studies in Africa.

A number of clinical trials done during the approval process for the Jynneos vaccine show that when given to a person, it triggers a strong antibody response on par with the ACAM2000 vaccine. An additional study done in nonhuman primates showed that vaccinated animals that were infected with monkeypox survived 80% to 100% of the time, compared with zero to 40% survival in unvaccinated animals.

Another use of the Jynneos vaccine is as a post-exposure prophylaxis, or PEP, meaning the vaccine can be effective even when given after exposure to the virus. Because the monkeypox virus incubates in a person’s body for six to 14 days, the body of someone who gets the Jynneos vaccine shortly after being exposed will produce antibodies that can help fight off infection and protect against a serious monkeypox case.

The ACAM2000 data is older and less precise but shows strong protection. Researchers tested the vaccine during an outbreak of monkeypox in central Africa in the 1980s. Although the study was small and didn’t directly test vaccine efficacy, the authors concluded that unvaccinated people faced an 85% higher risk of being infected than vaccinated people.

3. Does a smallpox vaccine protect against monkeypox?

According to the CDC, a previous smallpox vaccination does provide some protection against monkeypox, though that protection wanes over time. Experts advise that anyone who had the smallpox vaccine more than three years ago and is at increased risk for monkeypox get the monkeypox vaccine.

People lining up for monkeypox vaccines.

In California and New York City, demand for vaccines has been high among at-risk communities.

AP Photo/Marcio Jose Sanchez

4. Who should get vaccinated?

At the national level, anyone who has had close contact with an infected person, who has a weakened immune system or who had dermatitis or eczema is eligible for a Jynneos vaccine.

Some state and local governments are also making vaccines available to people in communities at higher risk for monkeypox. For example, New York City is allowing men who have sex with men and who have had multiple sexual partners in the past 14 days to get vaccinated.

5. What is the supply like for the Jynneos vaccine?

As of July 29, 2022, a little over 300,000 doses have been shipped to points of care or administered, with another 700,000 already allocated to states across the U.S. However, demand is far outpacing supply. Public health officials acknowledge that vaccine supply shortages have resulted in long lines and clinics having to close when they run out of vaccines. The issues have been magnified by technical problems with online booking systems, particularly in New York City.

To help boost supply, the U.S. has ordered nearly 7 million doses of the Jynneos vaccine, which are expected to arrive over the coming months.

6. What about just using one dose of Jynneos?

Although federal health officials advise against withholding the second dose, some places – including Washington, D.C., and New York City – are withholding the second dose until more become available. This strategy is being used in Britain and Canada as well to vaccinate as many people as possible at least one time.

A previous study reported that a single shot of the Jynneos vaccine protected monkeys infected with monkeypox and that this protection lasted for at least two years. If this holds up in the real world, it would support withholding second doses in favor of immunizing more Americans. This would be key, as many health experts expect the virus to continue spreading, further increasing demand for the vaccine.

Maureen Ferran, Associate Professor of Biology, Rochester Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Long COVID-19 may be caused by overactive immune systems

Viruses that cause respiratory diseases like the flu and COVID-19 can lead to mild to severe symptoms within the first few weeks of infection. These symptoms typically resolve within a few more weeks, sometimes with the help of treatment if severe. However, some people go on to experience persistent symptoms that last several months to years. Why and how respiratory diseases can develop into chronic conditions like long COVID-19 are still unclear.

I am a doctoral student working in the Sun Lab at the University of Virginia. We study how the immune system sometimes goes awry after fighting off viral infections. We also develop ways to target the immune system to prevent further complications without weakening its ability to protect against future infections. Our recently published review of the research in this area found that it is becoming clearer that it might not be an active viral infection causing long COVID-19 and similar conditions, but an overactive immune system.

Long COVID-19 patients can experience persistent respiratory, cognitive and neurological symptoms.

The lungs in health and disease

Keeping your immune system dormant when there isn’t an active infection is essential for your lungs to be able to function optimally.

Your respiratory tract is in constant contact with your external environment, sampling around 5 to 8 liters (1.3 to 2 gallons) of air – and the toxins and microorganisms in it – every minute. Despite continuous exposure to potential pathogens and harmful substances, your body has evolved to keep the immune system dormant in the lungs. In fact, allergies and conditions such as asthma are byproducts of an overactive immune system. These excessive immune responses can cause your airways to constrict and make it difficult to breathe. Some severe cases may require treatment to suppress the immune system.

During an active infection, however, the immune system is absolutely essential. When viruses infect your respiratory tract, immune cells are recruited to your lungs to fight off the infection. Although these cells are crucial to eliminate the virus from your body, their activity often results in collateral damage to your lung tissue. After the virus is removed, your body dampens your immune system to give your lungs a chance to recover.

An overactive immune system, as in the case of asthma, can damage the lungs.

Over the past decade, researchers have identified a variety of specialized stem cells in the lungs that can help regenerate damaged tissue. These stem cells can turn into almost all the different types of cells in the lungs depending on the signals they receive from their surrounding environment. Recent studies have highlighted the prominent role the immune system plays in providing signals that facilitate lung recovery. But these signals can produce more than one effect. They can not only activate stem cells, but also perpetuate damaging inflammatory processes in the lung. Therefore, your body tightly regulates when, where and how strongly these signals are made in order to prevent further damage.

While the reasons are still unclear, some people are unable to turn off their immune system after infection and continue to produce tissue-damaging molecules long after the virus has been flushed out. This not only further damages the lungs, but also interferes with regeneration via the lung’s resident stem cells. This phenomenon can result in chronic disease, as seen in several respiratory viral infections including COVID-19, Middle East Respiratory Syndrome (MERS), respiratory syncytial virus (RSV) and the common cold.

The immune system’s role in chronic disease

In our review, my colleagues and I found that many different types of immune cells are involved in the development of chronic disease after respiratory viral infections, including long COVID-19.

Scientists so far have identified one particular type of immune cells, killer T cells, as potential contributors to chronic disease. Also known as cytotoxic or CD8+ T cells, they specialize in killing infected cells either by interacting directly with them or by producing damaging molecules called cytokines.

Killer T cells are essential to curbing the virus from spreading in the body during an active infection. But their persistence in the lungs after the infection has resolved is linked to extended reduced respiratory function. Moreover, animal studies have shown that removing killer T cells from the lungs after infection may improve lung function and tissue repair.

A legion of immune cells work together to remove invading pathogens.

Another type of immune cell, the monocyte, is also involved in fighting respiratory infections, serving among the first responders by producing virus- and tissue-damaging cytokines. Research has found that these cells also continue to accumulate in the lungs of long COVID-19 patients and promote a pro-inflammatory environment that can cause further damage.

Understanding the immunological mechanisms underlying long COVID-19 is the first step to addressing a quickly worsening public health problem. Identifying the subtle differences in how the same immune cells that protect you during an active infection can later become harmful could lead to earlier diagnosis of long COVID-19. Moreover, based on our findings, my team and I believe treatments that target the immune system could be an effective approach to manage long COVID-19 symptoms. We believe this strategy may turn out to be useful not only for COVID-19, but also for other respiratory viral infections that lead to chronic disease.

Harish Narasimhan, PhD Candidate in Immunology, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why Nancy Pelosi's Asia trip is so controversial

U.S. House Speaker Nancy Pelosi arrived in Taiwan on Aug. 2, 2022 – a highly controversial trip that has been strongly opposed by China.

Such is the sensitivity over the island’s status that even before Pelosi’s plane touched down in the capital of Taipei, mere reports of the proposed trip prompted a warning by China of “serious consequences.” In the hours before she set foot on the island, Chinese fighter jets flew close to the median line separating Taiwan and China, while Chinese foreign minister Wang Yi commented that U.S. politicians who “play with fire” on Taiwan would “come to no good end.”

For its part, the U.S. has distanced itself from the visit. Before the trip President Joe Biden said it was “not a good idea.”

As someone who has long studied the U.S.‘s delicate diplomatic dance over Taiwan, I understand why this trip has sparked reaction in both Washington and Beijing, given the current tensions in the region. It also marks the continuation of a process that has seen growing U.S. political engagement with Taiwan – much to China’s annoyance.

Cutting diplomatic ties

The controversy over Pelosi’s visit stems from the “one China” policy – the diplomatic stance under which the U.S. recognizes China and acknowledges Beijing’s position that Taiwan is part of China. The policy has governed U.S. relations with Taiwan for the past 40-plus years.

In 1979, the U.S. abandoned its previous policy of recognizing the government of Taiwan as that of all of China, instead shifting recognition to the government on the mainland.

As part of this change, the U.S. cut off formal diplomatic ties with Taiwan, with the U.S. embassy in Taiwan replaced by a nongovernmental entity called the American Institute in Taiwan.

The institute was a de facto embassy – though until 2002, Americans assigned to the institute had to resign from the U.S. State Department to go there, only to be rehired once their term was over. And contact between the two governments was technically unofficial.

As the government in Taiwan pursued democracy – starting from the lifting of martial law in 1987 through the first fully democratic elections in 1996 – it shifted away from the assumption once held by governments in both China and Taiwan of eventual reunification with the mainland. The government in China, however, has never abandoned the idea of “one China” and rejects the legitimacy of Taiwanese self-government. That has made direct contact between Taiwan and U.S. representatives contentious to Chinese officials.

Indeed, in 1995, when Lee Teng-hui, Taiwan’s first democratically elected president, touched down in Hawaii en route to Central America, he didn’t even set foot on the tarmac. The U.S. State Department had already warned that the president would be refused an entry visa to the U.S., but had allowed for a brief, low-level reception in the airport lounge during refueling. Apparently feeling snubbed, Lee refused to leave the airplane.

Previous political visits

Two years after this incident came a visit to Taiwan by then-House Speaker Newt Gingrich.

Similarly to the Pelosi visit, the one by Gingrich annoyed Beijing. But it was easier for the White House to distance itself from Gingrich – he was a Republican politician visiting Taiwan in his own capacity, and clearly not on behalf of then-President Bill Clinton.

Pelosi’s visit may be viewed differently by Beijing, because she is a member of the same party as President Joe Biden. China may assume she has Biden’s blessing, despite his comments to the contrary.

Asked on July 20 about his views on the potential Pelosi trip, Biden responded that the “military thinks it’s not a good idea right now.”

The comment echoes the White House’s earlier handling of a comment by Biden in which he suggested in May 2022 that the U.S. would intervene “militarily” should China invade Taiwan. Officials in the Biden administration rolled back the comment, which would have broken a long-standing policy of ambiguity over what the U.S. would do if China tried to take Taiwan by force.

Similarly with Pelosi, the White House is distancing itself from a position that suggests a shift in U.S.-Taiwanese relations following a period in which the U.S. had already been trying to rethink how it interacts with Taiwan.

Shifting policy?

In 2018, Congress passed the bipartisan Taiwan Travel Act. This departed from previous policy in that it allowed bilateral official visits between the U.S. and Taiwan, although they are still considered to be subdiplomatic.

In the wake of that act, Donald Trump’s Health and Human Services secretary, Alex Azar, became the highest-ranking U.S. official to visit Taiwan since 1979. Then in 2020, Keith Krach, undersecretary for economic growth, energy and the environment, visited Taiwan.

And in April 2022, a U.S. congressional delegation visited Taiwan. Pelosi herself was reportedly due to visit the island that same month, but canceled after testing positive for COVID-19.

Each of these visits has provoked angry statements from Beijing.

A high-profile visit – even one without the public backing of the White House – would signal support to the island at a time when the invasion of Ukraine by Russia has raised questions over the international community’s commitment to protect smaller states from more powerful neighbors.

Meanwhile, the erosion of democracy in Hong Kong has undermined China’s commitment to the idea of “one nation, two systems.” The principle, which allowed Hong Kong to maintain its economic, political and social systems while returning to the mainland after the end of British rule, had been cited as a model for reunification with Taiwan. The Chinese Communist Party also plans to hold its 20th congress in the coming months, making the timing sensitive for a Taiwan visit from a high-profile U.S. political figure such as Pelosi.

Editor’s note: This is an updated version of an article originally published on July 26, 2022.

Meredith Oyen, Associate Professor of History and Asian Studies, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Flood maps show underestimated contamination risk at defunct industrial sites

Climate science is clear: Floodwaters are a growing risk for many American cities, threatening to displace not only people and housing but also the land-based pollution left behind by earlier industrial activities.

In 2019, researchers at the U.S. Government Accountability Office investigated climate-related risks at the 1,571 most polluted properties in the country, also known as Superfund sites on the federal National Priorities List. They found an alarming 60% were in locations at risk of climate-related events, including wildfires and flooding.

As troubling as those numbers sound, our research shows that that’s just the proverbial tip of the iceberg.

Many times that number of potentially contaminated former industrial sites exist. Most were never documented by government agencies, which began collecting data on industrially contaminated lands only in the 1980s. Today, many of these sites have been redeveloped for other uses such as homes, buildings or parks.

For communities near these sites, the flooding of contaminated land is worrisome because it threatens to compromise common pollution containment methods, such as capping contaminated land with clean soil. It can also transport legacy contaminants into surrounding soils and waterways, putting the health and safety of urban ecosystems and residents at risk.

A boat sits by a dock outside a new building along the waterway.

New York developers are planning thousands of housing units along the Gowanus Canal, a notoriously contaminated industrial area and waterway.

Epics/Getty Images

We study urban pollution and environmental change. In a recent study, we conducted a comprehensive assessment by combining historical manufacturing directories, which locate the majority of former industrial facilities, with flood risk projections from the First Street Foundation. The projections use climate models and historic data to assess future risk for each property.
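
For readers curious about the mechanics of that matching exercise, here is a minimal sketch in Python of how records of former industrial sites could in principle be joined to property-level flood-risk projections and census block populations. The file names, column names and risk threshold are illustrative assumptions, not the study’s actual data or code.

import pandas as pd

# Hypothetical input files -- the study used digitized historical manufacturing
# directories and First Street Foundation projections, not these CSVs.
sites = pd.read_csv("relic_industrial_sites.csv")    # site_id, city, census_block
flood = pd.read_csv("flood_risk_projections.csv")    # site_id, flood_risk_30yr (0 to 1)
blocks = pd.read_csv("census_block_population.csv")  # census_block, population

# Attach each former industrial site to its 30-year flood-risk projection.
merged = sites.merge(flood, on="site_id", how="left")

# Flag sites above an illustrative risk threshold.
at_risk = merged[merged["flood_risk_30yr"] >= 0.5]
print("Flood-prone relic sites:", len(at_risk))

# Estimate how many residents live on blocks with at least one flood-prone site.
exposed_blocks = at_risk["census_block"].unique()
exposed_population = blocks.loc[blocks["census_block"].isin(exposed_blocks), "population"].sum()
print("Residents on affected blocks:", exposed_population)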

The results show that the GAO’s 2019 report vastly underestimated the scale and scope of the risks many communities will face in the decades ahead.

Pollution risks in 6 cities

We started our study by collecting the location and flood risk for former industrial sites in six very different cities facing varying types of flood risk over the coming years: Houston; Minneapolis; New Orleans; Philadelphia; Portland, Oregon; and Providence, Rhode Island.

These former industrial sites have been called ghosts of polluters past. While the smokestacks and factories of these relics may no longer be visible, much of their legacy pollution likely remains.

In just these six cities, we found over 6,000 sites at risk of flooding in the next 30 years – far more than recognized by the EPA. Using census data, we estimate that nearly 200,000 residents live on blocks with at least one flood-prone relic industrial site and its legacy contaminants.

Without detailed records, we can’t assess the extent of contamination at each relic site or how that contamination might spread during flooding. But the sheer number of flood-prone sites suggests the U.S. has a widespread problem it will need to solve.

The highest-risk areas tended to be clustered along waterways where industry and worker housing once thrived, areas that often became home to low-income communities.

Legacy of the industrial Northeast

In Providence, an example of an older industrial city, we found thousands of at-risk relic sites scattered along Narragansett Bay and the floodplains of the Providence and Woonasquatucket Rivers.

Over the decades, as these factories manufactured textiles, machine tools, jewelry and other products, they released untold quantities of environmentally persistent contaminants, including heavy metals like lead and cadmium and volatile organic chemicals, into the surrounding soils and water.

Map with dots, primarily along waterways.

Flood-prone relic industrial sites in Providence, R.I.

Marlow, et al. 2022, CC BY-ND

For example, the Rhode Island Department of Health recently reported widespread drinking water contamination from PFAS, often referred to as “forever chemicals,” which are used to create stain- and water-resistant products and can be toxic.

The tendency for older factories to locate close to the water, where they would have easy access to power and transportation, puts these sites at risk today from extreme storms and sea-level rise. Many of these were small factories easily overlooked by regulators.

Chemicals, oil and gas

Newer cities, like Houston, are also vulnerable. Houston faces especially high risks given the scale of nearby oil, gas and chemical manufacturing infrastructure and its lack of formal zoning regulations.

In August 2017, historic rains from Hurricane Harvey triggered more than 100 industrial spills in the greater Houston area, releasing more than a half-billion gallons of hazardous chemicals and wastewater into the local environment, including well-known carcinogens such as dioxin, ethylene and PCBs.

Maps with dots widespread in the city.

Flood-prone relic industrial sites in Houston.

Marlow, et al. 2022, CC BY-ND

Even that event doesn’t reflect the full extent of the industrially polluted lands at growing risk of flooding throughout the city. We found nearly 2,000 relic industrial sites at an elevated risk of flooding in the Houston area; the GAO report raised concerns about only 15.

Many of these properties are concentrated in or near communities of color. In all six cities in our study, we found that the strongest predictor of a neighborhood containing a flood-prone site of former hazardous industry is the proportion of nonwhite and non-English-speaking residents.

Keeping communities safe

As temperatures rise, air can hold more moisture, leading to strong downpours. Those downpours can trigger flooding, particularly in paved urban areas with less open ground for the water to sink in. Climate change also contributes to sea-level rise, as coastal communities like Annapolis, Maryland, and Miami are discovering with increasing days of high-tide flooding.

Keeping communities safe in a changing climate will mean cleaning up flood-prone industrial relic sites. In some cases, companies can be held financially responsible for the cleanup, but often, the costs fall to taxpayers.

The infrastructure bill that Congress passed in 2021 includes $21 billion for environmental remediation. As a key element of new “green” infrastructure, some of that money could be channeled into flood-prone areas or invested in developing pollution remediation techniques that do not fail when flooded.

A large brick housing complex with people sitting in lawn chairs outside. A sign on the lawn is in Spanish.

The West Calumet Housing Complex in East Chicago, Ind., was built on the site of an old lead refinery. It was closed down after children there were found to have elevated levels of lead in their blood. The sign reads: ‘Do not play in the dirt or next to shredded wood mulch.’

AP Photo/Tae-Gyun Kim

Our findings suggest the entire process for prioritizing and cleaning up relic sites needs to be reconsidered to incorporate future flood risk.

Flood and pollution risks are not separate problems. Dealing with them effectively requires deepening relationships with local residents who bear disproportionate risks. If communities are involved from the beginning, the benefits of green redevelopment and mitigation efforts can extend to a much larger population.

One approach suggested by our work is to move beyond individual properties as the basis of environmental hazard and risk assessment and concentrate on affected ecosystems.

Focusing on individual sites misses the historical and geographical scale of industrial pollution. Concentrating remediation on meaningful ecological units, such as watersheds, can create healthier environments with fewer risks when the land floods.

Thomas Marlow, Postdoctoral Fellow in the Center for Interacting Urban Networks (CITIES) at NYU Abu Dhabi, New York University; James R. Elliott, Professor of Sociology, Rice University, and Scott Frickel, Professor of Sociology and Environment and Society, Brown University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
