March for Science will take scientists’ activism to a new level

Lab coats aren’t typical garb for mass demonstrations, but they may be on full display April 22. That’s when thousands of scientists, science advocates and science-friendly citizens are expected to flood the streets in the March for Science. Billed by organizers as both a celebration of science and part of a movement to defend science’s vital role in society, the event will include rallies and demonstrations in Washington, D.C., and more than 400 other cities around the world.

“Unprecedented,” says sociologist Kelly Moore, an expert on the intersection of science and politics at Loyola University Chicago. “This is the first time in American history where scientists have taken to the streets to collectively protest the government’s misuse and rejection of scientific expertise.”

Some scientists have expressed concern that marching will give science a partisan sheen; others say that cat is long out of the bag. Keeping science nonpartisan is a laudable goal, but scientists are human beings who work and live in societies — and have opinions as scientists and citizens when it comes to the use, or perceived misuse, of science.

Typically when scientists get involved with a political issue, it’s as an expert sharing knowledge that can aid in creating informed policy. There are standard venues for this: Professional societies review evidence and make statements about a particular issue, researchers publish findings or consensus statements in reports or journals, and sometimes scientists testify before Congress.

In extreme circumstances, though, scientists have embraced other forms of activism. To broadly categorize, there are:

Celebrity voices
In 1938, amid the rise of fascism and use of false scientific claims to support the racism embedded in Nazism, prominent German-American anthropologist Franz Boas released his “Scientists Manifesto.” Signed by nearly 1,300 scientists, including three Nobel laureates, the manifesto denounced the unscientific tenets of Nazism and condemned fascist attacks on scientific freedom.
Fear of war of a different sort prompted Albert Einstein, Bertrand Russell and nine other scientists to compose a manifesto in 1955 calling for nuclear disarmament. The Russell-Einstein Manifesto led to the first Pugwash Conference on Science and World Affairs, which sought “a world free of nuclear weapons and other weapons of mass destruction.”
Wildlife biologist Rachel Carson eloquently synthesized research on the effects of pesticides in her wildly popular book Silent Spring, published in 1962 (she would later testify before Congress). Despite attacks from industry and some in government, Carson’s work helped launch the modern environmental movement, paving the way for the establishment of the Environmental Protection Agency.

Advocacy groups
In the 1930s, chapters of the American Association of Scientific Workers (based loosely on a similar British organization) formed in various cities including Philadelphia, Boston and Chicago. Despite broad goals — promoting science for the benefit of society, stressing public science education, taking a moral stand against government and industry misuse of science — infighting and members’ opposing views limited the group’s effectiveness.

In the decades since, other broadly focused groups — for example, Science for the People (born out of a group started in 1969 by physicists frustrated by their professional society’s lack of action against the Vietnam War), the Union of Concerned Scientists, the American Association for the Advancement of Science — have picked up the banner, speaking out, circulating petitions and more. Single-issue groups such as the Environmental Defense Fund and the Council for Responsible Genetics have proliferated as well.

Protest marchers
Many scientists have traded pocket protectors for placards, hitting the streets as concerned scientist-citizens. Academic scientists frequently joined university students in rallies against the Vietnam War in the 1960s and early ’70s. Linus Pauling famously protested nuclear testing in a march outside the White House in 1962 (he was in town for a dinner with the Kennedys honoring Nobel laureates). Carl Sagan was one of hundreds arrested for protesting nuclear testing at a Nevada site in 1987. And plenty of scientist-citizens joined the inaugural Women’s March on Washington in January and the annual People’s Climate March (the 2017 one is scheduled for April 29, just a week after the March for Science).

But the March for Science feels different, science historians say. The transformation of concern into sign-toting, pavement-pounding, slogan-shouting activism is motivated by a collective — and growing — sense of outrage that the federal government is undermining, ignoring, even discarding and stifling science. That’s hitting many scientists not just in their livelihoods, but in the very fabric of their DNA. “Part of [President] Trump’s message is that science is not going to be thought of as part of a collective good that’s essential for decision making in a democracy,” Moore says. “We have not seen this outright rejection of science by the state.”

That rejection has come in many forms, says David Kaiser, a science historian at MIT. “It’s a cluster of issues: cutbacks in basic research across many domains, the censure and censorship regarding data collected by the government or the ability of government scientists to speak, and a range of threats to academic freedom and the research process generally.”

It’s a sign of the times, too, says Al Teich, a science policy expert at George Washington University in Washington, D.C. President Reagan, for example, slashed science funding in his budget in 1981. But many more people today are aware of science’s role in society, says Teich, the former director for science and policy programs at AAAS. This awareness may be fueling the upcoming march. “The number of people engaged and the range of scientists involved is not something that I’ve ever seen before.”

Measuring the impact of any of these efforts is difficult. They aren’t controlled laboratory experiments, after all. But one thing this march may do is spawn a new form of activism, says Moore: more scientists running for political office.

Beetles have been mooching off insect colonies for millions of years

Mooching roommates are an ancient problem. Certain species of beetles evolved to live with and leech off social insects such as ants and termites as long ago as the mid-Cretaceous, two new beetle fossils suggest. The finds date the behavior, called social parasitism, to almost 50 million years earlier than previously thought.

Ants and termites are eusocial — they live in communal groups, sharing labor and collectively raising their young. The freeloading beetles turn that social nature to their advantage. They snack on their hosts’ larvae and use their tunnels for protection, while giving nothing in return.

Previous fossils have suggested that this social parasitism has been going on for about 52 million years. But the new finds push that date way back. The beetles, preserved in 99-million-year-old Burmese amber, would have evolved relatively soon after eusociality is thought to have popped up.

One beetle, Mesosymbion compactus, was reported in Nature Communications in December 2016. A different group of researchers described the other, Cretotrichopsenius burmiticus, in Current Biology on April 13. Both species have shielded heads and teardrop-shaped bodies, similar to modern termite-mound trespassers. Those adaptations aren’t just for looks. Like a roommate who’s found his leftovers filched one too many times, termites frequently turn against their pilfering housemates.

40 more ‘intelligence’ genes found

Smarty-pants have 40 new reasons to thank their parents for their powerful brains. By sifting through the genetics of nearly 80,000 people, researchers have uncovered 40 genes that may make certain people smarter. That brings the total number of suspected “intelligence genes” to 52.

Combined, these genetic attributes explain only a very small amount of overall smarts, or lack thereof, researchers write online May 22 in Nature Genetics. But studying these genes, many of which play roles in brain cell development, may ultimately help scientists understand how intelligence is built into brains.
Historically, intelligence research has been mired in controversy, says neuroscientist Richard Haier of the University of California, Irvine. Scientists disagreed on whether intelligence could actually be measured and if so, whether genes had anything at all to do with the trait, as opposed to education and other life experiences. But now “we are so many light-years beyond that, as you can see from studies like this,” says Haier. “This is very exciting and very positive news.”

The results were possible only because of the gigantic number of people studied, says study coauthor Danielle Posthuma, a geneticist at VU University Amsterdam. She and colleagues combined data from 13 earlier studies on intelligence, some published and some unpublished. Posthuma and her team looked for links between intelligence scores, measured in different ways in the studies, and variations held in the genetic instruction books of 78,308 children and adults. Called a genome-wide association study or GWAS, the method looks for signs that certain quirks in people’s genomes are related to a trait.
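
To make the idea concrete, here is a minimal sketch, in Python, of the core statistical step behind a GWAS: test each genetic variant, one at a time, for an association with the trait and keep only the variants that clear a strict multiple-testing threshold. The simulated data, the variant that carries a real effect and the cutoff used here are all invented for illustration and have nothing to do with Posthuma’s actual dataset or methods.

```python
# Toy sketch of the core step in a genome-wide association study (GWAS):
# test each variant, one at a time, for a link with a measured trait.
# All data here are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_variants = 5000, 1000

# Genotypes: each person carries 0, 1 or 2 copies of the minor allele.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))

# Simulated trait scores: mostly noise, plus a small effect from variant 42.
trait = 0.15 * genotypes[:, 42] + rng.normal(size=n_people)

p_values = np.empty(n_variants)
for j in range(n_variants):
    # Simple linear regression of the trait on genotype dosage at variant j.
    slope, intercept, r, p, stderr = stats.linregress(genotypes[:, j], trait)
    p_values[j] = p

# Real genome-wide studies use a very strict cutoff (commonly 5e-8) because
# millions of variants are tested; here the correction is for 1,000 tests.
threshold = 0.05 / n_variants
hits = np.where(p_values < threshold)[0]
print("Variants passing the threshold:", hits)  # likely recovers variant 42
```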

This technique pointed out particular versions of 22 genes, half of which were not previously known to have a role in intellectual ability. A different technique identified 30 more intelligence genes, only one of which had been previously found. Many of the 40 genes newly linked to intelligence are thought to help with brain cell development. The SHANK3 gene, for instance, helps nerve cells connect to partners.

Together, the genetic variants identified in the GWAS account for only about 5 percent of individual differences in intelligence, the authors estimate. That means that the results, if confirmed, would explain only a very small part of why some people are more intelligent than others. The gene versions identified in the paper are “accounting for so little of the variance that they’re not telling us much of anything,” says differential developmental psychologist Wendy Johnson of the University of Edinburgh.

Still, knowing more about the genetics of intelligence might ultimately point out ways to enhance the trait, an upgrade that would help people at both the high and low ends of the curve, Haier says. “If we understand what goes wrong in the brain, we might be able to intervene,” he says. “Wouldn’t it be nice if we were all just a little bit smarter?”

Posthuma, however, sees many roadblocks. Beyond ethical and technical concerns, basic brain biology is incredibly intricate. Single genes have many jobs. So changing one gene might have many unanticipated effects. Before scientists could change genes to increase intelligence, they’d need to know everything about the entire process, Posthuma says. Tweaking genetics to boost intelligence “would be very tricky.”

Babies categorize colors the same way adults do

Lots of newborn decorations come in black and white, so that young babies can better see the shapes. But just because it’s easier for babies to see bold blacks and whites doesn’t mean they can’t see color.

Very few studies of color vision in newborns exist, says Anna Franklin, a color researcher at the University of Sussex in England. “But those that have been conducted suggest that newborns can see some color, even if their color vision is limited,” she says. Newborns may not be great at distinguishing maroon from scarlet, but they can certainly see a vivid red.

But as babies get a little older, they get remarkably adept at discerning the world’s palette, new research shows. Babies ages 4 to 6 months are able to sort colors into five categories, researchers report in the May 23 Proceedings of the National Academy of Sciences.

These preverbal color capabilities offer insight into something scientists have long wondered: Without words for individual colors, how do babies divvy up the hues across the color wheel, telling when blue turns to green, for instance?

Along with Franklin and colleagues, psychologist Alice Skelton, also of the University of Sussex, bravely approached this question. The team coaxed 179 4- to 6-month-old babies to calmly and repeatedly look at two squares, each displaying one of 14 different colors.

After showing babies two squares of the same color over and over, the researchers made one of the squares a new color. Gazing at the new hue longer was a sign that the baby recognized the color as new. In this meticulous way, the researchers worked their way around the entire color wheel for each baby.

The experiment required stamina, from both Skelton and the young participants. “Sometimes you can have whole weeks where the babies just don’t want to do it,” she says. Despite that, she found the process enjoyable: “Babies are nice people.”
Babies, like adults, bin hues into red, yellow, green, blue and purple, Skelton and her colleagues found. “Given the commonalities and patterns you see in the way that languages divide up the color spectrum, we did expect that we would see some evidence of these same patterns in the way babies divide up the spectrum,” Skelton says. “What was surprising, for me at least, was how nicely it fell out.”

That discernment comes even though the babies probably don’t know the words for the colors. This suggests that babies are most likely born with these categories preprogrammed in their brains. The babies in the study came from just one culture. But “we anticipate that infants from different cultures would categorize color similarly,” Franklin says.

The results offer an interesting window into what’s happening in a baby’s brain as she learns about her world. And the results also come with a gentle suggestion: Don’t restrict your newborn’s art to black and white. She may already harbor a fondness for blue.

New heart attack treatment uses photosynthetic bacteria to make oxygen

Acting like miniature trees that soak up sunlight and release oxygen, photosynthetic bacteria injected into the heart may lighten the damage from heart attacks, a new study in rats suggests.

Injected into rats’ hearts, the microbes restored oxygen to heart tissue after the blood supply was cut off, as in a heart attack, researchers at Stanford University report June 14 in Science Advances.

“It’s really out of the box,” says Himadri Pakrasi, a systems biologist at Washington University in St. Louis who was not involved in the research. “It reads like science fiction to me, but it’s fantastic if it works.”
The organism, called Synechococcus elongatus, has been used recently to produce biofuels, but this may be the first time the cyanobacteria have ever been used in a medical setting, he says.

Other researchers also reacted enthusiastically to the study. “It’s outrageous, but outrageous in a good way,” says Susan Golden, who studies cyanobacteria at the University of California, San Diego. Cardiovascular scientist Matthias Nahrendorf of Massachusetts General Hospital in Boston says, “I enjoy the idea. It’s really fresh.”

Bringing oxygen to starved tissues is what Stanford cardiovascular surgeon Joseph Woo had in mind when he and colleagues dreamed up the plan to put light-harvesting bacteria into the heart. In a heart attack, clogged arteries or blood clots cut off blood flow to the organ. Without oxygen supplied by the blood, heart cells die.

Woo wanted a way for the heart to make its own oxygen or access another supply until doctors could open blocked vessels and restore blood flow. Plants make oxygen from carbon dioxide and sunlight, so Woo wondered, “Why not bring the tree to your heart?”

He and colleagues started by grinding up kale and spinach to harvest chloroplasts, the organelles within plant cells that carry out photosynthesis. But the chloroplasts didn’t survive outside the cells. That’s when the researchers learned about S. elongatus, a photosynthetic organism that Golden and other researchers have long used to study circadian rhythms.
After finding that cyanobacteria could provide oxygen to heart cells in a lab dish, the next step was to see how the cyanobacteria would fare in an animal. The researchers stopped blood flow to part of rats’ hearts and after 15 minutes injected either cyanobacteria or a saline solution. Oxygen in tissue with bacteria increased to about three times the levels measured right after the heart attack, similar to what saline-treated rats experienced.
That was in the dark: When researchers exposed the heart to light, rats that got the bacteria had 25 times higher oxygen levels than they did after the heart attack. Four weeks after the treatment, these rats had less heart damage than untreated rodents, indicating long-term benefits. In fact, the hearts of photosynthesis-treated rats were beating strongly: Blood flow out of the heart was 30 percent higher in rats treated with cyanobacteria and light than in those treated with the bacteria in the dark. That extra blood flow could make the difference between life and death for some patients, Woo says.

The results indicate that the bacteria need light to supply heart cells with enough oxygen to stave off damage. That presents a difficulty if the cyanobacteria are ever to be used in people: Getting light into the heart is a major hurdle.

“It will be next to impossible to open the chest to light,” says Nahrendorf. “A day on the beach won’t do the trick.” Woo says the researchers are working with engineers at Stanford to make devices that can shine light through bones and skin to reach the heart and other deep tissues.

Injecting bacteria into the heart is also a risky proposition. “What you’re doing is infecting a tissue, and that’s rarely a good thing,” says Nahrendorf. But the cyanobacteria were cleared from the rats’ bodies within 24 hours and didn’t provoke the immune system to attack the heart, the researchers found. Some other cyanobacteria produce toxins, says Golden. “But this organism is benign,” she says.

Cyanobacteria might also supply oxygen to tissues in other diseases, such as brain injuries, strokes or nonhealing wounds in people with diabetes, says Arnar Geirsson, a cardiovascular scientist at Yale University. Photosynthetic bacteria might also help preserve organs for transplant.

“I’m quite impressed,” Geirsson says. “It’s a really unique way to deliver oxygen.”

Every breath you take contains a molecule of history

Julius Caesar could have stayed home on March 15, 44 B.C. But mocking the soothsayer who had predicted his death, the dictator rode in his litter to Rome’s Forum. There he met the iron daggers of 60 senators.

As he lay in a pool of blood, he may have gasped a final incrimination to his protégé Brutus: You too, my son? Or maybe not. But he certainly would have breathed a dying breath, a final exhalation of some 25 sextillion gas molecules. And it’s entirely possible that you just breathed in one of them.
In fact, calculating the probability of a particle of Caesar’s dying breath appearing in any given liter of air (the volume of a deep breath) has become a classic exercise for chemistry and physics students. If you make a few assumptions about the mixing of gases and the lifetimes of molecules in the atmosphere, it turns out that, on average, one molecule of “Caesar air” — or any other historic liter of air, for that matter — appears in each breath you take.
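
For the curious, that arithmetic can be sketched in a few lines of Python. The figures used below, roughly 2.7 × 10^22 molecules in a liter of air and roughly 10^44 molecules in the whole atmosphere, along with the assumption that Caesar’s exhalation is still fully airborne and evenly mixed, are standard textbook approximations rather than Kean’s own numbers; with them the answer comes out to a few molecules per breath, and to roughly one under slightly different assumptions about breath size and how much of the original breath remains in circulation.

```python
# Back-of-the-envelope estimate of how many molecules from one historic
# breath end up in a breath you take today. All numbers are rough,
# illustrative assumptions; the point is the order of magnitude.

AVOGADRO = 6.022e23          # molecules per mole
MOLAR_VOLUME_L = 22.4        # liters per mole of gas at standard conditions
ATMOSPHERE_MASS_G = 5.15e21  # total mass of Earth's atmosphere, grams
MEAN_MOLAR_MASS_G = 29.0     # average molar mass of air, grams per mole

# Molecules in one deep (1-liter) breath -- about 2.7e22.
breath_molecules = AVOGADRO / MOLAR_VOLUME_L

# Total molecules in the atmosphere -- about 1.1e44.
atmosphere_molecules = ATMOSPHERE_MASS_G / MEAN_MOLAR_MASS_G * AVOGADRO

# Assume Caesar's last breath has mixed uniformly through the atmosphere
# and that its molecules are all still airborne (ignoring uptake by
# oceans, plants and rock). Expected "Caesar molecules" in your breath:
expected = breath_molecules * (breath_molecules / atmosphere_molecules)

print(f"Molecules per liter breath: {breath_molecules:.2e}")
print(f"Molecules in atmosphere:    {atmosphere_molecules:.2e}")
print(f"Expected Caesar molecules per breath: {expected:.1f}")
# With these round numbers the answer is a few molecules; tweaking the
# assumptions brings it to the "about one per breath" of the classic exercise.
```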

Author Sam Kean begins his book Caesar’s Last Breath with this exercise, noting that “we can’t escape the air of those around us.” It’s all recycled, and every day we breathe in a bit of our, and Earth’s, history. “The story of Earth,” he writes, “is the story of its gases.”

Kean, author of a best seller about the periodic table, The Disappearing Spoon, then tells that story. As he did in his fascinating portraits of the elements, Kean profiles individual gases such as nitrogen and oxygen primarily through the scientists and entrepreneurs who discovered or sought to harness them. These are quirky men (and they are mostly men) — every bit as obsessed, greedy and brilliant as one could hope for in a page-turner.

Along with lesser-known backstories of textbook heroes such as James Watt, Antoine-Laurent Lavoisier and Albert Einstein (who was surprisingly obsessed with building a better refrigerator), Kean clearly delights in weaving in the unexpected. In the discussion of helium, we learn about Joseph-Michel Montgolfier, the papermaker who was inspired to build the first hot-air balloon as he watched his wife’s pantaloons billowing suggestively above a fire. And in a chapter on the radioactive elements carried in nuclear fallout, there’s Pig 311, a sow that survived a nuclear test blast only to be used as propaganda for the weapons’ supposed safety.

Along the way, Kean threads in the history of Earth’s atmosphere in a surprisingly compelling narrative of geologic history. He steps aside from Lavoisier’s work on life-giving oxygen, for example, to describe the Great Oxygenation Event, which infused the atmosphere a couple billion years ago with a gas that, at the time, was toxic to most living things. The explanations of science here and throughout the book are written clearly and at a level that should be understandable with a high school education. And while they’re straightforward, the explanations have enough depth to be satisfying; by the end of the book, you realize you’ve learned quite a bit.
Even those who rarely read science will enjoy the drama — death, for instance, plays a big role in these stories. Over and over, we learn, men have taken gases’ powers too lightly, or wielded their own power too cruelly, and paid the price. Fritz Haber, for instance, could have died a hero for finding a way to make fertilizer from the nitrogen in air. Instead, he died broke and loathed for his World War I work on gas warfare.

Then there was Harry Truman — not that Truman, but the one who refused to leave his home when scientists warned of an impending volcanic eruption. Truman contended that officials were “lying like horses trot” right up until Mount St. Helens blew searing gases that erased him from the mountainside.

The links between these stories can seem at first as ephemeral as the gases, but together they tell the story of the birth of the atmosphere and humans’ history in it. In the end, like Caesar’s breath, it all comes full circle.

50 years ago, a millionth of a degree above absolute zero seemed cold

A common pin dropped on a table from a height of one-eighth of an inch generates about 10 ergs of energy, obviously a minuscule amount. That 10 ergs raises temperature, and even that tiny amount is “much too much” to be allowed in the experiment during which Dr. Arthur Spohr of the Naval Research Laboratory reached the lowest temperature yet achieved — within less than a millionth of a degree of absolute zero. — Science News, July 8, 1967.
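
As a quick sanity check on that figure, the falling pin’s energy is just E = mgh. In the sketch below, the pin’s mass is an assumed value, since the 1967 item doesn’t give one; a very light pin of about 0.03 grams reproduces the quoted 10 ergs.

```python
# Rough check of the pin figure using E = m * g * h, in CGS units (ergs).
# The pin's mass is an assumed value; the 1967 item doesn't specify one.
pin_mass_g = 0.03               # a very light pin, in grams (assumption)
g_cm_per_s2 = 981               # gravitational acceleration, cm/s^2
drop_height_cm = 1 / 8 * 2.54   # one-eighth of an inch, in centimeters

energy_ergs = pin_mass_g * g_cm_per_s2 * drop_height_cm
print(f"{energy_ergs:.1f} ergs")  # about 9 ergs with these assumptions
```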

Update
Today, scientists can make clouds of atoms at temperatures as low as 50 trillionths of a degree above absolute zero (SN: 5/16/15, p. 4). Late this year or early next year, NASA will launch its Cold Atom Laboratory to the International Space Station so scientists can study ultracold atoms reaching 100 trillionths of a degree or less. In orbit, gravity doesn’t drag atoms down, so the clouds can stay intact for scientists’ observations for up to 10 seconds — longer than is possible on Earth.

Brain activity helps build an alpha male

Boosting the activity of certain brain cells can help a mouse climb the social ladder.

Nerve cells in a region called the dorsomedial prefrontal cortex appear to control whether male mice are dominant or submissive to other males, researchers report in the July 14 Science. The finding adds to previous evidence that this brain region is involved in social interactions in mammals.

Like men flexing muscles or flaunting sports cars to win status, male mice compete to establish a social pecking order. When every mouse knows his place, there can be less social conflict in the long run, says James Curley, a neurobiologist at the University of Texas at Austin who wasn’t part of the study.
In dominance tests, researchers pit mice head-to-head in a plastic tube too narrow for the animals to pass each other. With no way forward, the lower-ranking mouse eventually retreats, pushed out of the tube by the more dominant mouse.

Researchers recorded the activity of individual nerve cells, or neurons, in mice’s brains while they engaged in the tube test. A group of neurons in the dorsomedial prefrontal cortex fired faster when mice were pushing forward to claim space in the tube, and fired more slowly as the mice retreated, says study coauthor Hailan Hu, a neuroscientist at Zhejiang University in Hangzhou, China.

Hu’s team then manipulated the activity of those neurons and once again measured the mice’s performance in the tube. Mice with these neurons inactivated via druglike small molecules didn’t try as hard on the tube task and were more likely to lose the competition, the researchers found. Mice with those neurons amped up with light, on the other hand, won against opponents who had previously beaten them. If those mice won enough times in one day, they’d even keep their newly elevated status for two or three days.
Other studies have also suggested a role for the prefrontal cortex in controlling social dynamics in several species, including humans, Curley says. The new study adds detail by allowing the researchers to track how neural firing influences behavior immediately and then follow the effect over time.

However, the tube experiment measures dominance dynamics in pairs of mice, Curley points out, rather than in larger groups. “Whether the same mechanism underlies social dominance under all contexts is yet to be discovered,” he says.

Other factors, such as an animal’s size, can also influence its ability to win a fight. But Hu says that persistence is key, and that this group of neurons appears to affect that quality. “In risk tests, what’s important is how much effort you want to put into the competition,” she says. “Some mice quit easily.”

Mouse studies like this one don’t translate directly to humans. But they allow scientists to study the neurobiology of dominance behaviors in levels of detail that aren’t possible in human subjects.

The study tested only male mice. In the future, Hu wants to find out whether a similar brain mechanism holds for female mice, too.

This history book offers excellent images but skimps on modern science

Books about the history of science, like many other histories, must contend with the realization that others have come before. Their tales have already been told. So such a book is worth reading, or buying, only if it offers something more than the same old stories.

In this case, The Oxford Illustrated History of Science offers most obviously an excellent set of illustrations and photographs from science’s past, from ancient Egyptian papyri to the Hubble Space Telescope’s ultradeep view of distant galaxies. Some of the images will be familiar to science fans; many others are obscure but apt; nearly all help illustrate various aspects of science’s history.
And yet the pictures, while many may be worth more than 10,000 words, are still just complements to the text. Oxford attempts a novel organization for recounting the story of science: a sometimes hard-to-follow mix of chronological and topical. The first section, “Seeking Origins,” has six chapters that cover ancient Mediterranean science, science in ancient China, medieval science (one chapter for the Islamic world and Europe, one for China), plus the scientific revolution and science in the Enlightenment. The second section, “Doing Science,” shifts to experimenting, fieldwork, biology, cosmology, theory and science communication.
Each chapter has a different author, which has the plus of bringing distinct expertise to each subject but the minus of wide variation in readability and caliber of content. Some chapters (see “Exploring Nature,” on field science) are wordy and repetitive and lack scientific substance. Others (“Mapping the Universe”) are compelling, engaging and richly informative. A particularly disappointing chapter on biology (“The Meaning of Life”) focuses on 19th century evolution, with only a few paragraphs for the life science of the 20th and 21st centuries. That chapter closes with an odd, antiscientific tone, lamenting the “huge numbers of people … addicted to antidepressants” and complaining that modern biology (and neuroscience) “threatens to undermine traditional values of moral responsibility.”

Some of the book’s strongest chapters are the earliest, especially those that cover aspects of science often missing in other histories, such as science in China. Who knew that the ancient Chinese had their own set of ancient elements — not the Greeks’ air, earth, water and fire, but rather wood, fire, water, soil and metal?

With the book’s second-half emphasis on how science was done rather than what science found out, the history that emerges is sometimes disjointed and out of order. Discussions of the modern view of the universe, which hinges on Einstein’s general theory of relativity, appear before the chapter on theory, where relativity is mentioned. In fact, both relativity and quantum theory are treated superficially in that chapter, as examples of the work of theorists rather than the components of a second scientific revolution.
No doubt lack of space prevented deeper treatment of science from the last century. Nevertheless, the book’s merits outweigh its weaknesses. For an accessible account of the story of pre-20th century science, it’s informative and enjoyable. For more recent science, you can at least look at the pictures.

Sacrificed dog remains feed tales of Bronze Age ‘wolf-men’ warriors

Remains of at least two Late Bronze Age initiation ceremonies, in which teenage boys became warriors by eating dogs and wolves, have turned up in southwestern Russia, two archaeologists say. The controversial finds, which date to between roughly 3,900 and 3,700 years ago, may provide the first archaeological evidence of adolescent male war bands described in ancient texts.

Select boys of the Srubnaya, or Timber Grave, culture joined youth war bands in winter rites, where they symbolically became dogs and wolves by consuming canine flesh, contend David Anthony and Dorcas Brown, both of Hartwick College in Oneonta, N.Y. This type of initiation ceremony coincides with myths recorded in texts from as early as roughly 2,000 years ago by speakers of Indo-European languages across Eurasia, the researchers report in the December Journal of Anthropological Archaeology.
Those myths link dogs and wolves to youthful male war bands, warfare and death. In the ancient accounts, young warriors assumed names containing words for dogs or wolves, wore dog or wolf skins and, in some cases, ate dogs during initiation ceremonies.

Mythic themes involving dogs from 2,000 years ago may differ from the rites practiced 4,000 years ago, Anthony acknowledges. “But we should look at myths across Eurasia to understand this archaeological site,” he says.
But some researchers are unconvinced by the pair’s explanation for why at least 64 dogs and wolves were sacrificed at the Krasnosamarskoe settlement.
“Archaeologists can weave mythology and prehistory together, but only with extreme caution,” says archaeologist Marc Vander Linden of University College London.
At most, Indo-European mythology suggests that Late Bronze Age folks regarded dogs as having magical properties and perhaps ate them in rituals of some kind, Vander Linden says. But no other archaeological sites have yielded evidence for teenage male war bands or canine-consuming initiation rites, raising doubts about Anthony and Brown’s proposed scenario, he argues.

Some ancient Indo-European myths attribute healing powers to dogs, says archaeologist Paul Garwood of the University of Birmingham in England. In those myths, dogs absorb illness from people, making the canines unfit for consumption. Perhaps ritual specialists at Krasnosamarskoe sacrificed dogs and wolves as part of healing ceremonies without eating the animals, Garwood proposes.

Dog and wolf deposits at the Russian site align with myths connecting these animals to war bands and initiation rites, not healing, Anthony responds.

Michael Witzel, an authority on ancient texts of India and comparative mythology at Harvard University, agrees. Anthony and Brown have identified the first archaeological evidence in support of ancient Indo-European myths about young, warlike “wolf-men” who lived outside of society’s laws, he says.

Excavations at Krasnosamarskoe in 1999 and 2001 yielded 2,770 dog bones, 18 wolf bones and six more bones that came from either dogs or wolves. Those finds represent 36 percent of all animal bones unearthed at the site. Dogs account for no more than 3 percent of animal bones previously unearthed at each of six other Srubnaya settlements, so canines were not typically eaten and may have been viewed as a taboo food under most circumstances, the investigators say.

Bones from dogs’ entire bodies displayed butchery marks and burned areas produced by roasting. Dogs’ heads were chopped into 3- to 7-centimeter-wide pieces using a standardized sequence of cuts. It was a brutal, ritual behavior that demanded practice and skill, Anthony asserts. Cattle and sheep or goat remains at Krasnosamarskoe also show signs of butchery and cooking but do not include any sliced-and-diced skulls.

Separate arrays of dog bones indicate that at least two initiation ceremonies, and possibly several more, occurred over Krasnosamarskoe’s 200-year history. Microscopic analyses of annual tissue layers in tooth roots of excavated animals indicated that dogs almost always had been killed in the cold half of the year, from late fall through winter. Cattle were slaughtered in all seasons, so starvation can’t explain why dogs were sometimes killed and eaten, the researchers say.

DNA extracted from teeth of 21 dogs tagged 15 as definitely male and another four as possibly male, leaving two confirmed females. A focus on sacrificing male dogs at Krasnosamarskoe is consistent with a rite of passage for young men, Anthony says.

Excavations of a Srubnaya cemetery at the Russian site produced bones of two men, two women, an adult of undetermined sex and 22 children, most between ages 1 and 7. The two men, who both displayed injuries from activities that had put intense stress on their knees, ankles and lower backs, may have been ritual specialists, the researchers speculate. These men would have directed initiation ceremonies into war bands, Anthony says.