A gaping wound in Earth’s atmosphere is finally healing. Since 2000, the average size of the Antarctic ozone hole in September has shrunk by about 4.5 million square kilometers, an area larger than India, researchers report online June 30 in Science. While the hole won’t close completely until at least midcentury, the researchers say the results are a testament to the success of the Montreal Protocol. That international treaty, which took effect in 1989, phased out the production of ozone-depleting chemicals called chlorofluorocarbons worldwide.
Ozone helps shield life on Earth from hazardous ultraviolet radiation. Tracking the ozone layer’s recovery is tricky because natural phenomena such as volcanic eruptions and weather variations can alter the size of the ozone hole. While some earlier studies suggested that the ozone layer had already begun healing (SN: 6/4/11, p. 15), many scientists questioned whether the work had been detailed enough to separate out the effects of natural variability.
MIT atmospheric scientist Susan Solomon and colleagues used a sophisticated 3-D atmospheric simulation to distinguish between the forces acting on atmospheric ozone. The work suggests that about half of the ozone hole’s recent shrinkage resulted from a drop in chlorofluorocarbons in the atmosphere; the remainder stemmed from weather changes.
Volcanic eruptions obscure healing signs. Last October, the ozone hole reached a record-setting average size of 25.3 million square kilometers — an area larger than Russia — thanks to the April 2015 eruption of Chile’s Calbuco volcano. That large size doesn’t disprove that the ozone hole is healing in the long run, though. Without the temporary 4.2-million-square-kilometer boost from the volcano, the hole’s average size would have peaked at a more modest 21.1 million square kilometers, the researchers estimate.
On the inevitability scale, death and taxes are at the top. Aging is close behind.
It’s unlikely that scientists will ever find a way to avoid death. And taxes are completely out of their hands. But aging, recent research suggests, is a problem that science just might be able to fix.
As biological scientists see it, aging isn’t just accumulating more candles on your birthday cake. It’s the gradual deterioration of proteins and cells over time until they no longer function and can’t replenish themselves. In humans, aging manifests itself outwardly as gray hair, wrinkles and frail, stooped bodies. Inside, the breakdown can lead to diabetes, heart disease, cancer, Alzheimer’s disease and a host of other problems.

Scientists have long passionately debated why cells don’t stay vigorous forever. Research in mice, fruit flies, worms and other lab organisms has turned up many potential causes of aging. Some experts blame aging on the corrosive capability of chemically reactive oxygen molecules or “oxidants” churned out by mitochondria inside cells. DNA damage, including the shortening of chromosome endcaps (called telomeres), is also a prime suspect. Chronic, low-grade inflammation, which tends to get worse the older people get, wreaks so much havoc on tissues that some researchers believe it is aging’s prime cause, referring to aging as “inflammaging.” All these things and more have been proposed to be at the root of aging.

Some researchers, like UCLA’s Steve Horvath, view aging as a biological program written on our DNA. He has seen evidence of a biological clock that marks milestones along life’s path. Some people reach those milestones more quickly than others, making them older biologically than the calendar suggests. Others take a more leisurely stroll, becoming biological youngsters compared with their chronological ages.
Many others, including Richard Miller, a geroscientist at the University of Michigan, deny that aging is programmed. Granted, a biological clock may measure the days of our lives, but it’s not a ticking time bomb set to go off on a particular date. After all, humans aren’t like salmon, which spawn, age and die on a schedule.
Instead, aging is a “by-product of running the engine of life,” says biodemographer Jay Olshansky of the University of Illinois at Chicago. Eventually bodies just wear out. That breakdown may be predictable, but it’s not premeditated.

Despite all the disputes about what aging is or isn’t, scientists have reached one radical consensus: You can do something about it. Aging can be slowed (maybe even stopped or reversed). But exactly how to accomplish such a counterattack is itself hotly debated. Biotechnology and drug companies are developing several different potential remedies. Academic scientists are investigating many antiaging strategies in animal experiments. (Most of the research is still being done on mice and other organisms because human tests will take decades to complete.)

Even researchers who think they have finally come up with real antiaging elixirs say they don’t have the recipe for immortality, though. Life span and health span, new research suggests, are two entirely separate things. Most researchers who work on aging aren’t bothered by that revelation. Their goal is not necessarily extending life span, but prolonging health span — the length of time people live without frailty and major diseases.
Aging as disease
Many health problems are so commonly associated with aging that some researchers take the highly controversial stance that aging itself is a disease, says Saul Villeda of the University of California, San Francisco.
If aging is a disease, in Villeda’s lab it’s almost a contagious one: He can artificially spread aging from old lab mice to young ones. One mode of aging transmission is to give genetically identical mice transfusions of young or old blood. In another approach, researchers sew together pairs of mice so that their blood vessels will join up and link their circulatory systems.
This artificial joining of two separate animals, known as parabiosis, was a staple of physiology experiments for over a century before Irina Conboy got the idea to pair an old mouse with a young one. Conboy, a stem cell researcher at the University of California, Berkeley, made headlines with her experiments. Those headlines focused on the good news: Young blood rejuvenated old mice. In further studies by other researchers, infusions of young blood made broken bones in old mice heal better (SN Online: 5/19/15), gave their muscles extra spring and improved their memories (SN: 5/31/14, p. 8). Apparently some substances in the blood triggered the rejuvenation. Some candidates for those rejuvenation factors have been identified, although none are universally agreed on.
But news accounts mostly ignored the flip side of the experiment: Being tethered to an old mouse made young mice age faster. One substance in the blood of old mice, a protein called beta-2-microglobulin, or B2M, seemed to prematurely age the young ones, Villeda and colleagues reported last year in Nature Medicine (SN: 8/8/15, p. 10). Parabiosis experiments don’t last very long, so no one knows whether youth or decrepitude will win in the end — or if the two mice would have settled into middle age together.
UCLA’s Horvath has evidence that the mice may never totally sync. He monitors aging by examining molecular tags called methyl groups, which attach to various locations on DNA in a process called methylation. Methylation is an epigenetic modification of DNA. Such modifications work something like flagging passages in a book with sticky notes. Attaching a tag doesn’t change the information in the book — it just draws attention to some passages and signals that others should be ignored.
Horvath measures DNA methylation changes at 353 different spots in the human genetic instruction book, or genome. As people age, 193 locations accumulate tags, like playbills plastered on urban buildings. At 160 others, methylation is gradually stripped away with age. Knowing how much methylation is normally found at each spot at a given chronological age allows Horvath to calculate biological age. Some people age at different rates than others, he discovered. For instance, semi-supercentenarians — people who live to be 105 to 109 — are about 8.6 years younger epigenetically than their chronological age. Their children are slow to age, too, though not as slow as their parents. Epigenetic clocks indicate that the offspring are about five years younger biologically than other people of the same chronological age.
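In spirit, the calculation is a weighted sum: sites that gain methylation with age push the age estimate up, and sites that lose it pull the estimate down. Here is a minimal sketch of that idea in Python. Horvath’s actual clock uses coefficients fit by penalized regression plus a calibration step; every number below (the weights, the intercept, the input profile) is invented for illustration.

```python
# Toy epigenetic-clock calculation, in the spirit of Horvath's method.
# The real clock uses 353 CpG sites with regression-fitted coefficients
# and a nonlinear age transform; all numbers here are invented.
import random

random.seed(1)

N_GAIN, N_LOSS = 193, 160  # sites that gain / lose methylation with age

# Hypothetical weights: positive for sites that accumulate methyl tags
# with age, negative for sites that shed them.
weights = ([random.uniform(0.05, 0.3) for _ in range(N_GAIN)] +
           [random.uniform(-0.3, -0.05) for _ in range(N_LOSS)])
INTERCEPT = 35.0  # invented baseline, not Horvath's

def epigenetic_age(methylation):
    """Estimate biological age from 353 methylation fractions (0 to 1)."""
    assert len(methylation) == len(weights)
    return INTERCEPT + sum(w * m for w, m in zip(weights, methylation))

# A made-up methylation profile; real input would come from a DNA
# methylation microarray for one person.
profile = [random.uniform(0.2, 0.8) for _ in range(N_GAIN + N_LOSS)]
print(f"Estimated biological age: {epigenetic_age(profile):.1f} years")
```

Comparing that estimate with a person’s calendar age is what lets researchers call someone biologically “younger” or “older” than their years.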
People often joke about certain abilities, such as eyesight, memory or hearing being “the first to go.” Some of Horvath’s work suggests that the notion isn’t entirely far-fetched. He calculated the epigenetic age of specific organs and discovered that body parts can age at different rates. The cerebellum, the part of the brain that sits at the top of the brain stem and helps coordinate movement, speech and other activities, ages the slowest of the brain regions that Horvath analyzed. While there are natural differences in organ aging, some conditions, such as HIV infection and obesity, can prematurely age certain organs, Horvath and colleagues have found.
These experiments demonstrate that aging and its effects are malleable. “Aging is really plastic — it’s not set in stone,” says Conboy. Consequently, she and other researchers agree, something can be done to slow aging, or perhaps turn it around entirely. But exactly what can be done is vigorously disputed.
Interpret with caution
Most scientists working on aging urge caution in extrapolating promising results in animal studies to humans. For instance, one of the most promising early candidates for a rejuvenation factor from young blood was a protein called GDF11. Reports in 2013 and 2014 concluded that GDF11 levels in blood decline with age; restoring the protein in old animals could reverse some heart problems, improve muscle strength and spur nerve cell growth in the brain. Since those reports, other researchers have disputed the protein’s revitalizing powers. In a recent study, researchers measured GDF11 levels in 140 people aged 21 to 93. Levels of the protein didn’t decline with age, Mayo Clinic researchers reported in the June 14 Cell Metabolism. Previous researchers may have gotten GDF11 mixed up with a similar protein called myostatin, which does dip as people get older.

Not only does GDF11 not decline with age, having too much of it could be bad, the Mayo team found. People with higher blood levels of the protein were more likely to be frail, have diabetes and heart problems, and have a more difficult time recovering from surgery than people with lower levels of the protein.
Beyond the blood experiments, scientists have examined various ideas about what goes wrong in aging and have devised strategies to counteract it. For instance, some evidence suggests that stem cells run out of steam as they get older. Restoring old stem cells to youthful vigor may enable them to repair or replace damaged tissues and turn back the biological clock. Keeping stem cells youthful may involve sheltering them from inflammation or things that could damage their DNA.
One way to keep stem cells and other cells working is to avoid the loss of telomeres capping the ends of chromosomes. As cells divide, their telomeres grow shorter until they are so short that chromosomes can no longer safely replicate. That may be a signal for the cell to shut down or die. So some researchers think that lengthening telomeres could give cells the protection needed to survive longer.
One biotechnology company executive flew from the United States to Colombia to try out her company’s gene therapy for lengthening telomeres. That decision bypassed U.S. government and other safety measures designed to protect human study participants. And no one knows whether it will work or doom her to cancer, which often relies on long telomeres to keep growing.
Other researchers are exploring more measured approaches to antiaging therapies. One study in dogs is testing rapamycin, the first drug shown to lengthen mouse life spans. Rapamycin is an immune suppressant that also has anticancer effects. The rationale for using it came from research on caloric restriction, the world-champion method for making animals live longer. Animals on calorie-restricted diets typically eat at least 25 percent fewer calories than normal. Such low-cal treatment has increased life spans in mice, dogs, fruit flies, yeast, worms and other lab organisms. Results from primate studies have been mixed (SN: 8/1/09, p. 9; SN: 10/6/12, p. 8). Some people have put themselves on caloric-restriction regimes (SN: 10/25/08, p. 17). A handful of studies suggest that those people have better health, but it’s too soon to know whether they will outlive their peers.
Exactly why drastically reducing food intake can extend life isn’t known. But researchers have good evidence that a series of biochemical reactions known as the mTOR pathway is involved. The protein mTOR helps monitor nutrient levels in cells and regulates cell movement, protein production, and cell growth and survival. When starvation sets in, cells turn off mTOR’s activity, which allows a self-cannibalizing process called autophagy to scavenge nutrients by digesting some of the cell’s internal organs. This internal garbage disposal and recycling method also removes old, worn-out mitochondria and proteins that may otherwise keep cells from functioning efficiently. That process and other cellular activities governed by mTOR may be responsible for making cells, and organisms, live longer.

Rapamycin gave mTOR its name — mechanistic target of rapamycin. Giving the drug might do what caloric restriction does without requiring superstrict diets (SN: 6/4/11, p. 22). Matt Kaeberlein, a geroscientist at the University of Washington, and colleagues conducted a safety study of the drug last year in 24 dogs. The study was only 10 weeks long, so the researchers can’t yet draw any conclusions about long-term effects on aging. But the dogs had no major side effects from taking low doses of the drug — a concern going in, because rapamycin impairs immune system function and could make animals (including people) who take it more vulnerable to infection or cancer.
Rapamycin’s drawbacks make it unattractive for human studies. The diabetes drug metformin may instead be the antiaging drug of choice for people, says gerontologist Nir Barzilai. In addition to mTOR, metformin targets an insulin-like growth protein known as IGF-1. That protein has been implicated in a variety of biological processes that promote aging.
Barzilai, of Albert Einstein College of Medicine in New York City, and researchers at more than a dozen centers around the country plan to test metformin for its ability to fight aging in people 65 to 79. Barzilai and colleagues laid out the case for using metformin in the June 14 Cell Metabolism. Metformin is generally safe, with few major side effects. It has been shown to improve a variety of health measures and to impair cancer development in people with type 2 diabetes (SN: 11/30/13, p. 18). Barzilai says the drug may also help people who don’t have diabetes stay healthier in their elderly years. If it does, commercials touting metformin might have to add another disclaimer, he jokes. “The commercials will go on: ‘This will make you healthy, but we have to apologize because you might live longer.’ ”
But studies of mice suggest that disclaimer may not be necessary. Research by Miller and others suggests that metformin may not prolong life. They have been dosing mice with various chemicals, including metformin and rapamycin, looking for drugs that will make mice healthier and live longer. In a new study, published online June 16 in Aging Cell, Miller and colleagues fed mice metformin starting when the rodents were 9 months old — middle age for a mouse. Metformin alone didn’t lengthen the animals’ lives, and combining metformin and rapamycin didn’t make the mice live much longer than rapamycin alone did in previous trials.
Cellular zombies
Other researchers are hoping to stave off death by getting rid of the undead. Cellular zombies called senescent cells are stressed cells that have entered a type of stasis — they’re not dead, but they’re not functioning either. Stress for cells usually means severe DNA damage that could produce cancer, critically short telomeres or other molecular catastrophes that trigger shutdown mode. That lockdown is for the greater good, says aging researcher Judith Campisi, who studies senescence at the Buck Institute for Research on Aging in Novato, Calif. “It’s protective,” she says. “You don’t want defective cells to propagate.” (When damaged cells continue to grow they may become cancerous.)
Unfortunately, says Campisi, the senescent cells don’t die. Instead they send out messages to neighboring cells: “Hey, there’s a problem. Be prepared. What happened to me could happen to you.” Such messages are probably intended as public service announcements, but they could trigger mass panic and inflammation. Like zombies putting the bite on the living, senescent cells damage surrounding cells and accelerate aging.
Researchers have worked out methods for hitting the zombie cells with genetic shots to the head, effectively destroying the cells and removing them from the body. Mice whose senescent cells had been removed showed increased median life spans and improved health, researchers reported in Nature in February (SN: 3/5/16, p. 8).
Campisi and other researchers are working on ways to clear senescent cells from humans, too. But no antiaging treatment makes mice or any other animal live forever. Researchers have yet to increase a mouse’s life span (which rarely goes above two years) to five years, although one mouse fell just short of that mark.
Much research suggests that things that extend life span, such as rapamycin, might not stretch health spans. Mutations that make millimeter-long transparent worms known as Caenorhabditis elegans live longer also extend the proportion of their lives the worms spend being frail, Heidi Tissenbaum of the University of Massachusetts Medical School in Worcester and colleagues reported last year in the Proceedings of the National Academy of Sciences.
But living healthy doesn’t guarantee longevity either, a new study of sea urchins suggests. Red sea urchins (Mesocentrotus franciscanus) live well past 100 years old in the wild, while purple sea urchins (Strongylocentrotus purpuratus) make it to 50. But variegated (also called “green”) sea urchins (Lytechinus variegatus) normally die after four years. The difference in the species’ life spans might be due to different rates of aging, thought aging researcher Andrea Bodnar at the Bermuda Institute of Ocean Sciences in St. George’s and developmental biologist James Coffman of the MDI Biological Laboratory in Salisbury Cove, Maine. Instead, they found, none of the species seem to age at all. Young and old members of each species are similar in their abilities to reproduce and to regenerate spines and tube feet, the researchers reported online April 20 in Aging Cell. Even though the short-lived variegated urchins have no signs of slowing down, they still die. Why is a mystery, Coffman says.

Ways to be wellderly
A similar paradox is also seen in “wellderly” people that geneticist Ali Torkamani has been studying at the Scripps Research Institute in La Jolla, Calif. About eight years ago, Torkamani started bringing in people over 80 who had made it to an advanced age without any sign of chronic disease. The idea was to study their DNA and learn the secrets of healthy aging.
Despite living healthy, the wellderly didn’t carry genetic variants connected with extremely long lives, Torkamani and colleagues discovered. The wellderly also had no genetic advantage when it comes to cancer, stroke or diabetes. What they did have was a lower risk of getting Alzheimer’s and heart disease. Each of the wellderly seemed to have their own genetic recipe for success, suggesting there are lots of ways to stay healthy into old age. The researchers didn’t rule out that diet and lifestyle also help. “There’s hope for everybody,” Torkamani declares.
But his cloud of optimism may have a tarnished lining. His findings, along with the sea urchin and worm results, suggest that aging and longevity aren’t the same things. If that’s the case, it would mean that stopping aging would not extend human life span by much. The oldest (verified) person to have ever lived was Jeanne Louise Calment, a French woman who died at age 122 in 1997. People might top out at 130 if aging is controlled (and most people still would not make it that long because they just don’t have the necessary makeup). As a species, humans probably can’t go further without changing whatever controls longevity too, some researchers think.
Exactly how long people can live won’t be answered until proven antiaging therapies are developed. If aging and longevity are linked, then treating aging could very well make people live longer, healthier lives. If they are separate phenomena, then people could forgo the cancer, heart disease and other ailments of aging, but they would still have limited life spans. In that case Star Trek’s Mr. Spock might need to revise his usual parting words. When talking to humans, he should wish that they will live long or prosper. We may not get both.
Microbes may have played a role in making us, us. A new study shows similar patterns in the evolution of gut bacteria and the primates they live in, suggesting that germs and apes could have helped shape one another.
For at least 10 million years, bacteria have been handed down from the common ancestor of humans and African apes. As apes split into separate species, so did the microbes inside them, researchers report July 22 in Science. Now, relationships between gut bacterial species mirror the family tree of gorillas, humans, bonobos and chimpanzees. Germs are a piece of our history, says evolutionary biologist Andrew Moeller, who led the study while at the University of Texas at Austin and the University of California, Berkeley. “Just like genes we’ve inherited from our ancestors,” he says, “we’ve inherited some of our bacteria from our ancestors as well.”
It’s well known that bacteria are key to human health (SN: 04/02/16, p. 23). They play major roles in the immune system and development. But very few researchers have turned to the past, Moeller says, to ask how humans got those handy bacteria in the first place. His team studied three families of bacteria living in the feces of people from Connecticut, as well as in feces from wild chimps, bonobos and gorillas. The scientists used DNA evidence to build relationship trees for each bacterial family, then compared each tree with known relationships between humans and close primate relatives.
Two of three bacterial trees matched primate relationships. For those families, closely related bacteria live in closely related primates. For humans, “the closest relatives of our gut bacteria live in chimpanzees,” Moeller says, “just like our closest relatives are chimps.”
Scientists would expect that pattern to match only if apes and bacteria split into new species in unison. The fact that apes and bacteria split at roughly the same time, while bacteria were living inside of ape species, implies that they were influencing each other, and therefore that the evolution of one group could shape the evolution of the other.
Changing bacteria may have “allowed us to evolve,” says microbial geneticist Julia Segre of the National Human Genome Research Institute in Bethesda, Md., who was not involved in the new work. She and conservationist Nick Salafsky of the nonprofit Foundations of Success, also in Bethesda, wrote a perspective on it in the same issue of Science.
A “very intimate relationship with bacteria,” she says, “is part of who we are.” While the researchers agree that humans and bacteria probably shaped each other’s evolution, they caution that it’s too soon to tell if (and how) ancient apes and microbes changed each other.
Those ancient relationships may get harder to study over time. Industrialization and antibiotics have reduced the diversity of bacteria living in and on humans, Moeller says. And while the microbes in this study have stuck around, other groups may have disappeared or changed dramatically.
One caveat, Segre says, is that humans have been exposed to antibiotics and modern life. Wild African apes might still have their ancient gut flora, but people in Connecticut might not (SN: 12/13/14, p. 10). It’s especially important to do studies like this now, she says, “because it’s not going to get better.”
In the future, Moeller says, researchers should look deeper into the past to see if the gut bacteria living in all mammals share one common ancestor. Scientists could also go the other way, he says, to see if more recently divided human populations also have characteristic gut bacteria.
A debate over when the gap between North and South America closed has opened a rift in the scientific community.
Analyzing existing data from ancient rocks, fossils and genetic studies, a group of researchers has assembled a defense of the conventional view that the Isthmus of Panama formed around 3 million years ago. That work rebuts papers published last year that concluded that the continental connection started millions of years earlier (SN: 5/2/15, p. 10). The authors of the new paper, published August 17 in Science Advances, caution against the “uncritical acceptance” of the older formation date. “Those of us who are advocating the traditional view are in danger of being seen as old fuddy-duddy conservatives,” says study coauthor Harilaos Lessios, a molecular evolutionist at the Smithsonian Tropical Research Institute in Panama City. “But sometimes the traditional view is the correct one.”

The American continents drifted apart following the breakup of the Pangaea supercontinent around 200 million years ago. Eventually, the landmasses slid back together. As they reconnected, a volcanic mound on the Caribbean tectonic plate collided with South America and rose above the ocean. This new land closed a seaway between the Pacific and Atlantic oceans, rerouted ocean currents and sparked animal migrations, leaving clues that scientists on both sides of the debate are using to determine the age of the Isthmus of Panama.
Aaron O’Dea, a paleontologist at the Smithsonian Tropical Research Institute, Lessios and colleagues revisited several of those lines of evidence to date the seaway closure. For instance, fossil records reveal that land animals began migrating more frequently between the Americas around 2.7 million years ago, possible evidence of a newly available land route, O’Dea’s team concludes. Critics, though, counter that those migrations were instead driven by climate and ecosystem changes that allowed animals to migrate. In the oceans, the closed seaway divided populations of marine organisms such as sand dollars. Over time, these populations’ genetic makeups diverged. Based on the degree of genetic change between the groups as well as fossil evidence, O’Dea’s team estimates that the seaway closed roughly 3 million years ago.
Christine Bacon, an evolutionary biologist at the University of Gothenburg in Sweden, and colleagues analyzed similar evidence last year but came to a different conclusion. The seaway closed between 23 million and 7 million years ago, Bacon and colleagues estimated in the Proceedings of the National Academy of Sciences. That study assumed a different rate of genetic divergence and looked at more species than the work by O’Dea and colleagues, Bacon says.
Rocks also trace the isthmus’s rise from the sea. Chemical traces from ancient ocean sediments record when seawater stopped mixing between the Atlantic and Pacific. Analyzing those traces, O’Dea and colleagues estimate that the seaway became relatively shallow around 12 million to 9.2 million years ago and completely shut around 2.7 million years ago.
Other rocky evidence tells a different story, proponents of the older age claim. Volcanically forged crystals, known as zircons, found in South America date back to around 13 million to 15 million years ago. The only possible source of those crystals was in Panama, suggesting that a river washed the crystals down a land connection between Panama and South America around that time, geologist Camilo Montes of the Universidad de los Andes in Bogotá, Colombia, and colleagues concluded last year in Science. Those South American crystals may have formed closer to home, O’Dea and colleagues argue in the new paper. Similar crystals have been found elsewhere in South America, so the crystals reported by Montes and colleagues may have originated from a source in South America, not Panama, O’Dea says.
Some of the disagreement between the two sides stems from the fact that the seaway closure wasn’t a single event, says Carlos Jaramillo, a paleontologist at the Smithsonian Tropical Research Institute who coauthored the studies by Montes and Bacon. The seaway would have closed in stages, with various segments shortened and closed off over millions of years, Jaramillo says. “You can’t just use one date for everything, it depends on what you’re looking at,” he says.
Bacon is holding her ground. “They basically rehashed a mishmash of old papers,” she says of the new work. “We need to gather new data and collaborate rather than hold on to old ideas bitterly.”
Female mosquitoes carrying the Zika virus can pass the infection to the next generation, lab tests show.
Among Aedes aegypti mosquitoes, thought to be the main species spreading Zika in the Americas, at least one out of every 290 lab offspring catches the virus from its mother, Texas researchers report August 29 in the American Journal of Tropical Medicine and Hygiene. Infected eggs, which can survive for months on dry surfaces, could keep the virus circulating even after dry or cold spells, when adult mosquitoes die off, warns Robert Tesh of the University of Texas Medical Branch in Galveston.
Earlier research had already shown that youngsters of this species can inherit related viruses, such as those causing dengue, West Nile and yellow fever. Mom-to-egg transmission, though, is not a given: The same research project also reported no evidence so far of this vertical transmission in 803 offspring of another possible Zika spreader, Ae. albopictus.
It’s not known how likely mosquito moms are to infect their young outside of the lab. Doing a reliable test with wild mosquitoes outdoors is a much more difficult project, the researchers say.
Contrary to many adorable children’s stories, hibernation is so not sleeping. And most animals can’t do both at the same time.
So what’s with Madagascar’s dwarf lemurs? The fat-tailed dwarf lemur slows its metabolism into true hibernation, and stays there even when brain monitoring shows it’s also sleeping. But two lemur cousins, scientists have just learned, don’t multitask. Like other animals, they have to rev their metabolisms out of hibernation if they want a nap.

Hibernating animals, in the strictest sense, stop regulating body temperature, says Peter Klopfer, cofounder of the Duke Lemur Center in Durham, N.C. “They become totally cold-blooded, like snakes.” By this definition, bears don’t hibernate; they downregulate, dropping their body temperatures only modestly, even when winter den temperatures sink lower. And real hibernation lasts months, disqualifying short-termers such as subtropical hummingbirds. The darting fliers cease temperature regulation and go truly torpid at night. “You can pick them out of the trees,” Klopfer says.
The fat-tailed dwarf lemur, Cheirogaleus medius, was the first primate hibernator discovered, snuggling deep into the softly rotting wood of dead trees. “You’d think they’d suffocate,” he says. But their oxygen demands plunge to somewhere around 1 percent of usual. As trees warm during the day and cool at night, so do these lemurs. When both a tree and its inner lemur heat up, the lemur’s brain activity reflects mammalian REM sleep.
Klopfer expected much the same from two other dwarf lemurs from an upland forest with cold, wet winters. There, C. crossleyi and C. sibreei spend three to seven months curled up underground, below a thick cushion of fallen leaves. “If you didn’t know better, you might think they were dead because they’re cold to the touch,” Klopfer says.
Unlike the tree-hibernators, the upland lemurs take periodic breaks from hibernating to sleep, Klopfer, the Lemur Center’s Marina Blanco and colleagues report in the August Royal Society Open Science. The lemurs generated some body heat of their own about once a week, which is when their brains showed signs of sleep (REM-like and slow-wave). “My suspicion is that sleep during torpor is only possible at relatively high temperatures, above 20° Celsius,” Klopfer says. Sleep may be important enough for cold-winter lemurs to come out of the storybook “long winter’s nap.”
It’s a problem that sounds simple, but the best minds in mathematics have puzzled over it for generations: A salesman wants to hawk his wares in several cities and return home when he’s done. If he’s only visiting a handful of places, it’s easy for him to schedule his visits to create the shortest round-trip route. But the task rapidly becomes unwieldy as the number of destinations increases, ballooning the number of possible routes.
Theoretical computer scientist Shayan Oveis Gharan, an assistant professor at the University of Washington in Seattle, has made record-breaking advances on this puzzle, known as the traveling salesman problem. The problem is famous in mathematical circles for being deceptively easy to describe but difficult to solve. But Oveis Gharan has persisted. “He is relentless,” says Amin Saberi of Stanford University, Oveis Gharan’s former Ph.D. adviser. “He just doesn’t give up.” Oveis Gharan’s unwavering focus has enabled him to identify connections between seemingly unrelated areas of mathematics and computer science. He scrutinizes the work of the fields’ most brilliant minds and adapts those techniques to fit his purposes. This strategy — bringing new tools to old problems — is the basis for leaps he has made on two varieties of the traveling salesman problem.
“If you want to build a house, you need to have a sledgehammer and a level, a wrench, tape measure,” he says. “You need to have a lot of tools and use them one after another.” Oveis Gharan, age 30, stocks his toolkit with the latest advances in fields with obscure-sounding names, including spectral graph theory, polyhedral theory and geometry of polynomials. And in a twist that only Oveis Gharan saw coming, a recent solution to a long-standing problem originating in quantum mechanics turned out to be the missing piece to one aspect of the salesman’s puzzle.

For a salesman’s tour of five cities, there are just 12 possible routes; it’s easy enough to pick the one that will save the most gas. But for 20 cities, there are 60 quadrillion possibilities, and for 80 cities, there are more routes than the number of atoms in the observable universe. Relying on brute force — calculating the distances of all the possible routes — is intractable for all but the easiest cases. Yet no one has found a simple method that can quickly find the shortest path for any number and arrangement of cities. The quandary has real-world importance: Companies like Amazon and Uber, for example, want to ferry goods and people to many destinations in the most efficient way possible.
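Those route counts follow from simple arithmetic: fix the starting city and ignore direction, and a tour of n cities can be ordered in (n − 1)!/2 distinct ways. A few lines of Python (a quick check, not anyone’s published code) reproduce the figures in the text:

```python
# Distinct round-trip routes through n cities: fixing the start city and
# ignoring travel direction leaves (n - 1)!/2 possibilities.
from math import factorial

for n in (5, 20, 80):
    routes = factorial(n - 1) // 2
    print(f"{n:>2} cities: {routes:,} routes")
# 5 cities  -> 12
# 20 cities -> 60,822,550,204,416,000 (about 60 quadrillion)
# 80 cities -> a 117-digit number, far beyond the roughly 1e80 atoms in
#              the observable universe
```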
Growing up in his home country of Iran, Oveis Gharan discovered a natural appreciation for challenging puzzles. In middle school, he acquired a book of problems from mathematics Olympiad competitions in the Soviet Union. As a student, “I tend to be one of the slower ones,” Oveis Gharan says, noting that he was usually not the first to grasp a new theorem. But within a few years, he had doggedly plowed through the 200-page book.
The effort also provided Oveis Gharan with his first taste of tool collecting, through collaboration with classmates who joined him in working through the math problems. Oveis Gharan found that solutions come easier when many minds contribute. “Each person thinks and solves problems differently,” he says. “Once someone is exposed to many different ideas and ways of thinking on a problem, that will help a lot to increase the breadth of problem-attacking directions.”
Oveis Gharan attended Sharif University of Technology in Tehran before making his first breakthroughs on the traveling salesman problem as a graduate student at Stanford University. He spent over a year cracking just one thorny facet before moving on to a postdoctoral fellowship at the University of California, Berkeley. Rather than attacking the problem head-on, Oveis Gharan works on approximate solutions — routes that are slightly longer than the optimal path but can be calculated in a reasonable amount of time. Since the 1970s, computer scientists have known of a strategy for quickly finding a route that is at most 50 percent longer than the shortest possible path. That record held for decades, until Oveis Gharan tackled it along with Saberi and Mohit Singh, then of McGill University in Montreal.
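That decades-old 50 percent guarantee is Christofides’ algorithm, whose minimum-weight matching step is too involved to show compactly. The sketch below is its simpler ancestor, the tree-doubling heuristic: build a minimum spanning tree, then visit the cities in depth-first order. When distances obey the triangle inequality, the resulting tour is at most 100 percent longer than optimal. The coordinates are arbitrary demo data, not from any study.

```python
# Tree-doubling TSP heuristic: a minimum spanning tree plus a depth-first
# walk gives a tour at most twice the optimal length. (Christofides'
# matching step, not shown, tightens that to 1.5x.)
from math import dist

cities = [(0, 0), (2, 6), (5, 1), (7, 7), (9, 2), (4, 4)]  # demo points
n = len(cities)

# Prim's algorithm: grow a minimum spanning tree outward from city 0.
in_tree = {0}
children = {i: [] for i in range(n)}
while len(in_tree) < n:
    u, v = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
               key=lambda edge: dist(cities[edge[0]], cities[edge[1]]))
    children[u].append(v)
    in_tree.add(v)

# Visiting cities in the tree's depth-first preorder yields the tour.
tour, stack = [], [0]
while stack:
    u = stack.pop()
    tour.append(u)
    stack.extend(reversed(children[u]))

length = sum(dist(cities[tour[i]], cities[tour[(i + 1) % n]]) for i in range(n))
print("tour:", tour, f"length: {length:.2f}")
```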
In a paper published in 2011, the team made what might sound like an infinitesimal improvement, shrinking the 50-percent figure by four hundredths of a trillionth of a trillionth of a trillionth of a trillionth of a percentage point. “People make fun of our paper because of that small improvement,” says Oveis Gharan, “but the thing is that in our area, the actual number is not the major question.” Instead, the goal is to develop new ideas that can begin to crack the problem open, says Luca Trevisan, a computer scientist at Berkeley. “What’s so important is not the specific algorithm that he has devised, but that there is a whole new set of techniques that can potentially be applied to other problems.” Following the advance, other scientists revisited the traveling salesman problem, and decreased the number significantly; the selected route is now at most 40 percent longer than optimal.
To make his breakthroughs, Oveis Gharan keeps tabs on the scientific literature across a variety of mathematical fields. “Every time new papers or new techniques come out, he’s one of the first people who will pick up the paper and read it,” says Saberi. To discover tools outside his areas of expertise, Oveis Gharan poses pieces of the problem to researchers in other fields.
In 2015, Oveis Gharan and computer scientist Nima Anari, then at Berkeley, made further progress on an approximate solution for a more general, and more challenging, version of the traveling salesman problem. In this version, the distance to go from point A to point B might not be the same as going the opposite direction — a plausible situation in cities with many one-way streets. Researchers had a way to estimate the optimum tour length, but they didn’t understand how good the estimate was. Oveis Gharan and Anari showed the estimate was exponentially better than previously known.
To make this advance, Oveis Gharan teased out connections to a seemingly unrelated problem in mathematics and quantum mechanics, known as the Kadison-Singer problem. “That was really surprising,” says computer scientist Daniel Spielman of Yale University, part of a team that solved the Kadison-Singer problem in 2013. “There was no obvious connection,” he says. “Shayan is incredibly brilliant and incredibly creative.”
Oveis Gharan is now focused on a further conquest of this version of the traveling salesman problem. Though his new advance helps approximate the optimal tour length, it can’t identify the corresponding route. Next, Oveis Gharan would like to produce an algorithm that can navigate the correct course.
You can bet he’ll continue to add to his tool collection by sampling from related mathematical and computational fields. “The grand plan is: Try to better understand how these different areas are connected to one another,” Oveis Gharan says. “There are many big open problems lying in this intersection.”
Methane wasn’t the cozy blanket that kept Earth warm hundreds of millions of years ago when the sun was dim, new research suggests.
By simulating the ancient environment, researchers found that abundant sulfate and scant oxygen created conditions that kept down levels of methane — a potent greenhouse gas — around 1.8 billion to 800 million years ago (SN: 11/14/15, p. 18). So something other than methane kept Earth from becoming a snowball during this dim phase in the sun’s life. Researchers report on this new wrinkle in the so-called faint young sun paradox (SN: 5/4/13, p. 30) the week of September 26 in the Proceedings of the National Academy of Sciences.
Limited oxygen increases the production of microbe-made methane in the oceans. With low oxygen early in Earth’s history, many scientists suspected that methane was abundant enough to keep temperatures toasty. Oxygen may have been too sparse, though. Recent work suggests that oxygen concentrations at the time were as low as a thousandth their present-day levels (SN: 11/28/14, p. 14).
Stephanie Olson of the University of California, Riverside and colleagues propose that such low oxygen concentrations thinned the ozone layer that blocks methane-destroying ultraviolet rays. They also estimate that high concentrations of sulfate in seawater at the time helped sustain methane-eating microbes. Together, these processes severely limited methane to levels similar to those seen today — far too low to keep Earth defrosted.
Scientists, politicians, clinicians, police officers and medical workers agree on one thing: The U.S. mental health system needs a big fix. Too few people get the help they need for mental ailments and emotional turmoil that can destroy livelihoods and lives.
A report in the October JAMA Internal Medicine, for instance, concludes that more than 70 percent of U.S. adults who experience depression don’t receive treatment for it.
Much attention focuses on developing better psychiatric medications and talk therapies. But those tactics may not be enough. New research suggests that the longstanding but understudied problem of stigma leaves many of those suffering mental ailments feeling alone, often unwilling to seek help and frustrated with treatment when they do. “Stigma about mental illness is widespread,” says sociologist Bernice Pescosolido of Indiana University in Bloomington. And the current emphasis on mental ills as diseases of individuals can unintentionally inflame that sense of shame. An effective mental health care system needs to address stigma’s suffocating social grip, investigators say. “If we want to explain problems such as depression and suicide, we have to see them in a social context, not just as individual issues,” Pescosolido says.
Stigma as a mark of disgrace that taints someone in others’ eyes goes back several millennia. Sociologist Erving Goffman wrote in 1963 of stigma as a “spoiled identity” caused by society’s negative attitudes toward conditions such as mental illness. New evidence supports the idea that stigma about psychological problems runs surprisingly deep. What’s more, it filters through families and communities in different ways.
Many depressed people experience their condition primarily as a family predicament, not a brain disease, says a team led by UCLA psychiatrist and medical anthropologist Elizabeth Bromley. Those who seek treatment from primary care physicians feel tremendous shame about depression-related problems, such as being unable to work, that put a burden on their families. They hide their depression and any treatments, fearing rejection by those closest to them, Bromley and her colleagues report in the October Current Anthropology. Even if antidepressants ease symptoms such as insomnia and fatigue, depressed individuals describe the treatment as a Band-Aid stuck on unresolved family fractures, which can include a violent spouse or drug-addicted child.
Bromley’s team examined data from 46 people, representing various ethnic backgrounds and economic classes, identified in primary care clinics in 1996 as having depression. After their diagnosis, participants completed surveys every six months for two years, then at the five-year and nine-year marks. Interviews about symptoms, treatments and coping occurred at a 10-year follow-up.
Only two people described the depression treatment they received as helpful and appropriate to their situation. Both had family and friends who had noticed their depression symptoms and encouraged them to seek help. The remaining 44 people spoke of depression as a threat to their closest relationships and family standing. They kept treatment secret to avoid intensifying family conflicts and for fear of rejection. Shame and emotional distance from family members remained even if depression treatments had positive effects. Participants commonly spoke of not wanting to burden their families with their condition. Several said that being singled out for treatment, which only required that one take antidepressants or, say, learn relaxation techniques, made them feel more estranged than ever from already fragile families and, what’s more, did nothing to resolve underlying family troubles.
“Individually focused, biomedical approaches can feel stigmatizing to many people with depression,” Bromley says.
Her team’s findings fit with previous observations that stigma discourages many people from discussing depression with their doctors for fear of breaking frayed family ties, writes psychologist Rob Whitley of Montreal’s McGill University in the same issue of Current Anthropology.
Excessively close ties among a network of families can also stoke stigma, researchers find. It can flourish in a wealthy, well-manicured community where everyone knows everyone else, if not in person then by word of mouth, say sociologists Anna Mueller of the University of Chicago and Seth Abrutyn of the University of Memphis.
In one such town, given the fictional name Poplar Grove by the researchers to protect privacy, teenagers struggle mightily under the weight of an “overactive grapevine of gossip.” Parents and peers constantly monitor whether teens live up to a community-wide standard of high academic achievement, the researchers report in the October American Sociological Review. Hard work is admired, but only if it yields superior grades with no signs of extra effort, such as using tutors. Academic struggles, anxiety and depression are stigmatized as signs of imperfection. As a result, most young people fear to seek any help from adults, including parents and teachers. That situation contributed to a rash of 19 suicides among current students and recent graduates of the town’s high school between 2000 and 2015, Mueller and Abrutyn propose.
The pair conducted interviews and focus groups in 2014 and 2015 with 110 volunteers, including teens who grew up in the town and lost a friend to suicide, parents whose children killed themselves, mental health workers in the town and high school teachers and counselors. In public forums held afterward, residents were surprised to hear from Mueller that one of Poplar Grove’s strengths — strong ties among neighbors concerned about the welfare of everyone’s kids — had a dark side. Parents talked about the shame they felt if a child experienced emotional problems and of feeling like bad parents when word got around. Teens expressed intense fear of failing to ace schoolwork and make it seem effortless. Students who had killed themselves were described by friends as having emotionally wilted under those pressures.
Bromley’s and Mueller’s findings underscore the need for mental health services that reach people where they live, Pescosolido says. Local services stand the best chance of getting troubled individuals to see help-seeking as acceptable behavior with the potential to change one’s life for the better.
Possible approaches include training pastors and other religious leaders in how to assist those with mental disorders and establishing public self-help groups and high school clubs devoted to open discussion and support. Local centers housing teams of social workers and counselors able to coordinate care for serious mental disorders would be a big advance, she says.
Job No. 1, Mueller says, involves getting beyond the popular assumption that mental illness and suicide arise solely in individuals. It’s long been known, for example, that chaotic communities where people feel isolated push suicide rates higher. But as Poplar Grove demonstrates, really tight-knit communities can have the same effect. “Deep psychological pain often has family and community sources,” she says.
Most of us spend our careers trying to meet — and hopefully exceed — expectations. Scientists do too. But the requirements for success in a job in academic science don’t always line up with the best scientific methods. The net result? Bad science doesn’t just happen — it gets selected for.
What does it mean to be successful in science? A scientist gets a job and funding by publishing a lot of high-impact papers with novel findings. Those papers and findings beget awards and funding to do more science — and publish more papers. “The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right,” says Brian Nosek, a psychologist at the University of Virginia in Charlottesville.
This idea of success has become so ingrained that scientists are even introduced when they give talks by the number of papers they have published or the amount of grant funding they have, says Marc Edwards, a civil engineer at Virginia Polytechnic Institute and State University in Blacksburg.
But rewarding researchers for the number of papers they publish results in a “natural selection” of sloppy science, new research shows. Equating scientific “success” with the number of publications promotes not just lazy science but also unethical science, another paper argues. Both articles proclaim that it’s time for a culture shift. But with many scientific labs to fund and little money to do it, what does a new, better scientific enterprise look like?
As young scientists apply for tenure-track academic jobs, they may bring applications listing dozens of papers. Hiring committees often can no longer read or evaluate them all. So they may come to use numbers as shorthand — numbers of papers published, how many times those papers have been cited and whether the journals the papers are published in are high-impact. “Real evaluation of scientific quality is as hard as doing the science in the first place,” Nosek says. “So, just like everyone else, scientists use heuristics to evaluate each other’s work when they don’t have time to dig into it for a complete evaluation.”
Too much reliance on the numbers means that scientists can — unintentionally or not — game the system. They can publish novel results from experiments with low power and effort. Those novel results inflate publication numbers, increase grant funding and get the scientist a job. Ideally, other scientists would catch this careless behavior in peer review, before the studies are published, weeding out poorly done studies in favor of strong ones. But Paul Smaldino, a cognitive scientist at the University of California, Merced, suspected that when the scientific idea of “meeting expectations” on the job is measured in publication rates, bad science would always win out.
So Smaldino and his colleague Richard McElreath at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, decided to create a computer simulation of the scientific “ecosystem,” based on a model for natural selection in a biological ecosystem. Each “lab” in the simulation was represented by a number. Those labs that best met the parameters for success survived and reproduced, spawning other labs that behaved in the same way. Those labs that didn’t meet expectations “died out.” The model allowed Smaldino and McElreath to manipulate the definitions of “success.” And when that success was defined as publishing a lot of novel findings, labs succeeded when they did science that was “low effort” — sloppy and probably irreproducible. Research groups doing high-effort, careful science didn’t publish enough. And they went the way of the dinosaurs.
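A toy version of that dynamic fits in a few lines. This is a re-creation in the spirit of Smaldino and McElreath’s model, not their actual code, and every parameter is invented: labs that put in less effort publish more papers, and the most-published lab spawns imitators.

```python
# Toy natural-selection model of labs, loosely inspired by Smaldino and
# McElreath's simulation; all parameters here are invented.
import random

random.seed(0)
N_LABS, GENERATIONS = 100, 200
labs = [random.uniform(0.1, 1.0) for _ in range(N_LABS)]  # effort per lab
start_mean = sum(labs) / N_LABS

for _ in range(GENERATIONS):
    # Careful work takes longer, so high effort means fewer papers.
    papers = [random.gauss(1.0 / effort, 0.5) for effort in labs]
    # Selection: the least-published lab "dies" and is replaced by a
    # slightly mutated copy of the most-published lab.
    worst = papers.index(min(papers))
    best = papers.index(max(papers))
    labs[worst] = min(1.0, max(0.1, labs[best] + random.gauss(0, 0.05)))

print(f"mean effort: {start_mean:.2f} at the start, "
      f"{sum(labs) / N_LABS:.2f} after {GENERATIONS} generations")
```

In runs of this sketch, mean effort collapses from about 0.55 toward the 0.1 floor: no individual lab decides to cut corners, but the incentive structure steadily selects for the labs that do.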
Even putting an emphasis on replication — in which labs got half credit for double-checking the findings of other groups — couldn’t save the system. “That was a surprise for us,” Smaldino says. He assumed that if the low-effort labs got caught by failures to replicate, their success would go down. But scientists can’t replicate every single study, and in the simulation, the lazy labs still thrived. “The most successful are still going to low effort,” he explains, “because not everyone gets caught.” Smaldino and McElreath published their findings September 21 in Royal Society Open Science.
“I think the results they get are probably reasonable,” says John Ioannidis, a methods researcher at Stanford University in California. “Once you have bad practices they can propagate and ruin the scientific process and become dominant. And I think there’s some truth to it, unfortunately.”
The publish-or-perish culture may be having negative consequences already, Edwards says. “I’ve … seen ethical researchers leave academia, not enter in the first place or become unethical,” he says. Scientists might slice their research findings thinner, trying to publish more findings with less data, breaking experiments down to the least publishable unit. That in itself is not unethical, but Edwards worries the high stakes place scientists on the edge of a slippery slope, from least publishable units to sliced-and-diced datasets. “With the wrong incentives you can make anyone behave unethically, and academia is no different.”
Using a theoretical model of his own, Edwards and his colleague Siddhartha Roy show that, at some point, the current academic system could lead a critical mass of scientists to cross the line to unethical behavior, corrupting the scientific enterprise and losing the public’s trust. “If we ever reach this tipping point where our institutions become inherently corrupt, it will have devastating consequences for humanity,” Edwards says. “The fate of the world depends as never before on good trustworthy science to solve our problems. Will we be there?” Edwards and Roy report their model September 22 in Environmental Engineering Science.
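The article doesn’t give the equations of Edwards and Roy’s model, but the flavor of a tipping point can be shown with a generic critical-mass sketch in the threshold-model tradition (all numbers invented, with no claim that this matches their math): each scientist crosses the line once the share of peers already doing so exceeds a personal threshold, and thresholds cluster around a middle value.

```python
# Generic critical-mass dynamic, NOT Edwards and Roy's actual model.
# Each scientist behaves unethically once the share of peers doing so
# exceeds a personal threshold; thresholds cluster near 0.5.
from statistics import NormalDist

thresholds = NormalDist(mu=0.5, sigma=0.15)  # invented distribution

def settle(share, steps=200):
    """Iterate the unethical share until it stabilizes."""
    for _ in range(steps):
        share = thresholds.cdf(share)  # fraction whose threshold is exceeded
    return share

for start in (0.3, 0.6):
    print(f"starting share {start:.1f} -> settles near {settle(start):.2f}")
# Below the tipping point the behavior dies out (about 0.00); above it,
# it takes over (about 1.00).
```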
To stay away from the slippery slope, scientists will need to change what scientific success looks like. Here’s the rub, though. Scientists are the primary people watching scientists work. When papers go through peer review at scientific journals, when grant proposals are weighed by review committees or when a scientist is considered for an academic job, it’s other scientists who are guarding the gates to scientific success. A single scientist might be publishing papers, peer-reviewing other people’s papers, submitting grants, serving on review committees for other people’s grants, editing a journal, applying for a job and serving on a hiring committee — all at the same time. And so the standards for scientific integrity, for rigorous methods, do not reside with the institutions or the funders or the journals. Those standards are within the scientists themselves. The inmates really do run the scientific asylum.
This is not an inherently bad thing. Science needs people with appropriate expertise to read the highly specialized stuff. But it does mean that a movement for culture change needs to come from within the scientific enterprise itself. “This is more likely to happen if you have a grassroots movement where lots of scientists are convinced and are used to performing research in a given way, leading to more reliable results,” Ioannidis says.
What produces more reliable research, though, still requires … research. “I think these are questions that could be addressed with scientific studies,” Ioannidis says. “This is where I’m interested in taking the research, to get studies that are telling us to [do science] this way, [or] this type of leadership is better…. You can test policies.” Science needs more studies of science.
The first step is admitting that problems exist in the current structure. “We’re bought into it — we invested our whole career into the game as it exists,” Edwards says. “We are taught to be cowards when it comes to addressing these issues, because the personal and professional costs of revealing these problems are so high.” It can be painful to see sloppy science exposed. Especially when that science is performed by colleagues and friends. But Edwards says fixing the system will be worth the pain. “I don’t want to wake up someday and realize I’m in a culture akin to professional cycling, where you have to cheat to compete.”
The solution is to add incentives for having an excellent research process, regardless of outcome, Nosek says. Scientists need to be rewarded, funded and promoted for careful, thorough research — even if it doesn’t produce huge differences and groundbreaking results. Nosek points to ideas like registered reports. These are systems where scientists report their experimental plans and methods to a journal, and the journal accepts the paper — whether or not the research produces any noteworthy results.
Despite his results, Smaldino is optimistic that incentives can change, allowing the best science to rise to the top. “I think science is great,” he says. “I think in general scientists aren’t bad scheming people.” The dire predictions of the models don’t have to come to pass. “This is not a condemnation of science,” Smaldino says. “I love science — there’s no other way to learn a lot of things that are important to learn about the world. But the science we do can always be better.”