March for Science will take scientists’ activism to a new level

Lab coats aren’t typical garb for mass demonstrations, but they may be on full display April 22. That’s when thousands of scientists, science advocates and science-friendly citizens are expected to flood the streets in the March for Science. Billed by organizers as both a celebration of science and part of a movement to defend science’s vital role in society, the event will include rallies and demonstrations in Washington, D.C., and more than 400 other cities around the world.

“Unprecedented,” says sociologist Kelly Moore, an expert on the intersection of science and politics at Loyola University Chicago. “This is the first time in American history where scientists have taken to the streets to collectively protest the government’s misuse and rejection of scientific expertise.”

Some scientists have expressed concern that marching coats science in a partisan sheen; others say that cat is long out of the bag. Keeping science nonpartisan is a laudable goal, but scientists are human beings who work and live in societies — and have opinions as scientists and citizens when it comes to the use, or perceived misuse, of science.

Typically when scientists get involved with a political issue, it’s as an expert sharing knowledge that can aid in creating informed policy. There are standard venues for this: Professional societies review evidence and make statements about a particular issue, researchers publish findings or consensus statements in reports or journals, and sometimes scientists testify before Congress.

In extreme circumstances, though, scientists have embraced other forms of activism. To broadly categorize, there are:

Celebrity voices
In 1938, amid the rise of fascism and use of false scientific claims to support the racism embedded in Nazism, prominent German-American anthropologist Franz Boas released his “Scientists Manifesto.” Signed by nearly 1,300 scientists, including three Nobel laureates, the manifesto denounced the unscientific tenets of Nazism and condemned fascist attacks on scientific freedom.
Fear of war of a different sort prompted Albert Einstein, Bertrand Russell and nine other scientists to compose a manifesto in 1955 calling for nuclear disarmament. The Russell-Einstein Manifesto led to the first Pugwash Conference on Science and World Affairs, which sought “a world free of nuclear weapons and other weapons of mass destruction.”
Wildlife biologist Rachel Carson eloquently synthesized research on the effects of pesticides in her wildly popular book Silent Spring, published in 1962 (she would later testify before Congress). Despite attacks from industry and some in government, Carson’s work helped launch the modern environmental movement, paving the way for the establishment of the Environmental Protection Agency.

Advocacy groups
In the 1930s, chapters of the American Association of Scientific Workers (based loosely on a similar British organization) formed in various cities including Philadelphia, Boston and Chicago. Despite broad goals — promoting science for the benefit of society, stressing public science education, taking a moral stand against government and industry misuse of science — infighting and members’ opposing views limited the group’s effectiveness.

In the decades since, other broadly focused groups — for example, Science for the People (born out of a group started in 1969 by physicists frustrated by their professional society’s lack of action against the Vietnam War), the Union of Concerned Scientists, the American Association for the Advancement of Science — have picked up the banner, speaking out, circulating petitions and more. Single-issue groups such as the Environmental Defense Fund and the Council for Responsible Genetics have proliferated as well.

Protest marchers
Many scientists have traded pocket protectors for placards, hitting the streets as concerned scientist-citizens. Academic scientists frequently joined university students in rallies against the Vietnam War in the 1960s and early ’70s. Linus Pauling famously protested nuclear testing in a march outside the White House in 1962 (he was in town for a dinner with the Kennedys honoring Nobel laureates). Carl Sagan was one of hundreds arrested for protesting nuclear testing at a Nevada site in 1987. And plenty of scientist-citizens joined the inaugural Women’s March on Washington in January and the annual People’s Climate March (the 2017 one is scheduled for April 29, just a week after the March for Science).

But the March for Science feels different, say the science historians. Transforming concern into sign-toting, pavement-pounding, slogan-shouting activism is motivated by a collective — and growing — sense of outrage that the federal government is undermining, ignoring, even discarding and stifling science. That’s hitting many scientists not just in their livelihoods, but in the very fabric of their DNA. “Part of [President] Trump’s message is that science is not going to be thought of as part of a collective good that’s essential for decision making in a democracy,” Moore says. “We have not seen this outright rejection of science by the state.”

That rejection has come in many forms, says David Kaiser, a science historian at MIT. “It’s a cluster of issues: cutbacks in basic research across many domains, the censure and censorship regarding data collected by the government or the ability of government scientists to speak, and a range of threats to academic freedom and the research process generally.”

It’s a sign of the times, too, says Al Teich, a science policy expert at George Washington University in Washington, D.C. President Reagan, for example, slashed science funding in his 1981 budget. But many more people today are aware of science’s role in society, says Teich, the former director for science and policy programs at AAAS. This awareness may be fueling the upcoming march. “The number of people engaged and the range of scientists involved is not something that I’ve ever seen before.”

Measuring the impact of any of these efforts is difficult. They aren’t controlled laboratory experiments, after all. But one thing this march may do is spawn a new form of activism, says Moore: more scientists running for political office.

Beetles have been mooching off insect colonies for millions of years

Mooching roommates are an ancient problem. Certain species of beetles evolved to live with and leech off social insects such as ants and termites as long ago as the mid-Cretaceous, two new beetle fossils suggest. The finds date the behavior, called social parasitism, to almost 50 million years earlier than previously thought.

Ants and termites are eusocial — they live in communal groups, sharing labor and collectively raising their young. The freeloading beetles turn that social nature to their advantage. They snack on their hosts’ larvae and use their tunnels for protection, while giving nothing in return.

Previous fossils have suggested that this social parasitism has been going on for about 52 million years. But the new finds push that date way back. The beetles, preserved in 99-million-year-old Burmese amber, would have evolved relatively shortly after eusociality is thought to have popped up.

One beetle, Mesosymbion compactus, was reported in Nature Communications in December 2016. A different group of researchers described the other, Cretotrichopsenius burmiticus, in Current Biology on April 13. Both species have shielded heads and teardrop-shaped bodies, similar to modern termite-mound trespassers. Those adaptations aren’t just for looks. Like a roommate who’s found his leftovers filched one too many times, termites frequently turn against their pilfering housemates.

40 more ‘intelligence’ genes found

Smarty-pants have 40 new reasons to thank their parents for their powerful brains. By sifting through the genetics of nearly 80,000 people, researchers have uncovered 40 genes that may make certain people smarter. That brings the total number of suspected “intelligence genes” to 52.

Combined, these genetic attributes explain only a very small amount of overall smarts, or lack thereof, researchers write online May 22 in Nature Genetics. But studying these genes, many of which play roles in brain cell development, may ultimately help scientists understand how intelligence is built into brains.
Historically, intelligence research has been mired in controversy, says neuroscientist Richard Haier of the University of California, Irvine. Scientists disagreed on whether intelligence could actually be measured and if so, whether genes had anything at all to do with the trait, as opposed to education and other life experiences. But now “we are so many light-years beyond that, as you can see from studies like this,” says Haier. “This is very exciting and very positive news.”

The results were possible only because of the gigantic number of people studied, says study coauthor Danielle Posthuma, a geneticist at VU University Amsterdam. She and colleagues combined data from 13 earlier studies on intelligence, some published and some unpublished. Posthuma and her team looked for links between intelligence scores, measured in different ways in the studies, and variations held in the genetic instruction books of 78,308 children and adults. Called a genome-wide association study or GWAS, the method looks for signs that certain quirks in people’s genomes are related to a trait.
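At its core, a GWAS is one small statistical test repeated across huge numbers of genetic variants, with a stringent significance threshold to compensate for all that testing. The sketch below uses synthetic data and plain per-variant regression; the effect size and counts are made-up illustrative values, and the real study's pipeline (meta-analysis across 13 cohorts, corrections for ancestry and relatedness) is far more involved:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_people, n_variants = 2000, 100

# Genotypes: 0, 1 or 2 copies of the minor allele at each variant.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))

# Synthetic trait score: mostly noise, but variant 0 has a small
# real effect (an invented value for illustration).
trait = 0.3 * genotypes[:, 0] + rng.standard_normal(n_people)

# Regress the trait on each variant separately and keep the p-values.
pvalues = np.array([
    stats.linregress(genotypes[:, j], trait).pvalue
    for j in range(n_variants)
])

# Bonferroni threshold: testing many variants inflates false positives,
# so the per-test bar is raised accordingly.
hits = np.flatnonzero(pvalues < 0.05 / n_variants)
print("variants passing the threshold:", hits)
```

Real GWASs test millions of variants rather than 100, which is why the field's conventional genome-wide threshold is a far stricter 5 × 10⁻⁸ than the toy Bonferroni cut used here.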

This technique pointed out particular versions of 22 genes, half of which were not previously known to have a role in intellectual ability. A different technique identified 30 more intelligence genes, only one of which had been previously found. Many of the 40 genes newly linked to intelligence are thought to help with brain cell development. The SHANK3 gene, for instance, helps nerve cells connect to partners.

Together, the genetic variants identified in the GWAS account for only about 5 percent of individual differences in intelligence, the authors estimate. That means that the results, if confirmed, would explain only a very small part of why some people are more intelligent than others. The gene versions identified in the paper are “accounting for so little of the variance that they’re not telling us much of anything,” says differential developmental psychologist Wendy Johnson of the University of Edinburgh.

Still, knowing more about the genetics of intelligence might ultimately point out ways to enhance the trait, an upgrade that would help people at both the high and low ends of the curve, Haier says. “If we understand what goes wrong in the brain, we might be able to intervene,” he says. “Wouldn’t it be nice if we were all just a little bit smarter?”

Posthuma, however, sees many roadblocks. Beyond ethical and technical concerns, basic brain biology is incredibly intricate. Single genes have many jobs. So changing one gene might have many unanticipated effects. Before scientists could change genes to increase intelligence, they’d need to know everything about the entire process, Posthuma says. Tweaking genetics to boost intelligence “would be very tricky.”

Every breath you take contains a molecule of history

Julius Caesar could have stayed home on March 15, 44 B.C. But mocking the soothsayer who had predicted his death, the dictator rode in his litter to Rome’s Forum. There he met the iron daggers of 60 senators.

As he lay in a pool of blood, he may have gasped a final reproach to his protégé Brutus: You too, my son? Or maybe not. But he certainly would have breathed a dying breath, a final exhalation of some 25 sextillion gas molecules. And it’s entirely possible that you just breathed in one of them.
In fact, calculating the probability of a particle of Caesar’s dying breath appearing in any given liter of air (the volume of a deep breath) has become a classic exercise for chemistry and physics students. If you make a few assumptions about the mixing of gases and the lifetimes of molecules in the atmosphere, it turns out that, on average, one molecule of “Caesar air” — or any other historic liter of air, for that matter — appears in each breath you take.
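The exercise itself fits in a few lines of arithmetic. Here is a minimal sketch with the standard textbook assumptions (uniform mixing, no molecules lost to oceans or chemistry) spelled out; exact inputs vary by textbook, which is why answers range from about one molecule per breath to a handful:

```python
# Back-of-envelope version of the classic exercise. Every input is a
# rough assumption, so the result is an order-of-magnitude estimate.
AVOGADRO = 6.022e23  # molecules per mole

# Total molecules in the atmosphere: ~5.1e18 kg of air at a mean
# molar mass of ~29 g/mol.
atmosphere_molecules = 5.1e18 * 1000 / 29 * AVOGADRO  # ~1.1e44

# Caesar's final exhalation: ~25 sextillion molecules, per the article.
breath_molecules = 2.5e22

# Molecules in one liter of air at sea level: 1/22.4 of a mole.
liter_molecules = AVOGADRO / 22.4  # ~2.7e22

# Assume the dying breath is by now uniformly mixed through the
# whole atmosphere.
expected = breath_molecules / atmosphere_molecules * liter_molecules
print(f"expected 'Caesar molecules' per liter: {expected:.1f}")
```

With these inputs the expectation comes out to a few molecules per liter — of order one, which is all the exercise claims.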

Author Sam Kean begins his book Caesar’s Last Breath with this exercise, noting that “we can’t escape the air of those around us.” It’s all recycled, and every day we breathe in a bit of our, and Earth’s, history. “The story of Earth,” he writes, “is the story of its gases.”

Kean, author of a best seller about the periodic table, The Disappearing Spoon, then tells that story. As he did in his fascinating portraits of the elements, Kean profiles individual gases such as nitrogen and oxygen primarily through the scientists and entrepreneurs who discovered or sought to harness them. These are quirky men (and they are mostly men) — every bit as obsessed, greedy and brilliant as one could hope for in a page-turner.

Along with lesser-known backstories of textbook heroes such as James Watt, Antoine-Laurent Lavoisier and Albert Einstein (who was surprisingly obsessed with building a better refrigerator), Kean clearly delights in weaving in the unexpected. In the discussion of helium, we learn about Joseph-Michel Montgolfier, the papermaker who was inspired to build the first hot-air balloon as he watched his wife’s pantaloons billowing suggestively above a fire. And in a chapter on the radioactive elements carried in nuclear fallout, there’s Pig 311, a sow that survived a nuclear test blast only to be used as propaganda for the weapons’ supposed safety.

Along the way, Kean threads in the history of Earth’s atmosphere in a surprisingly compelling narrative of geologic history. He steps aside from Lavoisier’s work on life-giving oxygen, for example, to describe the Great Oxygenation Event, which infused the atmosphere a couple billion years ago with a gas that, at the time, was toxic to most living things. The explanations of science here and throughout the book are written clearly and at a level that should be understandable with a high school education. And while they’re straightforward, the explanations have enough depth to be satisfying; by the end of the book, you realize you’ve learned quite a bit.
Even those who rarely read science will enjoy the drama — death, for instance, plays a big role in these stories. Over and over, we learn, men have taken gases’ powers too lightly, or wielded their own power too cruelly, and paid the price. Fritz Haber, for instance, could have died a hero for finding a way to make fertilizer from the nitrogen in air. Instead, he died broke and loathed for his World War I work on gas warfare.

Then there was Harry Truman — not that Truman, but the one who refused to leave his home when scientists warned of an impending volcanic eruption. Truman contended that officials were “lying like horses trot” right up until Mount St. Helens blew searing gases that erased him from the mountainside.

The links between these stories can seem at first as ephemeral as the gases, but together they tell the story of the birth of the atmosphere and humans’ history in it. In the end, like Caesar’s breath, it all comes full circle.

This history book offers excellent images but skimps on modern science

Books about the history of science, like many other histories, must contend with the realization that others have come before. Their tales have already been told. So such a book is worth reading, or buying, only if it offers something more than the same old stories.

In this case, The Oxford Illustrated History of Science offers most obviously an excellent set of illustrations and photographs from science’s past, from various ancient Egyptian papyruses to the Hubble Space Telescope’s ultradeep view of distant galaxies. Some of the images will be familiar to science fans; many others are obscure but apt; nearly all help illustrate various aspects of science’s history.
And yet the pictures, while many may be worth more than 10,000 words, are still just complements to the text. Oxford attempts a novel organization for recounting the story of science: a sometimes hard-to-follow mix of chronological and topical. The first section, “Seeking Origins,” has six chapters that cover ancient Mediterranean science, science in ancient China, medieval science (one chapter for the Islamic world and Europe, one for China), plus the scientific revolution and science in the Enlightenment. The second section, “Doing Science,” shifts to experimenting, fieldwork, biology, cosmology, theory and science communication.
Each chapter has a different author, which has the plus of bringing distinct expertise to each subject matter but the minus of vast divergence in readability and caliber of content. Some chapters (see “Exploring Nature,” on field science) are wordy, repetitive and lack scientific substance. Others (“Mapping the Universe”) are compelling, engaging and richly informative. A particularly disappointing chapter on biology (“The Meaning of Life”) focuses on 19th century evolution, with only a few paragraphs for the life science of the 20th and 21st centuries. That chapter closes with an odd, antiscientific tone lamenting the “huge numbers of people … addicted to antidepressants” and complaining that modern biology (and neuroscience) “threatens to undermine traditional values of moral responsibility.”

Some of the book’s strongest chapters are the earliest, especially those that cover aspects of science often missing in other histories, such as science in China. Who knew that the ancient Chinese had their own set of classical elements — not the Greeks’ air, earth, water and fire, but rather wood, fire, water, soil and metal?

With the book’s second-half emphasis on how science was done rather than what science found out, the history that emerges is sometimes disjointed and out of order. Discussions of the modern view of the universe, which hinges on Einstein’s general theory of relativity, appear before the chapter on theory, where relativity is mentioned. In fact, both relativity and quantum theory are treated superficially in that chapter, as examples of the work of theorists rather than the components of a second scientific revolution.
No doubt lack of space prevented deeper treatment of science from the last century. Nevertheless, the book’s merits outweigh its weaknesses. For an accessible account of the story of pre-20th century science, it’s informative and enjoyable. For more recent science, you can at least look at the pictures.

Intense storms provide the first test of powerful new hurricane forecast tools

This year’s Atlantic hurricane season has already proven to be active and deadly. Powerful hurricanes such as Harvey, Irma and Maria are also providing a testing ground for new tools that scientists hope will save lives by improving forecasts in various ways, from narrowing a storm’s future path to capturing swift changes in the intensity of storm winds.

Some of the tools that debuted this year — such as the GOES-16 satellite — are already winning praise from scientists. Others, such as a new microsatellite system aiming to improve measurements of hurricane intensity and a highly anticipated new computer simulation that forecasts hurricane paths and intensities, are still in the calibration phase. As these tools get an unprecedented workout thanks to an unusually ferocious series of storms, scientists may know in a few months whether hurricane forecasting is about to undergo a sea change.

The National Oceanic and Atmospheric Administration’s GOES-16 satellite is perhaps the clearest success story of this hurricane season so far. Public perceptions of hurricane forecasts tend to focus on uncertainty and conflicting predictions. But in the big picture, hurricane models adeptly forecast Irma’s ultimate path to the Florida Keys nearly a week before it arrived there, says Brian Tang, an atmospheric scientist at the University at Albany in New York.
“I found that remarkable,” he says. “Ten or so years ago that wouldn’t have been possible.”

One reason for this is GOES-16, which launched late last year and will become fully operational in November. The satellite offers images at four times the resolution of previous satellites. “It’s giving unparalleled details about the hurricanes,” Tang says, including data on wind speeds and water temperatures delivered every minute that are then fed into models.

GOES-16’s crystal-clear images also give forecasters a better picture of the winds swirling around a storm’s central eye. But more data from this crucial region is needed to improve predictions of just how strong a hurricane might get. Scientists continue to struggle to predict rapid changes in hurricane intensity, Tang says. He notes how Hurricane Harvey, for example, strengthened suddenly to become a Category 4 storm right before it made landfall in Texas, offering emergency managers little time to issue warnings. “That’s the sort of thing that keeps forecasters up at night,” he says.
In December, NASA launched a system of eight suitcase-sized microsatellites called the Cyclone Global Navigation Satellite System, or CYGNSS, into orbit. The satellites measure surface winds near the inner core of a hurricane, such as between the eyewall and the most intense bands of rain, at least a couple of times a day. Those regions have previously been invisible to satellites, measured only by hurricane-hunter airplanes darting through the storm.

“Improving forecasts of rapid intensification, like what occurred with Harvey on August 25, is exactly what CYGNSS is intended to do,” says Christopher Ruf, an atmospheric scientist at the University of Michigan in Ann Arbor and the lead scientist for CYGNSS. Results from CYGNSS measurements of both Harvey and Irma look very promising, he says. While the data are not being used to inform any forecasts this year, the measurements are now being calibrated and compared with hurricane-hunter flight data. The team will give the first detailed results from the hurricane season at the annual meeting of the American Geophysical Union in December.
Meanwhile, NOAA has also been testing a new hurricane forecast model this year. The U.S. forecasting community is still somewhat reeling from its embarrassing showing during 2012’s Hurricane Sandy, which the National Weather Service had predicted would go out to sea while a European meteorological center predicted, correctly, that it would squarely hit New York City. In the wake of that event, Congress authorized $48 million to improve U.S. weather forecasting, and in 2014 NOAA held a competition to select a new weather prediction tool to improve its forecasts.

The clear winner was an algorithm developed by Shian-Jiann Lin and colleagues at NOAA’s Geophysical Fluid Dynamics Laboratory in Princeton, N.J. In May, NOAA announced that it would test the new model this hurricane season, running it alongside the more established operational models to see how it stacks up. Known as FV3 (short for Finite-Volume Cubed-Sphere Dynamical Core), the model divides the atmosphere into a 3-D grid of boxes and simulates climate conditions within the boxes, which may be as large as 4 kilometers across or as small as 1 kilometer across. Unlike existing models, FV3 can also re-create vertical air currents that move between boxes, such as the updrafts that are a key element of hurricanes as well as tornadoes and thunderstorms.
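The “finite-volume” in FV3’s name refers to a general numerical technique: store the amount of a quantity in each grid box and update it from the fluxes across box faces, which conserves that quantity by construction. A minimal 1-D sketch of the idea — first-order upwind advection of a tracer on a ring of boxes, nothing like the full 3-D dynamical core:

```python
import numpy as np

n_boxes = 100
dx, dt, wind = 1.0, 0.5, 1.0   # box width, time step, wind speed
courant = wind * dt / dx       # must stay <= 1 for stability

# Initial state: a blob of tracer (e.g., moisture) in a few boxes.
q = np.zeros(n_boxes)
q[40:50] = 1.0
initial_total = q.sum()

for _ in range(200):
    # With wind > 0, the flux through each box's left face comes from
    # its upwind (left) neighbor; np.roll makes the ring periodic.
    flux_in = courant * np.roll(q, 1)
    flux_out = courant * q
    q = q + flux_in - flux_out

# Whatever flows out of one box flows into its neighbor, so the total
# amount of tracer is conserved to rounding error.
print("tracer total, before vs. after:", initial_total, q.sum())
```

That built-in conservation is one reason finite-volume cores are attractive for weather and climate models, where slowly leaking mass or moisture would corrupt a long simulation.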

But FV3’s performance so far this year hasn’t been a slam dunk. FV3 did a far better job at simulating the intensity of Harvey than the other two leading models, but it lagged behind the European model in determining the hurricane’s path, Lin says. As for Irma, the European model outperformed the others on both counts. Still, Lin says he is confident that FV3 is on the right track in terms of its improvement. That’s good because pressure to work out the kinks may ramp up rapidly. Although NOAA originally stated that FV3 would be operational in 2019, “I hear some hints that it could be next year,” he says.

Lin adds that a good model alone isn’t enough to get a successful forecast; the data that go into a model are ultimately crucial to its success. “In our discipline, we call that ‘garbage in, garbage out,’” he says. With GOES-16 and CYGNSS nearly online, scientists are looking forward to even better hurricane models thanks to even better data.

The brain’s helper cells have a hand in learning fear

WASHINGTON, D.C. — Helper cells in the brain just got tagged with a new job — forming traumatic memories.

When rats experience trauma, cells in the hippocampus — an area important for learning — produce signals for inflammation, helping to create a potent memory. But most of those signals aren’t coming from the nerve cells, researchers reported November 15 at the Society for Neuroscience meeting.

Instead, more than 90 percent of a key inflammation protein comes from astrocytes. This role in memory formation adds to the repertoire of these starburst-shaped cells, once thought to do little more than provide food and support to more important brain cells (SN Online: 8/4/15).
The work could provide new insight into how the brain creates negative memories that contribute to post-traumatic stress disorder, said Meghan Jones, a neuroscientist at the University of North Carolina at Chapel Hill.

Jones and her colleagues gave rats a short series of foot shocks painful enough to “make you curse,” she said. A week after that harrowing experience, rats confronted with a milder shock remained jumpy. In some rats, Jones and her colleagues inhibited astrocyte activity during the original trauma, which prevented the cells from releasing the inflammation protein. Those rats kept their cool in the face of the milder shock.

These preliminary results show that neurons get a lot of help in creating painful memories. Studies like these are “changing how we think about the circuitry that’s involved in depression and post-traumatic stress disorder,” says neuroscientist Georgia Hodes of Virginia Tech in Blacksburg. “Everyone’s been focused on what neurons are doing. [This is] showing an important effect of cells we thought of as only being supportive.”

The most distant quasar ever spotted hails from the universe’s infancy

The most distant quasar yet spotted sends its light from the universe’s toddler years. The quasar, called J1342+0928, existed when the universe was only 690 million years old, right when the first stars and galaxies were forming.

Quasars are bright disks of gas and dust swirling around supermassive black holes. The black hole that powers J1342+0928 has a mass equivalent to 800 million suns, and it’s gobbling gas and dust so fast that its disk glows as bright as 40 trillion suns, Eduardo Bañados of the Carnegie Institution for Science in Pasadena, Calif., and his colleagues report December 6 in Nature.
“The newly discovered quasar gives us a unique photo of the universe when it was 5 percent [of] its present age,” Bañados says. “If the universe was a 50-year-old person, we would be seeing a photo of that person when she/he was 2 1/2 years old.”

The black hole powering this quasar is smaller than that of the previous distance record-holder, which weighs as much as 2 billion suns and whose light is 12.9 billion years old, emitted when the universe was just 770 million years old (SN: 7/30/11, p. 12). Scientists still aren’t sure how supermassive black holes like these grew so big so early.

“They either have to grow faster than we thought, or they started as a bigger baby,” says study coauthor Xiaohui Fan of the Steward Observatory in Tucson.

The temperature of the gas surrounding the newfound quasar places it squarely in the epoch of reionization (SN: 4/1/17, p. 13), when the first stars stripped electrons from atoms of gas that filled intergalactic space. That switched the universe’s gas from mostly cold and neutral to hot and ionized. When this particular black hole formed, the universe was about half hot and half cold, Fan says.
“We’re very close to the epoch when the first-generation galaxies are appearing,” Fan says.

New Horizons’ next target might have a moon

NEW ORLEANS — The New Horizons team may get more than it bargained for with its next target. Currently known as 2014 MU69, the object might, in fact, be two rocks orbiting each other — and those rocks may themselves host a small moon.

MU69 orbits the sun in the Kuiper Belt, a region more than 6.5 billion kilometers from Earth. That distance makes it difficult to get pictures of the object directly. But last summer, scientists positioned telescopes around the globe to catch sight of MU69’s shadow as it passed in front of a distant background star (SN Online: 7/20/17), a cosmic coincidence known as an occultation.
Analyzing that flickering starlight raised the idea that MU69 might have two lobes, like a peanut, or might even be a pair of distinct objects. Whatever its shape, MU69 is not spherical and may not be alone, team members reported in a news conference on December 12 at the fall meeting of the American Geophysical Union.

Another stellar flicker sighting raised the prospect of a moon. On July 10, NASA’s airborne Stratospheric Observatory for Infrared Astronomy observed MU69 pass in front of a different star (SN: 3/19/16, p. 4). SOFIA saw what looked like a new, shorter dip in the star’s light. Comparing that data with orbit calculations from the European Space Agency’s Gaia spacecraft suggested that the blip could be another object around MU69.

A double object with a smaller moon could explain why MU69 sometimes shifts its position from where scientists expect it to be during occultations, said New Horizons team member Marc Buie of the Southwest Research Institute in Boulder, Colo.

The true shape will soon be revealed. The New Horizons spacecraft set its sights on the small space rock after flying past Pluto in 2015, and will fly past MU69 on January 1, 2019.

AI has found an 8-planet system like ours in Kepler data

Our solar system is no longer the sole record-holder for most known planets circling a star.

An artificial intelligence algorithm sifted through data from the planet-hunting Kepler space telescope and discovered a previously overlooked planet orbiting Kepler 90 — making it the first star besides the sun known to host eight planets. This finding, announced in a NASA teleconference December 14, shows that the kinds of clever computer codes used to translate text and recognize voices can also help discover strange new worlds.
The discovery, also reported in a paper accepted to the Astronomical Journal, can also help astronomers better understand the planetary population of our galaxy. “Finding systems like this that have lots of planets is a really neat way to test theories of planet formation and evolution,” says Jeff Coughlin, an astronomer at the SETI Institute in Mountain View, Calif., and NASA’s Ames Research Center in Moffett Field, Calif.

Kepler 90 is a sunlike star about 2,500 light-years from Earth in the constellation Draco. The latest addition to Kepler 90’s planetary family is a rocky planet about 30 percent larger than Earth called Kepler 90i. It, too, is the third planet from its sun — but with an estimated surface temperature higher than 400° Celsius, it’s probably not habitable.

The seven previously known planets in this system range from small, rocky worlds like Kepler 90i to gas giants, which are all packed closer to their star than Earth is to the sun. “It’s very possible that Kepler 90 has even more planets,” study coauthor Andrew Vanderburg, an astronomer at the University of Texas at Austin, said in the teleconference. “There’s a lot of unexplored real estate in the Kepler 90 system.”
Astronomers have identified over 2,300 new planets in Kepler data by searching for tiny dips in a star’s brightness when a planet passes in front of it. Kepler has collected too much data for anyone to go through it all by hand, so humans or computer programs typically only verify the most promising signals of the bunch. That means that worlds that produce weaker light dips — like Kepler 90i — can get passed over. Vanderburg and Christopher Shallue, a software engineer at Google in Mountain View, Calif., designed a computer code called a neural network, which mimics the way the human brain processes information, to seek out such overlooked exoplanets.
Researchers previously automated Kepler data analysis by hard-coding programs with rules about how to detect bona fide exoplanet signals, Coughlin explains. Here, Vanderburg and Shallue provided their code with more than 10,000 Kepler signals that had been labeled by human scientists as either exoplanet or non-exoplanet signals. By studying these examples, the neural network learned on its own what the light signal of an exoplanet looked like, and could then pick out the signatures of exoplanets in previously unseen signals.
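That learning setup can be sketched with a far simpler model than the deep convolutional network Shallue and Vanderburg actually used: generate labeled light curves (dip vs. no dip), fit a classifier to the labels, and let it discover the dip signature on its own. Everything below — the noise level, dip depth, and the one-neuron logistic-regression stand-in for their network — is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 200

def light_curve(has_planet):
    """Synthetic stellar brightness: noise, plus a shallow dip if a
    planet transits. Depth and noise are made-up illustrative values."""
    flux = 1.0 + 0.001 * rng.standard_normal(n_points)
    if has_planet:
        center = rng.integers(20, n_points - 20)
        flux[center - 5:center + 5] -= 0.01   # transit dip
    return flux

# Labeled training set, as in a supervised search: 1 = planet, 0 = not.
X = np.array([light_curve(i % 2 == 0) for i in range(400)])
y = (np.arange(400) % 2 == 0).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize features

# Logistic regression trained by gradient descent: a single "neuron"
# learns, from the labels alone, which flux patterns mean "planet."
w, b = np.zeros(n_points), 0.0
for _ in range(1000):
    z = np.clip(X @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))              # predicted probabilities
    grad = p - y
    w -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean()

accuracy = (((X @ w + b) > 0) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The real network had to cope with far messier signals — stellar variability, instrument artifacts, dips from binary stars — which is why a deep architecture and 10,000-plus vetted training examples were needed rather than this toy.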

The fully trained neural network examined 670 star systems known to host multiple planets to see whether previous searches had missed anything. It spotted Kepler 90i, as well as a sixth, Earth-sized planet around the star Kepler 80. This feat marks the first time a neural network program has successfully identified new exoplanets in Kepler data, Jessie Dotson, an astrophysicist at NASA’s Ames Research Center, said at the teleconference.

Vanderburg and Shallue now plan to apply their neural network to Kepler’s full cache of data on more than 150,000 stars, to see what other unrecognized exoplanets it might turn up.

Coughlin is also excited about the prospect of using artificial intelligence to assess data from future exoplanet search missions, like NASA’s TESS satellite set to launch next year. “The hits are going to keep on coming,” he says of potential exoplanet signals. Having self-taught computer programs help humans slog through the data could significantly speed up the rate of scientific discovery.