Imagine that you've found something really, really cool — say, for instance, you've discovered that female swallows prefer symmetrical male swallows as mates. Your hypothesis: symmetry signals good genes, and female swallows use it to pick healthy males for reproduction. Now imagine that your new discovery sets your subfield of science on fire and gets a whole bunch of people excited about symmetry. Is this true for other animals? Is it true for humans? Is the secret to hotness having a symmetrical face?
This is exactly what happened to Anders Møller in 1991. Møller was a Danish zoologist at Uppsala University, Sweden. He published this result on symmetry in Nature, the most prestigious journal in science. To quote Jonah Lehrer:
> In the three years following, there were ten independent tests of the role of fluctuating asymmetry in sexual selection, and nine of them found a relationship between symmetry and male reproductive success. It didn’t matter if scientists were looking at the hairs on fruit flies or replicating the swallow studies—females seemed to prefer males with mirrored halves. Before long, the theory was applied to humans. Researchers found, for instance, that women preferred the smell of symmetrical men, but only during the fertile phase of the menstrual cycle. Other studies claimed that females had more orgasms when their partners were symmetrical, while a paper by anthropologists at Rutgers analyzed forty Jamaican dance routines and discovered that symmetrical men were consistently rated as better dancers.
The problem: subsequent studies showed less and less of an effect. Over the course of five years, the average effect size attributed to symmetry dropped by 80%. This is known as the 'decline effect', and it shows up in fields ranging from psychology to medicine.
The two interesting articles about this that I've summarised over the past week have been: The Truth Wears Off (originally published in the New Yorker in December 2010 by now-disgraced science journalist Jonah Lehrer) and Jonah Lehrer, Scientists, and the Nature of Truth (written by Virginia Hughes on the science writer blog Last Word on Nothing, shortly after Lehrer's fall from grace).
I was really excited when I first read Lehrer's take on the Decline Effect, because it was a well-written, easy-to-read piece that takes you on a journey around the world with a bunch of worried scientists. But it was difficult to separate what was real and good in the piece from Lehrer's reputation: this was a journalist who was shown to have made shit up, who resigned from the New Yorker in 2012, only 2 years after this article was written.
The decline effect should be interesting to you because it affects how you read scientific news. If you read something like "Researchers Discover 'Anxiety Cells' In The Brain", you probably shouldn't go around saying "OH, I'M SO STRESSED BECAUSE OF ANXIETY CELLS IN MY BRAIN."
In fact, it's worse: the decline effect might mean you shouldn't say that for up to 5 years after the initial study.
Lehrer presents a few explanations from scientists for why this happens. Quoting from my summary:
- Regression to the mean. Valid reason. Problem is, it doesn’t explain everything: "Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”"
- Intellectual fad: when an exciting new phenomenon comes out, journals only want confirming data, so they only publish positive studies. Then after a few years the fad is now an established idea and the attractive thing turns into attacking that established idea.
- Publication bias: scientists and journals prefer positive data over null results.
- Selective reporting: this is way more subtle. It skews results towards positive findings because scientists are human and they have biases. For example, the tiny measurement adjustments and unconscious misperceptions you make during an experiment are probably shaped by your existing biases, and that can end up affecting your research! Classic case: acupuncture studies are nearly always more positive in Asia (where it is widespread) than in the West (where it’s not). Basically, selective reporting is built on the cognitive flaw that you want to be right.
- Significance chasing: John Ioannidis (epidemiologist, Stanford) thinks this is another way selective reporting happens. Most results in science have to clear the bar for statistical significance: conventionally, a p-value below 0.05. That 95% threshold is actually arbitrary, but careers depend on it, so scientists just chase the bar.
- Random noise: randomness when doing experiments; though Lehrer notes this is the weakest possible reason.
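A couple of these explanations, publication bias and the intellectual-fad cycle in particular, are easy to see in a toy simulation. Here's a minimal sketch (entirely my own, not from Lehrer's article): every study measures the same small true effect, but in the "early years" journals only print significant results, so the published literature starts inflated and then appears to "decline" toward the truth once null results start getting printed too.

```python
import math

import numpy as np

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.2    # the real (small) mean difference between groups
N_PER_GROUP = 30     # sample size of each simulated study
N_STUDIES = 2000

observed, pvals = [], []
for _ in range(N_STUDIES):
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    diff = treated.mean() - control.mean()
    # Two-sided z-test (both groups have known sd 1 by construction;
    # a real study would use a t-test).
    z = diff / math.sqrt(2.0 / N_PER_GROUP)
    p = math.erfc(abs(z) / math.sqrt(2.0))
    observed.append(diff)
    pvals.append(p)

observed, pvals = np.array(observed), np.array(pvals)

# "Early years": journals only publish significant results.
early = observed[pvals < 0.05].mean()
# "Later years": everything gets published.
later = observed.mean()

print(f"true effect:          {TRUE_EFFECT}")
print(f"early published mean: {early:.2f}")  # inflated well above the truth
print(f"later published mean: {later:.2f}")  # close to the truth
```

Note that the "decline" here is purely an artefact of which studies get printed: no effect ever actually shrank.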
(Here's a great comic about significance chasing from Randall Munroe of XKCD):
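The comic's punchline (test twenty jelly-bean colours, report the one that "works") is just the arithmetic of multiple comparisons. A quick sketch, with numbers that are mine rather than the article's:

```python
import numpy as np

rng = np.random.default_rng(0)

N_PROJECTS = 5000  # imaginary research projects
N_TESTS = 20       # hypotheses tested per project, all of them truly null

# Under the null hypothesis, p-values are uniform on [0, 1].
pvals = rng.uniform(size=(N_PROJECTS, N_TESTS))

# Fraction of projects that find at least one "significant" result.
false_hit_rate = (pvals < 0.05).any(axis=1).mean()

print(f"analytic:  {1 - 0.95 ** N_TESTS:.2f}")  # prints 0.64
print(f"simulated: {false_hit_rate:.2f}")       # close to the analytic value
```

In other words, at p < 0.05 a project that quietly tests twenty hypotheses has roughly a two-in-three chance of turning up a publishable false positive.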
How do you know Lehrer can be trusted? Well, it turns out that Virginia Hughes went and asked the scientists who were originally interviewed for Lehrer's piece. What you learn is that only the bits where he quotes the scientists directly are accurate. Everything else, the parts where Lehrer comments or editorialises on top of their quotes, is less trustworthy.
This also makes sense given where it was published: The New Yorker has one of the most rigorous fact-checking operations in the industry, and fact checkers verify direct quotes, not a writer's interpretation of them.
But Hughes concludes, and I quote from my paraphrasing in my summary:
The biggest problem Hughes finds is that Lehrer presented an argument that is not supported by the vast majority of scientists, and he never let his readers know this is the minority, non-mainstream view. Hughes finds this depressing because Lehrer wrote in the most elite magazine, with the smartest editors and the best fact checkers, and he still got the story wrong.
If you want a takeaway from this: know that the decline effect is real, that scientists know about it, and that you should be sceptical of scientific studies reported in the news ... at least until a few years in.
There really is no replacement for thinking for oneself. The decline effect is just another reason why.