randombio.com | science commentary
Tuesday, May 01, 2018

Science under siege, part 5

A reproducibility crisis, you say? Talk to the hand.


This scientific “reproducibility crisis” is a remarkable social phenomenon. Everyone is sure it really exists, yet there's very little actual evidence for it. It's one of those topics where everybody on all sides is wrong . . . well, except me, of course.

A few days ago somebody sent me a link to a report by something called the National Association of Scholars, an advocacy group with a slight conservative flavor. It's another in a long series of articles claiming that modern science is becoming irreproducible. The critics are turning into a giant conga line. But is any of what they claim true?

The industry model

Industry has mostly given up on basic research. We all know the reason: they're product-oriented, and the top-down corporate model tends to stifle creativity. It's cheaper to “insource,” which means licensing something from a university or startup, and to skim results from the research literature and try to commoditize them, rather than try to discover something new that their competitors can run with at no cost. Economists call this the tragedy of the commons.

It's sausage-making in action. I've sat in conference rooms many times watching lab directors with dollar signs in their eyes hyping up poorly supported results to industry leaders, giving them snowjobs when challenged. I often wondered if any of them actually believed the bullshit they were getting.

I've also heard it from industry people. They're unhappy that grabbing results out of the research literature and turning them into products doesn't work, and they blame academics for publishing things that are wrong.

It's a classic social dynamic. The boss grabs a result out of the literature, patents it, and tells their staff to make it work. The staff report that, for whatever reason, they can't reproduce it, and avoid being fired by blaming academia.

It's a serious accusation, and it's important to understand what's happening. If we apply the wrong solution, research will simply grind to a stop and millions of people will die from uncured diseases. If you think that can't happen, take a look at what's happening to the pharma industry.

The academic model

Academia has the opposite problem. Knowledge is merely a byproduct, a pollutant as it were, that academics hold out as bait to get what they really want: papers and grants. Academic researchers don't validate their assays (they might claim to, but often they're unclear on the concept—validation is an elaborate series of tests that can take six months). Academics rely on products sold by industry; the goal is to get the assay to work, do the experiment, get whatever they get, write it up, and move on.

Since there's no job security for junior people, they're under the most pressure to get papers and grants. If they don't get them, their career is over. I've seen cases where a university gives people one year to get a project working and get a grant. If they don't get funded on the first attempt, they're thrown out. And they hate academia from then on.

The critics are doing research too

Critics of science are claiming something as fact. This means they ought to follow the same rules as everyone else: describe their methods in exact detail and state dispassionately whether the evidence supports their hypothesis. That's basic scholarly procedure. If they don't follow it, they're just doing politics.

The Scholars report is motivated in large part by skepticism about global warming: computer models don't match reality, the authors say, so the warming theory is flawed.

That may be true, but their arguments attributing it to a basic problem in science are weak. They cite two famous reports as evidence, both of which are seriously flawed.

  1. Amgen tried and failed to reproduce a number of cancer results. This is not strictly true. One guy, named C.G. Begley, who worked at Amgen, claims to have tried and failed to reproduce the results of some cancer studies. It's not clear whether Amgen commissioned or supported this, but what is clear is that Begley is no longer at Amgen.

    The problem is this: where is the evidence that backs up this guy's claim? I have still not been able to find it. Claiming that someone else's research is wrong demands the same scientific rigor and disclosure as the original research. Otherwise, you're just hurling accusations.

  2. Ioannidis theorized that most published research is wrong. This guy's theory was that the vast majority (99.99%) of possible hypotheses are wrong, and therefore everyone should use a p-value of 0.0001 or some similar number as a criterion for statistical significance. Since most people use 0.05 or 0.01, he concludes that their results are invalid.

    This is a badly flawed argument, as I discussed here, and since much of Ioannidis's argument is based on this flawed theory, his conclusions too are bald, unsupported assertions. (A back-of-the-envelope version of the arithmetic appears just after this list.)
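
To see what the argument turns on, here's a back-of-the-envelope sketch (my own illustration of this style of reasoning, not Ioannidis's actual model): assume some prior probability that a tested hypothesis is true, a significance threshold, and a statistical power, and compute what fraction of “significant” results would actually be true. The numbers fed in below are made-up inputs, not data.

```python
# Rough positive-predictive-value arithmetic behind "most findings are false"
# claims. The prior, alpha, and power values are illustrative assumptions.

def ppv(prior, alpha, power):
    """Fraction of statistically significant results that reflect a true
    effect, given the prior probability that a tested hypothesis is true
    (prior), the significance threshold (alpha), and the test's power."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# With the 99.99%-of-hypotheses-are-wrong prior mentioned above, almost
# every "significant" result at p < 0.05 would be a false lead:
print(ppv(prior=0.0001, alpha=0.05, power=0.8))   # ~0.002
# With a less pessimistic prior, the same test looks perfectly respectable:
print(ppv(prior=0.5, alpha=0.05, power=0.8))      # ~0.94
```

The whole conclusion rides on the assumed prior, and that prior is exactly the unsupported number being asserted.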

It may or may not be true that science is being done badly, but you can't make the claim by doing bad research yourself. Both these critics have made the same mistakes—not showing their data, basing conclusions on flawed theories, and not disclosing their methodology—that they accuse others of making. That doesn't mean their claims are false, but they have not been scientifically demonstrated. By Ioannidis's own argument, there would be a 99.99% chance they're not true.

But suppose we accept the conclusion. We've all seen bad research. I've seen invalid reasoning, faulty statistics, and plain old data fabrication and plagiarism many times. It definitely exists. But, just as with global warming, there are three questions we should ask.

  1. What's really causing it? Blame must go somewhere if you're going to make improvements.
  2. Does it harm science more than the proposed solution? No good scientist will accept something as true just from a single report. At a minimum, they will withhold judgment until somebody else repeats it. Does the taxpayer want to double the funding for science research? I didn't think so.
  3. Is it the most serious problem science is facing? There are much bigger problems in science than occasional irreproducible results, the biggest one being the lemming effect.

Who is to blame?

Here's an example that might shed light on the first question. In the lab, we separate different molecules using an HPLC column, which is a metal tube about 150 mm long packed with special material. We inject samples into the tube, and each molecule comes out at a specific time, called the retention time. The molecules then get detected by some instrument.

Last month I bought a new type of column called a porous graphitic carbon column. The first day it worked beautifully: all my standards eluted at 6.00 minutes, so I ran some samples. Suddenly they started coming out at 5.5 minutes. By the next day the retention time was down to 2.5 minutes. The column had great resolution, but retention times were not reproducible.
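
If it helps to see what “not reproducible” means in numbers, here's a minimal sketch of a run-by-run sanity check; the retention times and the 5% tolerance are hypothetical, loosely modeled on the runs described above.

```python
# Flag retention-time drift across successive HPLC runs.
# The retention times and the 5% tolerance are illustrative assumptions.

reference_rt = 6.00   # retention time of the standard on day one, in minutes
tolerance = 0.05      # fractional drift allowed before flagging the run

observed_rt = [6.00, 5.98, 5.50, 4.10, 2.50]   # minutes, run by run

for run, rt in enumerate(observed_rt, start=1):
    drift = abs(rt - reference_rt) / reference_rt
    status = "OK" if drift <= tolerance else "DRIFT"
    print(f"run {run}: {rt:.2f} min ({drift:.1%} from reference) {status}")
```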

It turns out that the manufacturer (a very reputable company) knows this, but doesn't mention it. After some searching, I discovered a document which hints at the problem, and I found two or three papers in the literature that acknowledge it and propose solutions (which didn't help, though we eventually found a solution on our own).

I should emphasize that we really like this column and use it all the time. It's so useful we bought a second one. But most researchers wouldn't get that far: they'd see yet another product from industry that doesn't work as advertised, throw it away, and try something else. This happens all the time: antibodies that don't recognize the right protein, and chemicals that aren't as pure as claimed. But at least these are real products, not ideas taken from somebody's paper, patented, and shoved down the throat of some poor researcher who's supposed to make it work. That has been the bane of my career, and I'm still dealing with it.

The fact is that doing science out of a desire to become rich and famous does not work. If people only see dollar signs, they will always take short cuts.

In all these cases, the problem is institutional, not scientific. Pressuring scientists to change how they do research will not solve the problem. The institutions themselves need to be radically reformed.

Does it harm science more than the proposed solution?

If you've ever wondered why scientists just yawn when the topic comes up, it's because we've been through it before. We had the scientific fraud crisis, and then the other reproducibility crisis. The solution is always the same: more rules and accountability. Translation: more bureaucracy, and slower and much more expensive science.

That last crisis gave us Good Laboratory Practice, or GLP. Mention GLP to a commercial lab, and their eyes light up. The FDA demands it on anything that's used to document safety or efficacy, and it costs ten times as much as normal research. If you take a chemical out of the fridge to weigh it, you have to document the time, date, amount, lot number, calibration date of the balance, and God knows what else. If you buy sodium chloride, you have to test it to make sure it really is sodium chloride and not, say, nerve gas (imagine how embarrassing the last time that happened must have been: “Why is it pink?”). And you have to test every conceivable thing: does your chemical remain stable for 1, 2, 3, 4, 6, 12, 18, and 24 months in the fridge? How about in the freezer? What if it's in the freezer and you thaw it out and re-freeze it? What if the humidity in the room changes? And you need records to prove that your freezer was really at the right temperature on the top shelf, and on the second shelf, and at the front of the shelf, and in the back, and you have to prove that your thermometer was properly NIST-calibrated and that your temperature records were collected securely and never tampered with. And on and on. This is one reason why an NDA (a New Drug Application) is a stack of papers 33 feet high.
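
To give a flavor of the overhead, here's a hypothetical sketch of what the single weighing step above turns into as a record; the field names and values are my own invention, not an FDA-mandated schema.

```python
# Hypothetical GLP-flavored record for one weighing step.
# Field names and values are invented for illustration, not an FDA schema.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class WeighingRecord:
    chemical: str
    lot_number: str
    amount_mg: float
    weighed_at: datetime              # time and date of the weighing
    balance_id: str
    balance_last_calibrated: datetime
    operator: str

record = WeighingRecord(
    chemical="sodium chloride",
    lot_number="LOT-2018-0417",       # illustrative lot number
    amount_mg=58.44,
    weighed_at=datetime(2018, 5, 1, 9, 30),
    balance_id="BAL-03",
    balance_last_calibrated=datetime(2018, 4, 2),
    operator="J. Smith",
)
print(record)
```

Multiply that by every reagent, every instrument, and every shelf in every freezer, and the 33-foot stack starts to make sense.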

This makes sense for a drug company, but if an academic did this they'd be tossed out on their ear. If the real cause of the supposed reproducibility crisis is merely guessed at, which is what people are doing now, that kind of bureaucracy is what we'll end up with.

If you want to establish something as a fact, you have to follow scientific procedures. That includes establishing that there really is a reproducibility crisis. There might be one, but so far, all I've seen is politics, and I'm not convinced.

The lemming effect

If there's a serious crisis in science, it's not about reproducibility. It's about groupthink and lemming-like behavior. One study finds that cholesterol causes heart disease, or that glyphosate causes cancer, and suddenly ten or twenty other labs find the same thing. It's hard for a layman to understand how the data can be correct, the experiments conducted and analyzed properly, yet the finding is still wrong. But that's the nature of the real world. Blame the universe: if you look for something hard enough, you'll find it—even if you have to match your aunt to your ankle to see it.
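
For the layman, here's a toy simulation (my own, with made-up numbers) of how that works: a hundred labs each test, correctly and honestly, for an effect that doesn't exist, and a few of them still “find” it at p < 0.05. Those few are the ones that get written up.

```python
# Toy simulation: many labs correctly analyze data on a nonexistent effect,
# yet some still cross the p < 0.05 line by chance. All numbers are
# illustrative assumptions.

import random

random.seed(1)

def one_lab(n_per_group=30, alpha=0.05, n_perm=1000):
    """One lab compares two groups drawn from the SAME distribution
    (no real effect) using a crude permutation test on the mean difference."""
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(0, 1) for _ in range(n_per_group)]
    observed = abs(sum(a) / n_per_group - sum(b) / n_per_group)
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(sum(pooled[:n_per_group]) / n_per_group
                   - sum(pooled[n_per_group:]) / n_per_group)
        if diff >= observed:
            hits += 1
    return (hits / n_perm) < alpha    # did this lab get a "significant" result?

labs = 100
positives = sum(one_lab() for _ in range(labs))
print(f"{positives} of {labs} labs 'confirmed' an effect that isn't there")
```

Nobody in the simulation did anything wrong; the positives are simply the only results anybody sees.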

Often this comes from labs that are still struggling to make names for themselves. They're thinking: if this other guy says it, it must be right, so here's a project that will work.

They also think there will be funding. The competition for funding is intense. If someone claims to have found something, whether it's global warming caused by CO2 or cancer caused by glyphosate, they'll propose a hypothesis for it because they think the government will fund it. That creates a vicious lemming cycle, and believe you me, lemmings can be pretty darn vicious when they start cycling.

Think carefully before you accept conclusions about this crisis, either for or against. It wouldn't take much for what happened in industry to happen in academia. In ten or twenty years, all our new drugs and all our new scientific discoveries could be coming from China. That means the problem will go away without ever being solved. And we'll pay a big price.


may 01 2018, 5:31 am


Related Articles

The Earth is Round (p<0.05)
After 23 years, the paper with that title still raises uncomfortable questions

Science Under Siege, Part IV: Are we too dependent on the infrastructure?
A lab horror story just in time for Halloween

Science Under Siege, Part III
Understanding what causes bad science is critical to reforming it.

Science Under Siege, Part II
People say there are no jokes in scientific papers. But I found one.

Science under siege, Part I
Bogus claims about the reproducibility of scientific research will not die on their own. We must give them a push.

