book reviews

books on science, ethics, and science policy

reviewed by T. Nelson

score: +2

Ending Medical Reversal
Improving Outcomes, Saving Lives

by VK Prasad and AS Cifu
Johns Hopkins Press 2015. Paperback ed 2019, 264 pages

Reviewed by T. Nelson

Many of the procedures done by doctors aren't based on any real evidence. When the medical community figures out that they either don't work or actually make the patient worse, and stops using them, it's called a ‘reversal’.

The authors mention coronary stents for angina and intracranial stenosis, cholesterol-reducing drugs, nosocomial infections, the placebo effect, the value (or lack of it) of mammography, colonoscopy, and PSA tests, and several other examples. They speculate on the possible causes, and there's a short chapter on how you, dear patient, can avoid becoming a victim of reversal.

Their prescription: change the medical culture and make it more evidence-based. It might help, but doctors at the coal face, or I should say patient face, have a lot of untapped knowledge as well. Information has to flow both ways or medicine will continue to stagnate.

The book is interesting and non-technical, but it's written for the layman: there aren't many statistics or facts that a medical administrator could use to make a case for reform. On the second-last page, the authors finally get around to admitting the real reason their ideas won't work: somebody would have to pay for them. Three guesses, dear patient, about who that will be.

PC writing style.

mar 20, 2020

score: +1

Science and the Good:
The Tragic Quest for the Foundations of Morality

by JD Hunter and P Nedelisky
Yale 2018, 289 pages

Reviewed by T. Nelson

Can science be the foundation of morality? That's the question these two authors ask. Their answer, of course, is no. But there is a more fundamental question here: can morality ever have a solid foundation, or is it a mere cultural construct?

For morality to have a solid foundation, there would have to be some value that is universally held—something that is accepted by everyone to be morally right or wrong. In medicine, that value is survival. This is not culturally contingent, but it lacks universality: many moral conundrums cannot be reduced to questions of life and death. Conversely, peace, happiness, and justice are clearly culturally contingent and therefore also cannot be a basis for morality. And, the authors say, neither science nor religion has an answer. But this is something we already know.

In the first few chapters they spin a hearty revisionist tale trying to show that “science” has tried to define a morality to replace religion and has failed. They claim that people like Jonathan Haidt, Patricia Churchland, and E. O. Wilson proposed a scientific morality. They did not. Like Darwin, they never tried to invent a theory of morality, but only described it and its possible evolution. If some scientists did propose a scientific morality, they weren't speaking as scientists, but as educated thinkers (which they have a right to do), and their ideas never caught on.

Most of the other “scientists” (Locke, Hobbes, Bentham, etc., as well as moderns like Joshua Greene and Alex Rosenberg) are either philosophers or fringe figures, and their writings, interesting as they may be, have never been a part of science. Yet the authors repeatedly call them “moral scientists” and their philosophical speculations “science.”

The examples pulled from the scientific literature confirm what I'm saying: they are all descriptions of how moral judgments are made in the brain or how they might have evolved. Nowhere is the basic claim in this book that science is trying to define morality even remotely established, and so the argument that it has failed is unsound. Just as Thomas Nagel missed the mark by blaming Darwinism for not solving a problem for which it was not designed, these authors miss the mark by blaming science for failing to succeed at something it's not designed for and is not trying to do. Calling a philosopher a scientist six hundred times does not make him or her one.

The only credible argument one could make is that some scientists believe there is no such thing as absolute morality at all. This is, of course, a widespread view in modern society, but it also doesn't support the case that science has failed or that it would fail if it found some way of addressing the question.

apr 20, 2019

score: +4

Scientocracy:
The Tangled Web of Public Science and Public Policy

by PJ Michaels and T Kealey, eds.
Cato 2019, 365 pages

Reviewed by T. Nelson

This is a collection of essays critical of how government, politicians, and academic leaders affect science policy in the USA. Some of them are thoughtful and cogent, others less so. The presence of the latter means that this book is likely to be ignored by those who need it most.

But the chapter by PJ Michaels is a devastating indictment of the general circulation models, and it's essential reading for anyone interested in the question of global warming. This chapter alone makes the book worth reading.

Climate

In it, PJ Michaels discusses the “tuning” that's done on the general circulation models, or GCMs, to make them predict the “correct” amount of global warming. Many people don't know that the models predict too much warming, sometimes way too much, and they have to be tweaked to make them “predict” the changes that happened in the 20th century. Climatologists hide this fact because they're afraid that skeptics will use it to claim the models are invalid.

Michaels says climatologists think of tuning as an “art” because it makes the models unphysical and because the tuning can't be reproduced by anyone other than the original programmer. Whether it discredits the GCMs or not, all the curve-fitting that's going on certainly discredits any claim that climatologists are doing anything like the modeling that's done in a hard science.

I always thought these models were based on the Stefan-Boltzmann equation with some allowance for the physical effects of clouds, winds, and albedo. The models were supposed to be telling us what the relationship is between CO2 and temperature. When your model gives the wrong result, it's telling you something important: that you don't understand the system. You can't just go back and change it to give you the right result. The climatologists, in effect, convinced themselves that fudging the results of their models was okay.
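For the record, here is roughly the kind of energy balance I'm thinking of: a minimal, zero-dimensional sketch built from nothing but the Stefan-Boltzmann law and the standard logarithmic approximation for CO2 forcing. It's my own illustration, not anything from the book or from an actual GCM, and the numbers are ordinary textbook values.

    import math

    # Zero-dimensional energy-balance sketch (illustrative only).
    # Uses the Stefan-Boltzmann law and the standard logarithmic CO2
    # forcing approximation; all numbers are textbook values, not taken
    # from the book or from any GCM.

    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    SOLAR = 1361.0        # solar constant, W m^-2
    ALBEDO = 0.30         # planetary albedo

    def equilibrium_temp(absorbed):
        """Blackbody temperature that radiates 'absorbed' W/m^2 back to space."""
        return (absorbed / SIGMA) ** 0.25

    def co2_forcing(c, c0=280.0):
        """Approximate radiative forcing (W/m^2) when CO2 rises from c0 to c ppm."""
        return 5.35 * math.log(c / c0)

    absorbed = SOLAR * (1 - ALBEDO) / 4     # ~238 W/m^2 absorbed, global average
    t_eff = equilibrium_temp(absorbed)      # ~255 K effective temperature

    # Without feedbacks, dT/dF is about T / (4 * absorbed), or ~0.27 K per W/m^2,
    # which gives roughly 1 K of warming for a doubling of CO2. Everything
    # beyond that in a real GCM comes from feedbacks.
    sensitivity = t_eff / (4 * absorbed)
    delta_t = sensitivity * co2_forcing(560.0)

    print(f"Effective temperature: {t_eff:.1f} K")
    print(f"No-feedback warming for doubled CO2: {delta_t:.2f} K")

The point is that this much physics only gets you about a degree of warming per doubling of CO2; the rest depends on feedback parameters, and, if Michaels is right, those are what get adjusted.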

If what Michaels says is true, it means climatologists are, in effect, adding their conclusion—that X amount of CO2 causes Y amount of warming—into their models. That would mean the models are essentially useless.

This is a bombshell revelation. This isn't how you're supposed to do science. At this point, it's not clear how much science, if any, is in these models. I hope there is some.

Nutrition

Why are Americans so fat? The answer is simple: the US federal government for decades told people to eat more carbohydrates and avoid fat and salt.

In a way it almost makes sense: the government makes everything bigger and more bloated, so it stands to reason they'd do the same to us. Bigger is better, and if they can't create more taxpayers, at least they can create bigger ones. So they did.

That probably wasn't the intention, of course. But surely people aren't dumb enough to take nutritional advice from the US government. Or are they? The government told people to eat more carbohydrates and less salt, to eat margarine instead of butter, and that meat, fat, and eggs are bad for you. Powerful senators and congressmen deliberately stacked the hearings with scientists who all professed with 100% certainty that they were right. Anyone with a dissenting view was excluded. The goal was to create a consensus where none really existed. It's a model the government follows again and again.

And it worked: Americans turned into barrage balloons. It created a brand-new problem for the lawmakers to solve with bigger and better regulations. Some scientists back then warned that guidelines built on a fake consensus would create public distrust of government, and that's exactly what happened: it's why we have a whole generation that is cynical about the food industry, vaccines, and infectious diseases.

Radiation

In Chapter 7, the author is really worked up over the linear no-threshold (LNT) model for radiation exposure, which says that half the amount of radiation causes half the amount of genetic damage, all the way down to zero.

But no one really believes this. As I recall, LNT was adopted because of the extreme difficulty in obtaining valid data at such low doses. All three hypotheses—LNT, threshold, and hormesis—are taught in radiation safety courses as the motivation for ALARA, which means “Who the hell knows, let's just keep it low!” It was not an ideological decision at all, but a committee compromise: a decision not to decide until the technology is good enough.
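For readers who haven't sat through a radiation safety course, here's a toy sketch of the three hypotheses. It's my own illustration; the slopes, thresholds, and dose units are arbitrary numbers chosen only to show the shapes of the curves, not values from any standard.

    import math

    def lnt(dose, slope=0.05):
        """Linear no-threshold: excess risk proportional to dose, down to zero."""
        return slope * dose

    def threshold(dose, slope=0.05, d0=10.0):
        """No excess risk below a threshold dose d0, linear above it."""
        return slope * max(dose - d0, 0.0)

    def hormesis(dose, slope=0.05, benefit=0.15, d0=10.0):
        """Small doses slightly protective (risk dips below zero), then rising."""
        return slope * dose - benefit * dose * math.exp(-dose / d0)

    for d in (0, 1, 5, 10, 50, 100):   # dose in arbitrary units
        print(f"dose={d:>3}  LNT={lnt(d):6.2f}  "
              f"threshold={threshold(d):6.2f}  hormesis={hormesis(d):6.2f}")

The three curves only diverge at low doses, which is exactly the region where, as I said, good data are nearly impossible to get; hence the committee compromise.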

And it worked, at least until the bureaucrats made us account for immeasurably small amounts of radiation from atoms that no longer exist because they decayed to nothing ten half-lives ago. But that's a rant for another day.

Government funding

The other chapters have quite a different tone and show a distinct antipathy toward basic science. In Chapter 6, for example, the author questions the value of the Human Genome Project and claims that most innovation actually comes from industry, presumably because industry is the one that creates the final product, which can be a drug or medical device.

But this is silly. These days industry does almost no basic research. Without the DNA technology and the basic understanding of immunology and molecular biology developed mostly by academics, industry would still be doing pharmacognosy and synthesizing small organic molecules instead of cloning antibodies as it does today.

In Chapter 9 the author calls science “a canon of knowledge that is massively littered with false-positive results.” In Chapter 1 another author repeats Ioannidis's widely discredited claim that almost all scientific results are false, saying “many studies are designed either to produce positive results or to produce experimental data that support the original research hypothesis, or both.”

I've debunked Ioannidis's claims here many times. And the idea that studies designed to produce positive results are a problem suggests that the author has never written a research grant. Unless you're planning a clinical trial, you have to design experiments to produce positive results, or the government won't fund them. The standard way this is done is by proposing experiments that have already been done, so the investigator knows exactly what the results will be. Yes, it's stupid, but that's what the government demands.

What we need is more cooperation between industry and academia, not more enmity. Both have things the other needs. And both have flaws. If the authors wanted to be constructive, they'd have proposed a way to make that happen.

Innovation and government

The authors are right about one thing: the academia-government complex stifles innovation. A new and less corrupt way of funding science is desperately needed. Government funding has made ‘perverse incentives’ into a policy.

But the outrage and even open hostility toward basic academic science is proof of what I've been saying all along: if the claims about a supposed “reproducibility crisis” aren't properly challenged, false claims will snowball and public mistrust of science will grow. Science and science-related industries will decline, and the focus of discovering new knowledge will move to China. And the press will congratulate itself on a job well done.

And that's my rant for today.

nov 10, 2019. edited nov 13, 2019