Science is much more than a body of knowledge. It is a way of thinking. This is central to its success. Science invites us to let the facts in, even when they don’t conform to our preconceptions. It counsels us to carry alternative hypotheses in our heads and see which ones best match the facts. It urges on us a fine balance between no-holds-barred openness to new ideas, however heretical, and the most rigorous skeptical scrutiny of everything—new ideas and established wisdom. We need wide appreciation of this kind of thinking. It works. It’s an essential tool for a democracy in an age of change. Our task is not just to train more scientists but also to deepen public understanding of science.
—Carl Sagan (1990)
Last week’s New York Times Sunday Review section featured a first-rate opinion piece by Naomi Oreskes, Professor of the History of Science and Affiliated Professor of Earth and Planetary Sciences at Harvard University. Her research focuses on the earth and environmental sciences, with an emphasis on understanding scientific consensus and dissent. The article, entitled “Playing Dumb on Climate Change,” is posted on the NYT website here.
The article’s title suggests the context—climate science and scientific communication. Oreskes summarizes important features of scientific inquiry and, in doing so, identifies a key aspect of science that could mean the threat of climate warming has been “underpredicted” by scientists and, therefore, understated by the media. I highly recommend a careful reading of her opinion piece, not only for its implication about climate warming but for its concise explanation of the scientific method.
Here are the key points about good science made or implied by Oreskes.
- Scientific appraisals are conservative, highly qualified, and provisional.
- New scientific claims or denials of established theories are met with skepticism. The burden of proof rests on the person making the new claim. If one seeks to replace or deny a successful theory, then the alternative must be shown to explain a similarly full range of phenomena.
- Science demands rigor (a lot of evidence and statistical significance).
- A scientific claim must be falsifiable. Typically, a result or hypothesis is rejected when it fails to meet a 95 percent confidence threshold. In other words, a claim is set aside when the probability of obtaining the observed result by chance alone, assuming no real effect, exceeds 1 in 20.
- The 95 percent confidence level is a value judgment, rooted in an aversion to bias and to the mistake of claiming a phenomenon is real when it is not. In the jargon of statistics, this mistake is known as a Type 1 error (a false positive).
- Ideally, science requires a researcher to avoid the “method of the ruling theory” (or pet theory) and instead apply the “method of multiple working hypotheses.” See T. C. Chamberlin’s classic paper (Science 1890) for much more commentary on these methods.
- Similarly, science requires a researcher to avoid the opposite mistake of being too conservative and dismissing a phenomenon that is actually real (a Type 2 error, or false negative).
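To make the two error types concrete, here is a minimal simulation (my own illustration, not from Oreskes’ article; the sample sizes and effect size are invented): under a true null hypothesis, the conventional 95 percent confidence threshold produces a false positive about 1 time in 20, while a real but small effect is often missed entirely.

```python
# Illustration (not from the article): Type 1 and Type 2 error rates
# at the conventional 95 percent confidence (alpha = 0.05) threshold.
import random
from math import erf
from statistics import mean

random.seed(42)
ALPHA = 0.05        # the 1-in-20 threshold
N_TRIALS = 2000     # number of simulated experiments
N = 30              # sample size per experiment

def p_value(sample, sigma=1.0):
    """Two-sided z-test p-value against a null mean of zero (sigma known)."""
    z = mean(sample) / (sigma / N ** 0.5)
    # Normal tail probability via the error function: Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / 2 ** 0.5)))

# Type 1 error: the null is true (mean 0), yet chance alone sometimes
# yields a "significant" result.
false_positives = sum(
    p_value([random.gauss(0.0, 1.0) for _ in range(N)]) < ALPHA
    for _ in range(N_TRIALS)
)

# Type 2 error: a real but small effect (mean 0.3) sometimes goes undetected.
misses = sum(
    p_value([random.gauss(0.3, 1.0) for _ in range(N)]) >= ALPHA
    for _ in range(N_TRIALS)
)

print(f"Type 1 rate: {false_positives / N_TRIALS:.3f}")  # near 0.05
print(f"Type 2 rate: {misses / N_TRIALS:.3f}")           # well above 0.05
```

Note the asymmetry Oreskes highlights: the 1-in-20 convention caps false positives but says nothing about false negatives, which for small effects and modest samples can be far more frequent.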
In science it often happens that scientists say, “You know that’s a really good argument; my position is mistaken,” and then they would actually change their minds and you never hear that old view from them again. They really do it. It doesn’t happen as often as it should, because scientists are human and change is sometimes painful. But it happens every day. I cannot recall the last time something like that happened in politics or religion.
—Carl Sagan (1987)
Science deniers often point to the process of inductive reasoning as laden with subjectivity and uncertainty. Yes, science can never prove anything with 100 percent certainty; however, good science uses a very effective systematic approach that often identifies the most parsimonious explanation. The process consists of (1) devising alternative and competing hypotheses; (2) devising a crucial experiment with alternative possible outcomes, each of which excludes one or more of the hypotheses; (3) performing the experiment to get a clean result; and (4) repeating steps 1–3 to refine the remaining explanations or hypotheses. See the highly cited paper by J. R. Platt (Science 1964) for more on “strong inference.”
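The elimination logic at the heart of strong inference can be sketched in a toy example (the hypotheses, experiments, and outcomes below are all invented for illustration): each hypothesis predicts an outcome for each crucial experiment, and an observed outcome excludes every hypothesis whose prediction disagrees with it.

```python
# Toy sketch of Platt-style "strong inference" (illustrative only).
# Each hypothesis predicts an outcome for each experiment; an observed
# outcome excludes every hypothesis whose prediction disagrees with it.

# Hypothetical question: which mechanism explains an observed reaction?
hypotheses = {
    "enzyme A": {"heat_test": "inactive", "inhibitor_test": "blocked"},
    "enzyme B": {"heat_test": "inactive", "inhibitor_test": "unblocked"},
    "non-enzymatic": {"heat_test": "active", "inhibitor_test": "unblocked"},
}

# Observed results of the crucial experiments (made up for illustration).
observations = {"heat_test": "inactive", "inhibitor_test": "unblocked"}

surviving = dict(hypotheses)
for experiment, outcome in observations.items():
    # Steps 2-3: each experiment's outcome rejects every hypothesis
    # that predicted something else.
    surviving = {
        name: predictions
        for name, predictions in surviving.items()
        if predictions[experiment] == outcome
    }

print(sorted(surviving))  # only "enzyme B" survives both experiments
```

In practice step 4 would then generate new sub-hypotheses about the surviving explanation and repeat the cycle, rather than stopping at a single winner.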
The preceding approach might suggest that the scientific method is an immutable, linear series of steps that always yields a clear result. On the contrary, the process of science (i.e., methods, data analysis, and findings) undergoes critical interactions with the scientific community through debate, peer review, and replication of experimental results.
To close, I reproduce below three brilliant and eloquent quotes from Richard Feynman, the Nobel Prize-winning physicist, that offer insights into the process of science, specifically experimenter bias, experimentation, and scientific uncertainty, respectively.
The first principle is that you must not fool yourself and you are the easiest person to fool.
In general we look for a new law by the following process. First we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong.
It is in the admission of ignorance and the admission of uncertainty that there is a hope for the continuous motion of human beings in some direction that doesn’t get confined, permanently blocked, as it has so many times before in various periods in the history of man.