How science works: Theory vs. practice, as seen by a meteorologist

 

by

 

Chuck Doswell


Posted: 10 November 2004. Updated: 05 November 2005 (some minor revisions and typo fixes).

As usual, this essay is mine and mine alone. It represents nothing more than my personal viewpoint. If you want to comment on this, or to make suggestions, contact me at cdoswell@earthlink.net. If you're offended by it in some way, get over it!


 

Introduction

This essay is the result of a career's worth of conversations with my colleagues in the business of science, as well as my readings on the subject. During my development as a scientist, I became aware early on that I was uncertain about how science actually worked. Those who taught me were concerned mostly with the "nuts and bolts" issues; little time was spent on reassuring me that what I was doing was "right," or on how to choose among the many alternatives presented to me.

Later, as I pursued these questions on my own during the course of becoming and working as a scientist, I came to what I hope is some understanding of the process. I'll provide what I think to be a key set of references in a bibliography. This essay will not attribute each idea unambiguously to a particular reference, but if you take the time to read the references, you will see the roots of many of my ideas. If you had the chance to talk to all of the people from whom I've learned, you'd also recognize other roots of my ideas. It is clear to me that I've gained from all these sources, and it would be dishonest to claim that everything in this essay originated with me. However, my own take on things is almost certainly unique, so perhaps some or all of the contributors to my views would repudiate some or all of what I'm going to say here.

My reason for sharing this is not to convince you of the "truth" of my perspective. You'll have to engage in your own individual search for truth, and it will be a lifelong project. Rather, I hope to stimulate you in that very process of seeking your own understanding of scientific truth. Along the way, I hope to convince you that "scientific truth" is not some independent ideal, apart from all the rest of your life. Scientists are human beings, with all that being human implies. Scientists generally select themselves, because the difficulty in achieving anything in science is not something you would take on successfully if you weren't compelled to do so by an internal need that drives you to do things that are difficult and somewhat unnatural. Therefore, if you seek to be a scientist or to be a better scientist than you now are, I'm hoping that I can suggest some lines of inquiry and thought that will be valuable to you. I think I'm a better scientist today than I was 30 years ago, before I did all this reading and thinking and talking with my colleagues. It's my hope that sharing this will help you - it's not my intent to convince you that I know everything about this subject. I don't want to be your "guru" - I want to challenge what you think you know about how science works, to convince you to think about such things - if you're not already doing so.

If what follows seems to you to be rambling, you're probably right. This is a hodge-podge of thoughts and may not be all that coherent. I hope, nevertheless, that it will resonate here and there with someone.

 

Principles of "the scientific method"

Virtually all scientists would agree that "the scientific method" is not a formula for doing science, but an abstract, vague notion of how science is done. It speaks more of the existence of some basic principles than it does about a recipe that you can follow to satisfy yourself and others that the work was done correctly. I'll discuss my views on what those basic principles are, both in the abstract and in the reality of how science actually is done in my own field. I might use examples from other fields, but only insofar as I know them from my readings, rather than my actual experience, naturally.

 

Theoretical science and logic

The word "theory" in science has a multiplicity of meanings. Its use in science is not like the conversation in a bar that begins, "I've got a theory about such-and-so." Such a colloquial use of the word is as a hypothesis or conjecture about some topic. That's not the usage with which this section is concerned, however. Generally, "theory" in science refers to abstractions - human constructs that have no real existence but which we use to express some essential aspect of a scientific issue, and from which we have stripped away all the messiness of reality. It is highly idealized in other words. For many purposes and in many sciences, "theory" is expressed in mathematical terms. Since I practice in such a science, that's what I'm going to discuss.

Mathematics is its own discipline, with its own rules and philosophical underpinnings. Since I'm not a professional mathematician but only a user of mathematics, I can only talk vaguely and generally about those topics, but I believe I know a few things about mathematics.

First of all, the "rules" of mathematics are not some set of abstract truths. We can argue about the reality of mathematical constructs, but the rules of mathematics are more about the logic of how symbols are manipulated than they are about reality. Everyone should be familiar with the geometry of Euclidean spaces, but mathematics is not limited to Euclidean spaces. Many people have difficulty imagining non-Euclidean geometry in spite of the fact that they are familiar with many common examples. For instance, the surface of a sphere is a 2-dimensional, non-Euclidean space. Everyone has seen and felt a sphere, but nevertheless many are confused by non-Euclidean geometry, for some reason.

The axioms of mathematics comprise an arbitrary set of assumptions that can be changed to fit different situations. Euclidean geometry is one set of such assumptions. Euclidean geometry does not work on the surface of a sphere, but we can develop another set of rules to conform to our expectations about the geometry in such a situation. Other non-Euclidean geometries can be devised. Various axiom sets can be devised for arithmetic, algebra, geometry, sets, and other branches of mathematics.
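To make the sphere example concrete, here is a standard result of spherical geometry (Girard's theorem), written out in LaTeX. This is my own illustration, not something taken from any particular axiom set:

    % Girard's theorem: on a sphere of radius R, a triangle of area A
    % has interior angles satisfying
    \[
      \alpha + \beta + \gamma \;=\; \pi + \frac{A}{R^{2}} \;>\; \pi .
    \]
    % Example: the triangle bounded by the equator and two meridians
    % 90 degrees apart has three right angles, so its angle sum is
    % 3(\pi/2) = 3\pi/2, not the Euclidean \pi.

The Euclidean rule that a triangle's angles sum to exactly two right angles is thus a consequence of one particular set of axioms, not a universal truth.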

Part of the process for mathematics is the creation of symbols that stand for the elements of mathematics. These symbols are arbitrary in that the symbol is not necessarily related to the object for which it stands in any obvious way. A common symbol for a variable in algebra is x. There's no obvious reason to connect the symbol x to anything in particular. We could just as easily use the symbol y, or some random collection of marks, or whatever. Other equally arbitrary symbols (e.g., + or -) are used to stand for operations upon the variables. The axioms of a particular branch of mathematics are expressed in symbolic terms, and the "rules" of mathematics are associated with how those symbols are manipulated. What is sometimes amazing to me is that using a set of essentially arbitrary rules to manipulate a set of arbitrary symbols can actually have a meaning in the real world. "Theory," in the sense I'm using the term, is about such manipulations. Using mathematics, it is possible to make deductions from the physical principles expressed in symbolic (mathematical) form that can offer insight into the workings of the real world. That is truly astonishing, at times!

The "rules" for mathematical symbol manipulation are based on logic. Such things as "if A=B, and B=C then A=C" express these rules of logic, and their roots are often, but not necessarily, based on real world experiences. Of course, some of the theoretical results that come from mathematics are difficult to reconcile with our real experience. The thing about theoretical developments, however, is that if the rules have been followed, the results are incontrovertible. We can take issue with the assumptions, but if the derivation from those assumptions is valid according to the rules of mathematics, there is simply no room for argument over the results

As I'll discuss shortly, theory based on mathematics is an important tool in science, but it is not science, as I see the term. It's simply mathematics. If there is a relationship between a mathematical result and reality, all well and good, and insights gained this way are very useful to scientists, but science itself operates very differently. Theoretical results are regularly published in science journals, and there are "theoretical" branches of science wherein the entire field is about the exploration of the mathematical implications derived from a particular set of assumptions presumed to relate to the physical world. Such work surely is not to be denigrated, but the practice of mathematical derivations from a set of assumptions is only an adjunct to the real business of science.

Many scientists are frustrated with pure mathematicians, who seem to take a positive delight in the irrelevance of their mathematics to reality. The more abstract and, therefore, devoid of physical meaning their mathematics, the more some mathematicians seem to like it. Many of them abhor any semblance of practical application, and so they seek to avoid research into areas of mathematics that could be applied. Nevertheless, scientists with a knowledge of mathematics often persist in finding practical applications for very abstract branches of mathematics. At one point, for instance, tensors were at the forefront of abstract and seemingly useless mathematics, but Einstein found an application for them and now they seem very mundane to most applied mathematicians. Other examples can be found, of course: number theory, set theory, abstract vector spaces, and so on.

However, those scientists who are less abstract than the "theoretical" practitioners often express the same frustration with purely theoretical approaches to science. I'm not necessarily one of those gripers, but I do feel that pure theory is generally a sterile exercise, at least in my field of meteorology. It may be very productive in other branches of physics, of course. In meteorology, it's important that we learn from our abstractions in some way, and abstraction for its own sake is pretty much useless to meteorology. Of course, if someone can show that some really abstract concept actually has value in meteorology, then I have no problem with that topic. For an example from my own work, I found that derivative estimation and the mapping of irregularly-spaced observations from their original locations onto a regular grid are two non-commuting operations, whereas they do commute if the original data are regularly-spaced. It turns out there is a type of algebra, called Lie (pronounced "Lee", not "Lye") algebra, that is concerned with non-commutative operators. Thus, a very abstract branch of algebra might indeed be of some interest to me. [I never did pursue this topic, although I certainly would be interested if some theoretically-inclined meteorologist did pursue it!]
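A minimal numerical sketch of that non-commutativity - a toy 1-D example of my own construction in Python, not the actual analysis problem from that work - looks like this:

    # Two ways to estimate df/dx on a regular grid from irregular samples:
    # (A) interpolate to the grid, then difference the gridded values;
    # (B) difference at the sample points, then interpolate the result.
    import numpy as np

    rng = np.random.default_rng(42)
    x_obs = np.sort(rng.uniform(0.0, 2.0 * np.pi, 25))  # irregular "stations"
    f_obs = np.sin(x_obs)                                # sampled field
    x_grid = np.linspace(0.0, 2.0 * np.pi, 25)           # regular analysis grid

    # (A) map to the grid first, then differentiate
    dfdx_a = np.gradient(np.interp(x_grid, x_obs, f_obs), x_grid)

    # (B) differentiate at the stations first, then map to the grid
    dfdx_b = np.interp(x_grid, x_obs, np.gradient(f_obs, x_obs))

    print(np.max(np.abs(dfdx_a - dfdx_b)))  # noticeably nonzero: the order matters

If the observations already sit on the regular grid, the interpolation step does nothing and the two estimates agree exactly, which is the sense in which the operations commute for regularly-spaced data.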

Theory can be fun for its own sake, but in the final analysis, it's only mathematics, and it misses something essential about science that's not shared with mathematics. In mathematics, proof is possible, whereas "scientific proof" is something of an oxymoron! That is, proof isn't generally possible in science. I'll discuss this in more detail in the next section.

 

Observational science

Real science has been said to have begun with the Greeks, but whatever the Greeks did along scientific lines mostly ended in Europe with the onset of the "Dark Ages" - although real science as we know it did continue to grow and evolve in the Islamic world. Islamic science helped seed the re-emergence of science during the Renaissance in Europe. What made Renaissance science so revolutionary was that some daring individuals actually began to think that it was useful and important to conduct experiments to test ideas. Greek "natural philosophy" (the Greek concept for what we would now call "science") was mostly thinking and deduction, without experiment. In fact, at some point it was common (and still is, in some circles) to look down on the very idea of experiments to verify deductions. Experiments involved work, a "dirtying of the hands" that was contrary to the notion that proper deductions would need no experiment to confirm their validity. If logic was pursued correctly (as in mathematics), there could be no dispute about the results.

Therefore, what I see as the origins of modern science lie with the notion of doing experiments in the real world. Ideas could be checked against real events, data could be collected, and ideas could be refuted on the basis of experimental evidence. Ideas that conformed to experimental evidence could be said to have been validated, but a validation is not necessarily the same thing as a proof. After all, the experiment could have been biased (intentionally or not) to give a certain result, or the measurements could have been done badly so as to give an erroneous result, or the accuracy of the measurement system could be questioned, or the experimental design may have been flawed, permitting another explanation for the result that differs from the hypothesis being tested. In other words, the interpretation of the evidence was no longer incontrovertible as it is with mathematical proof - on the contrary, a whole host of issues could arise from the mere act of experimentation.

Experimental design was, and is, a serious challenge for scientists. The object is to design an experiment that will give as unambiguous a result as possible. For experiments such as what might be done in a physics classroom by rolling balls down inclined planes, dropping objects from buildings, and such, the real world contains a lot of processes that are not included in the simple theories - friction, air resistance, people creating vibrations by walking outside the room, and so on. A well-designed experiment will minimize these complicating factors and give results that have a chance of reflecting theoretical predictions (hypotheses) about the outcome. Very sophisticated apparatus can be built to permit validation of conjectures about very subtle physical effects.

However, for many sciences - and meteorology is one of them (as are astronomy and geology) - most of the "experiments" cannot be conducted in a laboratory under carefully-controlled conditions. Rather, we must test our ideas about what's happening in the real world by collecting data that, hopefully, can be used to validate the concepts expressed within our ideas. We depend on having particular events happen in such a way that we're able to collect the right kind of data. There are two basic paths for data collection in meteorology: "operational" data collected routinely for the purpose of weather forecasting, and "experimental" data collected in special field observing campaigns, usually in some area and at some time where the phenomenon of interest is likely to occur. We can use both types of data to test scientific ideas - operational data are generally rather sparsely distributed and processes that might be influencing the data may not be sampled very well. Experimental data, on the other hand, may be capable of detecting small-scale events, but the events in question might not happen during the course of the experiment, or the data collected on some event might be compromised by any of a large variety of problems. Moreover, it may be that the data we collect, either operationally or in special experiments, might not be the right kind of data - often it's hard to know in advance just what (and how much) data is needed.

Another complicating factor in meteorology (and some other sciences) is that even if we collect data on an important event and it's just the right sort, and we even have enough of it to see what we need to see, the next example of such an event is not likely ever to be exactly the same. Therefore, a thorough test of an idea about the processes leading to some event requires that we sample many similar events (and struggle with just how to define when events are "similar," since all events are different!). Clearly, "similar" events share some aspect of the event in common, whereas other aspects can differ without altering the essential similarity. It should be evident that deciding how to classify events (i.e., to find examples that share some characteristic and so can be grouped together) can be tricky. This question amounts to formulating a taxonomy of events, a topic I've written about elsewhere. This is not a trivial issue, because our classification schemes influence the way we see the world.

Generally, even knowing all there is to know about one example of an event is not necessarily very useful for the next such event, since (a) we don't know how representative of such events that particular example is, and (b) that particular event is unique - no event will ever happen exactly that way again! Single case studies can be enlightening, but it's dangerous to overgeneralize from them - a widespread problem.

Therefore, designing useful and meaningful "experiments" to test hypotheses can be a real challenge. The best scientists can do this well, and the rest struggle to achieve such clear and unambiguous results. There is a real "art" to doing good science!!

In spite of all the potential problems with validation experiments, it rapidly was recognized during the Renaissance that an understanding of real-world results based on experiment could lead to new understanding - and, in turn, to useful applications of that understanding.

Abstract reasoning is not enough, because logical deductions from flawed premises lead to a false understanding. Testing of ideas is an essential element of science. Even apparently obvious ideas should be tested! One critical aspect of science is that new understanding might well suggest new applications of that science to gain new data. Consider the following: abstract thoughts about light led to the study of electromagnetic radiation, which yielded important results about the nature of electromagnetic "waves" of all sorts, including radio waves. The abstract study of electricity also led to the development of tube amplifiers. The abstract study of solid-state physics gave us transistors and other devices to replace those tube amplifiers. The study of radio waves emitted from devices built using electronics technology produced radar in WWII, in order to detect enemy aircraft and submarines. Radar then came to be applied to meteorology. New technology was the offspring of abstract research, and that technology was a seed for new kinds of abstract research, in very different disciplines from those that gave rise to the technology, as well as within the originating sciences. The wonderful series of PBS programs by James Burke called "Connections" explores the very important feedback between science and its applications.

In effect, technology is an offspring of scientific understanding derived mostly from experiment, but supplemented with theoretical (abstract) understanding. The abstract has come to be seen in science as, in effect, an "experiment" in an idealized world. This idealized world is vastly simpler than the real world, such that many unimportant and possibly confusing factors are avoided. The theoretical results are cleaner and not muddied by all the myriads of complicating factors that make real experiments so easy to challenge. Alternatively, ideas derived from theoretical manipulations in an idealized world can inspire experiments in the real world to show the relevance and importance of an idealized process in reality. Theory and observation can interact in a highly productive way.

The explosive growth of science since the Renaissance is clear testimony to the power of this connection between abstract theory and observations. The observations validate the ideas, and when experiments are done properly, they allow us to make an unambiguous choice between competing ideas.

The resulting technological progress has made possible things that would not have been dreamt of, let alone considered possible, by citizens in the world prior to the Renaissance. Technology that was inspired by science has been harnessed to develop new scientific understanding, giving rise to new technologies - an important feedback between science and technology. Technology is both a child of science, and the parent of new science. This interaction has made possible today's world of technological wonders and it will continue to change the world of the future so long as science is done.

In meteorology, in particular, the purely mathematical results derivable from the equations felt to represent our existing understanding are limited by the nonlinearity of those equations. Linearity generally permits the development of purely mathematical results (albeit sometimes of dubious relevance to the real world), whereas nonlinearity precludes the deduction of proper mathematical solutions in all but a handful of extremely restricted situations involving mostly unrealistic assumptions. Because of nonlinearity in the real world, theoretical and observational meteorology had almost no points of contact for most of the history of meteorology. With the advent of powerful computers, however, it became possible to "solve" the equations of meteorology. But those solutions are neither observational nor purely theoretical. Rather, they are estimated solutions based not on the real equations, but on finite mathematical (numerical) approximations to the real equations. Nevertheless, these computer-based simulated "solutions" have become the basis of much of modern meteorology, in which the connections between the theoretical and observational have moved into much closer contact. We're now empowered to explore the "theoretical" implications of equations that formerly could not have been "solved" at all. This is, indeed, a perfect example of the interaction between science and technology. In this case, the science that drove the development of computers was not meteorological science, but meteorologists have benefited enormously from that new technology to take their science to new heights. The science of dynamic meteorology is now mostly one of numerical models, and only secondarily is it involved with pure mathematics. The need for larger and faster computers is being driven, in part, by scientists who can use that enhanced computer power to solve scientific problems heretofore unsolved.
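As a minimal sketch of what such a numerical "solution" looks like, consider the following Python fragment. It is an illustrative toy (the inviscid Burgers equation with first-order upwind differencing), not an operational meteorological model:

    # Approximate u_t + u u_x = 0 (a simple nonlinear equation) by
    # first-order upwind finite differences on a periodic 1-D domain.
    import numpy as np

    nx, dx, dt = 200, 0.01, 0.004             # grid size, spacing, time step
    x = np.arange(nx) * dx
    u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)   # initial condition (u > 0 everywhere)

    for _ in range(100):
        upwind = np.roll(u, 1)                # left neighbor, periodic wrap
        u = u - dt * u * (u - upwind) / dx    # the finite approximation, not the "real" equation

    print(u.min(), u.max())  # the wave steepens nonlinearly toward a shock

No closed-form solution is used anywhere here; the computer simply marches a finite approximation forward in time, which is exactly why such results are neither purely observational nor purely theoretical.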

 

A hierarchy of scientific ideas

In a very stimulating essay, Murray (1991) has proposed a hierarchy of scientific ideas.

He also defines an ad hoc hypothesis as one made specifically to account for any discrepancy between the observed facts and the prediction of some other hypothesis. Generally, in his view, hypotheses are generalized statements that cannot be directly observed because they entail notions that are general, whereas we can observe only particular instances, not all instances. A hypothesis might be a statement like, "All tornadoes are formed by stretching of pre-existing vertical vorticity," but we cannot observe all tornadoes, so we cannot know the truth or falsity of such a statement. Murray goes on to consider the following approaches to science:

  1. How can I explain my data in terms of theory X?
  2. How can I best explain my data?
  3. What statements can I make from which I can deduce my data?

Number 1 is typical of a lot of science. It proceeds on the basis of some existing notion and often involves creating ad hoc hypotheses to explain the data in terms of an existing theory. This is the James Burke notion of science, discussed below.

Number 2 is associated with the creation of new theories, rather than making ad hoc hypotheses to make data fit within existing theories. It typically involves inductive reasoning - arguing from the particular examples at hand to create generalized notions, i.e., hypotheses. Murray uses Kepler as an example: Kepler worked from the orbit of Mars to develop his hypothesis about planetary orbits. He also argues that hypotheses based on inductive reasoning are hard to justify - we would somehow need to test every possible example to become convinced about some generalization based on particular examples. The hypotheses created by inductive reasoning can be used to make testable predictions, however, which is something of value in science. Murray comments further that hypotheses need not offer any explanation for things (Kepler's did not - it was up to Newton later to propose his laws of motion and a law of gravity to provide the explanation for Kepler's hypothesis).

Number 3 is basically a deductive method of reasoning - that is, from the general to the particular. It involves using one or more hypotheses and deducing from them predictions that are indeed testable by observation. Using theory in this way permits the explanation of observed facts in a way that's not done by the other two approaches to science. In some ultimate sense, we still have to accept that observations never allow us to prove (or disprove) theories, but we can test them in a rigorous way and to the extent that the hypotheses are consistent with the observations, we have a provisionally acceptable way to explain the natural world. No one has yet provided a proper explanation for gravity - it's not yet been deduced from some underlying theory in a completely convincing way - but nevertheless we can use gravity to explain many observations.

 

Communication of scientific ideas

An important component of science is the sharing of scientific ideas. If some scientist works alone in a closet and dies without ever sharing her ideas, then whatever she learned dies with her. Moreover, an important reason for communicating scientific results and ideas is to have them subjected to critical review. It's certainly the case that every scientist has "blind spots" - prejudices that influence her decisions about what experiment to do, what data to collect, how to interpret the results, and so on. Presumably, it's therefore essential to have her ideas reviewed by others doing similar work. It's felt by scientists that peer review is a way to limit the propagation of bad ideas as well as to encourage the propagation of good ideas. Given that much of science involves a body of rather esoteric knowledge that cannot be presumed to permeate the populace in general, who would be better equipped than other scientists working in the same subdiscipline to spot logical errors, problems with the experimental design and execution, and errors in the interpretation of the results?

Therefore, peer review forms the basis for formal scientific communication, an essential part of any science. Without it, science as we know it now would be simply a collection of individual opinions. There would be no shared body of knowledge and no way to evaluate the validity and worth of that haphazard collection of opinions. In principle, this means that there's a "marketplace of scientific ideas" in which different scientists participate, trying to "sell" their ideas based on their work. As I'll discuss later, in principle, no one has a monopoly in this marketplace. Anyone should be able to compete for prominence of their ideas, provided they've done work that's convincing to their peers.

 

Journals

Historically, the notion of scientific journals as the communication medium of choice for scientists has its origin in the British Royal Society (founded in 1660), which began its first printed scientific journal (Philosophical Transactions) in 1665, and it's still being published today. Numerous other scientific societies arose around the world, mostly patterned after the Royal Society, and these societies also began to publish journals containing scientific communications. Prior to this, scientists communicated among themselves via ordinary correspondence, primarily. Journals became the de facto medium for scientific communication, at least up to recent times. This is changing, in large part owing to the Internet.

Most journals accept some sort of comments about the contents of earlier issues. That is, if a scientist or group of scientists has a problem with a publication in that journal, they can write up their critical comments, and the author in turn has a chance to respond to any criticism of the work. This is a sort of "dialog" that can go on in the journals and all the subscribers have an opportunity to read this discussion and draw their own conclusions about the merits of the respective sides.

The published journals, therefore, naturally create and maintain an archive of their contents. This provides an official history of the science for scholars and allows future generations of scientists to read and benefit from the work that preceded theirs. Since the contents of each issue have survived the process of peer review (see the next section), this archive carries with it something of an imprimatur - a sort of "blessing" given to the contents by the scientific community. I'll be discussing this later.

 

The review process

I'm unaware of when peer review of scientific communications via the journals first began. In principle, peer review accomplishes a number of important and useful goals:

  1. Maintaining rigorous scientific standards for the contents of the journals
  2. Helping authors address flaws or oversights in their contributions
  3. Helping authors communicate their ideas effectively
  4. Facilitating communication among scientific peers

Generally speaking, peer review has traditionally been anonymous during my career. Again, I don't know when anonymity became the norm, but it generally has been considered an important part of seeking the honest opinion of the reviewers. Perhaps one reason for it was to protect the reviewer from the repercussions of a negative review. In any case, the author(s) of peer-reviewed manuscripts generally are not told from whom their reviews came. Journals sometimes offer referees the opportunity to surrender that anonymity, if they choose.

 

Principles of validation and confirmation

A lot of my understanding of this has come from Karl Popper, so this discussion will largely be based on his ideas. First of all, I've already established that "proof" of some conjecture isn't possible within the domain of science. Only in the abstract world of mathematics is "proof" possible. The way we judge the validity of a scientific hypothesis is by the rigor of the tests designed to refute it. In principle, no matter how many tests some conjecture has survived, it still might fail the next one. Logically, at least, one counter-example represents an important statement, whereas a vast number of failures to refute does not yet constitute "proof." Common examples are the Second Law of Thermodynamics and Newton's Laws of Motion: within their domains, they've never been shown to make an incorrect prediction of the outcome of an experiment - therefore, they're no longer called hypotheses, but are now called "Laws." The same is true of the Law of Gravity. A step below a "Law" is a "Theory," like the Theory of Relativity, which has also survived numerous tests but is not quite at the same level as the Law of Gravity, for instance. Note that the Theory of Relativity doesn't "refute" Newton's Laws of Motion - it simply describes some special circumstances (at speeds approaching the speed of light) in which Newton's Laws of Motion are an inadequate description of the dynamics.

In any case, then, scientists who understand the importance of rigorous tests designed not to validate, but to refute, some hypothesis will work very hard to find the most rigorous experiments they can devise. They even might seek suggestions from others as to how to make their experimental evidence more convincing than it already is. Clearly, the goal is to be convincing in the "marketplace" of scientific ideas, so experimental design and execution are critical elements of scientific work.

 

Objectivity as a standard

If the preceding arguments sound reasonable, they make an important statement about the "objectivity" of the scientist. We admire most those scientists who will discard cherished ideas in the face of convincing evidence to the contrary. Conversely, we have little respect for scientists who cling stubbornly to ideas that are demonstrably false in the face of overwhelming evidence. This means that scientists need to be as "objective" about their work as possible, in the sense that they must be willing to let the evidence speak for itself.

Another interpretation of "objective" is impartiality - an unbiased and unprejudiced look at the topic. This is something desirable, of course. Although scientists prefer to have their ideas validated by experiment, they should be (and, indeed, must be) prepared to accept that their results may not substantiate their pet idea.

At times, it's possible to "mold" or "shape" the evidence so as to slant the results in favor of one idea over another. This sort of behavior would be considered an ethical violation (see below) if it is done consciously, although it is quite possible to do this sort of thing without even being aware of it. If a scientist allowed his prejudices to influence the design and execution of his experiment, or the interpretation of his results, we would argue that he's not being "objective" about the experiment. If his communication of his work in the form of a publication allowed some other scientist to see and recognize that sort of flaw in the work, then she would be ethically bound to call this flaw to his attention.

Objectivity also is a standard that we impose on the methodology, to the maximum extent possible. In meteorology, this takes the form of favoring computerized analysis over subjective, human-drawn analysis and human judgment.

The meaning of the word "objective" in this context isn't the same as being "objective" about the evidence. In this case, "objective" in the context of methodology is a synonym for "reproducible." That is, a computerized analysis of a weather map will always produce an identical result from identical data. Human-drawn maps and data derived from human opinions are idiosyncratic, and even the same person would be unlikely to produce exactly identical results from analyses of the same data performed at two different times.
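To illustrate what "reproducible" means here, consider a toy distance-weighted objective analysis in Python - a Barnes-like single-pass scheme of my own construction for this essay, not any operational algorithm:

    # A 1-D "objective analysis": Gaussian distance-weighted averaging of
    # irregularly-spaced observations onto a regular grid.
    import numpy as np

    def objective_analysis(x_obs, f_obs, x_grid, length_scale=0.5):
        # weight of every observation at every grid point
        w = np.exp(-((x_grid[:, None] - x_obs[None, :]) / length_scale) ** 2)
        return (w * f_obs).sum(axis=1) / w.sum(axis=1)

    x_obs = np.array([0.1, 0.7, 1.1, 2.3, 3.0])        # station locations
    f_obs = np.array([15.0, 14.2, 13.8, 12.5, 11.9])   # e.g., temperatures
    x_grid = np.linspace(0.0, 3.0, 31)

    a1 = objective_analysis(x_obs, f_obs, x_grid)
    a2 = objective_analysis(x_obs, f_obs, x_grid)
    print(np.array_equal(a1, a2))  # True: identical data give identical analyses

Run on the same observations, the scheme returns bit-for-bit identical grids every time - which no human analyst, however skilled, can promise.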

It is also true, however, that computerized analysis schemes necessitate making some decisions about the methodology and its characteristics that usually cannot be "objectively" justified. In fact, considerable room for debate exists over the choice of "objective" methodology in most scientific experiments. One dictionary definition of "objective" is "mindless" and good science tries to be pretty far from "mindless" in its choices. It seems inappropriate to view a mindless approach as the most proper path to follow! The advantage to "objective" methods (in this sense of the term) is that they are clearly unbiased and impartial, which certainly is beneficial. However, it seems impossible to remove subjectivity from science, and what I see as an obsessive concern for the use of objective methods is not necessarily rational.

Another sense of the word "objectivity" in science is the notion that the work is being done coldly and dispassionately. Most scientists are pretty passionate about their work, even if they sometimes try desperately to hide that passion or pretend that they're not so passionate, simply because of their concern that being perceived as passionate can be interpreted as a lack of objectivity. Personally, I find this whole pose, this effort to appear cold and dispassionate, to be disingenuous.

 

Consensus as a standard

Since individual scientists tend to regard their own viewpoint as correct and to be suspicious of other viewpoints, it ought not be very surprising that science is often contentious. The vehemence of the disagreements can spill over into public confrontation, and certainly can be mean-spirited. After all, egos are at stake, and it takes at least some ego to consider one's own viewpoint to be paramount. It would be all too easy to characterize scientists as a whole as argumentative and egotistical to a fault. In spite of the sometimes acrimonious character of scientific debate, many nonscientists fail to appreciate that any reasonable argument must actually be based on some substantial common ground. If the principals in an argument can't agree on anything, then there's no basis for an argument! All that can happen is that they agree to disagree about virtually everything.

Therefore, it may come as something of a revelation to many that scientific consensus is the sole basis for scientific arguments. Most scientists accept certain principles and consequences of those basic principles as the starting point for their work. This large and pervasive consensus is the core of a science education curriculum - it takes years to be taught the fundamentals for a particular discipline and to know the acceptable rules for drawing deductions using those fundamentals. The "state of the art" in science is always on the margins of this dominant background consensus.

Is the consensus always right? No. It's axiomatic that nothing in science is sacred, even elements that are widely accepted as fundamental and basic to all that scientists do. Most scientists understand this principle, but they don't act as if everything needs to be repeatedly validated. They simply accept the consensus - what T.S. Kuhn called the dominant paradigm. There are various levels of consensus on different topics - many scientists bristle at some part of the consensus even as they accept the majority of its canons. Hence, on the whole, there's always an undercurrent of attack on the paradigms. Virtually all scientists accept another principle: if you take on something fundamental, you have to be prepared for a spirited response to your claims to have overthrown a paradigm. The saying goes (I know Carl Sagan said it in "Cosmos"), "If you make extraordinary claims, you have to be able to provide extraordinarily convincing evidence!" As already noted herein, someone with a talent for designing revealing experiments can go far in science. Some parts of the consensus understanding are always vulnerable, perhaps owing to their never having been tested sufficiently rigorously. Others may have seemed so obvious that they have hardly been tested at all, so there are always many targets for a young scientist seeking to make a contribution.

As T.S. Kuhn points out, sometimes, the old shibboleths die hard, and a "young turk" who takes on the establishment may have to wait until they start to die off and retire to have her day. However, the profession also holds those who win such arguments in high esteem. The scientist whose ideas overthrow some major part of the consensus often is venerated and hailed as a major contributor.

Taking on some small part of the consensus and having it validated is a lesser accomplishment than challenging a cornerstone of scientific understanding and successfully replacing it with one's own new concept. But even minor contributions are useful and can help to achieve what scientists usually crave the most: the respect of their peers.

It should also be observed that on the fringes of any science is an army of "crackpots" - many of these folks are intelligent and clever, but one thing that characterizes the difference between a crackpot and the establishment is that crackpots are generally contemptuous of the consensus. They do not accept the principle of extraordinary evidence to back up extraordinary claims. It's when people agree to the basic principles of science and the standards of scientific evidence that they have a basis for adjudicating a disagreement. Crackpots also disagree, but they almost always have no concept of what constitutes a convincing argument and are generally unwilling even to make an attempt to be convincing. They believe the establishment simply should bend to their will and reject the consensus on their word, not their evidence. In some celebrated examples, scientific "outsiders" have been vindicated, but the vast majority of them are nothing more than crackpots. It's a myth that an outsider always brings in a fresh perspective that allows her to see what the establishment has missed - instead, what the outsider usually brings is a deep, pervasive ignorance and a stubborn refusal to accept reasonable challenges for a convincing demonstration of her concept. The occasional exception to this rule gets far more attention than is ever deserved. Perhaps it's attributable to the widespread sympathy for the underdog? Be that as it may, it's the acceptance of the consensus that permits a scientist to be part of the game and not considered a crackpot, or worse.

 

James Burke on consensus science

In his companion book to the PBS series "The Day the Universe Changed," Burke describes consensus science as "the structure" and says:

All observation of the external world is ... theory-laden. The world would be chaos if this were not so.

In the general structure of nature, ... boundaries are indicated within which investigation of nature may be conducted. Research beyond those boundaries will be defined as useless, unnecessary, or counter productive. ...

Within those boundaries the structure also dictates what research is to be considered socially or philosophically desirable. ...

When evidence has been accepted or rejected and the existence of a phenomenon established, the structure again dictates the next step. It provides the means for examining the phenomenon and a guide to expected data. Any data presented in this way will be acceptable since the instruments used will have been designed to find only those data which, according to the structure, are needed for confirmation. Any data considered to be extraneous to the events will be disregarded. ...

the instrument is constructed to find only one kind of data. The meaning of the data revealed by measurements or observation of the phenomenon is already inferred by everything which has gone before. ...

the theory-laden prediction of what the data will show is so strong that absence of the expected results casts doubt not on the theoretical structure but on the observational technique itself. ...

At every level of its operation, ... the structure controls observation and investigation. Each stage of research is carried out in response to a prediction based on a hypothesis about what the result will be. Failure to obtain that result is usually dismissed as experiment failure. Every attempt is made to accommodate anomalies by a minor adjustment to the mechanism of the structure ... . In this way, the structure remains essentially intact, as it must do if there is to be continuity and balance in the investigation of nature.

Science, therefore, ... is not what it appears to be. It is not objective and impartial, since every observation it makes of nature is impregnated with theory.

If there is no privileged source of truth, all structures are equally worth assessment and equally worth toleration.

These are serious claims and deserve some attention. First of all, I don't disagree with the basic idea that scientific investigation is "theory-laden". But as Henri Poincare once said, and I'm paraphrasing, "Science is no more a collection of facts than a pile of stones is a house." Carl Sagan re-emphasized this point during his "Cosmos" series - science does not proceed by going out and collecting facts without regard to their apparent pertinence to the hypothesis in question. If I want to understand the relationship between temperature and pressure, why should I go out and seek observations of something else? Burke seems to be criticizing science for not collecting every conceivable form of data for any experimental validation exercise. Apart from the practical consideration of the volume of data associated with the experiments, it seems absurd to do so in the first place. Yes, it is conceivable that something important might be overlooked, but the frequency of such oversights would be minuscule if the experiment is indeed crafted with the goal being to test some hypothesis.

I also don't disagree with the notion that consensus science permeates everything we do within the profession. It does indeed define what are acceptable practices for all scientific activities within any field or sub-field. However, it is also clear that if all scientists do is confirm the consensus, their activities are generally considered to be of little value, and their careers may go badly. Burke seems to be missing the very important point that most of science is directed at changing the existing structure in ways that range from "minor adjustments" to "major paradigm changes". Confirmation of existing theory is afforded little respect - in fact, I argue below that, in my opinion, confirmation experiments are not usually given their due respect and may, in fact, be rejected out of hand for publication because they offer "nothing new."

Moreover, science is not done by "the structure" but by individual human beings, who are not generally bound by any restrictions on what they do or how they do it. Yes, the consensus does define the acceptability of many aspects of scientific work, but individuals who participate in science are typically not particularly dedicated to perpetuating the status quo, because (a) they are truly seeking to understand and are likely to perceive gaps and flaws in the existing structure as opportunities to learn and understand things better than they do at the moment, and (b) their careers will depend on their contributions to helping other scientists understand. Those who make significant changes to our collective understanding are given the highest honors, in general. We don't vilify those who change things - we exalt their contributions in proportion to the changes in the consensus they have wrought. Most scientific contributions challenge the consensus to a greater or lesser degree. The greater the degree of challenge, the more convincing the evidence is expected to be, but if that evidence carries the day, then the greater the contribution is considered to be.

Science does indeed resist change to the consensus, but the degree of resistance to change is proportional to the level of the material. Basic ideas of space and time underlie Newtonian physics and those resisted change until Einstein came along. Einstein did not "overthrow" Newtonian science, but he revised our notions of space and time in such a way that his theory has permitted us to make successful predictions in situations that Newton never conceived of and which are alien to ordinary human experience. Within that "ordinary" world, Newtonian physics continues to reign supreme and its applications are continuing to this very day on a routine basis with great success. Einstein's ideas were resisted, to be sure, but the reason Einstein is so famous is that he indeed altered our view of the universe in a fundamental way, and his theories were constructed in such a way that they could be tested - those tests have been uniformly consistent with Einstein's theories. Had they not been testable, however interesting they might be, Einstein's concepts of space and time wouldn't have become legends within science. Rather, they wouldn't be considered science at all. Although Einstein himself didn't participate in the tests, had they not been done, Einstein's ideas would've been marginalized. Moreover, even Einstein's ideas are not sacred to science. Others have attempted to formulate modifications to Einstein's theories - to date, none have yet supplanted Einstein but that by no means has caused such efforts to cease.

The notion that if nothing in science is sacred, then all structures should be treated as equal, is fundamentally opposed to the values of science. Science has been successful ever since its methods were developed during the Renaissance precisely because it works. By following its accepted practices, heretofore unexplained phenomena are explained. Previously unsolvable problems become solvable. Predictions about the natural world and how it works are confirmed, in some cases with extraordinary accuracy and precision. This isn't something that's on an equal footing with religion, or mythology, or any other proposed structure that hasn't been subjected to the strenuous testing within consensus science. I agree that science isn't objective or impartial, but when its accepted practices are followed, it works to a degree that no other structure does. Thus, I must dispute the contention that other structures are on an equal footing with science. Within science, hypotheses (ideas) are constantly competing, and all should be afforded equal respect until they've been subjected to the necessary rigorous testing. Among competing ideas, a satisfactory outcome depends on the rigor of the test between different ideas. The survivor becomes a part of the existing consensus (perhaps supplanting previous ideas) while those not proving up to the test are discarded. As the story of Wegener's continental drift hypothesis shows, however, discarded hypotheses aren't inevitably wrong or even forgotten. Scientific questions are never settled in absolute terms, and consensus is not settled upon by majority vote, no matter what percentage of scientists constitutes the consensus. It can be said that Wegener's idea wasn't given as thorough a test as it deserved before it was rejected by scientists of the time, but extraordinary ideas require extraordinary evidence. It wasn't until many decades later that such extraordinary evidence became available, and Wegener's notion of continental drift was quickly resurrected and shown to be simply ahead of its time regarding the confirmatory evidence (and the existence of an explanatory mechanism).

 

Accuracy of prediction as a standard

As I've been saying, experiment is at the heart of the scientific process. Designing a key experiment properly is crucial to making a convincing argument - it's keyed to an understanding of deduction. The logical approach is to consider what's implied about the observables by a conjecture that purports to explain some process in the natural world. If the conjecture is associated with a mathematical analysis, it's often possible to make quantitative statements about the outcome of a proposed experiment or observation. Not all hypotheses can be cast as quantitative statements, however. It may be that the hypothesis makes only a qualitative statement, but even so, an experiment could vindicate a hypothesis that seeks to replace an established idea.

For some examples from the history of science, consider the work by Johannes Kepler showing that the orbits of planets are elliptical instead of circular. In this case, the theory made both a quantitative and a qualitative statement. Data collected by Tycho Brahe and analyzed by Kepler were clearly more consistent with an elliptical orbit than a circular one. Einstein's Relativity Theory makes a number of quantitative predictions and they have been tested by some extraordinarily clever experiments, with Relativity Theory being validated in every case to date. The success of baroclinic instability theory in meteorology is not so much quantitative as qualitative - its predictions are consistent with what is observed in a general sense, although its quantitative aspects are far from perfect.

This brings up a primary difficulty with experiments, however. It's not generally expected that results of an experiment would be perfectly in accordance with the predictions of some hypothesis. Such things as measurement errors and processes that are unaccounted for by the hypothesis (sometimes referred to as "background noise") inevitably crop up in any real-world experiment. Therefore, it is never expected that the data fit one concept perfectly and the competing model less than perfectly. Perfect correspondence between theory and observation would, in fact, cause some suspicion of having "cooked" the data to match the theory. So how can we be sure of the interpretation of an experiment? This is where statistics comes in.

In science, as Robert Hooke has stated so succinctly, we do not have the choice of whether or not to use statistical analysis of our data. Rather, our only choice is to use statistics properly or improperly. Contrary to popular opinion, it is not possible to lie with statistics, when done properly. Instead, it is possible to cloud the issue with a lot of statistical jargon and thereby disguise the fact that the statistical analysis was flawed. For us scientists, statistics comprises a set of rules for deciding how much confidence we can place in our data and in the results of the analysis of our data. In spite of the fact that statistics is not the most popular subject among scientists, we have no choice but to abide by its rules. Therefore, given the reality of imperfect data, scientists use statistics to decide whether the outcome of an experiment is meaningful or is simply a by-product of random chance attributable to the inevitable errors associated with real data. Statistics does not offer absolute proof, and it may be difficult to ascertain the effects of experimental biases that can creep in, but it does suggest whether or not the results of an experiment meet established criteria. In effect, statistics makes a probabilistic statement about the significance of an experiment - if the assumptions used in the statistical analysis are valid, this probabilistic statement can be marginal or it can be overwhelming evidence in support of (or contrary to) the hypothesis. Of course, it's possible to challenge the assumptions made in the statistical analysis, and so even apparently overwhelming statistical evidence may not always be entirely convincing.
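As a concrete (and entirely hypothetical) illustration of this kind of reasoning, here is a simple one-sided permutation test in Python - just one of many possible statistical tests, and not tied to any particular experiment discussed here:

    # Is the difference in means between two samples larger than random
    # chance alone would plausibly produce?
    import numpy as np

    rng = np.random.default_rng(0)
    control = np.array([2.1, 1.8, 2.4, 2.0, 1.9, 2.2])  # made-up measurements
    treated = np.array([2.6, 2.9, 2.4, 2.8, 2.7, 2.5])
    observed = treated.mean() - control.mean()

    pooled = np.concatenate([control, treated])
    count, n_trials = 0, 10000
    for _ in range(n_trials):
        rng.shuffle(pooled)  # relabel the measurements at random
        diff = pooled[len(control):].mean() - pooled[:len(control)].mean()
        if diff >= observed:
            count += 1

    p_value = count / n_trials  # fraction of shuffles at least as extreme
    print(p_value)              # small p: chance alone is an unlikely explanation

The resulting p-value is exactly the sort of probabilistic statement described above: given the assumptions of the test, it quantifies how often chance alone would produce a difference at least as large as the one observed.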

For many physical sciences (like meteorology), where "experiments" under controlled laboratory-like conditions are not possible, the data collected in an experiment can be challenged on a number of grounds: not a large enough sample, not a representative sample (in other words, a biased sample), an observing instrument of too low a precision or accuracy to be useful, and so on. Thus, it's often desirable that experiments be done repeatedly, to confirm the results of others. I'll return to this later. For now, it suffices to say that in order to compare experiments, there must be considerable effort to ensure that the experiments are in fact comparable. The variables measured should be the same, the instruments used to do the measurements should be the same, any analytical tools used to manipulate the data should be the same, data quality control methods should be the same, case selection criteria should be the same, and so forth. Any departure from experimental protocols from one experiment to another raises important questions about the validity of the comparison.

Generally speaking, a concept that makes accurate predictions about the outcome of an experiment and that is validated in a number of independent experiments usually becomes part of the accepted consensus among practitioners in that field. When experimental results are open to a variety of interpretations, the experiment was poorly designed. When data and the analysis of the data are subject to various kinds of challenges, then the hypothesis remains subject to controversy and efforts should be made to refine the experiments to achieve truly convincing results.

An experiment that fails to produce convincing results is not entirely without value, however. Sometimes such an outcome indicates to the scientists that they may have been asking the wrong questions. That is, the hypotheses were so flawed that the experiment wasn't even close to obtaining the needed information. It also may indicate that the experiment needs to be redesigned, if possible. Sometimes it points to the need for new measurement tools, or the refinement of existing tools. Learning from experiments - even "failed" ones - is an important part of the process. If important questions remain unresolved, we continue to debate the existing hypotheses. The VORTEX field campaign of 1994/1995 is an example of an experiment that was trying to answer the wrong questions, but I didn't see the whole project as a failure then, and I still don't see it that way. VORTEX put us on a new track, and I think that the new track has been productive in refining our ideas about tornadogenesis.

 

The reality of science

So far, I've been talking mostly about the principles of the "scientific method" as I understand them. As I've already noted, virtually all scientists agree that there's not some simple recipe identifiable as the "scientific method" - rather, there's this consensus about the basic principles we use and how we go about obtaining truly convincing evidence of the failure of an existing hypothesis and its replacement by a new hypothesis that's more consistent with the data we collect. Now, however, I want to talk about some realities in the scientific process, some of which aren't pleasant realities, because they contrast so starkly with the ideals I've presented.

 

Communication realities

Journals in the modern world

Is this how scientists actually communicate?

I've described the scientific journals as a medium of communication between scientists. When journals first began to be published in the 17th century, communication between individual scientists was so slow that journals were just about as fast as any other means of communication. I don't know how items were selected for publication, by the way, but it almost certainly was not the method we now use. In any event, in the 21st century, with the Internet and the World-Wide Web, communication of ideas is vastly more rapid. Given that scientists are often at the forefront in using new technology, it's not surprising that journals are now far too slow to represent an effective form of communication.

The one thing that refereed journals have to offer in today's world is that the papers have been reviewed by other scientists before they were accepted for publication. This permits the assumption that at least some minimal scientific standards have been followed and that the results can be trusted to a greater extent than unrefereed materials.

 

Archive vs. communication

Another emerging role for journals, however, is that they become the archive that documents the evolution of the scientific field associated with the journal. We can trace the development of scientific ideas through this historical record embodied within the journal. Since "journal" and "diary" are related concepts, the journals can be thought of as a kind of diary of the science.

This can be quite useful, but it is also the case that many scientists are reluctant to do their "homework" - an idea that is more than 10-20 years old may be effectively lost in the dank, dusty stacks of the libraries. This means that old ideas are reinvented regularly, and the work of earlier scientists may go unrecognized and unacknowledged. This laziness defeats the whole purpose of having an archive in the first place. If science can't be built on what has already been accomplished, those tasks need to be done over and over again, unnecessarily. The lives and accomplishments of our forebears in science are forgotten and lost, instead of being used as stepping-stones to new understanding. Credit is not always given to the scientists to whom it is due.

Sadly, I've seen a growth in the default notion that science done 30 years or so in the past is silly and quaint, and the scientists who did it are regarded as mostly not very smart. The late Stephen Jay Gould had much to say about this "temporal myopia" that's so widespread in today's fast-paced world. But I digress ...

 

Is all this truth?

Although it's the case that everyone accepts the principle that nothing in science is sacred, it often seems as if many scientists feel that if something appears in a refereed journal, it must be valid. This is simply not the case. The review process is essentially a crapshoot. Sometimes valid ideas are rejected because of personal animosities that arise between an author and a reviewer. Sometimes bad ideas sail through the review process because the referees were lazy or may not have been qualified to recognize the flaws in the manuscript they received. Appearance of a paper in a journal is far from evidence of its actual validity. And rejection of a paper is not unambiguous evidence for its lack of substance.

There can be many reasons for the outcome of a review being "incorrect." In the past, peer review was nowhere near as rigorous as it has become. Many classic papers that we still cite with regularity would be difficult to publish by today's standards. Does this mean that science is being done better all the time? Were the scientists of bygone days a bunch of uncontrolled speculators, whereas scientists of today are held to rigorous standards of evidence? When I peruse today's journals, in spite of the apparently more exacting standards of review, I see an increase in the fraction of outright garbage, and a corresponding decrease in the number of papers I imagine are destined to be seen as "classics" with the passage of time.

How do I explain this paradox? Well, I'll get to that shortly. However, I believe it's important to understand two things.

  1. The refereed literature is probably more reliable than most of the unrefereed material one can find in many places (like the World-Wide Web!)
  2. Not everything you read in the refereed literature is actually valid

 

Baggage tied to publication

An important factor in what I see as the slow degradation in the quality of papers in the refereed journals, in spite of what might superficially appear to be an increase in the standards of reviewing, is the baggage that has been loaded onto the refereed literature. That is, the journals are no longer just a medium for communication among scientists. They have become closely tied to other things, outside of scientific communication.

To begin, the publication of papers in refereed journals has come to be a widely-accepted measure of scientific productivity. While it certainly is true that a productive scientist should be publishing regularly in such journals, if for no other reason than the documentation of her work, this is pretty far from the original purpose of the journals. Administrators use the number of publications, as well as the number of citations to articles authored by an individual within other papers, as a measure of that scientist's productivity. This is full of perils, and is widely referred to as "publish or perish" by those who must accept this standard. No one denies that publication of articles in refereed journals is one part of a scientist's value to the community, but counting the number of papers or the number of citations is clearly a seriously-flawed exercise. The pressure to publish has direct consequences on the number of articles in those journals, and sins of various sorts are committed by authors to pad their publication lists: breaking one piece of work into several pieces, publishing nearly the same paper repeatedly in different journals, etc.

Moreover, the number of publications has a very direct impact on the success of those seeking to get research funded. Again, publication isn't irrelevant to the decision-making of funding agencies, but it surely isn't the only factor, or even the primary one, that should be weighed when a research proposal is evaluated. I'll have more to say on the funding of science, later.

The competition for funding and the scientific prestige associated with publication also lead to an egregious abuse. People who publish are often asked to review the manuscripts submitted by their peers for publication. This is as it should be - if you're publishing, you should bear your share of the burden for reviewing the publications of others. However, many scientists see the publication of ideas that differ from theirs as "bad science" rather than as a scientific difference of opinion. Reviewers can have a vested interest in delaying or even preventing the publication of papers expressing views that challenge those of the reviewer. Editors often seek reviews by scientists whose ideas have been challenged in a particular submitted manuscript, but seem oblivious to the obvious, direct conflict of interest this can create. The reluctance of the "establishment" to have their sacred cows gored can lead to a suppression of alternative hypotheses and considerable ill will among the scientists. In addition to the ego issues, the publication of papers is connected to getting research funded, so livelihoods can be threatened, as well as egos.

The result is that publication is no longer simply a way for scientists to exchange ideas and to have scientific discussions (and arguments). Instead, publication is viewed as the ticket to success. And blocking a competitor's success is one way to aid your chance for success.

 

The review process

Most journals maintain the anonymity of the referees who review the manuscripts submitted to the journal. This is done ostensibly to protect the referees from reprisals by the authors, and is intended to encourage the reviewers to say what they think without any concern for having that come back to threaten their careers. In practice, reviewers often hide behind the cloak of anonymity to make absurd claims and censor an author's work without having to substantiate their criticism. Reviewers may not give an author much to go on, but merely make vague statements that attack the paper without even beginning to offer much help to the author in trying to address the reviewer's criticisms.

As I've already noted, many reviewers are not engaging in a dialog to improve the paper but instead are seeking to delay or prevent publication of the paper. In many cases, an author may recognize the reviewer by the nature of her remarks, but of course we're required to maintain the fiction of anonymity.

Some believe in the so-called "double blind" review process, where the author(s) is (are) also supposed to be anonymous. In large fields, where the practitioners are numerous, this might work. In meteorology, the subdisciplines are so small that most reviewers will be able to recognize the authors through a number of factors: whom they reference, their writing style, the nature of the data they use and where (and how) those data were collected, the point of view expressed in the paper, and so on. Although you can never be entirely certain who wrote the paper and the Editors are not supposed to confirm or deny your guesses, it's still relatively easy for an experienced scientist to recognize the work of their colleagues. Although I support the principle of double-blind reviewing, I don't believe it's very effective, at least in the subdisciplinary circles I inhabit.

As I've noted, reviewers often seem to be more eager to reject than to accept a paper. This is a relatively recent phenomenon and seems to be tied to the apparent increase in the "rigor" of the reviews. Also in line with my earlier comments, it's not at all clear to me that this has improved the journals. Far from it - instead, I believe that the quality of journal contents has generally declined, in no small part owing to "publish or perish" and the loading of additional baggage onto the publication process.

It seems that the review process is less than perfect. On its behalf, though, I should say that there are reviewers who take their responsibilities very seriously and do not abuse the process. I feel that most, if not all, of my formal publications have in fact been improved by the review process. I know of no alternative that I could propose to replace peer review. It would be a disservice to my peers to damn the process, and it wouldn't be accurate to assert that everything being published today is garbage in comparison to what was published decades ago. I'm simply saying that the signal-to-noise ratio is declining, in my opinion. To the extent that the review process is flawed in ways we can repair, I hope we can begin the self-examination and fix those problems.

 

The decline in "Comments"

An accompanying phenomenon has been a decline in the frequency of critical "Comments" in journals (see Errico 2000). In principle, the publication of critical comments on an earlier publication offers scientists an opportunity to communicate their concerns about the work of other scientists. The chance to correct misconceptions or to reveal inadequate or even deceptive analysis in the journals has existed virtually from the beginning of scientific publications, but this is being used less and less with time. Why?

It's my belief that a lot of work that is presently being published needs to be critiqued, but most scientists are declining to take advantage of the opportunity the journals are set up to provide. If so, there must be a reason, and I believe it comes down to self-interest. If you take on someone's work, there's a good chance that they could be reviewers of your future publications or, even worse, of your funding proposals. By being critical, you open yourself to reprisals that could block publication of your own ideas or even of your funding. It is all too easy to let fear of reprisals - a real fear and not just paranoia, in my opinion - prevent a scientist from speaking her mind. Thus, a lot of outright crap is getting published without challenge. It amounts to the old "shoot the whistle-blower" mentality that seems to be so common among us humans. We seem to value loyalty from within more than we value the truth.

 

The media and the public

Polar opposites vs. consensus

The confrontations that take place on the surface - that is, within the journals - are small compared to those going on under the surface. There's conflict between the authors and the reviewers during the review process. There are the arguments at times literally carried on in the hallways of buildings and at scientific conferences. There's considerable Internet traffic going on these days where ideas get debated. Basically, scientists are passionate about their ideas and don't always regard criticism as beneficial. In fact, some criticism is mean-spirited, selfish, self-serving, and narrow-minded. After all, scientists are human, with all that being human entails. We're certainly not saints, nor are we able successfully to be impartial about our own work.

Viewed from the outside, scientists seem continually to be arguing and fighting. This can be seized upon by outsiders, notably the media, in their efforts to "interpret" scientific issues for the general public. Often, it seems the media are obsessed with presenting a "balanced" view of things. If some scientific issue catches their attention, their obsessive concern for a "balanced" presentation leads them to find two polar-opposite viewpoints. Given the diversity of people who practice science, this isn't necessarily difficult to do. The public sees an acrimonious confrontation that might descend into a shouting match, while many of us actually involved in the science at issue are cringing in despair at this portrayal of the situation. Both sides may be equally at odds with the consensus, or the consensus may be heavily weighted toward one side while the other is only a "fringe" element within the field. The media often do the field a great disservice by focusing on polarized, opposing viewpoints instead of understanding and reporting on the consensus, as well as the debate itself.

It must also be said that science is not ultimately decided by majority "votes" in some scientific polling process. Rather, scientific controversies are ultimately decided by the evidence. It may be that a debate appears to be settled in favor of one side, as happened initially when Wegener proposed his continental drift hypothesis. The consensus was that this was an impossible notion, and since Wegener's evidence was not compelling, but only "circumstantial" in the absence of a mechanism to drive the movement of whole continents, he apparently "lost" the argument. Roughly half a century later, new technology provided not only compelling evidence for continental drift, but also evidence for the mechanism: sea-floor spreading from the mid-ocean ridges, driven by volcanism. Wegener's "failure" was a product of his idea being truly ahead of its time. When the time for his idea did come, the consensus rapidly swung in favor of continental drift.

The point of this example is to suggest that the consensus isn't always right, but when the evidence becomes overwhelming, the consensus eventually changes. Presenting scientific ideas as polar opposites is misleading and does the public a huge disservice. Offering only opposing viewpoints for the illusion of "balance," without presenting the consensus, is not good reporting; it's superficial and stupid.

 

Vilification of "popularizers"

It bothers me that scientists who have a knack for presenting science to the general public are often vilified by their peers: Carl Sagan, Jacques Cousteau, and others who make the effort to help the public know more about real science and how it works are often criticized by their peers for being "publicity seekers" or whatever. They should be thanking the popularizers, who are not only serving to inspire another generation of scientists, but are helping the public understand what they're paying scientists to do. It seems to me that whatever we might feel about them personally is irrelevant compared to the value of the service they render to all scientists.

Please note that I'm distinguishing science from technology - new technology is both a driver of new science and the application of old science, but it isn't science, per se. I also am not fond of non-scientist reporters of science, no matter how favorably they might present their subject matter. Since they don't understand the science, they simply cannot convey an accurate understanding of the material or what it's like to be a scientist. Reporters often have their own agenda that differs from the priorities of an actual scientist, and that agenda is what gets told, not the science. Many popular "science" materials (magazines, TV "documentaries" and such) purport to be about science, but since they're not written and produced by scientists, the material is mostly superficial fluff or seeking simply to sell advertising "eyeballs". I think the public wants quality science presentations, but science "reporters" are deceiving the public with shallow sensationalist crap, not good science. I guess most scientists aren't charismatic or "pretty" enough to sell their ideas in the popular media.

Carl Sagan and Jacques Cousteau were exceptions to this, however. I loved what Carl Sagan was able to do for astronomy. He was able to tell its history, to convey the wonder of being a scientist, and to explain where his science was at the time of his marvelous Cosmos series on PBS. Yet, I'm told that his very popularity made him many enemies among his peers. What a shame that they couldn't overcome their petty jealousies and see the value of what he was able to accomplish for astronomy as a whole. If he indulged his own personal agenda at times, why be upset with that? It seemed to me that Carl Sagan, the scientist, was able successfully to convey something of the scientist as a human being in this way. Why begrudge him that? We need more such people, and we shouldn't crucify them; rather, we should applaud them and wish them even greater success.

 

What's not in scientific papers

I can recall reading my first scientific papers and thinking "Wow! This is brilliant work. I could never do something like this!" What I didn't realize at the time was that there was a lot that was not contained in the final, published version. Malformed ideas that didn't pan out, blind alleys followed to dead ends, poorly-designed experiments, misconceptions leading to erroneous conclusions, outright errors, and so on are part and parcel of being a scientist. But we never record that part of the process. We write our papers as if we had a brilliant idea, we designed a convincing experiment, carried it out, validated our idea, and wrote up the results to share with the community. But science never works in that way! No one is as good as their publications make them out to be. Only after the work is done and the paper has been reviewed by our peers (often anonymously, and virtually always for no reward other than the satisfaction of improving someone else's work!), does the work see the "light" through publication. Readers, and especially students, shouldn't be overwhelmingly impressed by what they read. If you're passionately interested in your field, you can do work every bit as good as the best of your predecessors.

At times, it's frustrating to me that failures aren't generally written up and shared. It often would do science considerable good to know about blind alleys and ideas that didn't pan out. If nothing else, we could avoid following some of those blind alleys ourselves. And, of course, someone later on might be able to see how to make an idea work where we didn't succeed. Of course, there must be some limit to the documentation of failure - after all, the journals are already filled with a lot of apparently "successful" junk as it is! Perhaps there's no way to justify publishing failure - except to point out that at some very basic level, the pioneering work of Lewis Fry Richardson (as discussed in Lynch 1992) in numerical weather prediction was a failure! By today's standards, he wouldn't be allowed to publish his seminal work. What failures might we be benefiting from that we aren't allowing to be published?

Another thing we don't see much of in today's publications is confirmation experiments. It seems that every publication must present new findings; it apparently isn't enough to confirm old findings. Frankly, this sort of thing bothers me - I've written about it earlier, here. I think we should be more forgiving of those who repeat old work simply to ensure that the results of the previous work are indeed reproducible, at least up to some reasonable point. Confirming reproducibility is a valid point to make about an earlier work. The trick, of course, is to know when enough confirmation has been done. But I think that can be solved, if we can simply accept that independent confirmation of already-published results by someone else is a potentially publishable result.

 

Philosophy of science

Natural philosophy

In its early days, science was called "natural philosophy" - perhaps in recognition of its mostly abstract character. I don't want to go off on a rant about philosophy, but I find myself unimpressed with most purely philosophical books. A lot of that literature is focused on unanswerable questions and unsubstantiated gibberish. "How do we know things?" "What is reality?" And so on. There are branches of philosophy, however, that seem quite pertinent; namely, logic and "natural philosophy" (science). As a scientist, I lean toward the notion of verification and validation of ideas. Purely abstract thoughts that can't be verified strike me as sophomoric and largely a waste of my time. Perhaps that is an Achilles' heel. However, those who know me realize that I do worry about the philosophy of science - in particular, the question associated with the next section.

 

How do we know it's right?

When we do science, if we agree that there's no magic formula for "the scientific method," the question arises: just how do we know when we're doing things properly? In mathematics, there's the notion of axiom systems and the mathematical logic associated with a particular axiom system. If we follow the rules for symbol manipulation in mathematics, then we can be sure that any results are correct. Proof is possible in mathematics, and I've already suggested that it isn't possible in science. Lacking a purely logical proof, we're left with the concepts of Karl Popper. An experiment can invalidate a hypothesis, but it can never prove that hypothesis. A single counter-example is sufficient to invalidate a hypothesis, at least in principle. In practice, it's not so easy - the counter-example might have been flawed in some way or another, or based on too small a sample, or whatever. Given the possibility of experimental design flaws and measurement errors of various sorts, even invalidation by a single counter-example is not assured.

At the moment, if we're concerned with real observations of real phenomena, then we're forced to accept the logic of statistical hypothesis testing. From a given set of data, it's possible to estimate the chances that the observed results are simply random variations, as opposed to a systematic relationship. Experimental data point us toward one conclusion or its refutation, or the data indicate that it isn't yet possible to distinguish one outcome from the other. There are no other logical possibilities. If we frame a hypothesis for testing, usually stated so that the proposed effect is absent (hence, it is usually called the null hypothesis), a careful statistical analysis of experimental data must give one of three results:

  1. The null hypothesis is not rejected (i.e., the data remain consistent with it), to within some confidence limits
  2. The null hypothesis is rejected, to within some confidence limits, or
  3. The data do not permit a statement to be made concerning the null hypothesis that is within some specified confidence level.

Rejection of the null hypothesis is not logically equivalent to "proving" that the research hypothesis is true - rather, it simply says that the observed data would be improbable if the null hypothesis were true, at some specified confidence level. To some, this might seem like hair-splitting, but some thought will suggest that this is the safest sort of strong statement a scientist can make.
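For illustration only, here is a sketch of this three-way logic in Python. The "negligible difference" margin delta is an assumption I've introduced so that the third outcome can be expressed at all; it is not part of the essay's argument:

```python
# A hedged sketch of the three possible outcomes, phrased in terms of a
# confidence interval for the difference of two sample means. The
# "negligible" margin delta is a hypothetical threshold chosen for
# illustration.
import numpy as np
from scipy import stats

def welch_ci(a, b, confidence=0.95):
    """Welch confidence interval for the difference of means (a - b)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    se = np.sqrt(va + vb)
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    t_crit = stats.t.ppf(0.5 + confidence / 2.0, df)
    return diff - t_crit * se, diff + t_crit * se

def three_way(a, b, delta=1.0, confidence=0.95):
    lo, hi = welch_ci(a, b, confidence)
    if lo > 0.0 or hi < 0.0:
        return "2: null hypothesis rejected at this confidence level"
    if -delta < lo and hi < delta:
        return "1: data consistent with the null hypothesis"
    return "3: data do not permit a statement at this confidence level"

# Which outcome you get depends entirely on the (synthetic) draw:
rng = np.random.default_rng(1)
print(three_way(rng.normal(11.5, 4.0, 50), rng.normal(10.0, 4.0, 50)))
```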

As Karl Popper has suggested, the key to a successful experiment is the rigor with which the experiment (and the analysis of the experimental data) is capable of invalidating a hypothesis. Too many scientists think about how to validate their hypotheses, instead of focusing their creativity on how to invalidate their ideas! The key is to imagine the most compelling experiment conceivable that would be capable of refuting some hypothesis. If you have difficulty imagining such a key experiment, then it's plausible to allow other scientists to have a crack at a better experimental design. You want to design tests such that you don't have to collect very much data to get a strong result, and you hope to avoid collecting data that require extraordinarily high precision. Cloud seeding to enhance rainfall from convective clouds is difficult to confirm experimentally owing to the large natural variability in convective precipitation. In order to get a strong result in the face of natural variability, either the signal from the seeding must be relatively large (which appears not to be the case), or you must collect a lot of data in carefully-conducted "double-blind" experiments to detect that signal with reasonably high confidence. Failing to do controlled experiments means that even more data must be collected to distinguish a faint signal from natural variation. Cloud seeding experiments have come back with inconclusive results that are especially unconvincing to those without a vested interest in the outcome.
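A back-of-envelope calculation shows why the faint-signal case is so demanding. This is a sketch using the standard normal-approximation sample-size formula; the signal and variability numbers are hypothetical, not actual cloud-seeding statistics:

```python
# Approximate number of cases needed per group for a two-sample
# comparison, from the usual normal-approximation formula:
#   n ~= 2 * ((z_alpha/2 + z_power) / (signal / sd))^2
# All numbers are hypothetical illustrations.
from scipy import stats

def cases_per_group(signal, natural_sd, alpha=0.05, power=0.80):
    z_a = stats.norm.ppf(1.0 - alpha / 2.0)  # two-sided significance
    z_b = stats.norm.ppf(power)              # desired power
    effect = signal / natural_sd             # signal-to-noise ratio
    return 2.0 * ((z_a + z_b) / effect) ** 2

# A signal one-fifth the size of the natural variability:
print(round(cases_per_group(signal=1.0, natural_sd=5.0)))  # about 392 per group
# Halve the signal and the required sample roughly quadruples:
print(round(cases_per_group(signal=0.5, natural_sd=5.0)))  # about 1570 per group
```

The quadratic growth of the required sample as the signal shrinks is the quantitative face of the qualitative point above.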

Recall the notion that extraordinary claims that differ from the scientific consensus need extraordinary evidence. The test of how "right" our scientific methods are is how convincing our arguments are to others. It isn't the case that what has already appeared in the refereed literature is automatically right. Certainly, work that has succeeded in passing peer review is more likely to be correct than work that has not. But peer review is not perfect: good work gets rejected, and bad work gets accepted. The scientific literature is not holy writ - the use of a particular methodology within a published paper is no guarantee that its use in a particular application is correct and beyond question. If something (either a methodology or a conclusion using that methodology) has previously appeared in the refereed literature, it's not necessarily the correct methodology for the problem at hand. It may not even have been correct in the paper that was accepted! As already noted, in principle virtually all scientists would agree that publication is not a sacred imprimatur; nevertheless, many scientists argue that their methods are beyond question because they're following protocols used in published works! This amounts to an argument from authority, which is generally considered invalid in science. Nevertheless, in practice, such an argument is often deemed sufficient.

There's no simple path to knowing whether or not our methods and results are "right" in some absolute sense. All things are open to question in science - we're not in the absolute truth business. It's futile to seek some way to guarantee that only truth will come from one's efforts. In spite of this seemingly hopeless situation, it seems prudent to be as careful and as open-minded to criticism as it's humanly possible to be. We must constantly strive to understand why our peers might not find our work to be completely convincing, and seek to make our work as convincing as we possibly can. This means learning how to recognize the weak points in our arguments and seeking to strengthen the work in those areas where it needs it the most. We must not delude ourselves into the logical trap of seeking to validate our cherished personal ideas. Instead, we must seek, insofar as we can, to see our ideas as others see them: with a critical eye. Although there are no guarantees even then, we're much more likely to be convincing than if we narrow our viewpoint to the extent that we only see the merits of our own arguments and stubbornly deny the merits of opposing arguments.

 

Sociology of science

The "post-modernist" point of view has suggested that science, being a human enterprise, is simply a reflection of its practitioners. This has often been used as an argument that there's white, male, majority science, in opposition to the brand of science practiced by females, or a different brand as practiced by nonwhites. I've discussed some of this here with regard to the science of meteorology. That notwithstanding, I believe that science is, in fact, monolithic - there's good science, there's mediocre science, and there's bad science, but I don't accept that there are various flavors of science that partition themselves along purely social lines. Having said this, however, I definitely agree that there's a sociology of science. In effect, as a human endeavor, lead by real humans, it's not merely a bunch of abstract principles. Science proceeds along directions that reflect our humanity and its history, not along some hypothetical, purely logical lines. I think what I've written so far suggests at least some of my beliefs along these lines.

 

Scientists as human beings

If we begin with an understanding that science is a human endeavor, then we must accept the whole package that comes with being human. We have all the failings and foibles that other humans have. We also have passion for our work; I would argue that a scientist who was not passionate about his work would be tantamount to a contradiction in terms - an oxymoron. This is explicitly a rejection of the principle of objectivity (or at least one interpretation of the word) - a mindless, dispassionate consideration of facts - as a useful model for how science is done. Of course, we'd properly reject the work of a scientist who let his passion for the subject lead him to any of a number of ethical violations, including such things as faking the data, or selectively deleting results that didn't fit his hypothesis. We must accept the principle that honesty and integrity constitute the only acceptable path in science. For me, that was one of its appeals when I was young. Science demands nothing less, and justifiably condemns those who would violate standards of honesty and integrity. I don't believe that "objectivity" is equivalent to honesty and integrity, although being impartial does include honesty and integrity.

This isn't the proper place to discuss the issues of scientific ethics, but it suffices here to say that as human beings, it's possible for us to fail to live up to standards even when we don't realize we're walking along the margins of ethical behavior. When dealing with our own cherished notions, it's possible to be blind to our own mistakes. An error is forgivable if it was made inadvertently and without any intent to deceive. But who knows how to read someone's intentions? Is it possible to deceive oneself? Absolutely. Critical peer review is one way to maintain standards of ethical behavior, of course.

I don't want to go on at length about this, but even when we agree on fundamental principles, as most scientists do implicitly and/or explicitly, it's a natural outgrowth of our human nature to push the margins of what's acceptable. So long as we understand that and don't let our personal opinions become commingled with our professional opinions, we can usually criticize and be criticized without undue negative reactions. It's also true that at times we let our human behavior get the best of us, and ill will often is generated during the natural course of scientific debate. I find this disturbing in myself and at least annoying in others, but I try also to remind myself that it's an inevitable part of the process.

 

Relationship of science to other human activities

There are many works that have been written about how science relates to other human endeavors, notably art, politics, and religion. I suggest my readers consider some of the bibliographical references at the end of this essay. I believe that there are some important differences between science and other activities.

Science, nevertheless, is creative. It's not mere discovery - like finding a pearl in an oyster. It's not a collection of facts. Science is an interpretation of factual information. It seeks to understand how the natural world operates and to find order in what might at first appear to be chaos. It uses logic and observations to create scientific hypotheses, rather than to discover the laws of science. The findings of science are human creations - not discoveries - and, as such, we know them always to be flawed and inadequate. To the extent that they permit us to make useful predictions about the natural world, scientific findings are useful, even though those predictions may not be perfect.

Those who participate in science understand this better than nonparticipants do, but even scientists can be deceived by the words we use. The verbiage of "discovery" is in widespread use, and some scientists may believe that they are discovering things along the way. Obviously, my reading of the situation is different. We do our experiments and find interpretations of those observations that seem to explain them. However, as Newton's "laws" of physics are supplanted by Einstein's theory of relativity and the strange, unintuitive constructs of quantum mechanics, we find that our ability to solve problems is increased and our ability to answer questions grows. Relativistic and quantum physics are associated with circumstances that Newton could not have observed and so did not attempt to incorporate in his laws of motion. All of these findings are not discoveries - they're human constructs that represent our current understanding of the natural world of physics. Someday, they almost certainly will be replaced by new constructs.

If there is some absolute truth out there that exists independently of human beings, to what extent does our scientific understanding relate to it? I'll not get into the debate about whether or not such a truth exists. Such a debate is, in fact, outside the domain of science, which concerns itself only with hypotheses that can be tested in some experimental way. Presumably, the existence or non-existence of absolute truth is a matter of faith and is, therefore, not a part of the scientific endeavor and beyond our capacity ever to know by experiment. Therefore, science isn't about finding truth in this absolute sense, either. Science is a process of formulating, testing, and revising hypotheses about the natural world. It is just as creative as art, just as inspirational as religion, and just as practical as politics.

 

Scientific organizations

As a human activity, the development of scientific associations is quite natural. People interested in the same things often form organizations designed to foster their interests and to sponsor activities of mutual interest, such as scientific conferences and publications (books, refereed journals, monographs, etc.). I've written at some length of the American Meteorological Society (see links from here). I'll not dwell on the details, but I want to say something that seems to characterize all such scientific organizations.

Whenever such an organization is created, a bureaucracy tends to spring up along with it. Outside of the elected officials, who serve finite terms, a support staff is hired to take care of the day-to-day transactions of the organization. This staff tends to grow with time; most such organizations have experienced growth over the decades, and as the organization's responsibilities expand, so does the staff. With time, the organizational staff takes on a life of its own, rather different from the scientists who form the "body politic" of the organization. That is, the organization's permanent staff has its own agenda and its own interests, and those interests do not coincide precisely with those of the members.

Nevertheless, scientists doing science usually are rather preoccupied with the practice of their science and have no wish to use much of their time to do other things. Some find it onerous to review the work of other scientists, as critical as that can be to the success of science as a whole. Some find it impossible to even take enough time to help with the organization of scientific conferences, or to serve on committees of various sorts that are involved with the "business" of science. Therefore, the staff of a scientific organization tends to become an entrenched bureaucracy, mainly because of the apathy and selfishness of the scientists. When the organizational staff makes decisions that affect the scientists, most of them are content merely to gripe about those decisions.

Some say that it's impossible to confront the entrenched bureaucracy of a scientific organization. In my experience, it is the apathy of most members, and the self-fulfilling nature of the prophecy that fighting the bureaucracy is futile, that I find extremely annoying.

It's just as obvious to me as it is to others that service to the governing scientific organization is a bothersome part of being a scientist. But if a scientist abdicates all responsibility in favor of the bureaucracy, and refuses to confront them over the decisions they make, this disqualifies such a scientist from having any right to complain. If you don't like what they do, what are you willing to do about it? If the answer is "Nothing!" then I have no sympathy for your concerns. Pure and simple. And the bureaucracies grow and prosper at the expense of the scientists in the presence of such apathy and selfishness.

 

Who is given the right to communicate?

If publication in the refereed literature is the way by which one's contributions become part of the scientific "communication" archive, then getting published in the journals is the key to being "heard" within the scientific community. Fail to publish, and no matter how important and meaningful your work, no one will know of it or recognize its import.

Therefore, an important question concerns the process by which some are selected and some are not. Who is given this privilege, and how is it decided? I've already gone on at some length about the peer review process and its flaws. No one who has experienced it first hand would say that peer review is perfect, but as I've noted, there seems to be no viable candidate for a replacement on the horizon.

Peer review, as discussed, operates primarily within the scientific consensus. To challenge the consensus view on anything has been described as requiring extraordinary evidence - and some reasonably skillful writing. This means that the majority of what's being said only nips away at the margins of the consensus. To stray very far from that consensus is to invite being labeled: a "fringe" scientist or even a "crackpot"!

 

The problem with "crackpots"

I've already hinted at this issue. A true crackpot is someone who has little or no understanding of the consensus, either through pure ignorance or through rejection of its primary foundations. I've also noted that nothing in science is sacred. But we scientists really don't behave as if we believe that, at times. To question something that shakes the very foundation stones of our hard-won understanding is to "gore a sacred cow," after all. We scientists stubbornly hang on to what we believe is fundamental because if we question everything, then there's no foundation from which to build. If everyone abandoned everything they were taught and had to reconstruct what they believe in from their own personal efforts, then doing science would be impossible. No one could make any progress, because everyone would be engaged in a monumental task of reinventing the wheel (and every other tool), which would take a lifetime simply to achieve what Isaac Newton did hundreds of years ago.

Therefore, we all reach some personal accommodation that permits us to accept most of the things we are taught on a conditional basis. That is, we reserve the right to reject it later, if we decide it isn't right, but for the time being we accept it as offered. Most of the things we learned in school remain unrejected for our entire careers because we decide we haven't the time to go back and reassess everything. Whatever it is that we've accepted, we tend to regard as sacred - denying it means that much of what we may have built on that part of the foundation would collapse and we'd have to go back and reconstruct it on some new basis, if we could. This creates a natural reluctance to accept the arguments of those who would attack our foundations.

Therefore, since most of us have bought into the consensus, we regard those who reject it as "crackpots". Now there are different flavors of crackpots. There are those whose ignorance is their dominant characteristic. They know virtually nothing about our science, and don't have much of an idea how to present a scientific idea so that it might be convincing to other scientists. Many such people have what amounts to some sort of obsession that might have at least a veneer of scientific jargon, but it is the scientific equivalent of astrology. Such people have what amounts to a sacred belief of their own, but it's well outside of the scientific consensus. It's safe to say that virtually all such people deserve the label of "crackpot" and I doubt that any examples could be found where such a person actually ever contributed anything useful to science.

To this group I would add a subgroup: people who are sufficiently mentally unbalanced that they believe they have magic powers to control the natural world, or receive some sort of signal to forecast natural events, or whatever. These people are probably in need of some form of medical help, and perhaps institutionalization. Needless to say, such people are not capable of making a serious scientific contribution.

Another form of "crackpot" is someone who might be educated in some other technical field and has just enough knowledge of mathematics and scientific jargon to create an illusion of legitimacy around some obsession of theirs to make it difficult to reject their claims without at least reading them carefully. The meteorological subfield of tornado research attracts a fair number of people who have no significant understanding of meteorology or severe storm science, but who have some idea they believe has merit. Often, they have credentials in physics or engineering and aren't completely ignorant. They have just enough knowledge to make it difficult to reject their ideas easily and quickly.

Finally, there are those whose knowledge and experience in the field is limited but somehow, they've managed to stumble on something insightful and potentially helpful to the science. In spite of their general ignorance, their confrontation with the consensus actually has merit. Although instances of this might occur from time to time (I can't think of any in my field!), it's pretty rare. There's a kind of "Horatio Alger" mythology attached to the questioning outsider - it may be that our affection for the underdog makes us believe in such things in spite of the lack of much evidence to support them. Valuable contributions from true "outsiders" are few and far between.

Nevertheless, the possibility of missing something that would ultimately prove to be valuable on the basis of perceiving its advocates as "crackpots" ought to be just a bit unsettling. We clearly don't want to publish the ravings of a lunatic in a scientific journal but, after all, isn't the boundary between genius and lunacy notoriously thin at times? We should be cautious in dismissing seemingly outlandish notions as the product of a crackpot.

 

Scientific censorship

Crackpots often see their rejection by scientists as a conspiracy of the consensus, a plot to suppress their ideas. On another level, however, I've already suggested that peer review can result in a sort of censorship of competing ideas. Scientific authority figures (yes, there are authority figures in real science, even though we reject such things in principle!) can suppress the ideas of those who dispute the findings of such authority figures (or their friends). For example, Chandrasekhar's theories about stellar evolution leading to black holes were rejected and suppressed for many years by Eddington, although Chandrasekhar was ultimately vindicated and his contributions widely recognized. In this case, the originator of the idea was ultimately successful in getting his ideas published, and they became an important part of the consensus. Such is not always the case for the originator of an idea, but good ideas can arise independently and eventually supplant the original consensus, even if the originator never gets credit for her work.

 

Rewards and recognition

I've written elsewhere about awards and recognition. In principle, no scientist is working for recognition. Ideally, the work is its own reward, and the acceptance of one's work by peers is the only truly meaningful recognition. In reality, all of us appreciate being recognized for our contributions and take considerable umbrage if we don't receive what we believe to be adequate credit. Further, awards and other formal recognition are seen by bureaucrats as another important measure of productivity. Therefore, the failure to receive tangible recognition can have a substantial negative effect on a scientist's career and on the resources a scientist might be given to do his work. It would be less than realistic, then, to discount the meaning and significance of these things, despite any desire to serve the ideal of being uninterested in rewards. The nature of awards and recognition is that considerable politics goes into who is and who is not selected to receive such things. Not every recipient is deserving, and not every non-recipient is undeserving. It winds up being less a matter of deserving the award and more a matter of having a champion and not alienating too many key people. Of course, some scientists make contributions so seminal that recognition is virtually inevitable, but that cannot be said of all honorees. I've discussed the process elsewhere, as noted.

 

History of science

How did we get here?

Don't be deceived by this title. I'm not about to go into a long historical presentation. Even in my limited field, I couldn't begin to write an authoritative and comprehensive history, much less for science as a whole. On the other hand, I maintain that all scientists should pay more attention to the history of their field. Some of this is absorbed as a routine matter during the educational process, but I believe that much of the history we learn this way isn't very substantial. We tend to see historical figures in our field as "cardboard cut-out" figures: flat, without depth, and without seeing the real people behind the legends. Often, considerable violence is done to the real history of how consensus science was done - the participants only rarely document their role in things and even when they do, it might well be done from a clearly self-serving perspective.

In spite of the obstacles, it should be a routine question for scientists to ask: by what process, and by whose contributions, have we arrived at where we are now? The question is motivated by more than just scholarly interest. It behooves all scientists to know as much as possible about the real men and women in their field who preceded them. What motivated them to act the way they did? What were their prejudices and predilections? As most historians have noted, if we fail to grasp our history, we are likely to make many of the same mistakes as our predecessors. When our knowledge of history is superficial or non-existent, we're clearly vulnerable to repeating past errors, and our basis for making important current decisions is flimsy. All scientists, at least ideally, should be devoted scholars of the history of science in general and of their subdisciplines in particular. Sadly, this ideal is rarely achieved.

 

Where are we going?

This is also a matter that deserves consideration. It can be easy to be so absorbed in the detailed work of doing science that scientists might not give much thought to where their research is headed, in spite of the practical concerns arising from this.

 

Why does history matter?

I've already said that one consequence of being ignorant of our history is that we are vulnerable to repeating the mistakes of the past. This is a trite notion, but its triteness is directly proportional to the truth of the assertion. Beyond this, however, understanding the path by which we came to the present state of knowledge allows a scientist to imagine ways and means by which the present can change into the future along positive lines. Rather than simply drifting from the present into the future, a knowledge of the past suggests pathways to avoid the negative and to encourage the positive aspects that we see in the present. A human vision of the future is much more likely to be useful and positive if it's guided by historical precedent. As an example, in my field of severe weather, the existing U.S. infrastructure for dealing with severe weather has evolved without much overt direction, other than from the public weather service bureaucracy and the politicians, who were responding to various pressures to do something about severe weather hazards. The fact that our current infrastructure evolved without much clear vision, and not on the basis of scientifically-driven understanding, meant that decisions along the way were made without a basis in hard evidence. Is the current system the best we could have? I don't know, but I doubt it. However, considerable work in collaboration with disciplines outside of meteorology (e.g., sociology, psychology, and economics) would need to be done if we were to try to reconstruct an improved societal infrastructure for dealing with severe weather. Unfortunately, interdisciplinary work usually sees far more lip service than resource support, especially from agencies like the National Weather Service.

 

Science education shortfalls

The education of scientists in this country has some obvious shortfalls, in my opinion. I'm not talking so much about the math and science material, per se, but rather about the content that students are not taught, by and large. Of course, existing curricula are bulging with material. In some cases, it's a real challenge to complete an undergraduate science curriculum in four years, so it hardly seems plausible to suggest inserting several more courses into an already jam-packed program. However, I believe we could sacrifice some of the content so that students came away from their programs with an enhanced understanding of the process. After all, as I see it, science education should be not so much about the material we learn as about properly learning the process by which we learn. This is not to say that we lose all the content - the content includes our existing consensus understanding (discussed above) - but I think we could indeed help ourselves by educating the next generation of scientists about things that now often fall outside of the usual science curriculum.

 

Science history and philosophy

By this point, I think it should be fairly clear that I believe some science history and philosophy should be a part of both undergraduate and graduate education. Yes, science students do tend to be pretty narrowly focused on getting into the meat and potatoes of their chosen field, and rightfully so. However, if those of us who have agreed to be their mentors will simply accept some responsibility for developing their non-science background, then it will be straightforward to promote at least a modest level of awareness of the historical development of science. Books like those of Karl Popper and T.S. Kuhn (see the bibliography) can stimulate the right kinds of questions in a student's mind. Learning about how great ideas developed in the minds of the great scientists in our past can help students understand what they need to focus on. It wouldn't hurt to learn a little about the real men and women of science - not just their great works, but also their human failings and foibles. It is actually far from disrespectful to point out the human qualities of a great scientist. It makes such a person seem real to us, as opposed to some distant colossus, striding like a giant among pygmies while we mere mortals look on in awe. To know a great scientist at the personal level reveals the inadequacy of any two-dimensional "cardboard cut-out" presentation. As we learn about such a person, we should come to grips with how this individual coped with the challenges of being a scientist and what principles guided this scientist's efforts. All of us learn from those who preceded us - we should take what we think is good from our predecessors and try to avoid the failings of those same individuals. History teaches us most of all about the people who have done the work, and about the philosophies that guided their science.

 

Reality vs. appearance

The reality of science as it is actually practiced is not always the way it appears. I've already mentioned that journal articles virtually never document the failures and the blind alleys followed before the work was completed. We simply don't know much about how science is actually done until we begin doing it. For example, in my experience, a great deal of effort in doing research is spent obtaining data, checking it for quality and consistency, and preparing it for analysis. Most students have little or no appreciation for this and so underestimate how much time it will take to complete a project. Of course, one would hope they would not be tempted to cut corners with data. Again, in my experience, I think all too many scientists make unwarranted assumptions about their data simply because that's the easy path. See Schwartz and Doswell (1991) for some discussion of unwarranted assumptions about one type of data, rawinsondes. The way for students to gain an appreciation for research is to do some on their own. An excellent path is the so-called REU program (Research Experiences for Undergraduates). I would like to see more students have the opportunity for this kind of experience. Actually, at the University of Oklahoma School of Meteorology, there's the so-called "Capstone" course for seniors where they work with a faculty mentor to do some actual research, albeit of an abbreviated form. When the mentors do their jobs properly, students get a chance to see how the real research process operates. Although I'm calling this a "shortfall", I believe that progress is being made at addressing it.
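As a small illustration of what "checking data for quality and consistency" can look like, here is a toy sketch; the plausibility limits and spike threshold are invented for the example, and real quality control (e.g., for rawinsonde data) involves far more than this:

```python
# A toy quality-control pass over a series of temperature observations.
# The bounds and jump threshold below are hypothetical.
import numpy as np

def flag_suspect(temps_c, lo=-90.0, hi=60.0, max_jump=15.0):
    """Flag values outside plausible bounds or with implausible jumps."""
    t = np.asarray(temps_c, dtype=float)
    out_of_range = (t < lo) | (t > hi)
    jumps = np.abs(np.diff(t, prepend=t[0])) > max_jump  # step-to-step spikes
    return out_of_range | jumps

obs = [21.3, 21.8, 22.1, 85.0, 22.4, 22.6]  # one obviously bad value
print(flag_suspect(obs))  # flags the 85.0 (and the implausible step after it)
```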

 

Intellectual honesty

There are two basic kinds of intellectual honesty at the core of science. The first is honesty to oneself. If there's a path that a researcher knows should be followed during the research, even though it entails extra effort, it needs to be followed - and not simply because others might recognize the omission, but mainly because if you know of such weaknesses in the work and you don't make others aware of them, it can become a cancer in your scientific soul. The task of a scientist is to strive to understand, and to leave no stones unturned if they can be turned. Yes, I am an unrepentant idealist when it comes to staying true to the ideals of science. Learning about the natural world is a lifelong challenge - it's not about the number of papers you've published, or the awards you've received, or the invitations to speak around the world. If "success" by such measures is achieved at the price of not doing the best work you know how to do, then in reality those honors mean nothing. There's a great deal to be said for knowing that you've done your best, even if no one recognizes all that you did, or even knows about it. Stopping short of your best is a good way to give up your idealism - it's a slippery slope.

The other kind of intellectual honesty is honesty with others: refusing to suppress contrary interpretations of your data knowingly, or to ignore known opposing viewpoints consciously. I believe the enterprise is served best by honest competition in the aforementioned scientific marketplace of ideas. This means you must present honestly any data that might be interpreted as negative to your interpretation, if such data exist. If a contrary view regarding some idea of yours is already extant, you should be ready to consider it when you present your results and interpretations. If you feel those contrary views or bothersome data need to be suppressed or ignored, that is a kind of intellectual dishonesty toward those who will read your papers. It's not a matter of giving aid and comfort to your "enemies", after all - I tell my students that your most tenacious and severe critic is, in scientific terms, your best friend. Never take criticism of your work personally. A severe critic is challenging you to do your best to convince her, and you might well do some of your best work because you're aware that your critics are seeking to discredit your ideas. Taking on those challenges directly and honestly is at the heart of what we all are trying to do. In the long run, we're judged not only by whether we were right or wrong in some dispute, but also by how much we helped the science move forward.

When students learn about science, they also should learn about the critical importance of honesty. Nothing should be knowingly or consciously suppressed for the sake of supporting some cherished notion. If science ever loses its fundamental commitment to honesty, the whole process will collapse, and justifiably so. Intellectual dishonesty, whether to oneself or to others, is unacceptable, and we need to make sure students understand the significance of that. In my opinion, we spend much less time inculcating this core value of science than we should.

 

Communication skills

By the nature of what we scientists do, we tend to avoid, or work at only superficially, the parts of a complete academic curriculum outside mathematics and science. It takes considerable effort and focus to become proficient with the tools of science and to learn the scientific consensus material that underpins most of what we do. Thus, we tend to be poor writers and poor presenters. I go to enough scientific talks every year to see that most of them are simply awful. See here for a discussion of the things I believe need attention. The main issue science students need to understand is that they all must eventually become salespersons. They must sell their ideas to their peers via scientific papers and conference presentations, they must sell their proposals in order to obtain the funds to do their research, and they must ultimately sell the general public on the need to support the scientific enterprise (recall the importance of "popularizers" of science, above). If a scientist is a poor salesperson, their ideas may never be appreciated, they might have trouble obtaining funds for their work, and the public might decide that such science is a luxury to be slashed at the next budget crisis.

Few scientists ever take the time to become good writers and speakers. This hurts our graduates in the real world, where communication skills are generally given high value in assessing work performance. The importance of communication skills is high enough that our educators should be far more insistent on their students learning them than they now are - and students should be confronted with the reasons why. It's not enough for a student to be a genius at some aspect of science if that student's communication skills are abysmal. I've seen too many examples of this, and of the negative effect it can have on a scientist's career.

 

A commitment to scholarship

It's become all too common for scientists to be poor scholars. This often manifests itself as a sort of temporal myopia, whereby work more than a decade or two old is simply ignored by professors and students alike. One unpleasant consequence of such a failure of scholarship is the repeated "discovery" of things already reported in earlier works by a previous generation of scientists. The late Allan Murphy discussed this problem in one specific context (Murphy 1996), but in my experience it is widespread. This is another manifestation of the lack of emphasis on the history of our science, but it also represents a form of laziness that is shameful and disturbing. Stephen Jay Gould discussed related issues in his book Wonderful Life (as well as in some of his essays in Smithsonian magazine) - we tend to think of previous generations of scientists as "quaint" because science certainly has progressed considerably over the past 30 years or more. But those scientists were just as intelligent and motivated as we are. They simply were operating in a different context: the scientific consensus of their age was different from ours, and the tools at their disposal were less formidable technologically. Nevertheless, their work deserves to be consulted, remembered, and cited where appropriate. To do less is to do them a dishonor, and it will come back to us in our turn. Knowing the work that underpins our present understanding is important, and faculty members should lead their students by example, as well as by instruction, in this and other aspects of science education.

 


Bibliography

 

  1. Errico, R., 2000: On the lack of accountability in meteorological research. Bull. Amer. Meteor. Soc., 81, 1333-1337.
  2. Lynch, P., 1992: Richardson's barotropic forecast: A reappraisal. Bull. Amer. Meteor. Soc., 73, 35-47.
  3. Murray, B.G., Jr., 1991: Isaac Newton and the evolution of clutch size in birds: A defence of the hypothetico-deductive method in ecology and evolutionary biology. Beyond Belief: Randomness, Prediction and Explanation in Science, J.L. Casti and A. Karlqvist (Eds.), CRC Press, 143-180.
  4. Murphy, A. H., 1996: The Finley affair: A signal event in forecast verification. Wea. Forecasting, 11, 3-20.
  5. Schwartz, B.E., and C.A. Doswell III, 1991: North American rawinsonde observations: Problems, concerns, and a call to action. Bull. Amer. Meteor. Soc., 72, 1885-1896.
  6. Hooke, Robert ....

 

See my listing of books that have been influential on me - some are relevant to this discussion, others are not.

The Day the Universe Changed - another PBS series by James Burke

 

  1. Wonderful Life - Stephen Jay Gould
  2. The Making of the Atomic Bomb - Richard Rhodes
  3. The Structure of Scientific Revolutions - T.S. Kuhn
  4. Innumeracy: Mathematical Illiteracy and Its Consequences - John Allen Paulos
  5. Conjectures and Refutations: The Growth of Scientific Knowledge - Karl Popper
  6. Impure Science - Robert Bell
  7. Weather Prediction by Numerical Process - Lewis F. Richardson
  8. Human Judgment and Social Policy - Kenneth R. Hammond
  9. The Two Cultures - C. P. Snow
  10. Science and Human Values - Jacob Bronowski