More thoughts on the future of

weather forecasting (Models, MOS, Education, and Training)

by

Charles A. Doswell III


Last update: 18 Feb 2008: minor revisions and updated links

This essay is the personal opinion of Chuck Doswell. No disclaimer is needed, as this is my personal Website. I expect that some (many?) readers will be upset with what they read. If you dislike being challenged, skip this essay. If you want to discuss anything said here, e-mail me at cdoswell@earthlink.net.


Introduction

Since Harold Brooks has given compelling presentations on the issues related to the future of humans in weather forecasting and on how technology is changing that future, I won't have to repeat most of that. Rather, the following is further amplification, motivated in large measure by some past newsgroup debates about what it takes to be a weather forecaster. The contributions therein took on a particularly disturbing, defensive tone among the advocates of what I refer to as the "Grandma Moses School of Weather Forecasting"; their statements sound very much like "I'm a really outstanding forecaster (and if you don't believe it, just ask me!) and I'm ignorant of meteorological theory (and proud of it, by golly!). So there, Mr. pointy-headed intellectual!"

These discussions crop up all the time, as people who are forecasting without a degree end up being pretty defensive about it when the subject is broached. The folks who have a degree, in their turn, seem to feel pretty smug about it, as if those without a degree are somehow automatically in some sort of lower stratum of meteorological society.

In spite of my problems with many of the postings, the discussions have been very worthwhile, at least to me. They force most of us to think about the subject, which can't be bad. The fact that we don't agree isn't important.

For those who have followed my writings on the subject (references are provided at the end), I've never said that to be a forecaster, one has to have one or more degrees. Being a weather forecaster can be as "easy" as looking at the sky and learning from what you see. Every person intimately involved with the outdoors tends to become a weather forecaster, some of them perhaps attaining considerable skill. If your life and/or livelihood depends on the weather, chances are you're going to learn something useful. That's a capacity virtually every human being possesses to some extent.

Why is it necessary to learn meteorological science to be a forecaster? Well, Harold's essays say something pretty important. The world of the future is going to be dominated by numerical weather prediction models and guidance based on the output of those models (Model Output Statistics, or MOS, is the most commonly used form). If persistence or climatology forecasts are right most of the time, which they are, then how smart does a forecaster have to be to be right most of the time? Not very! This is why meteorological statisticians like the late Allan Murphy make such a fuss about measuring skill and not accuracy. What we call "accuracy" is some measure of the correspondence between forecast and observed variables. "Skill," on the other hand, is measured as improvement over some standard (often simple) forecasting method, such as random guessing, climatology, or persistence. Even simple persistence is a remarkably accurate forecasting tool. Hence, being right most of the time says virtually nothing about skill, since skill involves a comparison to some method other than the one being measured.
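
To make the accuracy/skill distinction concrete, here's a minimal sketch - my illustration, in Python with made-up numbers, not drawn from any operational verification package. Accuracy measures correspondence with what happened; skill measures improvement over a reference such as persistence:

```python
import numpy as np

def mse(forecast, observed):
    """Accuracy: correspondence between forecast and observed values,
    here measured as mean squared error (lower is better)."""
    return np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2)

def skill_score(forecast, reference, observed):
    """Skill: fractional improvement over a standard method such as
    persistence or climatology. 1 = perfect, 0 = no better than the
    reference, negative = worse than the reference."""
    return 1.0 - mse(forecast, observed) / mse(reference, observed)

# Toy numbers: six days of maximum temperature (deg F).
obs         = [71, 72, 74, 73, 60, 61]  # what actually happened
persistence = [70, 71, 72, 74, 73, 60]  # "tomorrow = today"
human       = [72, 72, 73, 72, 65, 61]  # a forecaster's numbers

print(mse(persistence, obs))                 # quite accurate overall...
print(skill_score(human, persistence, obs))  # ...so skill is the real test
```

Note that persistence looks quite accurate until the regime changes (day five); skill asks whether the forecaster beat it, not whether the forecaster was "right most of the time."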

The value of forecasts

Harold has observed correctly that even a statistically proper measure of skill may not give the total picture. Allan Murphy (No one has thought more carefully about forecast verification than Allan! His recent passing is a great loss to us all.) points out that verifying forecasts with a single measure does not reveal a complete picture of the accuracy and/or skill of the forecasts. Harold has found that beating MOS by one degree day in and day out can give one a lower mean squared error than beating MOS only on the few occasions where it makes an egregious error. The user situation has to be pretty strange to conclude that the former is providing more value than the latter (not to say that odd circumstances don't exist at all!).
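
A toy simulation makes the point plain. The numbers below are invented for illustration (synthetic observations and a synthetic "MOS" - nothing here comes from Harold's actual study); the point is only the arithmetic of mean squared error:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 365
obs = rng.normal(70.0, 10.0, n)          # synthetic observed max temps

mos = obs + rng.normal(0.0, 2.0, n)      # MOS with a typical ~2 deg error...
busts = rng.choice(n, size=5, replace=False)
mos[busts] += 12.0                       # ...plus five egregious busts

# Forecaster A shaves a little off the MOS error every single day.
fcst_a = obs + 0.75 * (mos - obs)

# Forecaster B parrots MOS except on the busts, which B catches.
fcst_b = mos.copy()
fcst_b[busts] = obs[busts]

mse = lambda f: np.mean((f - obs) ** 2)
print(f"MOS: {mse(mos):.2f}  A: {mse(fcst_a):.2f}  B: {mse(fcst_b):.2f}")
# A scores the lower MSE, even though B is the one adding value
# precisely when guidance fails worst.
```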

So what does this mean? Well, I suggest that forecast value is greatest when it improves our society's ability to respond properly to situations where the weather affects us. Of course, as Allan has pointed out, if the user can do nothing to affect his or her outcome based on the forecast, then no matter how accurate that forecast might be, it can have no value for that user! Being sensitive to weather is not what matters for users - being sensitive to weather information is what matters. Different folks need to respond to the weather in different ways, but knowing the daily maximum temperature one degree more accurately than, say, MOS provides may not be worth much to most users. Of course, some power companies might benefit from having the max temperature one degree more accurate on the average. Or when the temperature is right around freezing, that one degree might have some import. In other words, value is situation-dependent (where the "situation" has both a meteorological and an individual user component). Using some statistical measure that does not account for the whole situation is not likely to give useful information.
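
The cost-loss model Allan analyzed (see Murphy 1985 in the bibliography) captures this situation-dependence in its simplest form. Here's a bare-bones sketch of the decision rule - my paraphrase in Python, with hypothetical numbers, not Allan's own formulation in full:

```python
def expected_expense(p_event, protect, cost, loss):
    """Classic cost-loss setup: pay `cost` to protect, or risk
    `loss` with probability `p_event` if you don't."""
    return cost if protect else p_event * loss

def should_protect(p_forecast, cost, loss):
    """The optimal action: protect whenever the forecast probability
    exceeds the user's cost/loss ratio."""
    return p_forecast > cost / loss

# One forecast (a 30% chance of a freeze, say), two different users:
p = 0.30
for cost, loss in [(10.0, 200.0), (10.0, 25.0)]:  # C/L = 0.05 and 0.40
    print(f"C/L = {cost/loss:.2f} -> protect? {should_protect(p, cost, loss)}")
```

The same 30% forecast is worth acting on for one user and worth ignoring for the other - and for a user whose outcome the forecast cannot change at all, it has no value whatever its accuracy.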

By the way, we forecasters have very little accurate knowledge of user needs. Just listening to them - something done rarely enough - is only part of the answer, because what users want and what we are capable of giving them may not coincide. Knowing the needs of all our users is basically impossible in public forecasting - it may be possible for private forecasters, though.

When forecasting is easy, the value of simple forecasting systems like MOS becomes clear. If we let MOS handle the easy parts of a complex situation, then we humans have more time to ponder the complex parts. MOS and other objective aids truly become useful guidance when they liberate us from trivial tasks. But when the "system" decrees that MOS is an all-wise oracle that must be followed on pain of chastisement and disgrace, then Len Snellman's "meteorological cancer" becomes terminal.

And the modeling/MOS gurus in the "system" (generally, the NWS) have decreed that the future of weather forecasting is a non-human one. Forget the pious pronouncements by the bureaucrats; the bottom line is that humans are slated to disappear from the public forecasting process, except perhaps as the analogs to firemen on a diesel locomotive. The "system" doesn't have the courage or political clout to fire them outright, so it gives them "technician" work and reduces staffing by attrition. (Tinkering trivially with guidance is essentially technician work.) Eventually, the weather forecasts of the future can go out unsullied by contact with the human hand. See my essay on management for more thoughts on this topic.

Harold has mentioned some important issues regarding the humans in the forecasting system - for example,

  1. criteria for selection of highly-qualified candidates,
  2. forecaster education/training,
  3. development of a clear understanding of where and how humans can contribute positively to the forecast.

Failing to consider these hastens us along the path to a non-human forecasting process; the prophecy of the MOS and NWP advocates becomes self-fulfilling. The folks wanting to push automation assume humans can't add much value, and so they want to restrict the kinds of deviations from guidance that forecasters are allowed to make, and they oppose putting any substantial resources into training. How many resources are expended on behalf of the human part of the process annually? What fraction of the resources spent on technological change is spent on scientific education and training of forecasters?

A major hang-up, then, is measuring forecast value. How does our society measure the value of the forecasts? Ah, here's a challenge for us weather freaks. Basically, forecasts have value not from the resources they generate, but from the resources that are saved. [See my essay on the relationship between users and meteorologists.] No one can argue easily that weather forecasts produce income (except maybe for private weather forecasters), but the weather certainly influences outgo. A bad forecast results in money spent needlessly when a good forecast might have prevented its expenditure. Examples abound: snow removal, weather-related traffic accidents, construction, agriculture, power generation, access to markets, etc. Perhaps one way to get a clear picture of how important weather forecasts are to society would be to quit making forecasts and compare expenditures without them to what they were with them. But of course, in the brave new world of models and MOS, the forecasts would not simply stop; they'd be produced by computers. So in order to measure the value of humans, we'd have to quit using them for a time and see what the impact on our society would be.

How many human forecasters would be willing to submit their work to a crucial test? Go ahead and simply parrot MOS and NWS centralized guidance for a month and see if anyone among your customers complains about any accuracy decline in your forecasts. It seems to me that a lot of the posturing on the newsgroup is just so much chest-pounding rhetoric. If someone without any education, or with minimal education, is so good, prove it! Back up the statements with hard, rigorously-crafted verification statistics that demonstrate your consistent contributions over and above guidance, and I'm going to be a lot more convinced than I am now. Talk is cheap. Getting a forecast out the door (or over the air) for 30 years is not the same as doing consistently good forecasts that show increased accuracy compared to guidance and serve the needs of your customers through that added accuracy.
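
What would "hard numbers" look like? Something even as simple as the following would be far more persuasive than anecdotes. This is a minimal sketch of my own devising - the function name and the toy inputs are invented for illustration:

```python
import numpy as np

def versus_guidance(human, guidance, observed):
    """Summarize a forecaster's record against guidance: how often the
    human forecast verified better, and the mean improvement in
    absolute error (positive means the human added accuracy)."""
    human, guidance, observed = map(np.asarray, (human, guidance, observed))
    err_h = np.abs(human - observed)
    err_g = np.abs(guidance - observed)
    return float(np.mean(err_h < err_g)), float(np.mean(err_g - err_h))

# A month (or better, years) of forecasts would go in here:
wins, gain = versus_guidance(human=[71, 68, 75], guidance=[73, 67, 76],
                             observed=[70, 68, 74])
print(f"beat guidance {wins:.0%} of the time, by {gain:+.2f} deg on average")
```

A record like that, sustained over years, is an argument. "I've been forecasting for 30 years" is not.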

In fact, I do know forecasters who have the facts to back up a claim of being "good" in this comparison to guidance. Virtually every one of them has a Master's degree and is eyeballs deep in the science of meteorology. Perhaps this is because I associate with NWS forecasters rather than private and military forecasters. If that's the case and you think your untutored forecasts beat guidance, show me the numbers and I might back off in your individual case, depending on how you got your numbers. If you can't or won't do that - well, I once lived in Missouri ...

Becoming a scientific forecaster

If your position on the topic of being a forecaster is that the science of meteorology is irrelevant to the task of forecasting, I fear I have little or nothing to say to you and you can simply skip the rest and/or flame me however you wish. If you are willing to grant that the science of meteorology has some part to play in your forecasting life, consider the following.

Why do we insist on teaching the science of meteorology to prospective forecasters? Well, for one thing, we are not interested in getting the majority of forecasts done accurately - that much we can do with models and MOS. It seems to me that the point is to deal with meteorological situations where the models and other forms of guidance don't work very well. Where are those opportunities to improve on guidance? As a forecaster, you need to be able to recognize them, right? These are your opportunities to add accuracy and skill (and, perhaps, value) to the forecasts. Harold's work has suggested that such opportunities don't necessarily arise every day. How do you go about recognizing them? Most of them are associated with mesoscale (and smaller-scale) processes, where the science of meteorology is still in its infancy. If you're going to recognize them, it seems implausible that you'll be able to do so if you're ignorant of what's going on in the science regarding these processes. What do you now know about Conditional Symmetric Instability? Do you know how to diagnose it and what relevance it might have to your forecast? What's your feeling for Isentropic Potential Vorticity and its use in forecasting? How have you decided whether or not it has a bearing on what you do during a forecast day? What do your intuition and "gut feeling" tell you about vortex dynamics and nonlinear mountain waves? The new ideas associated with the science are going to form the basis for future forecasting techniques. If you don't know anything about them, what's your option?

My special example of this is PVA. Many forecasters use "PVA = upward vertical motion" as a forecasting "tool" with little or no physical understanding. It's a rule they learned from someone, and they just use it without having any idea of how it works or where it comes from (except to attribute it to some derivation they never understood in the dynamics class they flushed from their memory once they finished the semester). I consider this to fall within what I call the "Weather Lore School of Meteorology," and it's the logical equivalent of "Red sky in the morning, sailor take warning"! In fact, there may be solid scientific principles that lie underneath weather lore. To get a red sky in the morning, you need clouds in the west and clear skies in the east. Since weather in mid-latitudes advances from west to east, this means advancing clouds, usually a sign of an advancing weather-producing system. But if you don't understand the physical basis for the weather lore, you don't know when it's going to let you down. What about the "red skies" rule in the tropical easterlies, where systems move from east to west? What about a system with high clouds but little or no low-level moisture? And so on. Using weather lore is not the problem; using it in ignorance when understanding is available is the problem.
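
For reference, the physical basis that gets flushed along with that dynamics class is the quasi-geostrophic omega equation. In a standard textbook form (my addition here, as a reminder of where the rule comes from):

```latex
\left( \nabla^2 + \frac{f_0^2}{\sigma}\,\frac{\partial^2}{\partial p^2} \right) \omega
  = \frac{f_0}{\sigma}\,\frac{\partial}{\partial p}
      \Bigl[ \mathbf{V}_g \cdot \nabla \bigl( \zeta_g + f \bigr) \Bigr]
  + \frac{1}{\sigma}\,\nabla^2
      \Bigl[ \mathbf{V}_g \cdot \nabla \Bigl( -\frac{\partial \Phi}{\partial p} \Bigr) \Bigr]
```

Even a glance at the equation says more than the rule does: upward motion (negative omega) is tied to cyclonic vorticity advection increasing with height, not to PVA at a single level, and the thermal advection term can reinforce or cancel the vorticity advection term. Those are exactly the caveats the rule-in-ignorance user never sees.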

Physical understanding allows forecasters to do much, much more than follow "rules of thumb" and "pattern recognition" blindly. What about questionable data? How do you go about reconciling your diagnosis with the data? This is a complex topic, but in its essence it requires you to have a "vocabulary of models" that relate atmospheric processes to the observations. Without the science, you're unqualified to distinguish signal from noise in the data.

So if you've conceded that there just may be some use to the dynamics classes and other abstractions you've scorned that are part of a college education, then you must acknowledge that there's a use for the math and physics classes. Math and physics are the basis for understanding dynamics in the atmosphere. Sure, they may not be taught well; even the atmospheric dynamics class may not be taught well. You can sit around and whine about it or you can buckle down and get what you need by actually doing some work, rather than sloughing it off because you're going to be a forecaster.

Getting a degree isn't the point. Learning how to think, and especially how to think like a meteorologist, is! If you don't get the latter while getting the former, that's mostly your fault. Sure, the universities may not be as much help as they should be in the best of all possible worlds, but none of us lives there, so why not get on with it? Are you willing to cut yourself off from a potentially valuable source of knowledge (like math or physics or dynamics) simply because you don't want to make the effort? Math and physics may not seem relevant to you, but just because you don't see the connection doesn't imply that a connection is non-existent! Do you want to be less knowledgeable about the atmosphere than what is possible?

Creating a two-stream education system, where forecasters get a watered-down version of the science of meteorology, would simply widen the vast chasm that already exists between forecasters and researchers. If you don't feel you get the respect you deserve as a forecaster now, think about how much you'd get with a non-challenging specialty degree in forecasting. In fact, I have created a specialty Master's degree at the University of Oklahoma School of Meteorology. The idea is that it restricts your elective choices during the program to certain courses, but is otherwise indistinguishable from the rest of the M.S. degree programs in terms of rigor. Indeed, I've tried to ensure that it is more rigorous than the regular program! There haven't been any takers ... I wonder why. Are "weather weenies" afraid to be challenged?

I like to make the analogy between forecasters and medical doctors. Every practitioner in medicine takes the full doctorate-level program of study, must pass a very difficult licensing exam, and is required to participate in continuing education. Why? Do you want someone with your life in his/her hands having fewer qualifications than this? Does a weather forecaster have less responsibility than a medical doctor? Which sort of doctor do you want: one who took a watered-down, no-math option for a Bachelor's degree and then went directly into practice with no licensing exam, no follow-on training, and no continuing education requirements, or one with the full program currently required? The choice seems pretty obvious in the case of medical doctors, so why is there so much debate about the same issue in meteorology? Is there some self-service at work here? If so, it is my belief that even self-servers are going to have to recognize the compelling quality of objective guidance. Failing to add consistent accuracy and/or skill to guidance - with all the effort and knowledge that is going to take - only hastens the day human forecasters disappear. If you believe, as I do, that humans need to play a role in forecasting, then the choices seem awfully obvious.

Conclusions

Playing Pollyanna and citing the long record of accomplishments of the past is tantamount to committing meteorological suicide. If humans are eventually cut off from participation in forecasting, then it's going to be very difficult to recreate that particular wheel. I believe, with Petterssen (see the quote at the end of this), that there always will be an opportunity for humans to add value to objective guidance, but it's going to require more and more of the humans doing it as the science of meteorology grows. Untutored forecasters won't be able to do much more than watch as their skills are usurped by the objective methods. There's too much economic incentive to reduce the costs associated with having humans in the loop; computer programs run for pennies a day, don't need annual vacations (although they do have some "sick leave" requirements), and make no continuing demand on resources after they retire. Computers and their programs don't have days when they just don't feel "with it" and don't form unions to seek higher pay and better working conditions. Whereas programs may not be capable of the "big insight," they don't commit the "big blunder" either; they're cheap mediocre forecasters. By avoiding the dumb mistakes, objective guidance makes up a lot of the ground lost by its inability to get the tough situations. Keeping human forecasters employed must eventually come down to a perception of added accuracy and skill beyond what can be bought with an automated system. If you think you can do that with your high school diploma, then I wish you the best.

Education is measured with pieces of paper, and those pieces of paper are notoriously unreliable at measuring the content of the individual mind. A poor relationship between degree level and forecasting ability is not surprising when this simple fact is considered. To infer from this that a degree in meteorology has no value in weather forecasting is a grotesquely inappropriate extrapolation! The value in going to school is the opportunity to learn; it is appallingly true that schools produce an alarmingly large number of graduates who are virtually unscathed by any learning whatsoever. The value in a rigorous meteorological science degree is not inherent in the piece of paper you get at the end; it is incumbent on the individual student to work with the faculty to obtain something useful from what is taught. I forget the origins of the aphorism, but I have always felt that "If you didn't get something from your education, perhaps you didn't bring anything to put it in!"

One uniformly agreed-upon outcome of a quality education is the recognition of how much we don't know. People with an education (i.e., the piece of paper plus the real content) tend to resent folks who come by their ignorance the easy way. To discover that there is a large fraction of forecasters who wear their ignorance like a badge of honor can only leave me with a sense of despair about the future of humans in weather forecasting. My sarcasm is a natural outgrowth of my idealism; I have retained that idealism in the face of a lot of evidence that the world doesn't match my ideals very well. Giving up leaves the victory to the mediocre, the incompetent, and the bureaucrats, and I'm too stubborn to yield that victory.

Some pertinent quotations:

The first of these is from Sverre Petterssen, and the reference is:

Petterssen, S. (1956): Weather Analysis and Forecasting (2nd Ed.), McGraw-Hill, New York, in the preface (pp.vii-viii):


... the development of high-speed computing machines has made it possible to produce timely forecasts by numerical integrations of the equations of motion. Thus ... a real bridge between theory and application has been provided. Although this bridge will need further strengthening and widening, there can be no doubt that it is already useful.

While statistical techniques, including the use of analogues and classifications of types, have always proved useful as an adjunct to empiricism, the development of more rigorous forecasting procedures, based upon autocorrelations and cross correlations and time series analyses of continuous fields, is a recent and notable achievement. In this field, too, the development of high-speed computing machines has vastly increased the possibility of obtaining timely forecasts by objective techniques.

In spite of progress in the development of quantitative techniques, the conventional forecaster will have an important part to play. His wide experience of local and regional conditions, orographic and topographic influences, moisture and pollution sources, etc., will be invaluable in supplementing the machine-made forecasts. While the machines provide the answers that can be computed routinely, the forecaster will have the opportunity to concentrate on the problems which can be solved only by resort to scientific insight and experience. Furthermore, since the machine-made forecasts are derived, at least in part, from idealized models, there will always be an unexplained residual which invites study. It is important, therefore, that the forecaster be conversant with the underlying theories, assumptions, and models. In particular, it is important that he be able to identify the "abnormal situation" when the idealized models (be they dynamical or statistical) are likely to be inadequate.

It appears, therefore, that the time has come for a reorientation of the training of forecasters. This reorientation should aim at minimizing (and, if possible, eliminating) the difference between what is commonly called synoptic and dynamic meteorology.


This quotation is astounding to me, both for its prescience and for the depressing lack of progress since 1956 along the lines Petterssen defined so lucidly. Its insight and relevance are just as valuable today as they were in 1956. It's sad to see how little attention was paid to as bright a light as Sverre Petterssen.

I can't resist another quotation, this time from the late Werner Schwerdtfeger, who was my de facto advisor as an undergraduate (at the Univ. of Wisconsin, Madison), and a great, inspirational educator. The reference is:

Schwerdtfeger, W., 1981: Comments on Tor Bergeron's contributions to synoptic meteorology. Pure Appl. Geophys., 119, 501-509.


In the years long passed, the time spent by a good synoptician for a careful analysis of a weather map and the scant upper air information, including a comparison with the previous maps, was also automatically a time of meditation about the possibly relevant physical processes. For any forecast to be worked out, minor characteristics of a weather situation, like a change of cloud type in a critical region ('indirect aerology'), could be taken into account and weighted against other evidence and arguments. That was the line of work, (based upon a combination of experience, theory and intuition), in which men like Tor Bergeron and Richard Scherhag excelled. Selected by the analyst himself, the specific meteorological considerations could vary from day to day and region to region. Now, even the most sophisticated models and biggest computers can do nothing of the kind.

In fact, the change from personalized to computerized analysis and forecasting in some countries has, and in others soon will have, the regrettable side-effect that many practicing synopticians have nothing more to do than interpret unidirectionally from computer to customer. The thinking has been done a few years earlier by a few modelers. Even if the meteorological 'interpreter' is convinced that the official forecast is doomed, his work schedule and available facilities would not permit him to make his own prognosis. There is an actual danger for the professional future of young, well trained, highly motivated synopticians in the large national meteorological services (see Schwartz, 1980). How can one combine the full deployment of advanced computer products with an adequate utilization of the initiative and experience of the local and regional forecasters, people with potentially an inspiration comparable to Tor Bergeron's? There can be no doubt that a workable solution to this problem would substantially improve the quality of weather forecasting.

Reference: Schwartz, G., 1980: Death of the NWS forecaster. Bull. Amer. Meteor. Soc., 61, 36-37.


This quotation sounds like something I might say (I would want to avoid the gender bias, though - Werner was not a sexist, but was making the default assumptions of his era), but I can't claim to have expressed these thoughts any better or more succinctly. See my "Human Element" paper and the paper about diagnosis for more of my thoughts on these issues.


Bibliography

1. Papers I've been associated with that have pertinence (as of February 2008)

Rather than list all of these, go to my publications page and browse what's easily available there.

2. Some other pertinent papers

Murphy, A.H., and R.L. Winkler (1992): Diagnostic verification of probability forecasts. Int. J. Forecasting, 7, 435-455.

______, and ______ (1987): A general framework for forecast verification. Mon. Wea. Rev., 115, 1330-1338.

______, and ______ (1971): Forecasters and probability forecasts: Some current problems. Bull. Amer. Meteor. Soc., 52, 239-247.

Murphy, A.H. (1993): What is a good forecast? An essay on the nature of goodness in weather forecasting. Wea. Forecasting, 8, 281-293.

______ (1985): Decision making and the value of forecasts in a generalized model of the cost-loss ratio situation. Mon. Wea. Rev., 113, 362-369.

Snellman, L.W. (1977): Operational forecasting using automated guidance. Bull. Amer. Meteor. Soc., 58, 1036-1044.

Glahn, H.R., and D.A. Lowry (1972): The use of model output statistics in objective weather forecasting. J. Appl. Meteor., 11, 1203-1211.

Schwartz, B.E. (1984): Typical warm season MOS guidance errors. Preprints, 11th Conf. Wea. Forecasting and Analysis (Clearwater Beach, FL), Amer. Meteor. Soc., 50-56.


In memory of Allan Murphy

In view of his passing in early August 1997, I feel obligated to say that it was my great privilege to know Allan as both a friend and a colleague. He's shaped my understanding of the forecast problem, in terms of methodology, verification, and value to users. Wherever my vague gropings for understanding of the issues in weather forecasting took me, I found his footprints in the ground ahead of me, taking great strides as I crawled along behind. His expositions are jewels of precision and clarity. We've lost a great man, but he has left us the equally great gift of his writings. It's my hope that you'll give his written legacy the thoughtful and careful consideration it deserves. I believe strongly that you'll be rewarded in proportion to the effort you expend - Allan's papers aren't necessarily easy to read, but if you understand his insights, you'll appreciate how far he took us all. We miss you, Allan!