Is there a role for humans in the NWS of the future?

(Is there an NWS in our future at all?)

by

Chuck Doswell


Posted: 11 May 2003 Updated __

As always, this is my personal opinion. Your input is always of interest to me, so pass it along to cdoswell@earthlink.net. My thanks to Greg Thompson of NCAR for his terrific real-time weather site, from which I downloaded several images I've used here.


Introduction

This essay grows out of several disparate elements, all built around the notion that the National Weather Service (NWS) is hastily automating a lot of its products and doing its best to continue what has in fact been a long history of staffing reductions. Whatever their real intentions, it seems clear to me that a future role for humans in the public sector forecast system (i.e., the NWS) is becoming ever more unlikely. It may take many decades for the last humans to disappear, but that time is apparently coming.

Back in the 1950s, it was clear that diesel-electric locomotives were considerably more efficient than coal-burning steam locomotives. Among the casualties of the end of the steam locomotive era were the firemen, whose job it was to keep the fires burning in the boiler of a steam engine. There could be no place for firemen in the future of railroading ... the trade was obsolete ... but they hung on for decades after the demise of steam engines, anyway. Whatever make-work they did during that time, when they died, quit, or retired, no new firemen were hired, and they eventually disappeared.

Something very comparable might well be underway in the NWS right now. As this essay proceeds, I'll be making a case for my concerns about the demise of a role for humans in the NWS of the future. As I see it, if we are to prevent this from happening, some drastic decisions will have to be made immediately, and I doubt that current NWS management is going to make such decisions.

My starting point is that there should be humans involved in the process of weather forecasting. Perhaps that idea is indeed becoming outdated, but I'll explain the reasons for my position in what follows.

So where are we now?

The Numerical Weather Prediction (NWP) models the NWS employs operationally are pretty good at forecasting in the 1-3 day range (so long as the mesoscale details are not too important), with some predictability in the largest scales out to about 5-6 days. After that, the models have little skill. Development of models has been underway continuously since the early 1950s, and they have shown a more or less steady improvement in accuracy since then.

Consider the following 12-h forecasts from the ETA model. Figure 1a shows the 12-h prog for mean sea-level pressure and wind, while Fig. 1b is the verifying analysis; the temperature and dewpoint forecasts (not shown) are of comparable quality. In Fig. 2a is the 12-h CAPE forecast, whereas Fig. 2b is the verifying CAPE analysis ... CAPE is a "volatile" parameter, so it would be unrealistic to expect a perfect forecast. Basically, this example is not too atypical, although worse examples could be found, naturally.

Figure 1. (a) 12-h ETA forecast of the MSLP chart, valid at 00z, 09 May 2003; (b) verifying ETA analysis at 00z, 09 May 2003. Images downloaded from Greg Thompson's NCAR site.

Figure 2. (a) 12-h ETA forecast of the CAPE chart, valid at 00z, 09 May 2003; (b) verifying ETA analysis at 00z, 09 May 2003. Images downloaded from Greg Thompson's NCAR site.

Post-processing of model output, notably MOS (Model Output Statistics), has been around since the 1960s and therefore is also relatively well-developed. Without going into the details, MOS forecasts are darned good, especially in relatively common and/or benign weather situations. They tend to be of least value in those few situations that fall outside the "typical" behavior of the atmosphere. Nevertheless, MOS forecasts are difficult to beat on a regular basis, and MOS is substantially better, in general, than the raw model output.
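To make the MOS idea concrete, here is a minimal sketch, with invented numbers and a single predictor; operational MOS screens many predictors, stratified by station, season, and forecast projection:

```python
# A minimal sketch of the MOS idea, with made-up numbers throughout: regress
# an observed quantity (here, surface temperature) on a predictor taken from
# raw model output, using a history of past forecast/observation pairs.

# Toy history of (raw model surface temperature, observed temperature) pairs
# at one station, in degrees C. Note the model's consistent warm bias.
history = [(22.5, 21.0), (19.1, 18.2), (25.3, 24.5),
           (15.0, 13.8), (21.8, 20.9), (24.0, 23.1)]

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
         / sum((x - mean_x) ** 2 for x, _ in history))
intercept = mean_y - slope * mean_x

# Apply the fitted equation to today's raw model output: the regression
# quietly removes the systematic errors the raw model carries at this site.
raw = 20.5
print(f"raw model: {raw:.1f} C -> MOS-style forecast: {intercept + slope * raw:.1f} C")
```

The point is simply that a long history of forecast/observation pairs lets the statistics correct the model's systematic errors at a given site, which is why MOS beats the raw output.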

Another type of post-processor is associated with ensembles. Ensemble methods have been in development for about 20 years in the so-called "medium" ranges: 2-5 days. These systems have become reasonably sophisticated and have led the way in learning how to develop and use ensemble forecasts operationally. Clearly, ensemble methods are going to require continued research and development for the medium ranges. More interestingly, ensemble methods for short-range forecasting are being explored. The challenges associated with doing this are very different from those of medium-range forecasting, because mesoscale processes are much more dominant in the short-range problem. Unlike in medium-range ensemble forecasting, model error is a much more compelling and indisputable concern in short-range forecasts. Nevertheless, this effort is underway and should eventually lead to considerable insight and improvement.
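As one concrete example of an ensemble post-processing product, here is a minimal sketch (all member values invented) that turns a set of members' rainfall forecasts into a probability of exceeding a threshold:

```python
import random

# One simple ensemble post-processing product, under toy assumptions: convert
# 20 members' 24-h rainfall forecasts at a point into the probability of
# exceeding a threshold. The member values are invented; real systems also
# calibrate these raw frequencies against observations.

random.seed(1)
members_qpf = [max(0.0, random.gauss(12.0, 8.0)) for _ in range(20)]  # mm

threshold = 20.0  # mm per 24 h
prob = sum(1 for q in members_qpf if q >= threshold) / len(members_qpf)

print(f"P(rain >= {threshold:.0f} mm) = {prob:.0%}")
# The product is a probability, not a single yes/no rainfall amount: exactly
# the kind of output a purely deterministic mindset has trouble accommodating.
```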

A recent development has been the implementation of more or less automated forecast preparation systems, notably the Interactive Forecast Preparation System (IFPS). Their ultimate role and form have yet to be determined. They either signal the beginning of the end for humans, or they will become an important tool by which humans interact with the models and with the post-processing output of the future. Currently, it is certainly possible to take the path of least effort and let the "system" produce the whole forecast, beginning to end. This is what Snellman (1977) referred to as "meteorological cancer," of course. These new systems make it possible to remove humans altogether from the chain between model and forecast user, at least in most "routine" forecasting situations, and especially for forecast products (as opposed to warning products, which are simply very short-range forecasts). This issue is at the heart of this essay, so it will be elaborated upon at some length later.

What about the private sector?

Another issue in forecasting is the so-called private sector. Since this essay is mostly about the NWS, I will have little to say about the private sector. My main point is that I see relatively little going on in the private sector that suggests they are vastly superior at forecasting in comparison to the NWS. They are much better at packaging their products, otherwise known as "tailoring" the product to the user, but the product itself is not significantly better than that produced by the NWS. There are several reasons for this, but I believe the main one is that the private sector forecasts are largely cost-driven. They are trying to produce forecasts comparable to those of the NWS but at low cost. They can do this in part because they benefit from the huge subsidy associated with the fact that the government (NOAA) underwrites data collection and a good portion of the operational numerical modeling costs, as well as weather forecasting research. This is a major infrastructure cost borne by the taxpayers, not by the private sector companies.

Private sector forecasters are not particularly well-paid, they are given substantial workloads, and they aren't notably better educated or trained in comparison to their public sector counterparts. People are the most expensive part of both public and private sector forecasting operations. The way to cut costs is to minimize people expenses.

One of the reasons all the private forecasting companies are cost-driven is that the NWS puts out a pretty decent product for free. Why should a client pay some private sector company for forecasts if they can get a free product (actually, supported by the taxpayers, naturally ... no such thing as something for nothing!) from the NWS? Presumably, at least part of the reason is "tailoring" of the product. To market a product in competition with the NWS, private sector companies must be able to convince clients they're getting their money's worth. But to create those products at a low enough price to be attractive to clients, the private sector companies must be very cost-conscious. Some forecasting companies in the private sector already have automated large chunks of their operational forecasts.

I don't want to get very deep into the public/private dichotomy in this essay; that topic needs its own essay, so I won't say much more about it here. I simply want to note that as the NWS automates itself toward mediocrity and is forced by pro-privatization pressures out of those operations focused on specific market sectors (marine, aviation, agriculture, etc.), opportunities are created for private sector forecasters to produce high-quality products aimed at those niche markets. If a private sector company were created to sell quality - forecasts notably better than standard NWS products - rather than to compete mainly on price, it might succeed with a new approach to private sector forecasting.

Forecast delivery methods

The current process for forecast delivery is via a partnership between the NWS and the media. The pathetic penetration of the so-called "NOAA" Weather Radio into the delivery process guarantees that the NWS forecasters are rarely in direct contact with their users. Instead, the NWS product is delivered primarily by TV and newspapers -- to a lesser extent, radio is involved, but modern radio is also becoming automated and there is less and less room in radio for forecast product delivery.

Of course, the media often have their own forecasters, as at The Weather Channel and at most major network and local stations in metropolitan areas; many newspapers and radio stations buy their forecasts from private sector forecast companies, as well.

A major factor in forecast dissemination has been growing explosively: the Internet and the World-Wide Web. Even the NWS has been brought kicking and screaming into the age of electronic forecast product delivery. It is a way for the NWS to interact more directly with the public and eliminate the necessity for dealing with the rather recalcitrant and demanding media.

The future for dissemination of forecasts is quite bright with new ideas and concepts. The next few decades will see a revolution in how those who need weather information will obtain it. The classical methods for forecast delivery are focused on one-way, mass-audience channels: television, newspapers, and radio. Delivery methods currently are evolving toward direct, increasingly user-tailored electronic distribution via the Internet and the World-Wide Web.

It will be interesting to see how forecast delivery evolves in the next 20 years or so.

Some common issues

As noted, humans are major cost drivers in the forecasting world. Moreover, the production of high-quality human forecasters is not inexpensive. I can argue that weather forecasting is the most challenging task any meteorologist can undertake. To do it well, a forecaster needs as much education and training as possible. Settling for a Bachelor's degree, with around 30 semester hours of meteorology, is simply inadequate. I've discussed this at some length elsewhere.

As I've also discussed in detail elsewhere, in addition to education and training, forecasters need a useful verification system that provides them with meaningful feedback about what they're doing well and what aspects of their output need work. That feedback also needs to drive research into dealing with forecast problems for which science has yet to provide useful understanding. That research is something that the forecasters themselves ought to be involved with on a routine basis. This means that staffing at a forecast office needs to be considerably greater than it is.

Forecast office staffing should include on the order of 20-30 percent of a forecaster's time free to pursue:

  1. Training evolutions at a training institute
  2. In-house training evolutions
  3. Research projects at some research institute
  4. In-house research
  5. Attending scientific conferences
  6. Attending seminars and local forecaster-only meetings to discuss forecasting issues
  7. Development of local forecasting tools, usually for local computers

Since it traditionally takes five people to cover one forecaster position (accounting for 3 shifts per day, 7 days per week, plus leave time, etc.) in today's minimally staffed NWS forecast office, giving every forecaster this level of professional development time means adding at least one more person on staff for each forecaster position.


NOTE: One week is 7 x 24 = 168 hours. Five people working 40 hours per week (regular time) produce 200 hours. Leave time for individuals (sick leave, annual leave, compensatory time off, etc.) and extra staffing in times of important weather is supposed to be contained within the 200 - 168 = 32 hours (4 shifts of 8 hours). This often isn't enough, and there are various ways to cover the additional hours: some covering of shifts by those not formally shift forecasters (MICs, SOOs, WCMs), overtime, etc. To produce non-minimal staffing that allows for the realities of shift work in a weather office, there should be six people to cover one position ... even before we begin the discussion of adding staff to cover professional development. Five is "bare bones" staffing that works only in fair weather! If the weather were "fair" all the time, you wouldn't even need forecasters in the first place!!
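A back-of-the-envelope version of this staffing arithmetic follows; the leave and professional-development fractions are illustrative assumptions, not NWS figures:

```python
# Back-of-the-envelope staffing calculation, following the NOTE above.
# Leave and development fractions are assumed values for illustration only.

HOURS_PER_WEEK = 7 * 24  # one position must be covered 168 hours per week
WORK_WEEK = 40.0         # hours each employee is paid for

def staff_needed(leave_fraction, development_fraction):
    """People required per position, given time lost to leave and development."""
    productive = WORK_WEEK * (1 - leave_fraction) * (1 - development_fraction)
    return HOURS_PER_WEEK / productive

# Bare bones: no protected development time, ~10% leave -> about 4.7, i.e.,
# five people, with almost nothing to spare for busy weather.
print(f"bare bones:       {staff_needed(0.10, 0.00):.1f} people per position")

# With 25% of each forecaster's time protected for training and research
# (the 20-30 percent argued for above), the requirement rises past six.
print(f"with development: {staff_needed(0.10, 0.25):.1f} people per position")
```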

I've written extensively elsewhere about the problems with the way NWS management treats the staff, especially with regard to the introduction of new technology in the office. Suffice it to say that the standard approach has been to develop these systems in a vacuum -- in isolation from the field. After years of cost overruns and delays, the clunky system finally is dumped into the field offices, and the staff is told that if there are problems, it's up to them to deal with those problems. And, curiously enough, they usually do.

I've also recently written about what it takes to be a good forecaster. This is a discussion that really needs serious attention, and it has never gotten even a backward glance from management. No one comes close to understanding this in a comprehensive way. Lots of people have lots of ideas, including me, but none of them have been given serious study.

Trends in the data

The traditional mix of data in forecasting was heavy on in situ observations: surface observations and rawinsondes. Radars have been around since the post-WWII era, and satellites since the 1960s. Of late, the mix of observations is becoming much more diverse and leaning heavily on remote sensing, even as in situ observations are becoming more automated.

In addition to the flood of model output, forecasters are also beginning to be flooded with data! This is a relatively new development, and it carries with it a number of interesting issues. Rather than address those here, I'm going to defer most of them. However, consider the impact of fewer rawinsondes, redistributed. The sounding network has undergone wholesale changes, with sites moved to be co-located with the WFO sites. This has produced an altered distribution ... compare Figs. 3a and 3b.

Figure 3. (a) Rawinsonde sites as of May 2003 [from Greg Thompson's site]; (b) rawinsonde sites in the mid-1970s.

The issue of changing sites has many ramifications that deserve attention on their own, but I observe several critical "holes" in the redistributed sounding network. This is not to say the earlier distribution was perfect, but it seems to me that the new distribution is worse, not better.

Rawinsondes continue to be under attack. Their simple, reliable, accurate technology is, unfortunately, too dependent on humans for its own good. In the interest of trimming staff, it's likely that continuing pressure to reduce the number of rawinsondes will go on until they're eventually "replaced" by something else, the nature of which is unclear.

There's a growing number of automated mesonetworks scattered about the nation. These may or may not be getting into the NWS operational data base. Work is afoot to implement an automated "cooperative observer" system that will amount to a whole new national surface network. It will be interesting to see how this evolves.

It has been shown that implementing in situ sensors on commercial aircraft can have a beneficial impact - not so much for human forecasters, but for the operational NWP models. Hence, this avenue for a new data stream is being pursued vigorously.

It's obvious that remote sensing has been pushed by its advocates and is popular within the bureaucracy because it doesn't require much staffing. The primary advantage to remote sensing is simple: coverage. The disadvantage to remote sensing is that its observations have all sorts of issues. Many of these methods require inversion of the integro-differential radiative transfer equation, the solution of which is difficult to make unique without the use of supplemental in situ observations to help "peg" the solution of the inversion problem. Many remotely sensed observations require complicated post-processing to remove various artifacts (Doppler radar data are a prime example of this), including various sources of contamination. The use of multispectral satellite observations has never really lived up to most of its promises, and satellites are very expensive observing platforms. Satellite observations are wonderful for detecting some phenomena (notably, hurricanes at sea) that otherwise might go undetected, and subsynoptic-scale features, but they are not so good at producing hard, useful data for forecasting. Tracking cloud motion observed by satellites is full of challenges: knowing the level of the clouds, separating cloud propagation from cloud movement, etc.
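For readers unfamiliar with the inversion problem just mentioned, the schematic clear-sky form of the radiative transfer equation for a downward-viewing satellite sounder (the standard form found in remote sensing texts) shows why retrievals are ill-posed:

```latex
% Measured radiance in channel \nu: a surface emission term plus a weighted
% integral of Planck emission B_\nu(T(p)) through the depth of the atmosphere,
% where \tau_\nu(p) is the transmittance from pressure level p to space.
\[
  I_\nu = B_\nu\!\left(T_s\right)\tau_\nu(p_s)
        + \int_{p_s}^{0} B_\nu\!\left(T(p)\right)
          \frac{\partial \tau_\nu(p)}{\partial p}\,dp
\]
% Retrieval must invert a handful of noisy channel radiances I_\nu for the
% entire profile T(p); many different profiles fit the measurements equally
% well, which is why supplemental in situ data are needed to "peg" a unique
% solution.
```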

Lots of new information is coming in, some of which might be incredibly valuable, but the NWS is lousy at research, and it takes years, perhaps even decades, of research to make really effective use of new data streams. The NWS isn't very supportive of research, regarding that to be the role of other agencies (which are generally underfunded). I could go on at length about this, too, but will defer that, as well.

Trends in Numerical Weather Prediction (NWP)

As always, there is a continuing effort to push NWP to ever-higher resolution. It's obvious that parameterizations of subgrid phenomena and truncation error in the models represent two important issues. Enhanced resolution offers some prospect of being able to mitigate both of these issues at once, so the models are constantly being pushed toward higher resolution.

At the same time, Lorenz's contribution to our understanding of nonlinear dynamical uncertainty (Lorenz 1963) means that we may be reaching some point of diminishing returns associated with increased resolution. Initial condition uncertainty means that even a perfect model has limits to its accuracy, such that at long enough projection time, the predictions become indistinguishable from random guessing. And the models are (and will continue to be, even at much improved resolution) rather less than perfect.
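Lorenz's point is easy to demonstrate with a few lines of code. The following sketch (toy parameters and a deliberately crude integrator throughout) runs his 1963 system twice from initial states differing by one part in 10^8:

```python
# A minimal sketch of Lorenz's (1963) result: two almost identical initial
# states of a nonlinear system diverge until the forecast is worthless.
# Parameters are Lorenz's; the forward-Euler integrator, the time step, and
# the 1e-8 perturbation are arbitrary illustrative choices.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations one step with forward Euler."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

truth = (1.0, 1.0, 1.0)
forecast = (1.0 + 1e-8, 1.0, 1.0)  # an imperceptibly different "analysis"

for step in range(1, 3001):
    truth, forecast = lorenz_step(truth), lorenz_step(forecast)
    if step % 600 == 0:
        print(f"t = {step * 0.01:4.0f}: |x error| = {abs(truth[0] - forecast[0]):.2e}")
# The error grows roughly exponentially until it saturates at the size of the
# attractor itself: a perfect model, defeated by its initial conditions.
```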

Hence, it's becoming clear that a single run at the highest possible resolution may not be our best use of the continually growing computer power at our disposal. Stochastic-dynamic forecasting (Epstein 1969; Ehrendorfer 1994) is probably still a long way off, if it ever becomes a reality, but the poor man's version of it is Monte Carlo, or ensemble, methods. This is not the place for a full treatise on the subject of ensembles, but you can go here to gain at least some idea of what I believe to be the compelling advantages of ensemble methods, and here for a collection of references on the subject. Ensemble methods are relatively new compared with "traditional" NWP (which has enjoyed more than 50 years of development!), and there is much to learn about them. It remains to be seen where we will be if ensemble methods are pursued vigorously for 50 years or more.
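A poor man's Monte Carlo ensemble can likewise be sketched in a few lines, reusing the toy Lorenz model from the previous sketch; the perturbation size standing in for analysis uncertainty, the member count, and the forecast length are all invented:

```python
import random

# A poor man's Monte Carlo ensemble: run the same (toy) model from many
# perturbed analyses and summarize the resulting spread of solutions.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

random.seed(7)
members = [tuple(1.0 + random.gauss(0.0, 0.05) for _ in range(3))
           for _ in range(25)]

for _ in range(1500):  # integrate every member forward in time
    members = [lorenz_step(m) for m in members]

xs = [m[0] for m in members]
mean = sum(xs) / len(xs)
spread = (sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
print(f"ensemble mean x = {mean:+.2f}, spread = {spread:.2f}")
# Instead of one deterministic answer, the forecaster gets a distribution: a
# tight ensemble invites confidence, a broad one warns of low predictability.
```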

But one thing is certain: deterministic forecasting is dead! It may take some longer than others to accept this indisputable fact, but Lorenz has shown that highly accurate deterministic forecasting is fundamentally impossible, and not simply a difficult challenge.

As noted, post-processing of model output, including the output from ensembles, will continue to be explored. In effect, the products of post-processing are statistical-dynamical mixtures that offer better results than either purely statistical or purely dynamical methods.

The growth of communications bandwidth has opened up the model output to the field offices. It is now possible to "slice and dice" that output using new workstations in amazing new ways. This means that in addition to the flood of new observations, there is a gigantic number of new ways to look at the model output. So many new ways, in fact, that it is way beyond any single human's capacity to look at and interpret any but a tiny fraction of the total possible fields. Forecasters liken it to trying to drink from a fire hydrant!

As already noted, NWP models and the post-processed output are constantly improving, although it remains to be seen at what pace that improvement will continue. Even as all of this is taking place, the traditional skills of human forecasters are becoming increasingly irrelevant. Diagnosis of data is becoming a lost art, a relic of the ancient past. The new workstations are much less powerful when dealing with observations than when they process model output.

Given the generally high quality of automated forecasts in most situations, it is increasingly possible to argue that humans are no longer needed. The message to NWS forecasters is to leave the model output alone for predictions 24 h or more into the future. The mandated alternative has been for forecasters to become "mesoscale experts," adept at short-range forecasts of less than 24 h.

Of course, in my opinion, this is pure, grade-A nonsense! Mesoscale meteorology is by far the most complex physical system in all of meteorology. On scales both above and below, it's possible to make useful and reasonable assumptions that make large simplifications in the governing dynamics. On the mesoscale, virtually no such simplifications can be made, in general. Hence, the full complexity of the highly nonlinear primitive equations must be retained to treat mesoscale problems in full generality. There are no "mesoscale experts" - merely a collection of folks who have considered some small part of the mesoscale problem. To expect forecasters with a Bachelor's degree to become "mesoscale experts" is the height of foolishness and ignorance!

Furthermore, the conditions that allow mesoscale processes to grow and thrive are not ubiquitous. Rather, they are intermittent - an indicator of their fundamental nonlinearity. Such conditions are created at scales both above and below the mesoscale; mesoscale processes operate as scale transfer mechanisms, passing energy both up and downscale until the conditions that permit (or even require) mesoscale phenomena have been mitigated. Then, those mesoscale phenomena cease. To ignore the synoptic scale (and the microscale) is to prevent any possibility of comprehending and forecasting the mesoscale.

Whither goest the forecasters?

With the flood of data, the flood of model output, and the improvement of the models, it is becoming increasingly apparent that the role of humans needs to be reconsidered. Already, it's become apparent to me that the needs of the models have become dominant over the needs of the human forecasters. The evidence for this is everywhere, but I'll defer a full discussion of this to some other essay.

What has become obvious is that the gap between what the humans produce in terms of regular weather forecasts (i.e., not the warnings) and what the automated guidance produces has been steadily shrinking. Most of the improvement in forecasting generally has been due to improvements in the models. Even as I write this, many forecasts go out with little or no human intervention.

Generally speaking, a Faustian bargain has been struck between NWS management and the politicians. The NWS needs lots of resources to keep up with technology. All this hardware costs big $$ and the politicians are loath to part with their discretionary dollars for the sake of the NWS. Hence, the clear signal has been to address the biggest cost-driver in the existing system: the staff. Clearly, we can't cut the management staff, so the operational staffing has been reduced to the bone, and perhaps beyond. But the need for more investments in hardware goes on. And doesn't automation mean greater efficiency and, hence, a much smaller staff of "caretakers"?

Industry has shown repeatedly that automation does not automatically lead to reduced staffing needs. Yes, the tasks taken over by machines no longer require much human intervention, implying staff cuts, but (for example) the IT staff has to grow by leaps and bounds! The mix of people has to change, but it's not obvious that we can get by with smaller staffs as a direct result of automation.

In spite of their protests about a commitment to their staffs, the NWS is clearly on a long, well-traveled road toward reducing the human side of weather forecasting. The facts speak for themselves so clearly that there can be no doubt about the real process. Mere words pale when compared to the staffing facts.

In the public sector, resources are a zero-sum game: added resources for one agency must inevitably come at the expense of some other agency. Agencies that might otherwise cooperate are pitted against one another, and enmity and mean-spirited competition result. If the NWS were to commit to meaningful investment in its staffing as well as in new hardware, where would the additional resources come from? Whose programs could they rob in order to enhance their staffing and staff support (e.g., training)? Obviously, NWS management is loath even to try, because they know they couldn't make it work.

Over the years, NWS management engaged in a game of "chicken" with the politicians, threatening office closings when they knew that the local congressional delegations would scream bloody murder and put that funding back into the budget. This game is increasingly not working. The NWS is just not very adept at power politics. Some of the bigger egos in the NWS (past and present) have felt otherwise, but the facts are indisputable. Whereas in the past, major weather disasters typically led to new Federal funding support for the NWS, now gigantic weather catastrophes (e.g., Hurricane Andrew, the 3 May 1999 tornado outbreak) come and go with not a ripple in the NWS budget.


NOTE: by way of contrast, in the private sector, new resources are considered investments that are expected eventually to bring in much more than they cost. That is, new investments are decided on the basis of potential profits. Improvements that simply allow break-even are not going to be viewed as favorably as promises of big increases in income (or big cost savings). Unprofitable development teams get the axe, especially in tough economic times, and unprofitable businesses fail.

Some simple extrapolation of existing trends

  1. NWS public sector forecasting by humans will disappear in 50 years or less.
  2. Models and post-processed model output will continue to improve in accuracy.
  3. Private sector forecasting will continue, and quality-driven, rather than cost-driven, companies might come to dominate the marketplace.
  4. Given increasingly wide access to public-sector weather data, a large market will develop for interpretive software.

There's a simple, yet compelling comparison to make between automated and human forecasting:

Automated                                     Human
--------------------------------------------  --------------------------------------------
Physically consistent                         Intuitive
Cheap to operate                              Costly to operate - leave, etc.
Requires periodic upgrades                    Retirement - paid after ceasing production
Oracular - no interaction                     Capable of interaction
Uses primarily quantitative data              Capable of using nonquantitative data
Requires development as technology evolves    Requires continuing training and education
Nonlinear                                     Capable of nonlinearity
Based on solid, if limited, understanding,    Capable of both big mistakes and
so avoids big mistakes                        inexplicable successes

From many perspectives, it makes a lot of sense to go with automation. Humans are troublesome, inefficient, costly, unreliable, and can produce embarrassingly bad results, on occasion. Why pay them big bucks to sit around and pump out the model guidance most of the time?

But consider the following quotation from Sverre Petterssen:

"While the machines provide the answers that can be computed routinely, the forecaster will have the opportunity to concentrate on the problems which can be solved only by resort to scientific insight and experience. Furthermore, since the machine-made forecasts are derived, at least in part, from idealized models, there will always be an unexplained residual which invites study.

It is important, therefore, that the forecaster be conversant with the underlying theories, assumptions and models. In particular, it is important that he be able to identify the 'abnormal' situations when the idealized models (be they dynamical or statistical) are likely to be inadequate."

-Weather Analysis and Forecasting, Vol. 1

This remarkably prescient passage from the Preface to Petterssen's classic text captures the essence of what I'm trying to say. Whatever the objective reasons to automate forecasting, the very best forecasts will always be the province of human forecasters. But as the models improve, achieving meaningful improvement over them will be increasingly difficult. Forecasters capable of adding substantial value over what automated systems produce will require more education, meaningful training, better staffing situations (that permit real professional development over a forecasting career), proper workstations and other tools of analysis, diagnosis, and exploration, and fewer non-meteorological distractions than forecasters now have.

Consider the forecaster spectrum (Fig. 4). From this conceptual model (which is only a guess, of course), it is essentially obvious that the majority of forecasters are mediocre; that is, they are somewhere near the middle or below the middle along the spectrum. The primary issue is where the peak in the spectrum lies with respect to what the models (plus post-processing) are doing. I think it's pretty clear that the peak in the forecaster spectrum lies somewhere near the accuracy level of the models (plus post-processing). If it were otherwise, and most forecasters were markedly better than the automated products, then the statistics should reveal a substantial gap between what the humans produce and what the models (plus post-processing) do. The relatively small gap (which is shrinking!) suggests that most of the added value is produced by only a minority of the forecasters - the best of them. The essence of this is that automated forecasts will necessarily be mediocre forecasts, and that invites the prospect of adding value, but only when the best forecasters apply themselves to the job. This either means a very small workforce, or we will need to move the peak in the forecaster spectrum upward by a significant amount. This latter course will not be cheap! NWS management persists in avoiding the creation of meaningful training for its forecasters, and any attempts to certify the competency of its forecasters have gone nowhere.

Figure 4. Schematic diagram of the forecaster spectrum as a classic Gaussian bell curve.
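To put toy numbers on this spectrum argument (every figure invented), suppose forecaster skill follows a Gaussian whose peak sits at the guidance's own skill level; then only a small tail of the workforce adds substantial value, and that tail shrinks as the guidance improves:

```python
from math import erf, sqrt

# A toy quantification of the forecaster-spectrum argument, with all numbers
# invented and expressed in arbitrary "skill units."

def fraction_above(mean, sd, threshold):
    """Fraction of a Normal(mean, sd) population above the threshold."""
    return 0.5 * (1.0 - erf((threshold - mean) / (sd * sqrt(2.0))))

peak_skill, spread = 0.50, 0.08   # assumed peak and width of the spectrum
margin = 0.05                     # what counts as "substantial" added value

for guidance_skill in (0.50, 0.55, 0.60):  # guidance improving over time
    f = fraction_above(peak_skill, spread, guidance_skill + margin)
    print(f"guidance at {guidance_skill:.2f}: "
          f"{f:.1%} of forecasters add substantial value")
# As the guidance improves, the fraction of humans who beat it meaningfully
# collapses from a quarter of the workforce toward a few percent.
```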

In terms of statistics, the model (plus post-processing - this qualification will be dropped hereinafter, but should be implicitly included when talking about "models" or "guidance" or "automated forecasts") does well in the verification game because it avoids big mistakes most of the time. However, automated forecasts still make big mistakes from time to time. The best forecasters realize this and work hard to recognize when to depart substantially from the guidance, and when to stick close to the guidance. On most days you should go with the guidance, but when it looks like the model is blowing the forecast, don't hedge your expectations toward the guidance - rather, go for what you're anticipating.
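A toy verification example of this point, with invented numbers: a stick-to-the-guidance strategy wins on routine days, while the value of human independence is concentrated in the rare busted forecast:

```python
# Five days of invented temperatures; day 5 is the extreme event the
# guidance busts. The "bold" human is noisier on routine days but nails it.

observed = [10, 12, 11, 13, 35]
guidance = [11, 11, 12, 12, 15]
bold     = [13,  9, 14, 10, 33]

def mae(fcst, obs):
    """Mean absolute error over a set of forecast/observation pairs."""
    return sum(abs(f - o) for f, o in zip(fcst, obs)) / len(obs)

print(f"routine days - guidance: {mae(guidance[:4], observed[:4]):.1f}, "
      f"bold human: {mae(bold[:4], observed[:4]):.1f}")
print(f"all days     - guidance: {mae(guidance, observed):.1f}, "
      f"bold human: {mae(bold, observed):.1f}")
# Averaged scores can hide the days that matter most; the best forecasters
# spend their independence on exactly those days.
```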

As it now stands, in a typical forecast office, there are probably on the order of 10 days per year when substantial forecaster intervention is needed. Is this enough to justify the expense of all those human forecasters? A few years back, Harold Brooks, Mike Fritsch, and I wrote a controversial paper in which we proposed that the NWS cut back to a much smaller number of offices (order 10, which means more than 5 and fewer than 50, for those of my readers who might be confused by the meaning of "order of magnitude"). This would necessarily mean a larger area of responsibility, which would in turn mean an increase in the number of situations per year that could benefit from human intervention, and give the forecasters more experience at dealing with such situations. We also said, "The kind of forecaster needed in the NWS structure that we envision will be much different from that which exists now." The forecasters would need to be trained and educated to be effective in an era of continuing model improvement. The staffing would have to permit continuing professional development. And so on.

What I now fear is that the NWS will make the move to fewer offices, but will not implement the substantive changes we recommended - we said "... an order of magnitude estimate of the number of trained meteorologists on duty at any one time in any office would be around 10." If we wind up with roughly the same number of meteorologists on duty in the "consolidated" offices as we now have in the WFOs of today, the result will be nothing more than a thinly disguised set of staffing reductions. Such a move is simply the next step in a program that seems destined to remove humans from the forecasting process eventually.

What about warnings?

Some have suggested that the warning function (really, a very short-range forecast) for hazardous weather is the "insurance" that will keep humans in the forecasting process. I think the handwriting for this is already up on the subway walls and tenement halls. As of now, there are algorithms that can read weather data and make various kinds of "guidance" products to call possibly hazardous weather to the attention of a warning forecaster. These are primarily radar-based, although they are being modified to include more than just radar data.

At the present time, these algorithms produce a lot of false alarms and still don't manage to detect certain events very well. That is, their accuracy is not very good. Some very talented and hard-working folks in places like NSSL, NCAR, and FSL, as well as in universities are working to improve the performance of these automated systems for detecting hazardous weather. Their performance certainly will improve.

At the same time, some NWS offices have adopted a policy that is the analog of Snellman's "meteorological cancer" - perhaps we should call it meteorological AIDS (Artificial Intelligence Dependency Syndrome - a name created by Charlie Chappell, to the best of my knowledge). Imagine the following scenario:

  1. Algorithm detects hazardous weather and triggers an "alert"
  2. Warning forecaster checks out the situation and decides a warning isn't warranted
  3. Hazardous weather event occurs and people die
  4. Investigation team determines that forecaster "ignored" the guidance
  5. ... I leave it to your imagination to fill in this last step ...

If there's anything I know about forecasters, it's that they're not stupid! A false alarm never kills anyone, and if anyone takes the heat for too many false alarms, it's the NWS office as a whole, or even the whole NWS. If a deadly event occurs with no warning, however, the forecaster on duty at the time is virtually certain to take the heat. Forecasters thus face a major asymmetry in the penalty for the two types of incorrect forecasts: false alarms generate little pressure on an individual, while failures to detect might get an individual fired! Hmmm ... how hard is that one to figure out? Local field managers may be infected with this same logic, and whole offices may be churning out warnings for every algorithm detection. This is tantamount to automating the warnings! It is going on right now, so imagine the pressure to follow this path as the algorithms get better in the future.
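The asymmetry is easy to quantify with a toy expected-cost calculation, in which every number is invented for illustration:

```python
# From the individual forecaster's standpoint, the personal penalty for a
# missed deadly event dwarfs the penalty for a false alarm, so warning on
# every algorithm alert minimizes personal risk -- whatever it does to the
# office's false-alarm statistics. All numbers below are invented.

P_EVENT = 0.2               # chance an algorithm alert verifies as hazardous
COST_FALSE_ALARM = 1.0      # mild, diffuse criticism of the office
COST_MISSED_EVENT = 100.0   # career-threatening scrutiny of the individual

# Expected personal cost of each policy, per alert:
always_warn = (1 - P_EVENT) * COST_FALSE_ALARM   # only false alarms possible
never_warn = P_EVENT * COST_MISSED_EVENT         # only misses possible

print(f"warn on every alert: expected cost {always_warn:.1f}")
print(f"ignore every alert:  expected cost {never_warn:.1f}")
# 0.8 versus 20.0 -- the rational, self-interested choice is obvious, which
# is how warning decisions drift toward de facto automation.
```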

The folks working on algorithms are often quick to point out that they are not doing so with the intent to displace humans. They say they're only producing "guidance" or "decision aids" to help the warning forecasters. I'm fond of pointing out that the road to perdition is paved with the best of intentions. There's no way that improved algorithms are not hastening the day when warning forecasters will become obsolete, irrespective of anyone's intentions.

What about public outreach?

Another major reason for having human forecasters is their public outreach function. NWS offices put on storm spotter training, and have many other "extra" duties associated with public outreach. Closing offices means that the chances for such outreach are reduced because the need would be spread over a smaller staff serving larger areas. This is an important argument against the "consolidation" process envisioned in the paper by Brooks, Fritsch, and me.

However, I have at least two different, but related, responses to this legitimate concern. In the first place, having forecasters engaged in outreach takes time away from their duties at the forecast desk. Many forecasters do outreach on their own time, perhaps getting only compensatory time off in return, or perhaps getting nothing at all for that effort. With minimal staffing for the normal duties of a forecaster, outreach duties are just another burden imposed on forecasters, whose "extra" burdens mean they have minimal time at the forecast desk to focus on their main task: forecasting.

Second, many forecasters are "weather geeks" who may not be well-suited to a role that involves much public interaction. Weather offices in the future might be well-served by having outreach specialists who are committed full-time to community outreach activities of all sorts. MICs, SOOs, and WCMs might be expected to pick up this slack, but none of them has necessarily been selected for outreach skills. These folks are being asked to wear many hats, and it seems obvious to me that many of them fall short in one way or another ... you'd have to be superhuman to excel at all the tasks they're asked to perform, and time is limited. Outreach specialists could (and arguably should) be meteorologists, of course, but they would be specialists in this sort of effort, and presumably would have much more time than today's forecasters to spend on this important task. I would guess that if the "consolidation" occurs, there will be no enhancement of the public outreach component within the reduced number of offices.

Can this trend be stopped?

It seems to me that the political and economic situation in the U.S. at the moment is driving us toward a fully automated public sector forecasting "service". The seeds for the eventual elimination of most of the NWS were planted in the Reagan administration, which exerted strong pressure for privatization of most NWS functions. This notion has not gone away and is apparently being nurtured by the current G.W. Bush administration - there are hints of massive cuts to come. The top NWS management knows which side of the bread their performance bonuses come from: cost-cutting. Staffing is the major cost driver. You figure out where their priorities are! No matter what they say, what they do speaks so loudly that I can't hear their empty words.


As a brief diversion, if the NWS goes, then can the NOAA research labs devoted to weather-related research be far behind? I don't think they would survive more than a few years beyond the eventual demise of (first) human forecasters and (second) the NWS as we know it. Privatization continues to be a powerful force in government, and if the operational arms of NOAA are gutted, the research arms will go, as well. It's already happening.

The gradual dismantling of human forecasters presages the possible complete dissolution of the NWS. If the current trends aren't stopped, I don't think the NWS itself (like its human forecasters) has more than 50 years to survive. The current set of managers will probably last through their fat-cat retirements, and probably many of their immediate successors. After that - who knows? And the working stiffs will be on their way out sooner than that.

Strangely, most forecasters seem apathetic and unwilling to do anything to prevent this from happening. Many are willing to collect their substantial paychecks, sit back, and let meteorological cancer and AIDS reach terminal levels. Forecasters have to be committed to making a substantial positive difference - when the chances come, as they inevitably do, forecasters must rise to the challenge, or be swept away. Their own self-interest is tied to adding value to the models, so if they ignore that, it seems they're not even motivated to serve themselves. If they want a future NWS with them in it, they'd better be prepared to fight for it. A climate of fear in the NWS muzzles many of them who might otherwise speak out.

What forecasters should do:

Can the unions protect the jobs of human forecasters?

Ask the air traffic controllers fired by the Reagan administration. The simple fact is that the politicians have the power to do pretty much whatever they want, especially when dealing with folks with no political power base. Politicians and high management officials can run roughshod over the rules. Generally, they invoke the rules only as an excuse for not doing what they don't actually want to do. If they really want to do something, the rules won't often stop them.

The unions are generally made up of folks worrying about the so-called "bread and butter" issues: job security, benefits, wages, working conditions, etc. They are not much concerned about the philosophical issues that I've been discussing. If the unions can be transformed into something that can muster important political support for keeping the NWS forecasters on in some role other than the equivalent of a fireman on a diesel locomotive, that would be great. I'm not going to hold my breath, however.


References

Ehrendorfer, M., 1994: The Liouville equation and its potential usefulness for the prediction of forecast skill. Part I: Theory. Mon. Wea. Rev., 122, 703-713.

Epstein, E. S., 1969: Stochastic dynamic prediction. Tellus, 21, 739-759.

Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141.

Snellman, L. W., 1977: Operational forecasting using automated guidance. Bull. Amer. Meteor. Soc., 58, 1036-1044.