As always, this is my opinion only. Your opinions and feedback are welcome. firstname.lastname@example.org
Having recently attended the 1st Annual Forecaster's Forum of the Meteorological Service of Canada (MSC) in Victoria, British Columbia, I was in a position to participate in some "breakout" sessions regarding forecaster training. In one of these sessions, we were asked to identify the characteristics of a good forecaster. This has long been a topic of some interest to me, springing from conversations with the late Alan Murphy and Tom Stewart at a conference years ago. What is the suite of characteristics of a good forecaster? It has long seemed to me that this would be a worthy subject for cognitive psychologists to consider in a systematic way.
Our discussions at the Forecaster's Forum confirmed that several characteristics seemed to be common to good forecasters. For something like jet fighter pilots, there are clear constraints on who would even have a chance to be considered. For example, having excellent uncorrected vision seems like a clear prerequisite to being a good jet fighter pilot. Although there might well be other qualities of importance, not having good uncorrected vision would seem like a reasonable criterion for weeding out obviously unfit candidates.
If we could identify and measure the traits that are clearly needed to be a good forecaster, we could minimize the chance of hiring the weather forecasting equivalent of a legally blind jet fighter pilot. Many of us know of examples of people who definitely should not be forecasters ... they are obviously unfit for the job, but there they sit. We need to be able to identify such people before we invest in their training, to prevent them from reaching the position where it becomes clear that they are unfit for such duty.
I know of no such study by cognitive psychologists, so this informal discussion is little more than a first step toward such a study. I want to present my interpretation of our discussions, perhaps as a catalyst or starting point for a serious consideration of the traits needed to be a good forecaster. I was very impressed with the discussions and felt that this was indeed the first tentative consideration of what traits are common to good weather forecasters. In what follows, no particular ordering is implied by the set of characteristics.
We were also tasked to consider what roles a forecaster might be expected to assume during a career. This included the following notions:
1. Service provider
A forecaster first and foremost provides forecasting services. This necessarily involves the analysis, diagnosis, and prognosis of meteorological fields.
In order that the forecasts provide value to the users, the forecasters will be involved in helping users make effective use of the products provided. This may require considerable effort to inform users about the products and their implications, as well as their limitations. It also will involve considerable interaction with forecast users.
3. Information provider
The forecaster is, in a sense, a mediator between the meteorological information at his/her disposal and the user. The forecaster passes that information to the user, and may also pass information about the user's needs back to the system so that useful information can be collected and passed on to users.
4. Mentor (trainer)
A forecaster should be able to act as a mentor to incoming meteorologists, passing on the value of his/her experience and knowledge of the existing forecast infrastructure. Each forecaster therefore should be trained in how to train incoming employees.
However, this set of duties is not necessarily central to the issue of a role for humans in the increasingly model-dominated world of operational weather forecasting. I've written extensively about the human element in the past (a paper first presented at the 1st Annual Operational Meteorology Workshop, in Winnipeg, CA), including the skills forecasters need for the future, and I recommend two essays by Harold Brooks here and here.
What we were saying in those earlier essays was that the continuing improvement of models, coupled with post-processing schemes (like MOS), was reducing the frequency of a need for forecaster intervention, except on days with "high impact" weather or when the models and the post-processing were giving egregiously bad forecasts. It is increasingly the case that on many days, there is little a forecaster can do to make a significant difference to the automated products ... such days tend to be those where little of real import is going on. During the Forecaster's Forum, however, two important points were noted.
1. Large forecast areal responsibility means more days with high-impact weather
As Canadian offices are closed down, the remaining offices have responsibility for huge areas, and the chance that something of importance will be going on during any given day is correspondingly large. This is not unrelated to the notions that Mike Fritsch, Harold Brooks, and I wrote about a few years ago. In our proposal, of course, staffing in offices with such large areas of responsibility would have to be sufficient to cope with the workload. Bureaucracies tend to want to reduce staffing as the primary means of reducing the budget, arguing that automation means forecasters can leave the "routine" stuff to the models and the recently implemented automated forecast preparation systems. But with large areas of responsibility, there are fewer such "routine" days and more days when forecaster intervention is needed. If staffing declines in the few remaining offices, it's not obvious that forecasters will be able to cope with the increasing need for intervention.
2. Virtually any day's weather is of high impact to someone
Weather sensitivity is not easily limited to days with weather that is interesting to meteorologists. With a wide variety of user needs, almost any day's weather has a big impact on some users. Therefore, if we are to serve users effectively, it's not entirely clear that we can simply let automation do its thing even on days with relatively fair weather.
Automated forecast preparation systems based on model output, like IFPS in the National Weather Service, or SCRIBE in the Meteorological Service of Canada, have changed the forecasting "landscape," so it's of some interest to explore what this changed world means for the human element. Also, it has become "trendy" to focus on serving user needs, which may in fact require more human intervention to meet those needs. Automated forecast preparation means that forecasters are not "wasting time" typing up worded forecasts. However, modification of the automated forecasts is becoming a huge time sink if the forecaster is to be conscientious about service. Unfortunately, it's not at all clear that changing the gridded fields subjectively is going to produce meteorologically consistent results, and there is a problem with coordinating the forecasts on the boundaries of areas of responsibility.
It is not at all obvious to me that these systems have "freed up" much time for forecasters to focus on the meteorology. Quite the contrary, in fact, unless the forecaster chooses not to intervene and lets the automated products go out untouched by human hands. This is tantamount to Snellman's "meteorological cancer" and will only hasten the day that humans disappear from the loop entirely. But intervention is complicated and difficult to do with these new systems. Far from making life easier, these systems seem to be the harbinger of a future automated public weather forecasting system.
Warnings are nothing more (or less) than very short-range forecasts. For a long time, it has been the "party line" that forecasts beyond about 12-24 h ought to be the domain of the models and post-processing, and forecasters should pay attention primarily to the first 12-24 h of the forecast period. Certainly, there is reason to argue that this is of paramount importance to many forecast users. The developers of algorithms for application to weather radar insist that these are simply "guidance" to the human forecaster. The fact is that the handwriting is on the wall for warnings as well as "routine" forecasts. The algorithms are going to get better, and in some NWS offices, the CYA attitude has been that if the algorithm alert is tripped, a warning should follow. Apparently, the philosophy is that if an algorithm is triggered and you don't put out a warning, and something bad happens, your ass is going to be in a sling. No one ever dies in a false alarm, after all. Hence, it is already the case that algorithm output is effectively becoming the de facto warning system! As with the models and MOS, the "guidance" is likely to become the product!
It seems to me that having humans tinkering subjectively with the model output is the wrong place for human intervention. In today's world, models are simply oracles ... you can't have much of a dialog with a centrally-run numerical weather prediction model. You either accept it or try (and struggle) to produce a meteorologically consistent alternative. Having humans trying to do this on gridded model output is just not likely ever to work successfully. As I see it, the place for human involvement with models is on the input side. A forecaster busy doing meteorological diagnosis is very much in tune with the existing structure of the atmosphere, so it seems to me that a better place for that insight to be of value is before the models are run.
This is also the right place to consider ensembles and uncertainty. If the forecaster has some control over the initialization, then uncertainties about the initial state, built up by doing diagnosis, can provide a non-random component to the generation of an ensemble of different initial conditions. What if the system over the ocean has a stronger wind max than analyzed? What if the moisture is just offshore? What if the ongoing MCS lays down an outflow boundary? The advantage of altering the input is that the output will always be as meteorologically consistent as the model equations demand.
We need to develop models in such a way that they are truly interactive. Forecasters should be able to conduct "what if" experiments with the models, running them repeatedly and changing things to see what the meteorological implications of those changes are. This obviates the need for intervention on the output side and actually has the potential to teach forecasters a great deal about how models actually work! Knowing how the models work has a number of beneficial results, not the least of which is guiding any interactions with the models.
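To make the "what if" idea concrete, here is a toy sketch of my own (not anything from the Forum, and certainly not an operational model): it integrates the classic Lorenz (1963) system from a control initial state and from a slightly perturbed one, standing in for a forecaster's "what if the analysis is a little off?" experiment.

```python
# Toy "what if" experiment: integrate the Lorenz (1963) system from a
# control initial state and from a forecaster-perturbed one, then compare
# how the two solutions diverge. A real interactive NWP system would
# perturb analyzed atmospheric fields, not these three abstract variables.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with simple forward Euler."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def run(state, nsteps=1000):
    for _ in range(nsteps):
        state = lorenz_step(state)
    return state

control = run((1.0, 1.0, 1.0))
# "What if" the initial state were slightly different (say, a stronger
# wind max than analyzed)? Represent that as a small perturbation:
what_if = run((1.001, 1.0, 1.0))

# A tiny change on the input side yields a different, but internally
# consistent, outcome -- the model equations enforce the consistency.
diff = max(abs(a - b) for a, b in zip(control, what_if))
print(diff)
```

The point of the sketch is only that perturbing the input, rather than editing the output, lets the model equations themselves guarantee a physically consistent alternative scenario.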
This says some things about the thorny issue of local models. I believe that centralized models should continue, but forecast offices should be encouraged to develop local models that will be truly interactive. It probably will never be possible to have a proper interaction with a centralized suite of models, but with the ever-increasing capability of personal computers and workstations, it's now possible to run a mesoscale model on a laptop computer! Perhaps not with super high resolution, and it might not be available really fast, but the interactive capability might outweigh the disadvantages of its limited capabilities.
An interesting issue arises with respect to ensembles. It's important to understand that although the consensus among the ensemble members is almost always the best single forecast ... for reasons that are essentially statistical ... there is more to the ensemble than the consensus. A proper ensemble will often include low-probability members that have high potential impact. Being aware of the possibility of an evolution that leads to a major event is the best way to avoid unpleasant surprises on the forecast desk. If the atmosphere is evolving toward a low-probability but high impact event, then human intervention may be warranted. But the forecaster must be conscious of the possibilities inherent in any given situation. Ensembles may be the best way to do this if forecasters don't limit their attention to the ensemble mean forecast.
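The statistical reason the consensus usually wins can be seen in a toy simulation (my own illustration, not anything from the Forum): if each ensemble member's forecast is treated as truth plus an independent error, averaging N members shrinks the error variance by roughly a factor of N.

```python
import random

random.seed(42)

# Toy demonstration that the ensemble mean ("consensus") usually beats a
# typical individual member. Each member's forecast is modeled as truth
# plus an independent unit-variance error; real ensemble errors are
# correlated, so the gain in practice is smaller than this idealization.
truth = 0.0
n_members, n_cases = 10, 5000

member_sq_err = 0.0
mean_sq_err = 0.0
for _ in range(n_cases):
    members = [truth + random.gauss(0.0, 1.0) for _ in range(n_members)]
    consensus = sum(members) / n_members
    member_sq_err += members[0] ** 2   # error of one arbitrary member
    mean_sq_err += consensus ** 2      # error of the ensemble mean

rmse_member = (member_sq_err / n_cases) ** 0.5
rmse_mean = (mean_sq_err / n_cases) ** 0.5
print(rmse_member, rmse_mean)  # consensus RMSE is markedly smaller
```

Note that this is precisely why the consensus is the best *single* forecast, and also why it washes out the low-probability, high-impact members ... the averaging that cancels error also cancels the interesting outliers.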
The following traits are not in any particular order. They are not validated in a scientific sense. Rather, they emerged as a consensus among the participants in the aforementioned Forecaster's Forum.
Decisive
A forecaster has to be able to make decisions, often in the face of inadequate information, and within the deadlines that typically are imposed on him/her.
Deals well with pressure
The responsibility of forecasting, perhaps in situations that literally involve people's lives and property, under the constraint that decisions have to be timely to be of value, creates considerable stress on the responsible forecaster. Decisions made under these circumstances will have serious consequences.
Able to visualize in four dimensions
Meteorological data are inherently 4-dimensional: they evolve with time in 3 space dimensions. A good forecaster must be able to visualize this 4-d structure to grasp the existing relationships and to anticipate the evolution of that structure.
Passionate for meteorology
Every good forecaster shares a passion for the job. This means that the forecaster is excited about the weather and stimulated by the challenge of forecasting.
Committed to continuous learning
The passion for the subject necessitates a career-long commitment to continuous learning about the atmosphere. Every experience is a learning experience, and there is no avenue of learning that a good forecaster will willingly exclude in trying to become a better forecaster.
Works well with others
A successful forecaster must be able to interact with other people successfully. This means that s/he will be a leader when called upon to do so, as well as a follower when the situation requires it. S/he must deal effectively not only with other forecasters, but also with the variety of users that are the customers for his/her forecast products.
Organized and able to multitask
In the complex, volatile world that constitutes the forecaster's environment, the successful forecaster must be able to carry out multiple tasks simultaneously. This means prioritizing among the multiplicity of tasks. It takes considerable organizational skill to handle the complexities of real-world forecasting.
Able to deal with failure
Given the reality of weather forecasting, failure is inevitable. The successful forecaster must avoid the extreme reactions to the inevitable forecast failures: (a) giving up and becoming unconcerned about forecast failures, or (b) letting the frustrations and criticisms associated with forecast failures become disheartening. A good forecaster accepts failures but is never satisfied with them.
Maintains situation awareness
A successful forecaster never loses situation awareness. Even when the weather is fair and the situation apparently boring, s/he remains alert and is therefore less vulnerable to surprise weather events.
Communicates effectively
The successful forecaster is an effective communicator, to other forecasters and to forecast users, using verbal, graphical, and textual communication tools.
Adaptable
A good forecaster is able to adapt effectively to constantly changing situations. Although experience can suggest ways to deal with particular situations, every weather situation is unique, and good forecasters can cope with unanticipated changes and still get the job done effectively.
Physical capability to do shift work
Not everyone is able to adapt well to the shift work that is typical of most forecaster positions. Only those who can do so without substantial impairment of their general abilities will ever become successful forecasters.