Thoughts about false alarms with tornado warnings

by 

Chuck Doswell

Posted: 28 May 2011. Last update: 29 May 2011 (added some input from a reader).

All the standard disclaimers apply. These are my personal opinions only and do not have any official status. Feel free to e-mail any comments to <cdoswell # earthlink.net> - use the email hyperlink or cut out " # " and paste in "@".


Introduction -- a scientific shortfall

After recent events this year (2011), the subject of warnings that turn out to be "false alarms" has re-surfaced as a hot topic, as inevitably happens after many people die in tornadoes. There can be little doubt that the majority of tornado warnings turn out to be false alarms. Why is this the case?

First of all, we don't yet have the science to distinguish with absolute accuracy which storms are going to become tornadic, and when they will do so. Far from it, in fact, as warning verification statistics demonstrate so well. As much as the recipients of tornado warnings want us to be 100% accurate, that's simply not possible, even under the best of circumstances. New research may help, of course, but improvement of our warnings is possible even if no new scientific insights emerge from research: the operational warnings are not as good as they could be if everything already known were applied, and applied properly, by well-trained warning forecasters. Perhaps we could make some gains if the science were used to its fullest, but I don't think that would permit a truly major reduction in false alarms.

Given the apparently inevitable uncertainty with warnings, some have argued that warnings (which, after all, are forecasts) need to provide information about that uncertainty (see here and here). This always touches off impassioned arguments that seem inexorably to focus on certain repetitive themes -- see here for some discussion. It appears that many concerns exist about how best to communicate uncertainty, and some people believe passionately that any expression of uncertainty gives warning recipients more reasons not to respond as commanded: seek shelter immediately! If we in the weather community can't agree that communicating uncertainty is in our best interests, we simply delay our commitment to the processes that could result in a successful implementation of probabilistic warnings. But I digress ...

What else drives false alarms?

Apart from inadequacies in the science and training shortfalls, are there other reasons for so many false alarms? I believe the answer is yes: the penalty for issuing a warning that turns out to be a false alarm is much less than the penalty for not issuing a warning and then having a tornado hit the area and kill people. No one dies from a tornado in a false alarm! This asymmetric penalty function is a major factor responsible for overwarning! It's often referred to as CYA -- cover your ass! Since forecasters presently have only the option to issue or not issue a tornado warning, irrespective of the perceived threat level, when a given warning forecaster's estimate of the probability of a tornado rises above his/her personal threshold, s/he pulls the trigger and the warning is issued. I believe that probabilistic warnings (using a common threshold for issuing a tornado warning) would permit forecasters to come closer to the ideal of unbiased warnings.
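To make the asymmetric penalty concrete, here's a minimal cost-loss sketch -- my own illustration with made-up penalty numbers, not anything drawn from NWS practice -- showing how unequal penalties for misses and false alarms push the warning threshold way down:

```python
# Cost-loss sketch of the asymmetric penalty (illustrative numbers only).
# If a miss (an unwarned killer tornado) is penalized far more heavily than
# a false alarm, the rational probability threshold for warning drops very low.

def should_warn(p_tornado: float, cost_false_alarm: float, cost_miss: float) -> bool:
    """Warn when the expected cost of silence exceeds the expected cost of warning."""
    return p_tornado * cost_miss > (1.0 - p_tornado) * cost_false_alarm

# Hypothetical penalties: a miss is 50x worse than a false alarm.
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f} -> warn? {should_warn(p, 1.0, 50.0)}")

# Break-even point: warn whenever p > 1/(1 + 50), i.e. about 2% --
# so the vast majority of warnings issued at that threshold verify as false alarms.
```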

It's already been shown that forecasters are pretty good at estimating their uncertainty. Given some training and some feedback through verification, they can calibrate their probability estimates quite well, such that their forecasts are reliable: that is, events occur with an observed frequency that closely matches the forecast probability. Therefore, not including uncertainty information in the warnings constitutes a form of withholding information!
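As an illustration of what "reliable" means, here's a toy sketch -- entirely made-up data, not from any verification study -- that compares forecast probabilities against observed event frequencies:

```python
# Toy reliability (calibration) check: for each distinct forecast probability,
# compare it against the observed frequency of the event. A well-calibrated
# forecaster's observed frequencies track the forecast probabilities.

forecasts = [0.1, 0.1, 0.3, 0.3, 0.3, 0.5, 0.5, 0.7, 0.9, 0.9]  # made-up forecasts
outcomes  = [0,   0,   0,   1,   0,   1,   0,   1,   1,   1]    # 1 = tornado occurred

bins = {}
for p, o in zip(forecasts, outcomes):
    bins.setdefault(p, []).append(o)

for p in sorted(bins):
    freq = sum(bins[p]) / len(bins[p])
    print(f"forecast {p:.1f} -> observed frequency {freq:.2f} ({len(bins[p])} cases)")
```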

If we issue a tornado warning for virtually every thunderstorm, we'll catch nearly all the tornadoes (a few tornadoes come from storms so weak as not even to be thundering, so some may still be missed). The real challenge is in reducing the false alarms. Interestingly, if we include uncertainty information in the warnings, the "false alarm" problem disappears, replaced by a concern for calibrating probability estimates. A warning that includes uncertainty information is only wholly right or wrong when the forecast probability is either 100% or 0%!
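One standard way to score such probabilistic forecasts is the Brier score -- my choice of example here, not something the warning system prescribes. It makes the point explicit: only forecasts of exactly 0% or 100% can be wholly right or wholly wrong:

```python
# Brier score: the mean squared difference between forecast probability and
# the 0/1 outcome. Only forecasts of exactly 0.0 or 1.0 can be wholly right
# (score 0.0) or wholly wrong (score 1.0); anything in between gets partial credit.

def brier(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier([1.0], [0]))  # categorical warning, no tornado: 1.0 -- a total bust
print(brier([0.3], [0]))  # 30% warning, no tornado: 0.09 -- small penalty
print(brier([0.3], [1]))  # 30% warning, tornado: 0.49 -- larger penalty
```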

The "Integrated Warning System"

Back in the latter part of the 19th century, the pioneer of tornado forecasting, John Park Finley, made an attempt to forecast tornadoes and developed the classic dichotomous (yes or no) contingency table forecast verification scheme to show the results of his experimental forecasts. Unfortunately, a contemporary colleague demonstrated that Finley's chosen measure of accuracy (percent correct) would have been higher if he had never issued any tornado forecasts at all! This can be a problem when verifying forecasts of rare events, and tornadoes are rare events! [see also here]
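To see how never forecasting can "win," here's a minimal sketch using the contingency-table counts commonly cited for Finley's 1884 trial; treat the exact numbers as illustrative:

```python
# Finley's "accuracy paradox" via the classic 2x2 contingency table.
# Counts are those commonly cited for Finley's 1884 trial (hits, false alarms,
# misses, correct nulls); treat the exact numbers as illustrative.

hits, false_alarms, misses, correct_nulls = 28, 72, 23, 2680
total = hits + false_alarms + misses + correct_nulls  # 2803 forecasts

percent_correct = (hits + correct_nulls) / total
print(f"Finley's percent correct:         {percent_correct:.1%}")  # ~96.6%

# Never forecasting a tornado: the false alarms become correct nulls,
# the hits become misses -- and "accuracy" goes up.
never_correct = (false_alarms + correct_nulls) / total
print(f"'Never forecast' percent correct: {never_correct:.1%}")    # ~98.2%
```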

What we call the "integrated warning system" evolved without much conscious control over the years. In the time before radar technology, the system depended on spotters, who would see an oncoming tornado and relay that information to municipal authorities, who could then sound sirens (originally installed as a result of the Cold War!). This system depended on spotters learning to recognize what they were seeing, and it isn't very effective at night or in situations with poor visibility (e.g., rain-wrapped tornadoes). There probably were fewer false alarms, but lead times could be short. If a town hadn't deployed spotters, a tornado could strike literally without warning.

As radar evolved to become a cornerstone for the Weather Bureau (WB -- later to become the National Weather Service, NWS), longer lead times became more likely -- at least in situations where the storms that would produce tornadoes had recognizable radar signatures associated with tornadoes. In the meantime, the WB/NWS began its policy of ceding the task of warning dissemination to the broadcast media. Hence, the public face of weather warnings eventually ceased to be the NWS, replaced by TV and radio weathercasters. Moreover, many municipalities began to designate some sort of "emergency manager" (EM) with the responsibility to sound the tornado warning sirens, operating from an Emergency Operations Center in municipal office spaces.

In today's world, the EMs can also play the CYA game, often deciding to sound sirens not on the basis of spotter reports, but whenever the NWS issues a warning that includes some part of the county in which the municipality is located, or even a neighboring county. This results in a lot of unnecessary warnings as received by the public: the NWS warning may not include their specific location, but the EM blows the sirens for them anyway. Broadcasters sometimes issue their own warnings or downplay the NWS warnings. The partnership between the NWS and media weathercasters is an uneasy one; there's evidence that it can work well most of the time, but it can also fail miserably. Spotters in this day and age seem mostly to act to confirm the presence of tornadoes in storms already warned for by the NWS (based on what the radar shows), rather than acting on behalf of their municipalities.

What can we do?

With the existing system of dichotomous warnings, we're overwarning for tornadoes -- no doubt about it. I'm of the opinion that this has a negative impact on how warning recipients perceive warnings and how they react: they're likely to seek some confirmation of the threat before they respond, no matter how strongly the warnings are worded. People aren't doing what we tell them to do in the existing system!! It's ignoring reality to behave as if they will. It's not even obvious to me that we really want to be telling people what to do. The public needs to consider their own unique personal situations, which always involve more factors than just the storms. What they should do in a given situation is a decision they have to make on their own. If they don't already know what to do, it's unlikely we can offer much to help them out as individuals during the minutes between the warning and a tornado. We can, however, provide much more in the way of public education, so that they know what to do when a tornado threat arises.

The NWS warnings currently are issued for areas much larger than the damage paths of the actual tornadoes, in large part owing to our meteorological uncertainty. As received by the public, the warned-for area can be even larger than the one originating from the NWS. Even if we managed, via some miracle, to have tornadoes occur in every NWS warning and never outside of one, most people in warned areas still would not experience tornado damage. Would they consider those warnings false alarms? We need to understand more about public perceptions via hard data, rather than anecdotal input and the opinions of meteorologists and bureaucrats! There's a lot we need to do before we change the warning system. The lessons from the change to probability of precipitation (PoP) forecasts in the mid-1960s should be considered compelling evidence for proceeding carefully if we go down the road of expressing uncertainty in our warnings.

Even if the NWS converts to graded threat levels (i.e., graded according to forecaster confidence regarding the occurrence of a tornado in the warned area), the public gets very little of its information directly from the NWS. What the public actually receives is not necessarily under NWS control!

Improving the science so that it might someday help us distinguish tornadic storms from similar storms that won't go on to produce tornadoes would be really helpful, but there are already many scientists working on this and related topics. The big challenge is to learn how to communicate uncertainty within our warnings in such a way that the general public knows how to use the information we realistically can give them about storms when making their decisions about what to do. See here for some discussion. Unfortunately, work on this topic has been agonizingly slow to get started, and it involves more than just meteorologists. Further, to the best of my knowledge as I write this, even efforts to ensure that the existing science is applied, and applied properly (via substantive training), to improve the warnings are receiving only very modest resources.


Dialog added -- 29 May 2011 -- comments from colleague Roger Edwards, with my responses interleaved:

Anecdotes of false alarms are powerful emotive stimulators in these arguments; but as scientists we ought to be able to understand false alarms to a degree that is both reasonably objective and more meaningful than bulk FAR stats for TOR warnings that are provided now.

As far as I know, we simply don't know how big and bad a problem false-alarming is. Before speculating in too much detail on solutions, let's first understand the false-alarm problem better -- preferably in a statistically defensible and analytically reproducible manner. The issue: how bad is it, and where?

No doubt it would be of value to have more than just bulk statistics. I have neither access to the data nor the resources to do a proper job of it. This clearly needs to be done by someone in the NWS.

Anecdotally and through some studies (e.g., Eve Gruntfest's), we understand that the perception of false alarms involves more than any single spatial false alarm; it also involves how common they are. I'll offer this starting idea for a solution ... feel free to improve upon it however may be most beneficial!

Grid the nation east of the Rockies, or east of 100W, at 1-km intervals. Paint every tornado warning polygon onto those grids since the advent of so-called "storm-based" warnings (which actually are still strongly driven by county lines, against their intended purpose ... but that's another issue). Each warning that crosses part or all of a grid box lights up that box once, adding one to that box's score.

For every tornado path that crosses part of a box that was in a valid warning, subtract one. [Not sure yet how, or if, to score unwarned tornadoes.] This allows for some "in and near" freedom, since the tornado MUST be no more than a km from any person who resides in a box in order to count as a hit and take away from the false-alarm scoring.

Objectively analyze the resulting grid scores using your favorite kernel density estimator(s) and you'll have a splendid map of not only the commonness of false-alarming, but also its geospatial distribution. This also can reveal potential biases from CWA to CWA (county warning area), in case there is a problem at a particular office that needs intervention/amelioration. It's a rather primitive idea, granted, and you'll have to acknowledge the innate coarseness of tornado-path mapping (i.e., paths are typically mapped as lines with a single number signifying path width), which may leave some grid boxes erroneously "lit" or "unlit" by tornadoes.
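A toy sketch of this scoring scheme might look like the following -- a small synthetic grid with made-up rectangles standing in for warning polygons and tornado paths, nothing like a real implementation, which would rasterize actual polygons onto a 1-km national grid:

```python
# Toy version of the grid-scoring idea on a small synthetic grid. Axis-aligned
# rectangles stand in for warning polygons and tornado paths, and a Gaussian
# smoother stands in for a proper kernel density estimate.

import numpy as np
from scipy.ndimage import gaussian_filter

ny, nx = 100, 100                     # stand-in for the 1-km grid
score = np.zeros((ny, nx))

# Hypothetical warning polygons as (row0, row1, col0, col1) rectangles.
warning_boxes = [(10, 40, 10, 50), (30, 60, 40, 80), (70, 90, 20, 60)]
for r0, r1, c0, c1 in warning_boxes:
    score[r0:r1, c0:c1] += 1          # each warning "lights up" its boxes

# Hypothetical tornado swath inside a valid warning: subtract one per box hit.
tornado_swaths = [(20, 22, 15, 45)]
for r0, r1, c0, c1 in tornado_swaths:
    score[r0:r1, c0:c1] -= 1

# Smooth the scores to map the geospatial distribution of false-alarming.
density = gaussian_filter(score, sigma=5.0)
print("max raw score:", score.max(), "| max smoothed score:", round(density.max(), 2))
```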

Sounds great!! I'll look forward to seeing the finished product! Unwarned-for tornadoes probably need to be considered separately, since they're quite distinct from false alarms. It would also be useful to see gridded maps of the warnings, per se, without doing the subtractions for the tornadoes. Methinks variations in local office policies would show up clearly. Actually, I'd like to see 1-km gridded maps of lead times.

This is just an idea, but it might be quite interesting. Lead times could be tough, because they would require information about timing that's not documented for every tornado.

Such a project should also include the time dimension (say annual maps), so we can see how the distribution changes over time.

It might also be of some interest to see more regarding the performance of the non-NWS parts of the "integrated warning system". But I suspect no one's keeping track of this -- for example, broadcasters collectively seem rather disinclined to keep, and make public, their verification statistics. The same applies to EMs. Hence, there's a huge component of the process that's not being documented in a systematic way. That's symptomatic of the problem: major elements of the "integrated warning system" are not being documented and so are not subject to scrutiny.