An outsider looks at VORTEX2


Chuck Doswell

Posted: 18 July 2010 Updated: 23 July 2010 - added more reader comments and my responses

This is my opinion. If you wish to communicate your opinion regarding this topic, you can contact me at - either use the email hyperlink or cut and paste after replacing _at_ with @. However, if you're not willing to have your comments posted here, along with my response, don't waste my time or yours.


This essay concerns itself with a number of issues associated with the VORTEX2 project.

Some points to note:

  1. My proposal (with co-investigator Prof. R. J. Trapp) to participate in VORTEX2 was declined, apparently owing to the decision not to include aircraft systems in VORTEX2 (presumably because of their large cost).
  2. I’ve participated in several scientific field observation campaigns in the past, in various capacities, since I was a graduate student. This includes my earliest storm chasing experience as part of a formal scientific field observation campaign (Doswell and Moller 1985, which can be seen here).
  3. Many of the investigators and organizers involved in VORTEX2 are personal friends of mine. I've always hoped that the program would be fabulously successful, despite my personal doubts about several aspects of its organization (see below).
  4. As explained in Doswell et al. (1986), if we already knew how to forecast what weather was going to happen, we wouldn’t have any need for such a project. This creates something of a dilemma – to achieve the goals, we need to collect certain types of data, but we have the best chances to succeed in collecting those data for events we’ve forecast correctly!

I believe that a field observing campaign has some similarities to a sports team: in particular, when a sports team is losing, dissent and bitterness are rife, whereas when the team is winning, everyone is happy. The meteorological field campaign counterpart to a sporting loss is missing the targeted weather events when they occur and having lots of data on weather events that fall short of the intended processes. VORTEX2 has missed a substantial number of events in two years – understandable, since we have to admit we don’t know how to forecast these events.

I don’t want to repeat everything in the paper by Doswell et al. (1986 - available here), but some points need to be made. Since most field campaigns feel the need to incorporate forecasting to aid the decision-makers in making field operational choices, the issue becomes two-fold:

  1. Are the forecasts only intended to support the field operation or is forecasting the event a meaningful component within the overall campaign? This has a direct bearing on the issue of what field data are collected, because if the forecasts themselves are to have any value, they must be verifiable with the data likely to be available at the end of the campaign.
  2. Who are the forecasters? Their experience and track record of success can be of value in decision-making if the decision-makers choose to respect the input from their own forecast team. There’s no point having someone forecast for the campaign who’s not well-respected.

In my experience, the forecasts for field operations historically haven’t been given much consideration by the operational decision-makers. Since the decision-makers in field campaigns usually are researchers, not forecasters, they typically have little or no respect for the forecasts. This, in turn, has an impact on who might be willing to help out with forecasting for the campaign.

This essay is not about establishing blame for anything. Blame is a waste of time. Rather, my goal is to articulate some concepts that should be considered for future field campaigns regarding tornadoes, based on what has happened during VORTEX2. I welcome comments by ‘insiders’ or anyone else who wishes to offer them to me, be they positive or negative regarding my opinions here - especially if said commenters are willing to allow me to include our exchanges on this website.

I. VORTEX2 objectives

As can be seen from their website, the stated objectives of the VORTEX2 (hereafter, V2) campaign include:

Forecasting is explicitly mentioned here, although I believe the implicit objective is not about forecasting in the traditional sense – it’s more about the forecasts known as “warnings” (which are forecasts, of course, but traditionally are distinguished from what is done at, say, the Storm Prediction Center [SPC]).

As an outsider, I don't know all the details about how forecasting was organized and carried out during the campaign - I do know that the operational decisions didn't always match the forecasts provided by those tasked with forecasting for the campaign. There are many factors besides the weather that go into making a field observational campaign decision, but I believe that when forecasters make good forecasts, only to see that their advice is ignored or dismissed, this doesn't leave them feeling good about their participation. I can empathize.

When I read the objectives as offered above and compare them to the operation as it was carried out, it’s not at all clear to me that the program was going to deliver much regarding the forecast/warning aspects of these objectives. I recommend the organizers of VORTEX3 read Doswell et al. (1986), including the discussion of "testable hypotheses". From where I sit, it appears that V2 was primarily a large "fishing expedition" regarding these overarching objectives. In fact, the "objectives" are stated in the form of questions, which in my view of things means they aren’t objectives at all! How can an objective be a question? Presumably, if one can answer such a question, the pseudo-objective has been achieved. Did anyone involved in V2 have a clear idea of what to look for in trying to answer the questions listed in these pseudo-objectives? What data would be necessary to answer these questions? Were the observing systems used during the project matched to those objectives? What answers to these specific questions can be derived from the data collected? As an outsider, I see this set of objectives as a formula for inconclusive results.

Furthermore, if forecasting were to be a major element within the VORTEX objectives (which it evidently was not), the forecasting component of the project would need to be structured in such a way that the forecasts could be verified with the data collected, and a major forecast verification effort should have been designed into the experiment. I believe that mentioning forecast improvement is little more than a "motherhood" statement and that the structure and operations of V2 were not appropriate for an experiment designed to improve forecasting. The data collection was primarily aimed at storm-scale and tornado-scale data gathering, with only a few token soundings in the storm environment. Although learning more about storm structure and tornadogenesis might (or might not) improve warnings, this would have relatively little impact on severe storm and tornado forecasting.

If I grant that V2 was not really aimed at forecast improvement but actually at learning about storm-scale tornadogenesis, this is still a worthy goal. From my viewpoint, if the V2 team managed to collect data only on storms that failed to produce tornadoes, that would represent a major contribution to the science! As it turns out, they were successful in obtaining data on tornadic storms, as well, including storms that are only "weakly" tornadic. In effect, they’ve captured a substantial spectrum of storm types, although they may not have gathered any data on "high-end" tornadic storms (e.g., Storm A on 03 May 1999). From these data, it should be possible to learn more about tornadogenesis. But I doubt seriously that the pseudo-objectives will be met, if this means obtaining unambiguous answers to all the questions posing as objectives. There’s no way that in situ data confined almost exclusively to the lowest 10 meters will allow comprehensive answers to the questions offered as objectives – the pseudo-objectives and the data are mismatched.

Note that numerical models cannot be substituted for missing data (primarily above 10 m AGL). If models could serve as such a substitute, then VORTEX1 (hereafter V1) would have to be judged a failure. On the contrary, it was abundantly clear (to me and to others) from V1 that models were no substitute for real observations! Our hypotheses were based primarily on implications from the contemporary model simulations, whereas the field observations indicated clearly that our hypotheses were inappropriate for what we saw in the field. For example, the so-called "cascade" paradigm for tornadogenesis [i.e., first, the formation of a mid-level mesocyclone, followed by development of a low-level mesocyclone, followed by a tornadic vortex aloft, followed by surface intensification of the descending vortex], which was a basis for hypothesis formulation, simply was not a common mode for tornadogenesis.

II. Researchers as decision-makers

In my experience, academics and abstract researchers are really poor at decision-making under pressure. This is especially so when the program doesn’t have a single, dominating figure at the top. Observational field campaigns need to be run like military operations, where the decisions ultimately fall upon one "field general" whose direction of the operation will be followed without question. Let anyone who has never made a bad decision be the first to cast stones at such a leader. Moreover, if anyone inclined to disagree with the leader's decisions has never made such important decisions in their career, then they have no right to dispute or second-guess those decisions. Follow your orders and shut up.

It’s also bordering on axiomatic that researchers are poor forecasters. The skill of a researcher is associated with dogged, detailed consideration of all the components of the research project. There’s no pressure while doing research to make a decision within a limited time - researchers are rewarded for their insight, not their ability to make good decisions under pressure, without complete information. Forecasters, of course, must make good decisions under just these circumstances every day. Nevertheless, forecasters have been, and apparently still are, second-class citizens in a community dominated by researchers. If you want to have the best forecasters help with the process, you need to respect what they have to say. If you’re satisfied with just anyone as a forecaster, of course, then I might ask what’s the point of having a forecast team at all? What role will the forecasts play in decision-making?

Decisions made by a committee are notoriously slow and traditionally considered inferior - for good reasons. In order to conduct a field operation on the scale of V2, a single person has to be responsible for all the major decisions regarding the operation: go/no go, target area, primary objectives for the day, operating mode, etc. The field teams need to have some autonomy, but it should be very limited. For the most part, for the operation to succeed, the field teams need to follow orders. Period.

I know that the diverse investigators have different needs for their specific objectives - this is a formula for conflict. When each investigator puts his or her own interests first, it's going to be difficult to arrive at a consensus when the main decisions are being made by a committee. This slows down an already slow process, when what's needed are quick decisions if the teams are to reach their target areas.

It's been my observation that there was a lot of second-guessing going on by the participants. Individuals were unhappy with the collective decisions and some tended to wander off on their own, rather than subordinating their interests to the overall goals of the campaign. It's my understanding that consensus usually emerged about operational decisions after what could be considerable time lost in reaching that consensus. Of course, consensus is usually the best strategy, but if reaching it takes too much time, then even if the consensus is right, the objectives could suffer. All of this was predictable, given the nature of the decision-making process as it was laid out, of course. Some late successful intercepts may have reduced the griping - success tends to do that - but I was hearing a lot of griping and complaining. This was not a happy team for most of this year's campaign.

III. Funding issues

I believe that the way this campaign was funded was another formula for problems. NSF decided that the way to go was to have all the potential investigators submit their own individual proposals and those selected for support would, in effect, be free agents. This way of doing business in a field campaign exacerbates the well-known tendency for scientific prima donna-ism. That is, V2 was a collection of individually-funded investigators working in what was hoped to be some sort of overall organization. This approach runs contrary to all that I know about research scientists, who generally tend toward being lone wolves, seeking to push their programs in preference to all others. Note that I'm not saying it’s a bad thing to pursue your own research objectives. Only that an "organization" including those who decline to subordinate their needs to those of the team isn't a coherent organization. Although I can't hold up the original V1 program as a paragon of virtue with regard to PI subordination to team goals, I believe it may have been far better in this regard than V2.

Despite this built-in tendency for fractious debate about operational decisions, it's my understanding that this went better than I expected. If so, it's a credit to the participating PIs, but I will continue to believe that funding the project as an aggregate of individually-supported projects is flawed. If the overall project is funded, rather than individual investigators, then NSF should give preference to supporting individual scientists who agree to work with the V2 data to produce specific scientific results in funded research projects that dovetail within the overall objectives of the project.

IV. The ponderous armada

It became clear to me shortly after beginning to storm chase (in 1972) that fixed facilities have an enormous handicap when it comes to collecting data on severe convective storms. The history of NSSL shows that well-sampled weather events during the window of NSSL spring operations were rare, amounting to one every few years. This creates big concerns for sample size when trying to generalize the results of storm research. A handful of cases means you're essentially forced to overgeneralize. It makes good sense to obtain more samples by going mobile with the observations. This was shown pretty conclusively during V1, and certainly was true in V2. However, I believe that the first season of V2 operation (2009) gave some strong hints that the armada had grown too large to be effective. With the continuing growth in the size of the fleet for 2010, a threshold was crossed. I believe that the logistics and response times for this number of vehicles have become problematic to the point of reducing the opportunities for a wholly successful intercept.

With a large number of experimental data gathering systems, it becomes probable that one or more of these systems will be struggling on any given day. Day-to-day operations over an extended period means the systems will require a lot of maintenance. The chances of having all systems up and operating at 100% capacity for a given storm can become pretty small. It helps when all the data-gathering systems have been thoroughly tested and used in earlier data collection efforts. Yes, testing new observational capabilities is an important issue, but it should be limited within programs of this size. Some considerable preference should be given to observational capabilities that have proven themselves to be reliable in the past during actual field operations. New, unproven, systems can tag along, but they shouldn't be given equal weight when it comes to setting priorities for a day's field operations. A large field observing program should never depend on unproven observing capabilities.

The vast number of vehicles makes every activity slow down. Just checking into a motel, buying fuel, or going to the toilet becomes a time-consuming operation. Some vehicles are inherently slow-responding - their size and weight make them sluggish. Sheer numbers make logistics complicated, which can limit the chances for wholly successful intercepts (i.e., those with all participating observational components in place, able to carry out their designated observing duties).

Communication and coordination among vehicles has always been a serious problem, made worse as the number of vehicles increases. In some rural areas, the V2 armada could saturate a cell phone network, and radio communication among vehicles has always been problematic. Field coordination depends on reliable communications, of course. I saw numerous times (and I was avoiding the V2 armada, for the most part, during my chase vacation) when the V2 vehicles seemed to be bunched up during the time when they were working a storm. It appeared to me (as an outsider) that this was a problem - each vehicle has an assigned location during an operation and when they're bunched up, it appears that someone isn't in their assigned location. This was a problem during the V1 campaigns, and it seems it may not have been cured entirely, yet.

It appears likely to me that a future V3 operation should have fewer vehicles if it hopes to be effective. This may limit the scope of the objectives, to the exclusion of someone's pet project (like mine was) but it allows the remaining objectives to be met more easily than when the armada swells to such an extent that operations are impeded by sheer size.

V. Media and private chaser tag-alongs

The problem with media tag-alongs and "embedded" reporting from the field is that it encourages more and more chasers to mingle with the armada. This came to a head this year, when Dr. Wurman complained via TWC about private chasers impeding his data gathering. You can’t have free publicity for the program without the side effect of encouraging tag-along private chasers. Unfortunately, this genie is already out of the bottle. There's no easy way to prevent this mingling of private chasers with the armada, now. I maintain that you can't have it both ways: lots of media publicity regarding the campaign but without interference from private chasers. When you draw attention to the program, you automatically create followers who will crowd the highways while you're trying to collect data. It's hard to hide the armada on a chase day, and you don't help that by having daily media broadcasts from embedded media teams.

VI. Summary

As I see it, V2 will wind up contributing to the science of tornadoes. Even a fishing expedition can accomplish this much. As disorganized as things seemed to be, valuable new data were obtained. The project should not be seen as a failure, and I would be the last person on Earth to claim so. Nevertheless, I think there are some very important lessons to be learned from this effort, hopefully to be remembered when the time comes to plan V3.

  1. NSF needs to re-consider how it funds field campaigns of this sort
  2. Program objectives should be developed to match the data collection systems, or vice-versa
  3. Testable hypotheses should be the principal basis for developing most of the program objectives
  4. A single person should be in charge of making the major operational decisions on any operational day and that person should have proven capabilities for making decisions with limited time and information - this is not a task for just anyone
  5. All participants should be willing to subordinate their personal goals to those of the team, and have to perform their duties as determined by the team decision-maker without question or complaint
  6. If forecasting is to be a component of future field observation campaigns, a very different approach to forecasting needs to be developed
  7. People with proven forecasting ability should be recruited and their assessments need to be given appropriate value by the decision-makers
  8. Top priority should be given to proven mobile data collection systems
  9. The armada needs to be limited regarding the number of vehicles
  10. Communications among the field teams need further investigation and improvement
  11. Media publicity should be restricted to before and/or after field data collection begins
  12. All participants need to understand that collecting scientific field data is hard work, especially since it's likely that the team needs to operate on consecutive days - you have to make hay when the sun shines!

Scientific storm chasing is not about personal rewards - it's about long hours of unglamorous work in support of scientific objectives. For instance, I'm proud of the dedication I saw exhibited by the "Stick Net" teams: that's the kind of unselfish participation that all participants need to be willing to contribute!

Update (20 July 2010) First comments! This is the unedited text of an email exchange:

On Jul 20, 2010, at 10:24 AM, M. Coniglio wrote:

You make some very good points, and are probably far from being alone in many of your thoughts, but I think it's unfair to say that V2 missed a substantial number of events. Assuming you're referring to the events that cause most of the trouble (Greensburg, Spencer, etc.), did a substantial number of these events even occur in the V2 domain during the operations periods? The obvious one that was missed is the May 22 event in South Dakota, which I'm guessing is responsible for much of the complaining that you mentioned. But what others were missed? The Campo storm could be included in that category, although I'd argue that a storm of that quality was not at all obvious that morning and it is very understandable as to how it was missed. Certainly many of the May 10th tornadoes fall into that significant category, and I'm sure V2 would like to have that day back in particular, but data was still collected by several teams, including (I think) dual-doppler on tornado dissipation and subsequent tornadogenesis near and east of Seminole, on a very difficult storm to chase, let alone to collect data on. The June 11 event near Hoxie, KS comes to mind, but I'm not sure ANYBODY documented that event (the target of Colorado on that day was apparently too hard to ignore for every chaser that I'm aware of), nor is the significance of those tornadoes in Kansas known. I'm hard pressed to think of another event of these types that V2 missed during the operations period. All excuses aside, the bottom line is that V2, by and large, missed those events, so it would be silly of me to suggest that mistakes weren't made. But if you're going to point out these mistakes, then it's only fair to point out that data was collected on, by my count, 35 supercells in the spectrum of unicell-type storms to significantly tornadic storms (Goshen Co., Slapout, Seminole), and many non-tornadic and tornadic storms in between. I might add that the VAST majority of these storms were tornado warned. 
It's disingenuous to point out the few mistakes while ignoring the many successes.

Is this the word you want to use, here? According to,

dis·in·gen·u·ous –adjective
lacking in frankness, candor, or sincerity; falsely or hypocritically ingenuous; insincere: Her excuse was rather disingenuous.

It's possible that my remarks were UNFAIR, but I hope my SINCERITY in making these remarks was not a matter of concern. Assuming that this is not the right word, let me take up the issue of the "fairness" of my comments. I anticipated that someone associated with V2 might be somewhat defensive with respect to my comments, and I think I interpret the foregoing to be moderately defensive. As evidence for that, I want to point out that I'm NOT guilty of "ignoring the many successes" ... in fact, at the end of section I, I said "As it turns out, they were successful in obtaining data on tornadic storms, as well, including storms that are only "weakly" tornadic. In effect, they’ve captured a substantial spectrum of storm types, although they may not have gathered any data on "high-end" tornadic storms (e.g., Storm A on 03 May 1999). From these data, it should be possible to learn more about tornadogenesis." I didn't have the count of supercells on which data were collected, but it's certainly a substantial number. However, I might ask ... of the 35 supercells on which at least some data were collected, how many of those had at least 80% of the teams working them, and for what fraction of the lifetimes of those supercells? As an outsider, I don't know the answers, of course, but it could be considered somewhat misleading to throw out the number of supercells on which at least SOME data were collected, if at least some of those 35 supercells only had LIMITED data collection.

Of course, second guessing will always be a part of storm chasing and field programs - that's just the nature of it as you are well aware. My observation is that the V2 campaign was particularly prone to armchair quarterbacking from some of the non-decision-makers on the inside and from outsiders, for a number of reasons. For the most part, the number of diverse interests and people involved in the project (which you pointed out) and, I suppose, a lack of a true appreciation of the difficulty of the decisions and the number of factors that needed to be taken into account for every decision were the main reasons for this second guessing. I also don't know what the gripes were that you're referring to or who they came from, under what circumstances they were made, etc., so I can't help but think your comments give more credence to the griping than is perhaps deserved. Some complaints I'm sure are legitimate and should definitely be considered for future field programs. But consider this - if the gripes are similar to those that I have heard first hand, then they spanned a wide range of opinions that, at least to my limited experience, would have rendered solutions that pleased everybody nearly impossible.

Of course, part of what I'm saying is that it's quite possibly nearly impossible to have pleased everybody, just by the way the program was designed. Participants in such a program should be PREPARED for disappointment. It might not forestall ALL the gripes, but it might cut down on them. I recognize that even the best efforts to forestall griping won't prevent it entirely.

For example, I heard teams grumbling that they were worn out and were being pushed too hard, but also heard teams say they wanted to be pushed harder. Not surprisingly, the less nimble teams appreciated choosing a target area and sticking with it, whereas the nimble teams didn't mind frequent adjusting and retargeting. Some teams thought it was fine to have platforms spread among other storms in proximity, others thought the goals of V2 required that teams should stick with the target storm. In one sense, these are all legitimate claims and the sheer diversity of the complaints resulted from the sheer size of the field campaign more than anything, but I don't believe the claims resulted much from the quality of the decision making itself. I wonder, is it even possible to have a field program, even a smaller one than V2, without a lot of grumbling and second guessing? Would a military approach have resulted in more effective operations, given the scope of the project? Perhaps. Would it have resulted in less grumbling and a happier team? I doubt it.

To some extent, griping and grumbling is inevitable ... a fair point. I'm not going to reveal my sources, but some of the things I heard did indeed flow from the quality of the decision-making. I was not the only person who felt that having a single person making decisions would have been a better plan. The only way to prevent griping and grumbling is for everyone to feel that THEY were successful in every mission - clearly an unlikely outcome. We could get into an interesting discussion about the optimum strategy for decision-making, but if you allow me the analogy between a field observational program and a military combat campaign, I'd point out that the military has the structure it has for good reasons. I think those reasons apply here. There's lots of griping and grumbling in the military, too. Many people think they know better than a field general! One hopes that military leaders get into a position of authority in these matters by virtue of their demonstrated leadership skills at lower levels. Naturally, this ideal isn't always achieved, but there is a process for selecting field generals, however flawed it might be. Running things by a committee and arriving at consensus is a formula for drawn-out decision-making, whereas the atmosphere's timetable (coupled with the ponderous movement of some armada elements) may not wait for a consensus to emerge. Being right in the decision but not having reached it in time for effective deployment can be just as bad as not being right.

Part of leadership is associated with making sure the followers understand why things are happening and being prepared with a set of realistic expectations about how things actually would go. You can reduce the grumbling and griping if you take the time to explain things. It's not evident to me (as an outsider) that this was done, but perhaps I'm simply unaware of what was done along such lines.

In V1, there was a single leader to whom everyone deferred. He sought input, of course, but his decisions were respected by most of the participants, even when folks disagreed with them. No such leadership emerged in V2 - this is by no means intended to offend anyone (although it might). I observe that the program design virtually guaranteed that no such leader COULD emerge.

Regarding the forecasting, you'll get no argument from me that having skilled operational forecasters in the field probably would have been the best way to do the morning forecasting. In fact, there was a serious attempt made to have operational forecasters with the armada in 2010. For various reasons, it did not happen (except for one of the six forecasters). But the fact that the forecasting was, by and large, not done by operational forecasters does not mean that the difference between doing research and forecasting was not appreciated by the PI's or that the PIs ignored the forecasters.

To some extent, this is convolving two different points ... one is the qualifications of the forecasters, the other is how the forecasts were used by the decision-makers. For humans, perception IS reality, and if the forecasters perceived that their efforts weren't being respected, then what actually might be the case regarding respect is primarily irrelevant. I have input that at least some forecasters had the perception that their input wasn't being used (or used properly) in making team decisions.

I was one of the six forecasters, so I'll let others comment on the quality of the briefings and forecasts if they choose, but by and large, one of the scenarios offered by the forecaster usually ended up being the target for the day.

I certainly would invite other insiders to offer comments on this - some of the input I had is not necessarily 100% consistent with your description.

Furthermore, I think it's unfair to suggest that "just anybody" did the forecasting and that a disrespect of forecasters' decision making caused V2 to miss a substantial number of events. If your criticism is that the targets could have been chosen better if operational forecasters had done the forecasting, then fair enough. Personally, although I have never had to make a forecast operationally, I am very respectful of how difficult it is to forecast for a living. But to my knowledge, you did not sit in on any of the conference calls to listen to the forecast briefings or discussion of logistics, so I have to assume that you are relying on second-hand information or generalities to judge the quality of the forecasts and the decision making and how the forecasts were used to make the decisions.

Fair enough ... I don't mean to disrespect anyone's efforts at forecasting with the throwaway description of "just anybody" ... but ... If you want high-quality forecasts, you should seek proven expertise in that domain. Being a meteorologist, even a severe storms meteorologist, is not the same as being an experienced, successful severe weather forecaster. Note that I do not include myself in the category of "experienced, successful severe weather forecaster"! And if a decision is influenced by factors other than the forecast and goes against the forecast recommendation(s), then that should be explained to the forecaster after the decision is made. Was that done routinely? Or did such a scenario never happen? If the forecaster recommends action A and other factors dictate action B, the reasons for that need to be made clear to the forecaster as soon as possible, or you're likely going to receive the default assumption: the decision-makers didn't believe my forecast.

Bottom line - I believe that V2 was much more successful than a reader of your thoughts would be led to believe.

Again, I think this is an overly defensive statement. See section VI, where I said: "As I see it, V2 will wind up contributing to the science of tornadoes. Even a fishing expedition can accomplish this much. As disorganized as things seemed to be, valuable new data were obtained. The project should not be seen as a failure, and I would be the last person on Earth to claim so."

But I'm happy to have the opportunity to deny that I believe V2 was a failure. I'm simply saying that a number of factors contributed to it being rather less than it might have been, and I hope that some consideration of these issues will be part of the planning for V3.

Update (23 July): More comments! And more and more still ...

On Jul 21, 2010, at 2:00 PM, Paul Markowski wrote:

I just finished reading your new essay on VORTEX2. It's an interesting perspective. You're certainly free to write whatever you want, but I'm surprised that you would write an entire essay based on hearsay, and make claims without providing any direct evidence to support your claims. It seems that you were not too informed about VORTEX2 operations or the details of its successes and failures. I guess this is not surprising since all of your information is necessarily secondhand (or worse). I've commented on some of your comments below. I speak for myself only. I have not yet shared this response with any member of the VORTEX2 Steering Committee, PIs, or other participants.

A fair response, although ... (1) My comments are NOT a scientific paper and so there are no formal standards of evidence, (2) I made no claims that it was based on anything other than my own observations and comments from participants to me, which certainly constitutes hearsay. Your comments also form an interesting perspective, which seems substantially defensive - that's actually a good thing and I appreciate your taking the time to respond in detail.

I'm just trying to set the record straight, which perhaps unavoidably comes across as defensive.

Yes, of course. I appreciate your willingness to share your viewpoint.

"Testable hypotheses should be the principle used for developing most of the program objectives." (3rd summary bullet)
"From where I sit, it appears that V2 was primarily a large "fishing expedition" regarding these overarching objectives."

Of course, every question in the Science Program Overview (SPO) easily could have been cast in "hypothesis form", and I don't see how this would have changed the operations strategy and experiment design.

Goals stated as testable hypotheses require the matching of goals to observational platforms. I dispute that this is somehow equivalent to the pseudo-objectives I cited in my essay. In V1, there were many interesting questions we simply couldn't address owing to the absence of an observing system capable of providing unambiguous data by which the hypothesis could be given an adequate test.

Research motivated or guided by well-posed questions versus well-posed hypotheses is really a matter of personal preference in my view (it's sort of like giving your answer in the form of a question on the game show Jeopardy---either way, the contestant must know the "answer" to a "question"). It seems that some in the community actually frown upon overly specific hypotheses as "leading to overly narrow research that is conducted with blinders on" (this is not my own feeling, but I'm quoting a review of a declined NSF proposal I wrote as an assistant professor; the proposal listed a number of hypotheses to be tested).

If the questions are indeed well-posed, then I suppose it might be considered a matter of choice. I don't believe the questions I quoted are particularly well-posed, however. They defy the production of any metric by which one might assess the outcome of the campaign. The hypotheses we posed for V1 may have been seeking answers to the wrong questions, but I believe they were well-posed, particularly in the sense that we defined the measurements necessary to answer the questions, and what sort of interpretation of the data would constitute refutation of those hypotheses. I would challenge you to do something comparable with the pseudo-objectives for V2. As for what a particular reviewer might have said on a declined proposal - I can't see that to be very relevant. In any scientific community, you can find many viewpoints, including those that seem pig-headed or just plain wrong.

The questions you cite are not from the peer-reviewed SPO. They are from CSWR's VORTEX2 website. I admittedly did not monitor this website frequently (nor did I monitor other VORTEX2 websites---there were at least 3, not to mention several Facebook sites out there), but I presume the intended audience for the science questions you're quoting was the general public and not science reviewers.

So you're saying that the goals of the CSWR website *aren't* the project goals? That's an interesting, curious situation.

I don't see where I've said that. I've only said that I think it's fairer for you to critique the science of a science document written for scientists rather than the cover page of something apparently intended for a non-scientist. You're free to critique whatever you want, of course, but I don't think a credible reviewer would evaluate the scientific merits of a research project by skimming the 48-point-font bullets on a webpage targeting a lay audience. You might as well critique the explanation I gave my 5-year-old for why daddy had to be gone for 7 weeks this past spring. I'm sure the reasons I gave him also would have failed to meet the expectations of NSF reviewers.

Interesting analogy. I'm certainly willing to admit that it would be more "fair" to have addressed my point to the goals in the SPO. Nevertheless, the same objection I have to these goals in the form of questions applies to those in the SPO. I think we've agreed to disagree on this point, which you've dismissed as something unworthy of discussion.

Note that the SPO can be downloaded from this website by more curious visitors.

Here are a few of the questions from the tornadogenesis section of the SPO (Section 5.1):

1. How is the vortex line distribution within supercells related to downdrafts (particularly RFDs)?
2. What are the dominant forcings for RFDs, as a function of location within the RFD, stage in storm evolution, and supercell type (e.g., tornadic vs. nontornadic, low-precipitation vs. high-precipitation)?
3. What processes lead to the rapid intensification of low-level vorticity that results in tornadogenesis?
4. How are these processes affected by the thermodynamic fields and microphysical characteristics of the parent storm?
5. Does tornadogenesis require a balance between low-level buoyancy and angular momentum in the incipient vortex?
6. What is the orientation and magnitude of baroclinity above the surface in tornadic and nontornadic supercells?

It is rather trivial to cast each of these in the form of a hypothesis, for example, the first question can be posed as a testable hypothesis as follows:

Hypothesis: The counterrotating vortices that straddle the rear-flank downdraft are always joined by vortex lines that form arches, with one "foot" of the arch residing in the cyclonic vortex and the other "foot" residing in the anticyclonic vortex.
Test: Vorticity analyses derived from mesocyclone-scale dual-Doppler wind retrievals (DOW6, DOW7, NOXP, and UMASS XPOL) always reveal vortex line arches joining the vorticity couplets.
Refute: At least one case can be found for which vorticity analyses derived from mesocyclone-scale dual-Doppler wind retrievals fail to show vortex line arches joining the vorticity couplets.

OK ... in order to accomplish the transformation in your example, one would have to know that it really relates to counterrotating vortices and the hypothesized "arched" vortex lines, neither of which is mentioned in any way in question #1 of the tornadogenesis section. This transformation might be trivial for an insider, and it might indeed be discussed at length in the "SPO" (I didn't see it), but your transformation of the original question can't be deduced from question #1.

Surely you are mistaken that you have never seen the SPO. Since you indicated that you submitted a VORTEX2 proposal, then you also must have submitted a one-page summary of your VORTEX2 plans and objectives (sort of like a "letter-of-intent") for inclusion in the SPO. All VORTEX2 proposals had to be preceded by these "one-pagers", which appeared in the SPO's Appendix J. Appendix J was not permitted to be shared with the public or even other PIs. I am legally forbidden from confirming whether you contributed to Appendix J, but you yourself have indicated that you submitted a proposal. It seems you attached your name to the same SPO that you now say you never saw (surely you didn't attach something with your name on it to the proposal you never read---I can't imagine that you'd ever accept co-authorship on a paper you had never read), and the content of which you are now objecting to, 5+ years after the fact. Something's screwy. Am I missing something? I'll also use this opportunity to restate that it's unfortunate that we didn't hear about all of your objections years ago when you were in the loop.

It's evident that you're quite exercised about my comments, to the extent that you're continuing to assert things that suggest strongly that it's unethical for me to offer any critical comments about V2 operations at this date. I regret that you've become so upset, but I'm trying to have a discussion here that hopefully will be enlightening to any who might choose to read it. I'm giving you the same unlimited opportunity to be "heard" that I'm claiming. It was my hope to stimulate discussion and in that I feel I've been successful, but I see no need to imply that I'm being unethical. Apparently, you've chosen to be offended personally.

Isn't it the "insiders" who are the ones actually carrying out the research? Your stance has been, as I understand it, that the researchers' approach to conducting the experiment was flawed because the researchers were not guided by sufficiently specific hypotheses (the SPO, which also had your name on it, listed questions to be answered rather than hypotheses to test). But if it's "trivial", as you and I apparently agree, for a researcher/insider to go from a question that appears in a document intended for science reviewers of potentially diverse backgrounds (for all I know, an atmospheric chemist could have been one of the reviewers of VORTEX2--I don't think this is an extreme possibility given that I've been asked to review field work proposals submitted by atmospheric chemists) to a much more focused, esoteric hypothesis, how can you say that the researchers would have been so much better off being guided by objectives stated as hypotheses rather than questions?

Basically, because I felt that actually doing that exercise in V1 was very helpful. During the planning process, all our objectives went through a substantial period of debate about what we could and couldn't do with the observing systems we had. If you feel that was all done successfully, then I believe the document's stated goals should have been in that format, rather than these nebulous questions posing as objectives.

Another point that you seem to have forgotten is that the SPO is merely an overview document (the "O" in "SPO" stands for "overview"), and that the SPO was followed by ~15-20 more focused, more specific science proposals, roughly one for each PI involved in the project (many PIs submitted collaborative proposals, so the number of proposals is less than the total number of PIs). Each of these proposals also went through the standard NSF proposal review process. There's no doubt that these proposals had much more detail than what was in the SPO (I am not privy to the contents of these proposals---only reviewers and the NSF program officer get to see the contents unless the proposer volunteers to share the proposal with others), and I think it's likely that some proposals actually did list specific hypotheses to be tested rather than specific research questions to be answered. The SPO is supposed to serve a bit like an umbrella for all of the individual science proposals that are submitted after NSF tentatively decides to support a big project. Each PI's proposal is supposed to propose science that is in line with the broader goals articulated by the SPO, but ultimately each PI retains considerable autonomy, and it's up to reviewers and the NSF program officer, not the project's Steering Committee, to identify proposals that don't seem to fit under the project's umbrella. Whether the system should be changed to reduce this degree of autonomy is another question entirely, and it's well above my pay grade.

Of course, one of my expressed concerns was indeed about this degree of autonomy. If, as an outsider, I wanted to know (or refresh my memory) of the project goals, I believe it would have been most effective to state them as testable hypotheses. You dismiss this as irrelevant and unnecessary - you may be right, of course, but I don't believe so.

Evidently, I saw the SPO some years ago, but it would be a pretty substantial exaggeration to say I was a co-author. Having our one-pager in the Appendix is pretty distant from being a co-author. I've already indicated that it seemed unlikely to me the things I wanted to do in conjunction with V2 were going to happen, based on feedback from some members of the Steering Committee. This (and other factors) likely led to pessimism regarding changing much in the SPO, so that may have been why I chose at that time to say nothing. I understand your feelings in this matter, and I'm sorry if I've upset you, but my motives aren't to attack you or V2 ... I'm simply hoping that if mistakes were made, we can learn from them.

I followed your suggestion and downloaded the SPO - I was very curious to see the "questions" (pseudo-objectives) for the interaction between the storms and their environment:

*What are the dynamical, thermodynamic, and microphysical natures of interactions between supercells and other supercells? Between supercells and ordinary cells?
*When tornadogenesis occurs during a cell merger, does low-level outflow from the neighbor cell increase low-level convergence, low-level horizontal vorticity, or both, in the main cell?
*Do relatively warm (cool) outflow from other cells promote tornadogenesis (tornado dissipation)?
*During a cell merger, do microphysical and thermodynamic interactions aloft lead to downdraft formation and tornadogenesis?
*Can these scenarios be identified in an operational setting?

In the SPO section on instrumentation, the following is the list of observing systems "required" (Table 4.1):

Fixed observing systems (e.g., NWRT-PAR, S-band radars; see section 4)
C-band and X-band radars
mobile soundings
mobile mesonets
stick net

I just don't see how the questions posing as objectives can be answered by the "required" systems. Perhaps I'm just obtuse?

I still don't see how this exercise has proven anything other than that I can rearrange sentences ...

Sorry, but your transformation appears to be a lot more than rearranging sentences!

while quadrupling the number of characters required to convey the science to reviewers (space is precious indeed---the new NSF rules that were applied to VORTEX2 limited the SPO to 15 pages; the SODs of the pre-VORTEX2 era had no page limit).

Obviously, I have no control over NSF's rules, but if you believe that the page restrictions force you to abbreviate the GOAL descriptions, then I'd sure be looking to excise something else from the document to make room for a clear, unambiguous description of the goals and what is needed to achieve them. I guess we're going to have to agree to disagree on this issue.

Regarding your comments related to the need to "map" science objectives to the available instrumentation (e.g., you wrote "Program objectives should be developed to match the data collection systems, or vice-versa"), isn't this exactly what the Experiment Design Overview (EDO) does (quoting the SPO: "Detailed descriptions of how VORTEX2 instrumentation was to be used to investigate each science question appear in the EDO")? The EDO was submitted with the SPO, as required by NSF; in fact, NSF explicitly requires this sort of mapping. The EDO also is available from the website you cited.

No, not quite, although it might be close. When we designed testable hypotheses in V1, we did so with an awareness of the observing capabilities that we were likely going to have on hand. What you're describing is the consequence of what you admit to being a paperwork requirement. What I'm talking about begins at the very first part of planning the program's main objectives. If we need a system to accomplish a goal and we don't have that system, then we either have to scrap the objective or start working on obtaining an observing capability we don't have.

It's a requirement, but can you please show me where I said (or even suggested) that the connections we made between instruments and objectives were only a consequence of this requirement?

You're putting words in my mouth, which is precisely what you've accused me of doing. Your point was that it was an NSF requirement. When I went to the SPO, following your advice, I found I couldn't use the document to put together the information contained in your transformed objective (from question to testable hypothesis). Perhaps I missed it ... if I couldn't do that, then I (as an outsider) would be forced to conclude that your goals (stated as questions) might not be answerable in the context of the data collection.

Do you really think VORTEX2 was planned any differently, i.e., with pie-in-the-sky ideas thrown on the table without any consideration of observing capabilities (laughter)?

Of course not, but I maintain that developing testable hypotheses is a worthwhile exercise and if one does that explicitly at the outset, then there can be no ambiguity about the match between objectives and the observing systems.

Regardless of the merits and demerits of posing questions versus hypotheses, the SPO was made available to all potential VORTEX2 well before it was due at NSF [and a predecessor document, the Science Overview Document (SOD)---you might recall NSF changed the format for field experiment requests during the time we were planning VORTEX2---was available for comment for months]. The Steering Committee openly solicited comments and suggestions, but we received no comments from you about the design or goals of the project. It's unfortunate that you're only now providing a critique of the SPO, >5 years after it was drafted. It was supposed to be a duty of all PIs to review and suggest revisions of the SPO prior to its submission. Your feedback would have been much more helpful to the PIs at that early stage, and if not then, before the project started (4+ years elapsed from the first SPO submission until the start of VORTEX2), and if not even then, after Year 1 of the project, assuming you had formulated at least some preliminary opinions after Year 1. Receiving this feedback now is not helpful, and I find it a bit disrespectful that this feedback is not even being presented directly to the people involved with the project. The Steering Committee tirelessly worked on that document for years, and without pay, in order to make VORTEX2 a reality, and the fruits of these volunteer efforts benefited the research of dozens of PIs (both scientifically and financially). It seems that you weren't all that concerned about improving the project, but you were happy to reap the rewards if the project was funded (based on the fact that you submitted a VORTEX2 proposal).

I assume you meant to say "...the SPO was made available to all potential VORTEX2 [PIs] well before it was due at NSF..." Sorry I was so late in commenting about it ... I was never a V2 PI, actually. I understand completely your reaction. I have no excuses and don't really recall why I chose not to be more vocal about things. I'm speculating here, but it wasn't at all clear to me that what I wanted to see happen was going to be part of V2, and it turns out that that was a correct forecast. Could I have made a difference during the planning? I don't know and it would be idle speculation at this point anyway. I DID attend the planning meeting in the Quartz Mountains, but I don't believe the goals had been articulated (in any form) at that point. I was effectively "out of the loop" after that meeting, so perhaps I wasn't particularly compelled to offer opinions. I honestly don't recall now. Nevertheless, I don't believe that this disqualifies me from offering opinions now.

I should also comment about the accusation here regarding disrespect and my willingness to "reap the rewards" if my proposal had been funded. I grant that you may feel disrespected by my comments - that certainly wasn't my intent, of course. My essays are deliberately provocative, and perhaps I went too far. Fair enough. But no disrespect for you is intended. As for my rewards, had my proposal been funded - perhaps I'm being idealistic, but I think the scientific community would have reaped the rewards of that work, not me. Yes, I'd have had a month or so of salary each year for the duration of the proposal, but this wouldn't result in my becoming wealthy - modest compensation for the work involved. If I were so inclined, I could be offended by the implication of my venality regarding this issue - but I understand your position, and I choose not to be offended.

Regardless, it's worth noting that VORTEX2 was supported in an unusually competitive environment. It beat out two other major field project proposals that year, despite VORTEX2 having an unusually high dependence on resources outside of NSF's Deployment Pool. This could not have happened without extremely favorable reviews.

Well, I mean no offense by this, but funding decisions at NSF are not necessarily driven completely by scientific merit and favorable reviews. Surely your own experiences have shown you that. The fact that V2 was funded is great, and the Steering Committee certainly worked long hours without direct compensation. I haven't said that V2 was a failure ... which your prickly reaction seems to imply.

Nope, I just found your views of the project as an admitted outsider to be incongruent with my views as an insider. For some reason I felt compelled to try to set the record straight, although it's not clear to me whether this is a good investment of my time (it is taking away from VORTEX2 analysis as I speak!).

Thanks for your expenditure of valuable time to express your views as an insider.

"A single person should be in charge of making the major operational decisions on any operational day and that person should have proven capabilities for making decisions with limited time and information - this is not a task for just anyone" (4th bullet in your summary).

"Decisions made by a committee are notoriously slow and traditionally considered inferior - for good reasons. In order to conduct a field operation on the scale of VORTEX2, a single person has to be responsible for all the major decisions regarding the operation: go/no go, target area, primary objectives for the day, operating mode, etc. "

These statements seem to be based on intuition or arguments by authority rather than any firsthand experience with the project.

If, by intuition, you mean insight gained from experience, I would put my statements in that category. I've seen examples of the "committee" process fail in other field programs, where I WAS a full participant. It's not an argument by authority, unless you believe that I'm citing myself as evidence. I offered my opinions, based on my experiences and "hearsay" input regarding V2. It certainly was NOT the result of firsthand experience with V2, of course. But it was based on firsthand experience in a number of other field observation campaigns. Are you arguing that decision by committee is generally a superior process to leaving the tactical decisions to one person? What evidence can YOU summon to support that as a general (no pun intended) rule?

I'm afraid this is now straying into a semantics debate that's not very constructive.

Sometimes, semantics are important. I see "argument by authority" as a pejorative description of my comments. If I were to accept this description as accurate, I would have to repudiate my comments, as an argument by authority has no logical weight.

I believe your repeated reference to my responses as "defensive" could be found to be equally pejorative. I don't believe the everyday use of the term "defensive" really applies to someone who's merely attempting to refute misstatements of fact.

I don't see that as being equivalent to your accusation of "argument by authority" - in fact, seeing the two as equally pejorative seems to validate my opinion that you're being defensive. I will admit that some of my reactions to your comments have been defensive, but by your impugning my ethics, I'm obligated to defend myself. I'm doing my level best to avoid escalating the emotional level of this discussion.

It seems as though you have the impression that every decision was made by committee. This is simply not the case. Morning targeting decisions involved a lively discussion among forecaster(s) and PIs. The mission scientist, a single person, made the decision. In virtually every case in the two seasons, there was a clear consensus target that emerged from the discussion, and the mission scientist usually went with the overwhelming majority thinking (although this was not a requirement). On a rare 3 or 4 occasions in the 2 years, there was a significant split in opinion---but the final decision still remained with the mission scientist. In the field, the choice of a target storm was made by the Field Coordinator (FC), but the FC solicited input from coordinators for the reasons I gave previously (e.g., there was a need to know which instruments would be in play if a re-targeting decision was made, lest the day's mission degenerate into a free-for-all with no chance of coordinated data collection). Isn't this the approach you are advocating? All of this is described in the Operations Plan, which is another public document.

Perhaps my second-hand information is faulty. Another possibility is that your viewpoint of how this worked is different from that of some of my sources.

I'm not just stating my viewpoint, I'm re-stating what's always been in the Operations Plan (I was one of the primary authors, so I'm pretty familiar with it). I must have missed the memo that we discarded it.

Are you willing to accept the logical possibility that what's in the Ops Plan might not correspond perfectly with what happened in reality?

There are various schools of thought regarding decision-making. I cited the analogy with military operations in my response to Dr. Coniglio ... any analogy has weaknesses, of course, but do you think military operations should be run by consensus rather than a hierarchical process? Do you deny ANY parallels with field observation campaigns?

Of course not. But your understanding of VORTEX2 operations is simply off-track. Decisions were made by one person with input from a number of sources. This input was essential for a variety of reasons I won't rehash. Don't the generals in the military obtain input from their subordinates?

If your description of how the process worked is accurate, I'll concede the point. Yes, of course generals seek input from subordinates.

Can you cite any specific cases in which a dataset was compromised because of involving multiple PIs in the operational decision-making process? If developing consensus takes too long, then the consensus approach is obviously inferior, but I'm not aware of a single event in 50+ intercepts in which a delay due to a group discussion cost us data (there were certainly a few days we could have arrived earlier on a storm, as is true for every solo chaser, but these delays were unrelated to committee discussions). On the contrary, there are at least a half-dozen cases in which a quick (strictly limited to at most a few minutes) coordinator discussion or poll led to us getting data that we would not have had otherwise. These datasets are worth literally tens of thousands of dollars each---obtaining even one extra case is precious!

I believe that 10 May was an example, although it seems I would be hard-pressed to satisfy your stringent requirements since I wasn't involved firsthand.

I actually cite 10 May as a case in which the input from multiple PIs led to data that we would not have had otherwise (albeit far from what we had hoped for, but still better than zero data)! In the planning of the project, no one believed we could collect nested, multi-scale observations while chasing fast-moving storms. Storm motion on 10 May was forecast to be in excess of 50 mph (I believe some observed motions might have been closer to 60 mph). This is an excessive storm speed for any solo chaser, and for VORTEX2 teams hoping to do transects, set up dual-Doppler lobes with stationary radars having baselines as small as 10 km, etc. (these are strategies for typical slow-moving storm deployments), it was believed that there would be no chance for collecting an integrated dataset. On fast-moving days the strategy was to set up a meso-beta-scale radar network rather than try to "chase" the storm by leapfrogging radars (this was also the deployment plan for the "tethered phase" of VORTEX2 in the original SPO).

A radar network comprising DOWs, SRs, NOXP, UMASS XPOL, CIRPAS, etc. was centered on I-44 near Stroud, which, as it turned out, was at the centroid of the SPC High Risk, maximum tornado probability contours, and the location where high-resolution model output showed the longest-lived and most intense supercells. The box provided dual-Doppler coverage in a roughly 100 km by 50 km rectangle straddling I-44, with the long axis of the box roughly parallel to the orientation of the dryline (NNW-SSE). We had hoped that casting a "big net" would give us the best chance of obtaining dual-Doppler data on at least one good storm, with the potential to also observe multiple storms, storm mergers and other storm-storm interactions, and environmental heterogeneity (these also were goals of VORTEX2, although they became more ancillary objectives when the scope of VORTEX2 was reduced relative to what was originally proposed in the SPO).

As the event unfolded, it gradually became clear that the box was centered in a gap in the broken line of supercells (i.e., there was a minimum in activity where the maximum in activity was forecast). There were supercells along the KS-OK border and others in the Norman-OKC area (I won't elaborate on the complicated mergers that also might have affected the evolution, but I'm sure you're familiar with these as well). After some discussion about what to do with the "radar box" (the discussion was hampered by some communications issues, as it turns out)---it appeared likely that keeping the box fixed would result in zero data---we attempted to move the box southward (in the planning of how to tackle storms moving at 50-60 mph that took place on the evening of 5/9 and morning of 5/10, it initially had been decided that we would not attempt to move the box once it was established, owing to the inability to "chase" storms moving at that speed). Radars scrambled to get south to intercept the "storm of the day" that moved from Norman to Fort Smith, but the research-quality data collected were limited. Data were collected near Seminole by NOXP, P9 and their disdrometer, and perhaps one of the TTU or UMASS radars (I'm not sure which, if either) and perhaps some of the other mobile mesonets. But if it had not been for coordinator chatter, the radar box would have remained tethered to Stroud and we would have had zero data. We of course would have liked to have had dual-Doppler on the storm scale and mesocyclone scale as a tornado passed through an array of sticknets and tornado pods, plus dual-polarimetric and disdrometer data, but NOXP's single-Doppler, dual-polarimetric data of a cyclic tornadic supercell and P9's disdrometer data (with perhaps some additional mobile mesonet and mobile radar data that I'm just not as familiar with yet---analyses have barely begun at this point) will have to do. 
The "radar box" strategy on 10 May would have been brilliant had the best storms occurred where forecast, but it probably looks like a poor strategy in hindsight given how the event unfolded. Nevertheless, the notion that the problems of 10 May were due to too much coordinator discussion is simply false. The salvaging of some data collection near Seminole would not have been possible without the coordinator discussion that led us to discard the strategy of holding the radar assets in place.

I think this is a classic example of how where you stand on some issue depends strongly on where you sit (an interesting aphorism I attribute to Andy White). Evidently, you see 10 May as a shining success for V2, whereas several folks with whom I interacted had a very different view. I'm not able to say who is right, but I very much appreciate your effort to clarify how you saw that day unfold.

Nevertheless, I do appreciate your offering more information in this regard ... can you cite specifics regarding the half-dozen cases?

Saying that I see 10 May as a "shining success" is a pretty gross misrepresentation of my summary. Here are the actual excerpts from my previous description (quoting my own reply): "salvaging of some data", "10 May disappointment", "communications failures", "NOXP's single Doppler ... will have to do". I don't know a single person in V2 who wasn't disappointed with 10 May. You stated that the failure was the result of too much time required to reach consensus and involving too many people in the decision-making, both of which ultimately stemmed from a poor operational design. Although I'm sure it sounds like a reasonable theory, this is just not the reality of what transpired that day.

OK ... that's fair. You did indeed offer qualifying phrases. And any shortfalls regarding the data collection may indeed not be attributable to "too much coordinator discussion", although I'm not yet willing to concede that the operational design was flawless. You've made a number of points that offer evidence to that effect, of course.

Here's a list of cases that would not have been obtained without coordinator input:

5/26/09 (at the time, this was the best sampled supercell in VORTEX2; it may be the best observed left-mover in history, although the best data collection was after the storm was most intense)
6/6/09 (nontornadic supercell in north central Nebraska)
6/7/09 (weakly tornadic supercell in NW MO--among the best-observed VORTEX2 storms, certainly in the top-5 for 2009)
5/10/10 (tornadic supercell outbreak---see above description)
5/21/10 (supercell near WY-NE border, tornadic prior to intercept)
5/26/10 (nontornadic supercell near DIA--one of the best observed nontornadic cases in VORTEX2)

My point is that there is not a single person at the PI level in V2 who, if making decisions autonomously, would have gotten us more data over the duration of the project than we got by having input on targets (either in the morning or in the heat of battle) provided by a number of experienced individuals. David Dowell and Erik Rasmussen served as Field Coordinators. Their experience and credentials are impeccable. And I think (hopefully they would agree) that at least a few times the input they solicited from other coordinators was helpful in their decision-making.

I respect your opinion and your informative response to my question ... I even accept that it's possible that no single PI, operating alone, would have done any better. Regarding the latter, of course, your comment is only speculative and there's no way to validate it, now. I also respect David and Erik.

I really did not consider it to be all that speculative (it is based on some data, although admittedly anecdotal). I'm basing it on my recollection of day-to-day targets that were advocated by different PIs in the weather discussions. We caught a supercell on all but a small handful of "go" days in 2010, and the days on which we failed to intercept a supercell were typically the result of there just not being supercells anywhere (e.g., because of rapid upscale growth into an MCS). All of these supercell "catches" (43 in number) were the fruits of choosing targets based on a consensus of O(20) active participants in the forecasting discussion (again, though, one person, the Mission Scientist, had the final say). Although my memory of all of the specific daily targets advocated by each of the many PIs is far from perfect (I certainly can't claim to have written them down or kept track), I don't recall there being anyone who would have nailed the target on more days than the consensus. My sentiment is based on the fact that this would have required one person to nail the forecast on almost every single day of the project, and I seem to recall almost everyone having at least a few misses from time to time, although perhaps I'm just projecting my own misses onto others!
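This back-and-forth about consensus versus a single skilled forecaster can be illustrated with a toy simulation. Everything in the sketch below is invented for illustration (the skill levels, the three candidate targets per day, and, crucially, the assumption that forecasters err independently); none of it is VORTEX2 data. It shows why a plurality vote of many moderately skilled, independent voices can out-hit even the best individual:

```python
import random

random.seed(0)

# Toy illustration only -- all numbers are invented, none are VORTEX2 data.
# Each day, one of 3 candidate targets is "correct".  Each of 20 forecasters
# independently picks the correct target with his/her own skill probability;
# otherwise a wrong target is picked at random.  The consensus pick is the
# plurality vote.
N_DAYS = 10000
N_FORECASTERS = 20
N_TARGETS = 3
skills = [0.45 + 0.02 * i for i in range(N_FORECASTERS)]  # 0.45 .. 0.83

indiv_hits = [0] * N_FORECASTERS
consensus_hits = 0
for _ in range(N_DAYS):
    votes = [0] * N_TARGETS  # target 0 is "correct", without loss of generality
    for i, p in enumerate(skills):
        if random.random() < p:
            pick = 0
            indiv_hits[i] += 1
        else:
            pick = random.randint(1, N_TARGETS - 1)
        votes[pick] += 1
    if votes[0] == max(votes):  # ties generously awarded to the correct target
        consensus_hits += 1

print("best individual hit rate: %.3f" % (max(indiv_hits) / N_DAYS))
print("consensus hit rate:       %.3f" % (consensus_hits / N_DAYS))
```

The independence assumption does the heavy lifting here: if the forecasters share the same blind spots, the consensus advantage shrinks toward the individual rates, which is closer to the scenario in which a standout forecaster could beat the group.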

I'm not actually advocating a single forecaster (in isolation), of course, but it remains a logical possibility that one or more of the forecasters might have done better than the consensus. In the absence of hard evidence, there's no way to be absolutely sure. But your justification is plausible. It's also possible that some forecaster not on the team would have done better than consensus, although that's a different topic that we've already discussed.

Between 27 and 29 tornadoes were observed by at least one VORTEX2 asset within the storms targeted in 2010. This is obviously a misleading tally, because, as one would expect, there are far fewer integrated datasets with simultaneous, multi-scale observations, which was the goal of VORTEX2.

Funny - I don't see that goal listed in the pseudo-objectives (i.e., "questions") I cited ... this was THE goal? Why isn't it listed?

Again, I think it would be fairer of you to critique the science objectives of the SPO, which was intended for NSF reviewers, rather than questions displayed on a web page targeting non-expert visitors.

Fair enough, although I think there should be a clear congruence between the actual project information and that on a web page intended for the public. It may have been outside your control, but I see nothing on the page at <> that indicates its unofficial status or that it's a CSWR web page.

The SPO makes it pretty clear that there was a need for simultaneous multi-scale, multi-platform observations. This also has been harped upon in the numerous public presentations overviewing VORTEX2, many, if not all, of which can be found on the web (e.g., those given at the 2004 and 2008 AMS SLS conferences, the 2009 ECSS, a National Academies Board on Atmospheric Sciences and Climate Forum, etc.).

I have no problem with a stated need for simultaneous multi-scale, multi-platform observations. My concern is regarding the objectives.

But your essay suggests that we had a problem finding the right storms because of not using forecaster input in the optimal way, having decisions made by too many people, having too much inertia, etc. I think the 27-29 tornadoes figure indicates that whatever we were doing to find the right storms worked extremely well (maybe ~25% of cases have what I'd call "good multi-scale, multi-platform, integrated datasets", which might seem like a small fraction, but the fraction of "good cases" in a smaller project like SUBVORTEX, which involves only 4-6 vehicles, was about the same).

This is YOUR interpretation of my essay (as evidenced by "... your essay suggests ..."). You may be reading into my comments. Perhaps I could have stated them in such a way that I would provoke no defensive reactions, of course. But I'm not suggesting what you described ... I'm suggesting that some changes could have improved the operation.

A chaser's goal is typically just to see the tornado, not collect data. By this baseline metric, it's hard to imagine that there's a single chaser on the planet who could have done better than 27-29 tornadoes in 6 weeks of chasing.

Interesting argument ... I know of at least one chaser who likely exceeded that success rate by a significant margin. And he was indeed on the Earth, not some other planet. There may be others.

Well, that's surprising, but I'll take your word for it. Indeed, VORTEX2 would have done better if we had identified this person in advance and had him/her make all forecasts and targeting decisions. But I don't think that affects my argument. Assuming this person isn't that successful every year (if s/he is, then you would be doing the field a great service by revealing this person so that s/he can be enlisted to be sole decision-maker in VORTEX3 in 20 years), and that one cannot know a priori which chaser or forecaster will be the most successful in a given year, doesn't that still argue for having multiple inputs involved in the decision-making process?

I didn't say that this person would have done better as a VORTEX forecaster ... as you yourself said, chasers don't operate like a mobile observing team. I was only reacting to your comment, which seemed a bit hyperbolic.

In my view this tally is a direct consequence of the consensus approach. Most chasers can't take a 2-minute poll of 20 experts before deciding to head west to storm A or north to storm B---what chaser wouldn't want to be able to do this if the capability were present? I think it's safe to say that you yourself at times probably would have liked to choose a storm based on a "short-fuse" consensus of storm chasers. I also don't see how one person could be tasked with the decision when the goal was to collect integrated, multi-platform datasets. As an example, a hypothetical Field General probably wouldn't want to declare a new target storm without hearing from PIs A, B, and C that their instruments were just deployed 2 minutes ago and will take 30 min to be picked up, resulting in potentially 60-90 min elapsing before data collection with these instruments can even begin on the new target storm.

Having one person make decisions is not formally equivalent to having only one input to the decision. This is a straw man argument, based on a misinterpretation of my words.

The biggest misinterpretation here is your thinking that decisions were made by multiple people, when in fact, decisions were made by exactly the process you're advocating, i.e., one person with multiple inputs. See above.

And see my response to your earlier assertion on this topic.

"It's my understanding that consensus usually emerged about operational decisions after what could be considerable time lost in reaching that consensus."

This is yet another statement presented without any supporting evidence. Can you be specific? This was not my experience. Can you provide a list of datasets that were missed or abbreviated because of this? I can understand that you might not want to reveal your sources, but I don't see why you can't at least provide some examples/cases/dates. There are a number of cases for which the consensus got us data we would not have had otherwise.

As you yourself have taken some pains to point out, I'm basing my comments on hearsay, not the V2 logbooks. I again cite 10 May as a candidate.

Your understanding of the 10 May deployment is just not on-track, unfortunately.

At least not on-track according to your perspective.

FWIW, my perspective from that day was as Mission Scientist, and I was physically in the FC vehicle during the discussion about what to do about the radar box. I'm happy to fall on the sword for any missed data that day, but it's simply false that the radar box would have been moved without a group discussion taking place, given that all of the prior planning for how to operate on 10 May called for establishing a fixed meso-beta-scale radar network with the mobile radars. 10 May is the only case in VORTEX2 for which special deployment strategies were considered in advance of the event---we all recognized the potential magnitude of the event and the enormous difficulty of 50+ mph storm motions. Discussions of deployment strategies began no later than the morning of 8 May, and perhaps even as early as the evening of 7 May. 10 May was no doubt a painful miss, especially given all of the pre-planning. In hindsight, it's my view that the failure was the overall plan for how to deal with fast-moving storms (we never used this strategy again, by the way), that is, to establish such a large-scale radar box before storm development, with the intention of keeping it fixed (the thinking was that moving radars might be risky given that dual-Doppler coverage is forfeited while radars are on the move, and we wanted to maintain continuous dual-Doppler coverage as long as possible). On the morning of 10 May, and even into the early afternoon, it seemed like a slam dunk (to me anyway) that there'd be at least one storm moving through at least two of the dual-Doppler lobes in this network, given the sheer size of the box and the likelihood of numerous supercells. We were thrown a curveball (I suppose really a slider, which is a fast curveball), and when it became clear that we'd have to abandon the fixed box strategy, the event was already well underway, with the added difficulty of "battlefield" communications challenges.

As I've stated ad nauseam, it was a coordinator discussion that resulted in the departure from the original game plan (and at least some limited data collection), not a coordinator discussion that doomed us to a missed opportunity, as you've asserted. I won't make this point again. I really don't care what you think; I just want the facts posted somewhere close to the errors. (I believe that our discussion, by the way, has now doubled your original essay's length.)

And I'm posting your comments so that your side of things indeed can be known. It's not particularly meaningful to me whether or not you care what I think, but I'm pleased that you've taken the time to offer your perspectives. I don't care about the length of this discussion either - in fact, I'm delighted that it now exceeds the length of my original post. From my perspective, it's been a success - in part owing to your willingness to "mix it up".

Adding a bit to the lengthy discussion above, if we had known that the Norman storm would be the big show as soon as it was initiated, there indeed would have been time to move the radar box. But for a while we were nervous about moving the box in response to the Norman storm because it looked like there was still plenty of "room" for at least another storm to develop along the dryline between south OKC and ~Stillwater (roughly the region of the break in the line of supercells). Moving the box only to have the best storm move exactly where the box was originally located would have been utterly unforgivable. Thus, we waited for what ended up being too long, given the 50-60 mph storm speed and the more eastward motion (the motion was from ~270 degrees rather than ~230 degrees as forecast, which meant the box needed to be moved farther south than we were anticipating even after the discussion began about abandoning the box). There were also some communications issues, as mentioned above, that hampered a speedy dismantling of the box. Again, without the coordinator discussion there would have been nothing but dual-Doppler of clear-air returns along I-44. I'm not sure where the "time lost in reaching consensus" theory for the 10 May disappointment originates, but my recollection of the day's happenings is pretty different.

And that's fair enough. If any of those expressing concerns for how things went on 10 May are willing to offer comments, I'll be appreciative.

"People with proven forecasting ability should be recruited and their assessments need to be given appropriate value by the decision-makers." (7th bullet in the summary)

"It’s also bordering on axiomatic that researchers are poor forecasters. The skill of a researcher is associated with dogged, detailed consideration of all the components of the research project. There’s no pressure while doing research to make a decision within a limited time - researchers are rewarded for their insight, not their ability to make good decisions under pressure, without complete information. Forecasters, of course, must make good decisions under just these circumstances every day. Nevertheless, forecasters have been, and apparently still are, second-class citizens in a community dominated by researchers. If you want to have the best forecasters help with the process, you need to respect what they have to say. If you’re satisfied with just anyone as a forecaster, of course, then I might ask what’s the point of having a forecast team at all? What role will the forecasts play in decision-making?"

You seem to be unaware that we had two dedicated forecasters each day (one for the Day 1 period and the other for Day 2+). My impression was that the forecasters' targets were folded into a larger consensus pretty well. Some of the forecasts were spot-on, some were dead wrong, and all of us on one day or another made stellar or lousy forecasts. This is why consensus wins.

Consensus wins by virtue of avoiding extremes. In the process, it is consistently mediocre. A really good forecaster can beat consensus consistently.

(SPC outlooks were also a part of the consensus---some outlooks we agreed with, others we did not fully buy into, and others had to be regarded differently because forecasting the storm that provides the best opportunity for coordinated data collection is not always the same as forecasting the area with the highest overall severe weather threat.) Again, it's hard to argue that any single person could have successfully gotten us to 27-29 tornadoes in a 6-week period. Are you saying that we should have blindly gone to the forecaster's morning target every day, and that anything short of that shows disrespect for forecasters?

No, that's not at all what I'm saying. As Dr. Coniglio has indicated in his comments, it seems I was unaware that forecasters were involved directly in the decision-making, at least to some extent. To that extent, it's a good thing! It was my apparently mistaken impression that decisions were made that accounted for more than just the forecast but without the forecaster being aware of that. So long as the forecaster is aware that some non-meteorological factor could drive the decision, I'm satisfied. If the forecaster's recommendations were rejected on meteorological grounds, presumably the forecaster would have been afforded the opportunity to hear the arguments against his interpretation and be given the chance either to offer counter-arguments or to concede the point. Is that the case? If so, I'm pleased.

I can't imagine any forecaster would have wanted it that way, especially since targets typically have to be adjusted as the day evolves. It also seems contradictory to argue against the deterministic approach to forecasting, as you wisely have in the past, only to argue that only one individual should make the daily targeting decision, if I have indeed interpreted your point-of-view correctly.

Once again, you're misstating my argument. One person makes decisions ... with all available input. Your apparent confusion about my position regarding determinism vs. consensus is a direct consequence of your misinterpretation of what I'm saying.

Reiterating, targeting decisions were made by the FC, with all available input. Sometimes the input was information about a team's ability to deploy (e.g., perhaps there were too few paved roads, perhaps instruments had to be retrieved), sometimes the input was about which storm to target (e.g., asking a team for a visual observation to confirm that a storm was improving or waning, or asking coordinators for new target suggestions given that the current target storm was weakening rapidly), and sometimes the input pertained to the best location to intercept a target storm (e.g., pinging coordinators for downstream locations that were believed to afford the best opportunities for deployment). Morning meetings were moderated by the Mission Scientist, whose role was to declare the plan of the day (e.g., a "go" or "no go" day, along with an initial destination; in the original design of the project, before the project had to be reduced in scope, another aspect of the plan of the day was to set a data collection priority for the day, e.g., spreading assets out over the meso-beta scale in order to better observe the larger-scale environment versus collapsing assets toward one storm). Perhaps you feel that the FC and MS roles should have been merged into one. The thinking behind keeping these roles separate was to allow the MS position to be rotated among Steering Committee members in order to accommodate the diverse science objectives of the many PIs as equitably as possible. Regardless, there was always someone on top in the decision-making process, contrary to your assumptions.

If so, that's good to know ...

The big misses in my view were 5/22/10 (large tornado in SD) and 5/31/10 (Campo, CO). Both were days we chose not to operate (fatigue played a role in both cases, although I'm not sure we ever would have gone to Campo even if well-rested given the environment that was forecast and our present understanding of what constitutes a favorable environment for tornadoes; a poor hotel location on the night of 5/21 also enhanced the fatigue factor in the 5/22 miss). On days we actually operated in 2010 (2009 is another story, as every chaser knows well), we virtually always found a supercell. I don't know the exact numbers, but I think we intercepted roughly 30-35 supercells this spring (the total number for all of VORTEX2 is 43 supercells, with only one clear-sky bust in each year of the project), and the majority could be regarded as the "storm of the day". Again, getting to the right area/right storm is an entirely different issue than getting the types of integrated data we sought in VORTEX2, but the numbers suggest that getting to good storms wasn't a problem.

Fair enough ...

I also disagree that researchers are universally bad forecasters.

Although what I actually said doesn't use the word "universally", I accept as valid the criticism that my words could be interpreted that way. I agree that it's not a universal trait of researchers, but would you agree that forecasting skill is relatively rare among researchers?

Yes, but what's your point? My initial reply (see below) states "VORTEX2 is perhaps special in its fraction of 'researchers' who are good forecasters..."

My point is that I have no way to confirm that assertion at this point, other than your opinion. You probably are right, in fact, but is there any evidence to validate that? I doubt seriously that the people you listed have any verification statistics they can cite to support this assertion.

VORTEX2 is perhaps special in its fraction of "researchers" who are good forecasters. I estimate that 50-75% of the PIs chase regularly or have chased regularly, and most of these chasers make their own forecasts for their chases (12-15 members of the daily discussions are also CFDG members). I can't imagine not wanting to target a consensus bullseye derived from a group comprising lifetime storm chasers Burgess, Magsig, Romine, Rasmussen, Dowell, Parker, Coniglio, Marshall, Weiss, Dawson, LaDue, Bluestein, etc. (apologies to those I left out---it's in the interest of brevity only). I think our tornado count backs that up. If the SPC forecasters didn't mind working every shift, don't you think they'd also benefit from having their outlooks derived from a consensus of all of their forecasts?

Long-time successful storm chasers are NOT necessarily good forecasters. Note that I'm careful NOT to include myself in the category "successful, experienced storm forecaster". The metric of seeing tornadoes doesn't qualify as a meaningful measure of one's skill at weather forecasting. Even if we limit the forecasting to just severe thunderstorms on the Plains in the spring, chase forecasting is very different from putting out a product for all to see on a routine basis for many years.

Your list of participants is interesting, and I might well want to hear their input and have a discussion about their various views before I had to make a decision. But none of those people is an experienced forecaster.

So are you saying we should have enlisted the chaser who saw >29 tornadoes this season, or are you saying that chase success should not supplant forecasting experience? Based on your criteria, I think an argument can be made that the only experienced convective storms forecasters are those at the SPC. Unfortunately, these forecasters were not available to VORTEX2 as I understand it, although, as I stated previously, SPC outlooks were routinely considered in discussions about potential targets.

It is, indeed, unfortunate. I'm certainly not saying you should have used the chaser who saw >29 tornadoes. I'm saying that chase success is not a metric for deciding who is or is not a good forecaster.

This is an interesting debate, but I think I'd like to put it on the backburner until I feel like we missed data because of forecasting shortcomings (to be clear, I'm not saying our forecasts or decisions were perfect; rather, I'm saying that I'm skeptical that our misses were the result of the wrong approach).

So if I understand you correctly, in your opinion, if V3 were going to commence field operations in 5 years, the process by which decisions were made should remain exactly as it was in V2? That there is no possible way to improve on the process?

We pushed hard for two years of operations in VORTEX2. The risk of spreading the project over 2 years was that we might have lost momentum and experience from one year to the next (even the veterans would have to re-learn efficiencies, adapt to new quirks in software updates, etc.). But we thought that this risk was outweighed by the need to have the flexibility to fix major failures between Years 1 and 2. These failures could have been technical or logistical. In every project there are things that can be fixed right away and other items that cannot be fixed without weeks to months of down time or deliberation/re-evaluation. We had a PI meeting in the fall after Year 1 operations, and one of the goals of the meeting was to discuss ways to improve Year 2 operations (including decision-making). The majority opinion was to keep doing things in Year 2 the way we did in Year 1. Most seemed to think it worked well, with the caveat being that we wanted to constantly remind ourselves (I mentioned this point in a prior reply) that obtaining input from multiple sources only works well if it happens very quickly. I believe the time limits imposed on "coordinator input periods" were stricter in Year 2 than in Year 1. So now that we have a second year under our belts, what would we do differently if returning to the field for a similarly sized project in just 5 years? I don't think I'm ready to answer this in full, as I'm still short on sleep from the 16.5 kmi logged in May and June. I also would not participate if the project were in 5 years. The personal sacrifice and my family's sacrifice for VORTEX2 was far too great. You can consider me to be retired from storm chasing indefinitely. If I were paid handsomely to consult for VORTEX3, my kneejerk feeling would be to use the same decision-making process.
Again, I think the number of intercepted storms speaks for itself---finding storms was not a problem (the storms can always be more cooperative, of course, e.g., not rain-wrapped, slower-moving, more isolated, following better road networks, etc.), and we were on the storm du jour more often than not (we were much more successful than I ever was as a solo chaser, although I'm probably not the best comparison). I'd want to explore ways of improving comms (e.g., reducing latency in chatrooms, connectivity dropouts, etc.) and the dissemination of real-time mobile radar data to teams. Obviously we'd want to hear from everyone else as well, just as we did last fall after Year 1 operations---perhaps others' opinions have been modified after Year 2. I don't think further discussion of ways to improve the project is best done online, however, so that's all I'm saying for now.

Thanks for your input regarding the question ... 5 years probably was too soon to contemplate at this time of collective breath-gathering.

"The armada needs to be limited regarding the number of vehicles".

This statement suggests it's your understanding that there was a case missed because of the size of the fleet. Can you provide any specifics to support this, or is this another argument by authority?

"Argument by authority" again is an inappropriate description.

Waypoints were set for the VORTEX2 teams based on posted speed limits, and 95% of the vehicles had no trouble reaching these targets on time (the exception is the SRs, which supposedly max out at 65 mph on an interstate highway, but I doubt you'd argue that we should have eliminated our only source of storm-scale radar data from the project). It is indeed true that we are not as nimble as a lone chaser who is willing to correct for a forecasting error by driving 90-100 mph. But there is no evidence that the VORTEX2 fleet could not move as quickly as the posted speed limits allow, with the obvious exception being days when roads were overcrowded with chasers (in which case a small armada would have been affected in the same way).

A smaller fleet is easier for one person to coordinate, but this was the reason for having distributed coordination duties, e.g., there were separate coordinators for mobile mesonets, radars, soundings, etc. In the planning of VORTEX2, you'll recall that pretty much everyone in the community viewed the problem with VORTEX1 and subsequent field projects as being the lack of multi-scale, multi-platform data. VORTEX1 accumulated lots of surface thermodynamic data but very little at the times of dual-Doppler observations, and the dual-Doppler observations were from aircraft only (therefore nothing was observed below ~400 m, time resolution was 5-7 min, and spatial resolution was ~300-500 m, if you can even believe the details in the analyses anyway given that they required 5-7-min steady-state assumptions). VORTEX1 only marginally resolved submesocyclone structure. ROTATE accumulated O(dozen) dual-Doppler datasets that resolved submesocyclone structure, typically with 5x the temporal resolution of the VORTEX1 datasets, but the ROTATE datasets were not accompanied by any thermodynamic data or storm-scale radar data. There have been tornado pod intercepts in recent years as well, but these datasets also lack context because of a lack of simultaneous observations of the parent storm. VORTEX2 was motivated by the need to observe the submesocyclone, mesocyclone, and storm scales simultaneously, and to have wind data complemented by thermodynamic and microphysical observations wherever possible. (All of these arguments are made in the SPO.) These are ambitious goals indeed. Everyone involved with the planning of VORTEX2 (including yourself) believed that significant gains in understanding required us to collect these sorts of observations, difficulties aside. I think we've got an awful lot of data to chew on for the next 10-15 years, by the way.

Well, if the choice were MINE to make, I'd have chosen to have dropsonde capability, and would have sacrificed as many radars as possible to have that. I'm not a radar guy and I admit to that bias. Perhaps that played into my relative silence after the first V2 planning meeting.

You haven't said this, but just in case you or someone reading might think this is the case, the Steering Committee had no authority to make funding recommendations to NSF (for good reasons, as you can probably imagine). The only guidance we could provide NSF was to prioritize instruments as tier 1 (highest priority) and tier 2. We would have loved to have made every instrument tier 1, but the failure to prioritize instruments was one of the reasons for VORTEX2 being declined in its first submission, as you may recall. Again, the SPO was made available to PIs prior to its submission. We did not hear any arguments for making the dropsondes tier 1 instruments, even from the dropsonde PIs, given their cost and the unavailability of the G-V (NSF was going to have to find a substitute aircraft outside of the deployment pool, which further elevated the cost, as I understand things). When VORTEX2 was cut from ~26 weeks to ~8 weeks (~3 weeks were "re-instated" later as an indirect result of the stimulus), the tethered aspect of the project was eliminated altogether, and this was the part of the project that was to leverage the Oklahoma fixed observing networks to focus on the larger-scale objectives of VORTEX2.

Thanks for that clarification. From my recollection, it was made quite evident to me, right from the start, that dropsondes were unlikely to be available. We wrote the proposal anyway and I'm told we had good reviews, but the choice not to fund dropsondes doomed our proposal.

You seem to be arguing that having all those vehicles to coordinate was done with sufficient ease that you collected more data than you can handle. Was it really all that easy? Are you saying there NEVER was a time when opportunities were missed because the ENTIRE armada couldn't be brought to bear? Really? Never?

There was nothing easy about data collection, and just the sheer number of hours on the road was, for me, the hardest part.

It goes without saying that there were times when the entire armada did not reach the target on time (whatever "on time" even means--at the same time? before something interesting happened?). This most often happened when targets would change, and teams that must maintain large spacing among themselves and from the rest of the armada (e.g., sounding units and SR radars, which operate with ~40-km baselines) would find themselves with sometimes as much as 2-3x the distance to travel to the new target as the teams nearest to it (as Don Burgess would say, "Sometimes you're the bat, sometimes you're the ball"). Until it's legal and possible to drive these vans and trucks at 150 mph, this will always be an issue---it's just one of the challenges of trying to collect these types of datasets.

The problem is independent of the number of vehicles; e.g., you could do a project with just 2 radars (using the same ~40-km baseline) and you'd still have this issue. Establishing dual-Doppler lobes, especially large lobes, takes considerably more time than simply arriving at a storm to take photographs, because typically one radar has a significantly longer drive than the other radar (and perhaps other platforms). So yes, naturally there are periods of a storm's evolution that are missed because they occurred before a lobe could be established, or while radars were leapfrogging to keep up with a storm. But I don't see how difficulty increases with the number of platforms, assuming every platform can move at a reasonable speed---the greater the number of radars, the greater the odds that at least two of them have a dual-Doppler lobe established over the storm (i.e., the duration of dual-Doppler data collection increases with the number of radars). Moreover, the armada was never "held up" to wait for a straggler before engaging a storm, which would have been limiting indeed. A target storm was identified by the FC, after which teams were on their own to carry out their missions under the direction of their own coordinators.
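The claim that dual-Doppler collection time grows with the number of radars can be illustrated with a toy Monte Carlo sketch (my own construction, not anything from the project). Assume each radar is, at any given moment, independently "well-placed" with some probability p -- a gross simplification of the real deployment geometry and the ~40-km baseline constraint -- and count how often at least two radars are well-placed simultaneously, a crude proxy for having a lobe established over the storm:

```python
import random

def coverage_fraction(n_radars, n_steps=2000, p_in_position=0.5, seed=0):
    """Estimate the fraction of time at least two radars are
    simultaneously in position (a crude proxy for an established
    dual-Doppler lobe).  Each radar is assumed to be independently
    well-placed with probability p_in_position at each time step."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_steps):
        in_place = sum(rng.random() < p_in_position for _ in range(n_radars))
        if in_place >= 2:
            hits += 1
    return hits / n_steps

# More radars -> a larger fraction of the storm's lifetime with a lobe:
for n in (2, 4, 8):
    print(n, coverage_fraction(n))
```

Under these assumed numbers (p = 0.5), the at-least-two fraction rises from about one quarter with two radars to well over 90% with eight, consistent with the point that a larger fleet lengthens dual-Doppler coverage even though no single pair of radars is reliably in position.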

Perhaps the closest example of the inertia that you are envisioning would be the situation in which stationary instruments are deployed from vehicles, e.g., disdrometers, sticknets, and tornado pods. We would typically not want to switch target storms until these instruments were totally "out of play" on the current storm, even if the storm a county away was clearly superior on radar (in such a case, a solo chaser would switch storms at the drop of a hat). This is because switching storms usually would result in at least 60 min of zero coordinated data collection (it takes time to pick up 24 sticknets scattered over 15 miles of roads, and it takes time to reestablish dual-Doppler lobes, as explained above), which seems inferior to collecting, say, 30 min of coordinated data on the second-best supercell in the area, even if it's clearly nontornadic and the next storm down the line is clearly tornadic. I don't think it's correct to view this as a problem due to too many platforms. Rather, this is a challenge intrinsic to a project with the ambitious goal of observing a variety of meteorological variables simultaneously in a rare phenomenon on multiple scales.

I'm exhausted from this rehashing of the motivation, objectives, deployment strategies, and operations plan of VORTEX2. I'm calling it quits now.

Sorry to have sapped so much of your energy, but again, I'm gratified to hear your take on the situation. I'd be interested in knowing if there's ANYTHING you WOULD change about VORTEX operations should there be a V3 someday. If so, what might those changes be? [see above]

"Some considerable preference should be given to observational capabilities that have proven themselves to be reliable in the past during actual field operations. New, unproven, systems can tag along, but they shouldn't be given equal weight when it comes to setting priorities for a day's field operations. A large field observing program should never depend on unproved observing capabilities."

It's not clear to me if this is just a general statement you're making, or if you're referring to a VORTEX2-specific problem. If it's the latter, can you indicate which platforms you considered to be proven and unproven, and on which days the unproven technologies swayed the forecasting/targeting decision? I'm not aware of any targeting decisions being swayed by an instrument. This seems to be another misconception you have about the project's execution.

Perhaps you're right. It is a general statement and the extent to which it affected V2 operations is not known to me. Apparently, it wasn't an issue. But I would still argue that if there's an untested observational system, its data shouldn't be mission-critical. Unless the mission is to test the system.

"I saw numerous times (and I was avoiding the V2 armada, for the most part, during my chase vacation) when the V2 vehicles seemed to be bunched up during the time when they were working a storm. It appeared to me (as an outsider) that this was a problem - each vehicle has an assigned location during an operation and when they're bunched up, it appears that someone isn't in their assigned location. This was a problem during the V1 campaigns, and it seems it may not have been cured entirely, yet."

Although you've offered no specifics, I presume you are referring to mobile mesonets (either NSSL-PSU or those from CSWR), given that these are the only platforms I can think of that one might a priori assume should spread out as much as possible. If you're talking about CSWR mesonets, their primary task was to deploy in situ probes ahead of tornadoes, so these vehicles would have (should have) been bunched up ahead of a mesocyclone track. With respect to the NSSL-PSU probes, these vehicles' primary task was obtaining observations in the rear-flank and forward-flank precipitation and outflow, a place where almost no non-VORTEX2 chasers are found, given the high likelihood of hail damage and poor visibility for photos/video. I highly doubt your observations of bunching up would have been in such a place, assuming you wanted to avoid losing your windshield. It's entirely possible you would have seen these vehicles bunched up in the inflow, however, as vehicles were relocating to another road because the storm's motion took the storm away from the road in the RFD/FFD region on which the vehicles previously had been operating (the idea in those circumstances is to get ahead of the storm as quickly and safely as possible, with spacing concerns taking a backseat to getting probes "back in play" in the RFD and FFD regions asap). There was also a disdrometer vehicle that was identical in appearance to a mobile mesonet---it's possible you would have seen this vehicle very near another mobile mesonet. I can only guess as to what you might have seen, but I think it's unwise of you to infer anything from your observations unless you were (a) entirely familiar with the operations plan and each PI's objectives and (b) a part of the real-time communications during an intercept.

Fair enough ... but by being "unwise", I offered you the opportunity to enlighten me. One of the reasons this happened in V1 was that some of the crews simply didn't follow the plans. If that never happened, not even a single time, in V2, then I retract my statement and tip my hat to the V2 crews for doing their duty.



Doswell, C.A. III, R.A. Maddox, and C.F. Chappell, 1986: Fundamental considerations in forecasting for field experiments. Preprints, 11th Conf. Wea. Forecasting and Analysis, Kansas City, MO, 353-358. (see here)

Doswell, C.A. III, and A.R. Moller, 1985: Scientific impact of southern Great Plains severe storm intercept operations – 1972 to the present. Unpublished manuscript, available here.