Some concerns about the use of conceptual models

Chuck Doswell


Created: 18 August 1999

Updated: 24 September 2006 - minor revisions and moved over from its former site.

This material is another expression of my personal thoughts. As such, it has no official standing whatsoever and is intended only to provoke discussion. Naturally, it does not necessarily represent the views of anyone in my chain of command. If you want to comment, please send your e-mails to cdoswell@earthlink.net


1. Introduction

Recently, a discussion about the definition of fronts has come up in a moderated newsgroup of which I'm a member. This has set me to thinking, not just about fronts, but about conceptual models in general, including that which applies to a front. I'm going to use the definition of a front as a starting point to the discussion I really want to have.

The original definition of a front can be found in S. Petterssen's classic 1956 text: Weather Analysis and Forecasting, Vol. 1, Motion and Motion Systems, McGraw-Hill, Ch. 11 (Fronts and frontogenesis), pp. 189 ff.:

...let the term FRONTAL SURFACE denote a sloping surface or zone of transition separating two air masses of different density. Similarly, the term FRONT will denote the intersection of the frontal surface with a chart. In these definitions the emphasis is upon the SLOPING arrangement of a DENSITY contrast.

From this definition, it's possible to derive a number of characteristics associated with boundaries that fit this definition. These are described in the paper I co-authored with Fred Sanders about surface analysis (available here) as indirect characteristics of a front (a pressure trough, surface pressure tendencies, cyclonic windshifts, moisture contrasts, and clouds or precipitation), and we argued that they are not a proper basis for identification of boundaries as fronts. If the definition of a front is associated with the density difference, then only the density difference should be used to identify a front.

Now density differences are primarily associated with temperature contrasts - on a constant pressure surface, isotherms are isopycnics, but that's not the case at the surface. A "surface chart" is neither a constant height surface nor a constant pressure surface. One way around the problem is to convert temperature to potential temperature (θ), which is conserved for adiabatic processes. This accounts for some of the variation in pressure and height on the surface, but a better variable would be the virtual potential temperature (θv), which is the potential temperature directly related to density. Of course, the virtual correction even near the surface is pretty small (at most a few K), so for a first cut, Fred and I proposed using θ-gradients to define a front.
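As an aside, for readers who want to compute these quantities, here is a minimal sketch (in Python) of the standard formulas - Poisson's equation for θ and the usual first-order virtual correction for θv. The constants and the example numbers are my own illustrative choices, not anything from our paper:

    # Potential temperature (theta) and virtual potential temperature (theta_v)
    # from surface observations.

    RD_OVER_CP = 287.04 / 1004.0  # R_d / c_p for dry air
    P0 = 1000.0                   # reference pressure (hPa)

    def theta(T_kelvin, p_hpa):
        """Potential temperature (K) via Poisson's equation."""
        return T_kelvin * (P0 / p_hpa) ** RD_OVER_CP

    def theta_v(T_kelvin, p_hpa, r_kg_per_kg):
        """Virtual potential temperature (K); r is the water vapor mixing ratio."""
        return theta(T_kelvin, p_hpa) * (1.0 + 0.61 * r_kg_per_kg)

    # A warm, moist surface parcel: the virtual correction is small,
    # a few K at most, consistent with the argument above.
    print(theta(303.0, 970.0))           # ~305.7 K
    print(theta_v(303.0, 970.0, 0.014))  # ~308.3 K, about 2.6 K larger

Since the correction scales with the mixing ratio, it matters most in warm, humid airmasses and is nearly negligible in cold, dry ones.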

Note in the foregoing definition, Petterssen makes no reference to a value for the magnitude of the density gradient to define a front. Some believe (I think Fred Sanders does) that a threshold value for the θ-gradient at the surface should be definitive - anything less than that magnitude would not qualify as a front. I have a distinct aversion to what amounts to arbitrary choices for definitive thresholds, so I prefer to avoid making them. What I prefer for defining a process is a qualitative distinction between it and other similar processes. Thus, I'd argue that a front is distinctly different from, say, a dryline, where the density gradient may be very small (at least at certain times of the day). The distinction isn't simply quantitative - the implied dynamics associated with the two processes are quite different.

This brings me to the doorstep of the discussion I really want to have. As interesting as this discussion about fronts has been, it really leads to a wider issue: conceptual models, in general. There are two basic notions here:

  1. Terminology: If we are going to use a term (like "front") we really need to agree among ourselves as science professionals what that term is intended to mean. I'm not opposed to changing definitions if that is the consensus among us, but we should all be using terms to mean the same physical process. Otherwise, we have the scientific equivalent of a Tower of Babel. If scientific terms (i.e., jargon) are being used as a convenient shorthand for a set of underlying dynamical processes, then the only way that shorthand can be useful is if we have reached a consensus about its meaning. Imagine if "front" meant "density contrast" to some people, while it meant "an elongated zone of cyclonic vorticity" to others! Thus, scientific terms must have agreed-upon definitions, from which we must not drift. "Definition drift" leads to the Tower of Babel!
  2. Dynamics: Once we have settled upon a definition, there is some set of implications about the physical processes. Some may be obvious, some may be derived readily from the definition, others may be only loosely connected to that physical process. To the extent that the dynamics follow directly from the definition, we expect to find them every time we see that physical process.

2. Using conceptual models

Once we accept a definition and its associated dynamics, then when we encounter a symbol for that process (as in the conventional symbols for a front, or as a term in a scientific paper), we are implicitly assuming the applicability of the dynamics associated with that symbol. So long as we all agree on what the symbol represents, we can take someone else's symbolic depiction of that process (e.g., the frontal analysis drawn on a map) to imply the presence of those dynamical processes. There are two basic problems I see with this:

a. Disagreements over the definition

The fact of the matter is that not everyone does agree about the definition of a term used to describe a process. The disagreements over frontal analyses during a workshop, which Fred Sanders documented in our paper, represent, to a large extent, differences of opinion about what defines "the front". What I find is that such disagreements are common. Hence, when we encounter a symbolic representation of a process such as a front, it turns out that there may well be underlying disagreements about the term and its dynamical implications. Is the disagreement over whether a boundary is a "true" cold front merely an academic exercise, or does it have some reasonably profound importance? I'm arguing for the latter. If a boundary is a dryline, this says something about the static stability of the airmass behind the boundary: drylines typically have steep lapse rate air behind them (when they're advancing), whereas the air behind a cold front is typically pretty stably stratified. Hence, this isn't just an intellectual exercise - it might make a real difference in a forecast!

Moreover, it's not just a matter of the credibility of the analyst, although it's certainly true that some part of the usual disagreement boils down to different competency levels. If there is genuine, valid (in some sense) scientific dispute about the definition, who's to say that one answer is the only acceptable "right" one? That's not really the way science works - there's no valid argument from authority in science. But this runs us smack into the Tower of Babel problem!

b. Spatial and temporal variability in the character of physical processes

Even if we all agreed upon the definition of a physical process, it is a common characteristic of all atmospheric processes that they don't remain static. Real processes are nearly always complex and variable over both time and space. If we look carefully at the classical dryline, we find that the density difference across it tends to be rather small during the day (and, in some cases, may become completely negligible), but it becomes quite substantial overnight. Thus, it might meet the definition of a front at some point during the night, but would fail to meet it at some point during the day. Does it make a difference? In physical terms, the presence or absence of true frontal characteristics must have some influence on the processes! So do we want such boundaries to be labelled with frontal symbols during the night and dryline symbols during the day? Just when would we decide the transition has taken place? This could get really ugly, right?

But it's even worse than that! If we look at a feature, for example, the boundary shown in the NCEP analysis:

[NCEP surface analysis figure omitted]

a cold front is drawn across NE, KS, OK, and into the TX Panhandle. However, at that time, as shown by the analysis of surface θ provided by Eric Hoffman and collaborators at SUNY-Albany:

[SUNY-Albany surface θ analysis figure omitted]

the gradient of θ normal to the boundary is apparently less than the gradient along the boundary. This isn't a classical front, dynamically! Observe that the magnitude of the gradient is depicted on this chart with the colored "shading" in units of 10⁻⁵ K m⁻¹ [or 1 K (100 km)⁻¹], as shown in the "key" in the lower left. The indicated gradients are not particularly intense, especially in comparison with cool season standards. 24 h later, the NCEP analysis, which continued to show this feature as a cold front, is now more or less consistent with the surface θ-analysis!
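To make those units concrete, here is a rough sketch (in Python) of how such a gradient magnitude might be computed from a gridded θ analysis. The grid, spacing, and field below are hypothetical stand-ins purely for illustration, not the Hoffman analysis itself:

    import numpy as np

    # Magnitude of the surface theta gradient on a regular grid, expressed
    # in K (100 km)^-1 to match the shading convention described above.

    def theta_gradient_magnitude(theta2d, dy_m, dx_m):
        """|grad(theta)| in K m^-1 via centered differences."""
        dth_dy, dth_dx = np.gradient(theta2d, dy_m, dx_m)
        return np.hypot(dth_dx, dth_dy)

    # Hypothetical theta field increasing northward at 1 K per 100 km,
    # on a 40-km grid:
    theta2d = 300.0 + np.fromfunction(lambda j, i: 0.4 * j, (50, 50))
    grad = theta_gradient_magnitude(theta2d, 40_000.0, 40_000.0)

    # 1e-5 K m^-1 is exactly 1 K per 100 km:
    print(grad.max() * 1.0e5, "K per 100 km")  # ~1.0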

Hence, the boundary has undergone a transformation (which by itself is an interesting topic but not relevant to this discussion), and what was not a classical front at one time has become one, literally overnight. Thus, if someone were to use the same symbol to depict the feature at both times on the chart, or the same word to describe it in some text, wouldn't this be misleading?

Furthermore, anyone who has tried to analyze surface charts will often find that features like fronts do not look the same from place to place, even at the same time. This can be seen in the preceding figure ... whereas the windshift and θ-gradient are nearly collocated in IL-IN, and on into MO, the θ-gradient seems to lag behind the windshift in OK, and there seems to be a windshift ahead of the θ-gradient in MI, as well. This sort of spatial variability is probably the rule, not the exception.

3. Origins of conceptual models

Conceptual models typically arise as abstractions. The idea is that after someone has studied one or more events using real data of some sort, their findings get distilled into some sort of simplified version of the reality that was the actual object of study. Examples abound: the Norwegian cyclone model, the Browning supercell model, the classic frontal model, the "four quadrant" model of a jet streak, etc. In fact, it seems very popular at the moment for scientific papers to include a conceptual model as a summary of their findings. Naturally, this results in a plethora of competing conceptual models. For extratropical cyclones alone, there is the original Norwegian model, the Keyser-Shapiro model, the "Conveyor Belt" model, the STORM model developed at the University of Washington, and I may well be leaving out others.

The scientists who created the various competing conceptual models have a considerable stake in getting these models accepted widely ... fame and fortune (literally) flow from general acceptance of a conceptual model. Further, students tend to pick their favorites out of what they've been taught and stick with them, come what may. Thus, we meteorologists tend to fall into various camps regarding which prototypical structures (and their implied dynamics) we impose on the real data. I see a very strong tendency for the adherents of a particular conceptual model to try to shoehorn as many real-world examples as possible into the "box" delineated by that model ... and to snip off any parts of reality that don't seem to fit their preconceived conceptual model. The former is an acceptable practice, but the latter is profoundly unscientific!

An interesting topic that is related to conceptual models is the development of classification systems. Most physical processes in the atmosphere are not carbon copies of each other. Rather, it seems that real-world meteorological processes come in different "flavors" - for fronts, there are cold fronts, warm fronts, and occluded fronts. For supercell thunderstorms, there are low-precipitation, classic, and high-precipitation versions. For tornadoes, there are supercell and non-supercell varieties. And so on.

This seems to be an implicit recognition that real-world processes have variability above and beyond the simplest abstraction of the process, so that subclassification tries to account for that variability within the same general class of phenomena. Of course, if there are arguments about whether an event belongs in the class at all (Was this a true supercell?), then there are just as many arguments (if not more) about the subclass in which an actual event belongs (Is this really an occluded front?). These "taxonomy" arguments are important, as I tried to suggest in some comments I wrote about a particular classification scheme. Les Lemon, a long-time colleague and friend, once told me that a classification system has value only insofar as it enables you to make reasonably accurate predictions about the behavior of class members. In terms I have been using in this essay, the most justifiable classification of events should be along the lines of real qualitative differences in the dynamics of the processes ... not simply arbitrary quantitative differences. A warm front is not necessarily quantitatively different from a cold front (although it can be, of course), but there is a clear dynamical distinction between an event where the warm air is advancing versus one where the cold air is advancing.

The classification arguments are simply an extension of the arguments about conceptual models. There is nothing inherently wrong with developing conceptual models to summarize the findings of a scientific study. However, once the conceptual models have been developed, problems can arise in the use of those conceptual models.

4. Problems ...

Particularly when it comes to doing surface analysis, we seem to be compelled to delineate those features we believe we see in the data. By drawing a symbol for a cold front, whether we realize it or not, we're implying something about the ongoing dynamics and, as I've already suggested, this can be a very risky business!

The basic rationale for putting frontal symbols on a surface chart seems to be a sort of shorthand for what we see in the data, so far as I can tell. Once we add the symbol, it is assumed that someone looking at the chart can see at a glance what features are present in the data and where they are. A series of such charts provides a convenient, easily-grasped depiction of the time evolution of those features. But all of this seemingly useful effort contains the potential for serious misinterpretation and, hence, for bad forecasts. If the analyst has not followed strict definitions in denoting the features, aren't the implied conceptual models of the processes being depicted in error? Wouldn't that present a large opportunity for misinterpretation and correspondingly bad prognoses?

Which definitions are being employed during the symbolic depictions of the processes? That is, if there are competing conceptual models, is everyone using the same version, or do different analysts use different conceptual models? It seems likely to me that most meteorologists have their own set of "favorite" conceptual models and they're unlikely to want to give them up for the sake of consistency! The variability we inevitably see in hand analyses is abundant evidence of this.

When it comes to surface analysis, I'm coming around to a position of advocating that we let the data (and the analysis of the data) speak for themselves. Why do standard surface analyses (such as those produced by NCEP) include only pressure and "features" like fronts? As Fred Sanders and I argued in our paper, it makes no sense not to have at least an isotherm and isodrosotherm analysis on the surface chart. We couldn't imagine analyses at mandatory pressure levels without isotherms, so why do we not put them on surface analyses? Of course, I'd prefer to see isopleths of potential temperature and of mixing ratio instead of isotherms and isodrosotherms on a surface analysis (see here for SUNY-Albany's version of this). If I must see someone's depiction of the features, at least let me see the analyses of potential temperature and mixing ratio, so that I can decide for myself what (and where) the features really are.
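For what it's worth, here is a toy sketch (in Python, with matplotlib) of the kind of chart I have in mind: contoured θ and mixing ratio with no "features" drawn at all. The fields are fabricated front-like and dryline-like patterns, purely for illustration:

    import numpy as np
    import matplotlib.pyplot as plt

    # A surface chart that lets the fields speak for themselves:
    # isopleths of potential temperature and of water vapor mixing ratio.

    x = np.linspace(0.0, 1000.0, 101)  # km
    y = np.linspace(0.0, 1000.0, 101)
    X, Y = np.meshgrid(x, y)
    theta = 300.0 + 8.0 * np.tanh((Y - 500.0) / 150.0)  # K: a front-like zone
    mixr = 10.0 - 6.0 * np.tanh((X - 500.0) / 150.0)    # g/kg: dryline-like

    fig, ax = plt.subplots()
    ct = ax.contour(X, Y, theta, levels=np.arange(292, 310, 2), colors="red")
    cq = ax.contour(X, Y, mixr, levels=np.arange(4, 18, 2),
                    colors="green", linestyles="dashed")
    ax.clabel(ct, fmt="%d K")
    ax.clabel(cq, fmt="%d g/kg")
    ax.set_xlabel("x (km)")
    ax.set_ylabel("y (km)")
    ax.set_title("Surface theta (solid, K) and mixing ratio (dashed, g/kg)")
    plt.show()

A reader of such a chart can locate the θ-gradient and moisture-gradient zones directly, and decide without any analyst's intervention whether any of them deserve a frontal symbol.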

I'm not suggesting that we completely abandon the notion of using symbols to depict "features," but either we all should agree on what the symbols mean, or we should accept the fact that the symbols we as individuals put on a chart are unique to those individuals and not try to get them to convey something specific to anyone else. When we write scientific papers using "terms" as symbols for the processes, we simply cannot have that sort of individual freedom to use terms however we wish. The Tower of Babel is the inevitable result of that!

Personally, I believe that imposing conceptual models on the data, rather than adapting conceptual models to fit the data, is a dangerous and unscientific thing to do. We shouldn't be seeking to confirm our cherished conceptual models in operational practice, or in our scientific investigations. Rather, we should be seeking to understand the structure and evolution of the atmosphere, at least insofar as it's revealed by our data. If we can summarize our current understanding via a conceptual model, that's perfectly fine, but we have to recognize that all conceptual models are abstractions and don't represent anything more than an oversimplified version of what we see in the real data. As my graduate advisor, Prof. Yoshi Sasaki, once told me, "It [meaning science] all begins with the data!" If we find ourselves beginning with a conceptual model, then we're flirting with the danger of doing unscientific things by forcing the data to fit our preconceived notions.