NOTE: Just a reminder to readers
that as of this season, 2004-2005, Northeast U.S. winter storm
forecasts will no longer be available. This is regrettable, but
necessary. I've been doing those forecasts for over a decade
now! It is sad to discontinue them, but responsibilities dictate
that the winter months be spent tuning and gearing up the tools for the
Atlantic tropical prediction season. I hope to continue this
seasonal prediction in the years to come. Also, if you have a
specific question regarding an upcoming event, feel free to contact me
(depending on the timing, situation, etc, I cannot guarantee a timely
response, but I'll certainly try!); I'll still be generally following
the Northeast winter storm events for my own interest (and, for this
season, carrying through with commitments to subscribers from last
season).
Looking back at 2003-2004,
the forecast went generally pretty well. Early in the season,
with some rather heavy December snows in some areas, it looked like the
forecast would go down in flames. In fact, many readers inquired
whether or not there'd be a seasonal forecast update, given how
horribly wrong things were headed. For example, in Boston my
forecast called for 40" of snow, about 5-10% below normal.
December 2003 was the 7th snowiest December on record in Boston, with
21.5" officially recorded (the normal for December in Boston is
6.9"). That's over half the predicted seasonal total occurring in
only one month, and that month being the least snowy, on average, of
the core winter months (December-January-February). You can see
why the seasonal forecast looked like it was in serious trouble.
Reader comments (either negative or simply inquisitive as to "what went
wrong?") were fast and furious and, heading into January I seriously
considered re-assessing and issuing a new forecast. However, I
really couldn't find very good meteorological or climatological grounds
to do so. So, I begrudgingly held onto the forecast I had.
Rather incredibly, it managed to hold up pretty well. Again,
using the Boston example, after their 21.5" in December, they scrounged
up only 17.9" the entire remainder of the season. That left them
at 39.4", just about dead-on with the forecast. I could, of
course, mislead by cherry-picking one site. And, indeed, Boston
was the best verified... coming within less than an inch of the
forecast... and the most incredible, as it was on target early in the
season for a major bust. But, generally, the rest of the forecast
went well. Far northern areas were predicted to exceed their
normal snowfall, while southern areas were predicted to fall
short. Using the standard stations I carry in my analog table, I
had only Burlington, VT above normal, with the following sites below
normal: Binghamton, Boston, Providence, New York City, Philadelphia,
Baltimore, Washington and Pittsburgh. Buffalo, Rochester and
Albany, NY were predicted near normal. The only real failures were
Binghamton and Rochester, which came in well above normal. New York
City and, to a lesser degree, Providence also came in above normal, but
I'd like to think that was largely just a mesoscale effect as both
locales received more snow than Boston (both cities, especially New
York, are obviously below Boston in their climatological snowfall
norm). Truly, we cannot use that as an excuse; when a forecast is
verified in hindsight... right is right and wrong is wrong. My
point is merely that there's no way such a forecast will be perfect and
such mesoscale effects are "acceptable" error. At any rate, eight
of the 12 forecast points verified quite well. Certainly not
perfect by any stretch of the imagination, but I'll take it...
especially after how the season got started.
Though last season panned
out pretty well, you may recall that there was considerable hemming and
hawing over the vast range of solutions the analogs were
presenting. Unfortunately, we're faced with a very similar
situation this year. Take Washington, DC (DCA), for
example. You'll see below that both 1997-98 and 1995-96 rate as
two of the top analog seasons. At DCA these were both incredible
seasons on opposing ends of the scale. In 1997-98 DCA almost had
the remarkable occurrence of no measurable snow; they scraped up a
tenth of an inch from one event. On the flip side, in 1995-96
they were hammered with 46" of snow, an incredible amount for the area,
especially since, as locals realize, DCA tends to receive and report snowfall
amounts on the low side compared to the vast majority of the DC Metro
area. This wild variation is, on face value, almost impossible to
deal with. In reality, however, an understanding of the
methodology helps "clean" this out. Simply from an outlier
standpoint, when you view the table below, 1997-98 appears to be the
outlier. Second, the seasons are ranked by how well they analog;
1997-98 made the "short list" by a hair, coming in last place. So,
1995-96 is a superior analog. No, that's not to say DCA will see
46"! Not only is that a leap, but 1995-96, while a superior
analog to 1997-98 is still near the bottom of the list. But,
conditions may run closer to 1995-96 than to 1997-98. Also, not
surprisingly, 1997-98 barely cleared the bar to make the short list
with the inclusion of a highly debatable analog parameter... NAO.
The North Atlantic Oscillation is a critical factor in controlling the
day-to-day weather, and it is a helpful phase to keep in mind when
doing short to medium-range predictions. It is NOT, however, a
good parameter to use as an input analog parameter for seasonal
prediction. It fluctuates too wildly. Nonetheless, as it
does hold sway over winter weather in the Northeast, I have included it
as a very weak analog parameter. That tiny addition allowed
1997-98 to just barely clear the threshold into the top analog
seasons. That doesn't explain 100% of the variations; for
example, 1980-81, another relatively low-snow season, ranks quite high
among the analog seasons. That'd be fine if most of our analog
seasons were running below normal, but they're not. Every site
has at least half of their analog seasons above normal, some have six
or seven of the 10 analog seasons above normal. And those
managing only exactly half are the sites where the borderline analog
1997-98 yielded very low snow. So, for the most part, the analog
seasons are pointing to above normal snowfall. Yet, not only does
1980-81 rank high, but most sites have two of the top three analogs
yielding below normal snowfall. The ONLY exception is
Pittsburgh. So, in total, there remains considerably wide
variation among the analog seasons... albeit less than the raw data may
indicate (i.e., 1997-98 may not be tremendously valid).
To describe the
methodology a bit more, I've left intact the narrative from last
season, with only some minor touch-up or modifications to account for
any changes in forecast or methodology this season:
"We've already given an
idea of where we stand with respect to this seasonal prediction.
Before we get to the actual numbers, just a bit of background on how we
perform this forecast. As is already obvious, it's an analog-based
prediction. One cannot blindly and reasonably come up with a
pattern prediction this early in the game (mid-November). However,
several global oceanic and atmospheric conditions tie in to the
pattern. And some of these are either slow to evolve or evolve
predictably. Therefore, if these conditions can be parameterized
numerically, we can compare this season's current or expected parameter
values to those of past seasons. This allows us to match up
seasons and, since these oceanic and atmospheric conditions have an
influence on the overall pattern, we can anticipate that snowfall will,
in turn, be roughly similar. Of course, in lower snowfall regions, the
correlation may be a bit weaker. This is because one or two events in a
season can dominate the seasonal snowfall total. Thus, the lack of,
or conversely the occurrence of, one or two such events can dramatically
skew the snowfall numbers in low snowfall regions. Nonetheless, in seasons
far from the norm this methodology can still work, even in those areas.
Given that, and a lack of any superior methodology, we can use the
analog method throughout the Northeastern U.S.
As for what various parameters we're using... The El Nino phase remains a fairly dominant factor. Defining the El Nino phase, however, can be done differently depending on specifically what aspect of the El Nino/Southern Oscillation one is looking at (winds, pressures, SSTs, etc.; we use SSTs, but even then, one can use a number of different SST measurements). For my purposes, I tend to use the Nino3.4 (combination of the eastern and central Pacific) sea surface temperature (SST) anomalies. In 2002-2003 this was the bane of the forecast, due to a stark split in the Nino 3 and Nino 4 anomalies; this masked (and rendered useless) the Nino3.4 SST anomaly. However, this is a rare occurrence and is not present this season. We also use the Atlantic SST anomaly. The value of this factor was increased last season after further investigation. Evidence was presented in research by others that the Atlantic SST anomaly may be related to the dominant phase of the NAO (North Atlantic Oscillation) in the upcoming winter. I have long argued that the NAO is more of a predictand than a predictor. As such, I have complained vehemently about some predictions using the NAO as a key to the seasonal forecast. Still, the NAO is an excellent pattern phase to try to link in to. Thus, I would like to have some NAO predictor in our analogs. So, I expanded on the aforementioned Atlantic SST anomaly research and, in my research, the link between the Atlantic SST anomaly and the NAO was even stronger. As a result, the North Atlantic SST anomaly is a significant factor for the seasonal prediction. The Arctic Oscillation and Quasi-Biennial Oscillation are also factored into the equation. I should note also, since readers may ask why I've not used "this" or "that"... I have examined several other parameters that are known to have a relationship with the general winter pattern over North America. However, for this season I've not seen enough of a signal in these other parameters to utilize them effectively.
Therefore, although they are valid parameters to use, I simply cannot draw any conclusions from them or establish any reasonable analogs for them. Finally, as some "protection" for the aforementioned North Atlantic SST anomaly, I do include analogs for the North Atlantic Oscillation itself. However, this is given a low weight due to the problem mentioned above... that the NAO is more of a predictand than a predictor." Added in 2004 is the PNA (Pacific-North American Oscillation). The use of this as a predictor is debatable and, as such, it has been given a low weight. Also added in 2004 is the Hurricane Season analog. A strong connection between this and the winter pattern has NEVER been shown. However, several of the same driving forces do come into play (the failure to show much connection is, likely, due to changes in the parameters during the fall months), so I allow this parameter a small weight. Given its lack of evidence supporting it as a predictor, and the fact that I already utilize the individual predictors, why in the world do I use it? Two reasons... One, some factors, like the NAO-Atlantic SST connection, are more useful further back in the year, near the hurricane season; use of this parameter helps us capture that. Second, I perform Atlantic tropical cyclone seasonal predictions and am well aware of their shortcomings; and they're based largely on similar parameters used for the winter season prediction. Clearly, "something" is not being captured... this is an attempt to do so. Nonetheless, the numbers don't lie. This is NOT, generally, a good predictor. So, it is given a low weight.
What does all this add up
to? Well, here's how the seasons stack up in each of the analog
factors. I'll simply list out the analog seasons referencing them
by their lead year (for example, the winter of 2003-2004 will be listed
as 2003, since that's when the analog data is from)...
Arctic Oscillation: 1950,
1951, 1952, 1957, 1958, 1959, 1965, 1972, 1974, 1976, 1979, 1980, 1984.
Nino3.4 SST Anomaly: 1951, 1953, 1957, 1963, 1968, 1969, 1976, 1977, 1979, 1986, 1990, 1991, 1993, 1994, 2003.
N. Atlantic SST Anomaly: 1952, 1955, 1958, 1962, 1963, 1966, 1969, 1979, 1980, 1987, 1995, 1997, 1998, 2003.
QBO: 1953, 1955, 1957, 1961, 1964, 1967, 1969, 1973, 1978, 1980, 1986, 1990, 1993, 1995, 1997, 2002.
NAO: 1951, 1957, 1961, 1962, 1965, 1970, 1973, 1974, 1976, 1979, 1980, 1984, 1987, 1996, 1997, 2001, 2002, 2003.
PNA: 1951, 1956, 1965, 1972, 1973, 1974, 1975, 1982, 1983, 1985, 1989, 2000.
Hurricane Season: 1950, 1953, 1964, 1969, 1988, 1995, 2000, 2002.
After "scoring" based on
the weight of each analog, we were left with a ranking of the
seasons. The scoring panned out in a most convenient
manner. There was a bit of a break at the score of nine
"points". Three seasons earned a 10, while five earned an
eight. Only one earned a nine. Meanwhile, a neat and clean
10 analog seasons scored a 10 or higher. So, the data provided us
with an easy split on what to include and what not to include.
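The scoring described above can be sketched in a few lines of Python. This is only a hypothetical illustration of the mechanism: the parameter weights and the example matches are placeholder guesses of mine, not the actual values used; only the cutoff of 10 "points" comes from the narrative.

```python
# Hypothetical sketch of the weighted analog scoring described above.
# The weights are illustrative placeholders, not the real ones.
WEIGHTS = {"Nino34": 4, "AtlSST": 4, "AO": 3, "QBO": 2,
           "NAO": 1, "PNA": 1, "Hurricane": 1}

def score_seasons(matches, weights):
    """matches maps each parameter to the set of lead years that
    analog well for it; each match earns that parameter's weight."""
    scores = {}
    for param, seasons in matches.items():
        for season in seasons:
            scores[season] = scores.get(season, 0) + weights[param]
    return scores

def short_list(scores, threshold=10):
    """Keep seasons scoring at or above the cutoff, best score first."""
    kept = [(s, pts) for s, pts in scores.items() if pts >= threshold]
    return sorted(kept, key=lambda sp: (-sp[1], sp[0]))

# Tiny made-up example: 1951 matches two heavy parameters plus one
# light one, while 1950 matches a single heavy parameter.
example = {"Nino34": {1951}, "AtlSST": {1951, 1950}, "Hurricane": {1951}}
ranked = score_seasons(example, WEIGHTS)  # {1951: 9, 1950: 4}
```

With a threshold of 10, the "easy split" in the real data simply falls out of `short_list`: everything at 10 points or better stays, everything at nine or below drops.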
The only debate is the previously discussed 1997-98 season. Not
only is it on the cusp, yielding precisely a score of 10, but it is the
only "10" to be aided by the questionable NAO factor. One could
moan about 1953-54 as well, as it is a "10" aided by the debatable
Hurricane Season parameter. And this does do somewhat of a
disservice to the final "10", 1993-94, which got no assistance from any
of the dubious parameters... NAO, PNA or Hurricane Season. So, it
would be reasonable to remove 1953-54 and 1997-98 from the list.
However, for one thing, I'm using those three "dubious" parameters for
a reason. So, I'm not just going to dump their results. Also,
though low-weighted as well, the PNA and Hurricane parameters are weighted
more than the NAO. As a result, 1995-96 doesn't get the scrutiny
of 1997-98, because it came in at "11" rather than "10".
Nonetheless, it, too, would be off the list without these three
parameters ("Hurricane" in particular for 1995-96). So, we get
what we get and stick with it. The end scoring results in these
final top ten analogs, with their scores in parentheses: 1969-70 (16),
1979-80 (14), 1980-81 (12), 2003-04 (12), 1957-58 (11), 1963-64 (11),
1995-96 (11), 1953-54 (10), 1993-94 (10), 1997-98 (10).
The most serious "issue" I take with this analog methodology is that I do no exclusion, but merely inclusion. That is, I dole out points based on how well a season matches (analogs) with each parameter. I do NOT deduct if the analog season is starkly different. I should, but have simply not developed a satisfactory methodology for this. But, let's just give a simplified example and use our resident red-headed stepchild analog... 1997-98. The Nino3.4 and Atlantic SST anomalies are the two most critical driving analog parameters. The 1997-98 analog season matched the Atlantic SST anomaly for this season, but was awarded no points for the Nino3.4 SST anomaly. Arguably, however, 1997-98 was such a POOR match in this critical parameter that perhaps it should have been DOCKED points. Specifically, the current Nino3.4 SST anomaly is running about 0.75C above normal and holding steady. Any season with a modest positive SST anomaly was awarded analog points. But seasons either significantly negative or massively positive should, perhaps, have points removed. In the case of 1997-98, it was the strongest El Nino ever (1982-83 ranked close, but not quite there). So, while its anomaly was positive, it was obscenely so. Ideally, this should damage the analog score for 1997-98. Other than something very simplistic, I've not really established an acceptable way to do this.
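One "very simplistic" docking scheme of the sort alluded to might look like the sketch below. The tolerance and the size of the dock are entirely hypothetical choices of mine; the anomaly figures in the example come from the discussion here and in issue #6 later on (a baseline near +0.75C, with 1997-98 deviating by roughly 1.6C and 1995-96 by roughly 1.3C).

```python
def outlier_penalty(season_anom, baseline_anom, tolerance=1.4, dock=2):
    """Dock points when a season's parameter anomaly strays from this
    year's baseline by more than the tolerance. Both the tolerance and
    the dock are hypothetical placeholders; this is the crude kind of
    scheme discussed in the text, not an established method."""
    return -dock if abs(season_anom - baseline_anom) > tolerance else 0

baseline = 0.75  # this season's approximate Nino3.4 SST anomaly (C)
print(outlier_penalty(baseline + 1.6, baseline))  # 1997-98: docked
print(outlier_penalty(baseline + 1.3, baseline))  # 1995-96: spared at this tolerance
```

Note how sensitive the outcome is to the arbitrary tolerance: nudge it below 1.3 and 1995-96 gets docked too, which is exactly the sort of unsatisfying knob-turning that has kept a penalty step out of the methodology so far.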
The table below shows the
seasonal snowfall numbers for these analog years, as well as the mean
for the analog seasons, the median, the weighted mean, and the 54-year
mean seasonal snowfall. This is done for a selection of sites across
the Northeast U.S. Values with a blue background indicate snowfall for
that season was more than 25% above the 54-year mean; orange
backgrounds indicate seasonal snowfall less than 25% below the
mean. Note that I no longer calculate the weighted mean.
There is a bit too much subjectivity in the analog weighting to make
this useful. Clearly, though, there is some basis for the
weights. As such, a straight-up "mean" takes no account of the
spread in weights. Nor does the median. So, instead, we do
something quite simple... Calculate the total analog mean, then the top
five analog mean, then the top three analog mean. Then, as the
"final analysis" value, average those three means. This,
effectively, performs a much more simplistic weighting scheme,
something far more objective as it is less impacted by the individual
analog weights. I've also included on this table a percent
deviation; that is the percent by which the "final analysis" deviates
from the 54-year mean.
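The "final analysis" computation just described is easy to state concretely. A minimal sketch follows; the snowfall figures in the example are invented for illustration, not taken from my table.

```python
def final_analysis(analog_snow, longterm_mean):
    """analog_snow lists seasonal snowfall totals (inches) for the
    analog seasons, best-ranked first. Returns the "final analysis"
    (the average of the full analog mean, the Top 5 mean, and the
    Top 3 mean) and its percent deviation from the long-term mean."""
    mean = lambda xs: sum(xs) / len(xs)
    full = mean(analog_snow)
    top5 = mean(analog_snow[:5])
    top3 = mean(analog_snow[:3])
    final = (full + top5 + top3) / 3.0
    pct_dev = 100.0 * (final - longterm_mean) / longterm_mean
    return final, pct_dev

# Invented example: ten analog totals for a site whose 54-year mean
# is 20 inches. The best-ranked analogs come first in the list.
totals = [30, 10, 14, 26, 20, 22, 18, 24, 16, 20]
value, deviation = final_analysis(totals, 20.0)
```

Because the Top 3 seasons appear in all three means, the scheme effectively weights the best analogs most heavily while staying free of the subjective per-analog weights.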
[Table: snowfall totals for each analog season at each site, with the
Analog Mean, Top 5 Mean, Top 3 Mean, "Final Analysis", and percent
deviation from the 54-year mean.]
My forecast is actually going to differ rather notably from the above table. It takes some serious "problems" for me to deviate from the data. This methodology has worked reasonably well since its inception. Throwing it out is dangerous. And, truly, I am not suggesting a complete toss-out of the methodology. Rather, there are simply some key factors, almost all weighing in the same direction, that are potentially contaminating the final analysis. The core analog methodology remains sound. Here are several key points referencing the table above and related to the concerns I'm expressing:
1) As mentioned time and time again, 1997-98 is, for many reasons, far and away the most "suspect" analog. While it has a low weight in the final analysis, it is so extreme that it remains potent. Take, for example, DCA. Without 1997-98 the "Analog Mean" jumps up to 22.0". This pushes the final analysis up to 17.7". This is not a massive difference. However, for such a low-weight analog, it is significant. Plus, it is compounded by the issues below.
2) Objectivity, though good in principle, can sometimes corrupt the data. Specifically, I opted to use the "Top 3" as an arbitrary, clean, objective category. Well, 2003-2004, the 4th analog, rates almost identically to 1980-81. It is almost random that it comes later. These could easily be swapped, resulting in a major impact as 2003-04 was snowier at every station than 1980-81. Or, if we did "Top 4" to include both (rather than randomly swapping), 9 out of the 12 "Top 4" values would increase from the "Top 3", resulting in a higher "Final Analysis" at 75% of the sites.
3) The same problem occurs with 1963-64 and 1995-96 compared to 1957-58. The scores on these three seasons were nearly identical. Having only 1957-58 make the "Top 5" list is almost purely random. Both 1963-64 and 1995-96 are snowier than 1957-58 at half the locales; and at only one are both seasons below 1957-58 (Rochester). Moreover, both seasons, were they to be included in the "Top 5", exceed the current "Top 5" values at NINE of the 12 locations. Like point #2 above, this is a random, statistically insignificant factor forcing the "Final Analysis" lower. The only way this factor could yield a LOWER total would be if we solve this by eliminating 1957-58 from the "Top 5" and making it a "Top 4". In doing this, we'd mimic the solution for #2 above. And, then, we'd have only two numbers to average for the "Final Analysis"... the "Analog Mean" and a "Top 4". Using the DCA example again... the "Top 4" comes in at 12.8", yielding a "Final Analysis" of 16.3". So, there is a way to apply this correction and yield a DECREASE in the "Final Analysis". But, this truly "flattens" the weighting (using only two means). Therefore, in any reasonable analysis, this almost random inclusion/exclusion in the "Top 5" (and "Top 3", see point #2) yields an artificially decreased "Final Analysis".
4) This is the first season I've seen in which the frequency distribution relative to normal deviates significantly from the means and "Final Analysis". Here's what I mean... Site-by-site there is an even split in the Final Analysis' deviation from the norm: 6 sites are above normal, 6 sites are below normal. Yet, the frequency distribution is much more clear cut. Amongst all analog seasons for all sites, 69 analog seasons saw above normal snowfall, leaving only 51 below normal. Clearly, the distribution isn't overwhelming, but it is rather clear. It becomes even more so when one considers point #1 above. Excluding 1997-98, 66 analog seasons are above normal while only 42 are below normal.
5) Similar to #4 above,
every site except one (Boston) has an equal or greater number of
significantly above normal (light blue) analog snowfall seasons than
significantly below normal (light orange). This is NOT the same
thing as #4. In fact, based on #4 one would expect the low-snow
seasons to be more significantly deviating, thereby driving the Final
Analysis lower than the frequency distribution. This is not the
case. There are far more significantly above normal snowfall
seasons than below normal. What this means is that it is the
weighting methodology which is driving the Final Analysis down.
This is, of course, of no surprise if one examines the full mean, the
"Top 5" and the "Top 3". At every site but one (Pittsburgh) the
"Top 3" is the lowest value. At first blush, this is
fine... We'd prefer weighting, to give our better analogs more
weight than the questionable ones. But issue #3 addresses a
problem with the weighting which is artificially pulling the Top 3, Top
5 and Final Analysis downward. The frequency distributions in
this bullet and #4 above simply drive home this inconsistency.
6) The lack of
"penalizing" analog seasons with poor correlations to some key
parameters is also aiding low-snow seasons. Examining the Nino3.4
SST anomaly, 1997-98 is likely the only season which would get
penalized for correlating so poorly. To be fair, second in line
is the high-snow 1995-96 season. Still, its analog deviation is
somewhat less than 1997-98's, making it less likely to be subject to
penalty (1995-96's Nino3.4 SST anomaly deviates from this season's
baseline by 1.3C; 1997-98's anomaly deviates from the baseline by a
whopping 1.6C... a rather modest difference, but, again 1997-98 is
clearly worse). For the Atlantic SST anomaly, no season appears a
poor enough analog to be penalized. The 1993-94 season is close,
as it had a negative SST anomaly. However, it was only very
slightly negative; the only reason it even garners consideration for
penalty is because this season's baseline anomaly is so significantly
positive. Nonetheless, 1993-94 probably does fall short of being
worthy of penalty. So, once again, we're left with the same old
"red-headed stepchild"... 1997-98, a low-snow season on the brink of penalty.
7) Finally, as we see with
the Atlantic SST anomaly and the NAO (at least for its shorter range
predictions), Atlantic "functions" are a serious driving force for our
winter weather. So, one must consider the following question...
Are there any other cyclical, predictable, climatological phases
impacting the Atlantic on a large scale that we are aware of? The
answer is, "yes"... the thermohaline cycle. And this has been
linked to tropical cyclone activity. But it is such a general
process (impacting energy transport) that it is difficult to imagine it
being wholly irrelevant to winter weather. Very roughly, the
Atlantic Thermohaline Cycle was in one phase during the 1940s, 1950s
and 1960s; it then switched phases for the 1970s, 80s and 90s, and is
now switching back. Leaving the "fringe" years alone, since they
may be borderline debatable, and focusing on only the core years
(especially since using this as an analog factor is somewhat
questionable in the first place) the cycle would favor seasons from
about 1950-51 through 1959-60; in turn, it would "dis-favor" seasons
from 1980-81 through 1989-90. Looking at the analog table, we see
two seasons which would get aided, 1953-54 and 1957-58. While
1953-54 is low snow, 1957-58 is high enough such that the mean of these
two seasons shows generally above normal snowfall. Meanwhile, the
one season to be penalized from the thermohaline cycle is 1980-81; it
is a decidedly low-snowfall season. And, interestingly, the only
season missing the "core bins" by only one year is 1979-80... another
low-snow season barely missing getting penalized. So, this
additional potential analog parameter clearly favors heavier snowfall.
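Treated as an analog parameter, the "core bin" check just described is trivial to express. The sketch below is hedged accordingly: the point value is a placeholder, while the decade bins follow the phases described above (the 1950s matching the current phase, the 1980s opposing it, fringe years left alone).

```python
def thermohaline_adjust(lead_year, points=1):
    """Favor lead years in the 1950s core (same thermohaline phase as
    the current one), penalize the 1980s core (opposite phase), and
    leave fringe years untouched, as discussed in the text. The size
    of the adjustment is a hypothetical placeholder."""
    if 1950 <= lead_year <= 1959:
        return +points  # 1950-51 through 1959-60: favored
    if 1980 <= lead_year <= 1989:
        return -points  # 1980-81 through 1989-90: dis-favored
    return 0            # fringe/debatable years: no adjustment

# From the analog list: 1953 and 1957 gain, 1980 loses, and 1979
# misses the penalized core by a single year.
```

This mirrors the narrative exactly: 1953-54 and 1957-58 get the boost, 1980-81 takes the penalty, and 1979-80 escapes by one year.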
Now, what can we do to
account for all of these relatively subjective assessments in the most
objective way possible? First, Issue #6 gives us good reason to
eliminate the already borderline 1997-98 season. So, we'll just
drop it. Issues #2 and #3 have more to do with simple data
handling. These can be alleviated by allowing Issue #7 as an
analog parameter. Giving this a light, but significant weight
would move 1957-58 beyond 1980-81 and 2003-04. This gives us a
"clean" Top 3 (i.e., no essentially "tied" seasons getting mixed
in). And though it is very "tight", 1953-54 would move JUST ahead
of 1963-64 and 1995-96. As 1980-81 gets penalized, 1953-54 would
also surpass that season. This also gives us a "clean" Top
5. While these adjustments are all somewhat subjective, it at
least positions us to re-analyze the data in an objective way.
Below is the previous table with the proper adjustments made...
[Table: the adjusted analog-season snowfall totals by site, with the
Analog Mean, Top 5 Mean, Top 3 Mean, "Final Analysis", and percent
deviation.]
This table seems far more realistic. The statistics are "cleaner", and the deviation from normal matches the frequency distribution. Also, note the vastly improved agreement between the various means. The full mean is significantly different from the "Top 3" and "Top 5" at BOS and PVD and, to a lesser degree, at NYC and PIT, but that's it. In the original table almost every site had some significant differences amongst the means... which makes for a rather low-confidence forecast. As such, confidence is reasonably high (as high as can be expected with seasonal forecasting!) with this updated table. There is certainly room for error given some of the subjectivity, and this error could go either way. For example, favoring lower snow totals, perhaps 1997-98 should have just been left in, albeit with a low weight. And, favoring higher snow totals, 1979-80 just barely missed the core timeframe penalized for a thermohaline phase contrary to the current one. Since that "core" was quite tight, perhaps 1979-80 should also have been penalized, which would push it down just below 2003-2004; in turn, this would seriously crank up the "Top 3" snowfall averages. Check out a particularly low-snow locale for 1979-80... Boston. Their "Top 3" was 35.4"; dropping 1979-80 down to 4th place puts their "Top 3" at 44.3"... that's enough to increase the "Final Analysis" by 3", to 43.0", making the difference (albeit slightly) between a below normal and an above normal snowfall season. But, since these errors could go either way, I'll just make you aware of them and stick with the analysis.
Incidentally, we've played
with a lot of numbers and analogs and such, but what is PHYSICALLY
going on here? What is the expected pattern that will create
generally above normal snowfall ("generally" because most of New
England comes in near normal in the above table)? Well, it is
quite simple... The modest El Nino should result in some generally
moist flow into the Southeastern U.S. With any storms amplifying
a trough, this moisture will be drawn northward into the Mid-Atlantic
and Northeast. Classic during El Nino seasons are infrequent, but
rather potent, events. Already, as of mid-November, we're seeing
that classic pattern. At my locale, we've already exceeded our
normal precipitation for the month, and the month is only half
over. Incredibly, we've had only two measurable precipitation
events all month. Very typical. Meanwhile, when you hear
"El Nino" you probably think... ahhh, too warm, no snow. That is
certainly generally true. However, this is a rather modest El
Nino, not capable of driving up such a ridge in the Southeastern U.S.
that cold air can't even penetrate the Northeast. We're likely to
see some significant swings, as retreating cold air will certainly open
the door for El Nino induced warm-ups all the way through the
Northeast. But any driving Arctic air will easily win out over
the warm air. And the correlation between the Atlantic SST
anomaly and the NAO strongly suggests significant periods of negative
NAO phases. For those unfamiliar with what that entails, it
typically means a ridge of high pressure aloft sets up in a "blocking"
fashion over the Atlantic into Greenland. The result is that the
upstream trough (over central and eastern North America) gets "stuck",
dumping cold air and possible winter storms into the eastern part of
the country. This probably has snow lovers drooling. But,
keep in mind, the El Nino is just strong enough to occasionally drive
up some warm air. As a result, I do not anticipate this being a
blockbuster snowfall season for the Northeast. Indeed, as noted,
the analogs work out to "near normal" over most of New England.
Certainly, for snow lovers, there will be "opportunities" to look
forward to. Some seasons, like the much-maligned 1997-98, left
nothing to watch... warm and rainy all season. This season is not
likely to be uneventful. I expect infrequent, but significant
precipitation events, with temperatures running slightly below normal,
but with wide variations.
The image below shows my expected seasonal snowfall amounts across the Northeastern U.S. for the winter of 2004-2005.
Last season, in addition to the above image, I provided a "percent deviation from the average" map. I opted out of that this season. In some ways it does provide better guidance in terms of whether there'll be more or less snow than usual. But, in the end, one receives amounts of snow, not percent deviations. So, I felt, rather than confuse matters, just put the snowfall map out there and let folks examine it as is. Moreover, some of the Mid-Atlantic deviations may appear rather deceptive this season; because of their relatively low averages, a somewhat modest 5" (roughly) of additional snow expected this season results in rather large percent deviations (30% or more). That can be misleading. So, if you're interested in the "deviations", I have included them for the specific sites in the tables above. So, there you have it; there's the 2004-2005 seasonal snowfall prediction for the Northeastern U.S.