From NCAR and UCAR Currents
Bob Henson | 30 December 2010 • Perspective is everything in weather prediction. We’ll never be able to forecast a winter storm or cold front more than a week or two in advance, or so we’re told by chaos theory. And we’d be foolish to expect a next-day outlook boasting pinpoint precision to a city block or to the nearest minute.
Still, it’s good to remind ourselves how far weather prediction has come in recent years. The ferocious winter storm that assailed the U.S. Atlantic coast this week offers a great case in point. The forecasts weren’t perfect by any means (and officials didn’t always act on them), but people who paid attention to the news knew several days in advance that heavy, wind-driven snow was quite possible on Sunday and Monday, 26–27 December, along and near the coast from Virginia to Maine. And that’s exactly what transpired.
Those planning travel during the post-Christmas crunch got plenty of notice that having a Plan B would be wise. The storm wreaked havoc on road, train, and air transport throughout the region, with as many as 10,000 flights reportedly cancelled. At New York’s Central Park, the snowfall of 20 inches (51 centimeters) was the sixth biggest in Manhattan’s 145 years of weather records, with massive drifts to boot. Norfolk, Virginia, notched 14.2 inches (36 cm), its third biggest storm total in more than a century of weather watching.
A smorgasbord of progress
Forecasts of storms like these are better than ever thanks to a variety of innovations over the last decade, many of them rooted in research at NOAA, NCAR, universities, and elsewhere. More information from satellites and other sources is making its way into computer forecast models. The models themselves are much more detailed. In turn, that improves their ability to capture the many forces in play as a winter storm takes shape, including the massive amounts of heat released as water vapor condenses to form raindrops and snowflakes.
For a good example of how forecasts went wrong prior to these improvements, let’s take a look at the infamous snowstorm of 24–25 January 2000. Only a few days earlier, NOAA’s National Weather Service had announced the installation of a new supercomputer dedicated to running daily forecast models. As an NWS announcement put it, the agency was on its way toward becoming “America’s no-surprise weather service.”
Then, with exquisitely perverse timing, the 24–25 January snowstorm reared its head. Just one day before the storm hit, the NWS’s Eta forecast model had called for the heaviest rain and snow to stay well offshore. Instead, the system ended up hugging the coast, with disastrous results. Rather than the flurries they’d expected, people in Raleigh, North Carolina, saw the city’s biggest snow ever: 20.3 inches (52 cm). And the Washington area—home to the new supercomputer—got more than a foot of snow after a mere inch had been predicted the evening before.
A group led by Fuqing Zhang (then at NCAR and Texas A&M University, now at Pennsylvania State University) analyzed this forecast fiasco in detail in a study published in 2002 in the journal Monthly Weather Review. Among other findings, the group discovered that the Eta model’s horizontal resolution—32 kilometers (20 miles) between grid points—couldn’t capture the moisture-related dynamics critical to the storm’s evolution.
When the team simulated the same storm using a higher-resolution NCAR/Penn State research model, they obtained a better forecast of the storm’s location and strength and, especially, its precipitation. The biggest benefits came when the grid spacing dropped to 10 km (6 mi); there was comparatively less improvement when the resolution tightened further to 3.3 km (2 mi).
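To get a feel for why forecasters can’t simply crank the resolution up indefinitely, here is a back-of-envelope sketch (not the actual configuration of the Eta or the NCAR/Penn State research model) of how computational cost scales with horizontal grid spacing: halving the spacing quadruples the number of horizontal grid points, and the shorter time step the finer grid requires roughly doubles the work again, so cost grows approximately as the cube of the refinement factor.

```python
# Rough scaling of forecast-model cost with horizontal grid spacing.
# Two horizontal dimensions contribute (old/new)**2 more grid points,
# and the stability-limited time step shrinks by roughly (new/old),
# giving a total cost multiplier of about (old/new)**3.
# This is an illustrative approximation, not a benchmark of any real model.

def relative_cost(dx_old_km: float, dx_new_km: float) -> float:
    """Approximate cost multiplier when refining grid spacing from dx_old to dx_new."""
    return (dx_old_km / dx_new_km) ** 3

# Grid spacings from the experiments described above:
for dx in (32.0, 10.0, 3.3):
    print(f"{dx:>5.1f} km grid: ~{relative_cost(32.0, dx):.0f}x the cost of the 32 km run")
```

By this rough measure, the 10 km run costs on the order of 30 times the 32 km run, and the 3.3 km run roughly 900 times—which is why the diminishing returns Zhang’s group found beyond 10 km matter so much in practice.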