Why (Almost) Everyone Got Snowpocalypse Wrong
This morning, New York Governor Andrew Cuomo announced that the Great ‘15 Blizzard had spared much of the state, especially New York City. In Central Park, only 8 inches fell, and in the nearby Hudson Valley accumulation was little more than 4 inches. Scant amounts of snow continue to fall in those areas, but the storm ended up passing 25 miles to the east of the city, sparing it the brunt of its force.
While New York City was spared, the storm is still pummeling parts of Long Island, northeastern Westchester County, Connecticut, Rhode Island and Massachusetts.
So was the blizzard forecast exaggerated, or was the science bad? Neither really. One model actually got the storm right. Unfortunately, it wasn’t the one that meteorologists used in their forecasts.
On the East Coast, winter storms typically form in the Atlantic off the coast of the Carolinas and move up the coast; they are known as Nor’easters. Because these storms form over water instead of land, they’re not easy to forecast. That’s why major snowstorms sometimes hit us by surprise, and other times dire weather predictions don’t pan out.
In this case, three models were consulted: the European (ECMWF) model, the North American Mesoscale (NAM) model, and the Global Forecast System (GFS) model. The forecasts that we all saw were based on the European model, which has been the “go-to” model for the National Weather Service for such storms, as it has been fairly accurate in the past. The NAM model essentially concurred with the ECMWF model, giving those two forecasts a lot of weight in the meteorological community.
But one model did get the forecast right. The revamped GFS model, which uses the latest technology but is relatively untested, showed divergent results. It showed a distinctly more eastern track for the storm and has gotten snowfall totals and wind speeds right so far. It also accurately predicted only 6-8 inches of snow for New York.
An older version of GFS was set aside by the meteorological community after it showed software hiccups and many “biases,” and ultimately failed to provide accurate data for Hurricane Sandy in 2012. That failure prompted the federal government to fund a major upgrade to the system, which was put online just this month.
Without getting highly technical, ECMWF and NAM are highly dependent on regional satellite and other meteorological data, while the revamped GFS model has finer “spatial resolution” and draws on many more diverse data points to arrive at its forecasts. As a result, a lot of local atmospheric phenomena are detected by GFS that are smoothed over by the lower-resolution models. GFS also runs on a supercomputer that has about 16% more computing power than the computers that run ECMWF.
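The resolution point can be seen in a toy sketch (the snow band, grid spacings, and numbers here are invented for illustration, not real model output): a narrow but intense feature that a fine grid captures can slip entirely between the sample points of a coarse grid.

```python
# Toy illustration of spatial resolution: a sharp local "snow band"
# sampled on coarse vs. fine grids. Invented numbers, not real NWP code.

def snow_band(x):
    """Idealized snowfall (inches) with a narrow, intense band near x = 47 miles."""
    return 30.0 if 46 <= x <= 49 else 5.0

def sample_max(grid_spacing, domain=100):
    """Heaviest snowfall 'seen' by a model sampling every grid_spacing miles."""
    return max(snow_band(x) for x in range(0, domain + 1, grid_spacing))

coarse = sample_max(25)  # low-resolution model: samples at 0, 25, 50, 75, 100
fine = sample_max(1)     # high-resolution model: samples every mile

print(coarse, fine)  # prints: 5.0 30.0
```

The coarse grid never lands inside the band, so it reports a quiet 5 inches everywhere; the fine grid catches the 30-inch band. Real models are vastly more complex, but the smoothing effect of low resolution works the same way.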
GFS has been described as like having a high-definition television as opposed to an old standard-definition TV. But GFS, which was created by the National Centers for Environmental Prediction, is so new that meteorologists were hesitant to default to its data. After this storm, that may change.
Some meteorologists are actually excited about the new GFS model, and after its performance with this storm it may get a stronger look from the National Weather Service.
Keep in mind that meteorology is a unique science, probably the only one where scientists produce and test a new theory every day. Weather patterns can still baffle meteorologists, by their own admission. They understand that the atmosphere in any given place can be capricious and unstable from minute to minute. And as they explain, any tiny disturbance below the limits of their observations can grow into, or alter, larger weather systems in a short period of time. That's why more "high-definition" prediction models like the new GFS will improve weather prediction in the near future.
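That sensitivity to tiny disturbances can be demonstrated with a classic toy chaotic system, the logistic map. This is a stand-in for the atmosphere, not a weather model, and the starting values are invented: two initial states differing by one part in a million quickly stop tracking each other.

```python
# Toy demonstration of sensitivity to initial conditions, using the
# logistic map (a standard chaotic system) as a stand-in for the
# atmosphere. Not a weather model; values are invented for illustration.

def iterate(x, steps, r=4.0):
    """Repeatedly apply the chaotic logistic map x -> r * x * (1 - x)."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = iterate(0.400000, 50)  # the "true" initial state
b = iterate(0.400001, 50)  # same state plus a tiny unobserved disturbance

# After 50 steps the two trajectories no longer resemble each other.
print(a, b)
```

The same principle is why a disturbance too small for any instrument to observe can, over a day or two, reshape a storm track by dozens of miles.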