Fig. 11.32 illustrates precipitation forecasting performance. The reported statistics are
the equitable skill score (ESS) and the areal bias. ESS ranges from 1.0 for perfect
performance to 0.0 for a forecast with no skill, that is, one no better than a random
predictor. Areal bias, not to be confused with the magnitude biases used above for the
state variables, equals 1.0 when the model covers the correct fraction of the domain with
precipitation at or above a given threshold; values above 1.0 indicate overforecasting of
areal coverage.
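The section defines ESS and areal bias only qualitatively. As a minimal sketch, the functions below assume the common Gilbert (equitable threat) formulation for the skill score and the standard frequency-bias definition of areal coverage, both computed from a 2x2 contingency table at a given precipitation threshold; the function names and example counts are illustrative, not values from the demonstration.

```python
def equitable_skill_score(hits, misses, false_alarms, correct_negatives):
    """ESS: 1.0 = perfect forecast, ~0.0 = no better than random chance.
    Assumes the Gilbert (equitable threat) score formulation."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else 0.0

def areal_bias(hits, misses, false_alarms):
    """Areal bias: 1.0 = forecast area matches observed area at the threshold;
    values above 1.0 indicate overforecasting of areal coverage."""
    observed = hits + misses
    return (hits + false_alarms) / observed if observed else float("inf")

# Illustrative example: a forecast that covers too much area at one threshold
print(equitable_skill_score(hits=40, misses=10, false_alarms=30, correct_negatives=920))
print(areal_bias(hits=40, misses=10, false_alarms=30))  # > 1.0, i.e. overforecast
```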
In the 3-h precipitation forecasts (Fig. 11.32a), both MDSS models have ESS values equal
to or greater than those of the Eta model, and both have areal bias values closer to
unity at all precipitation thresholds. This is the expected result of the "hot start"
diabatic initialization technique implemented in the MDSS models, and it is consistent
with the results of the 2003 MDSS demonstration and of several other tests FSL has
conducted in summertime weather applications. In the 6-h results (Fig. 11.32b), the
performance of the local models relative to the Eta model shows a similar signal, but the
benefits of the FSL initialization method begin to "wear off".
Results and Recommendations: The mesoscale model configuration used for
the 2004 MDSS field demonstration outperformed the NCEP Eta model for
quantitative precipitation forecasting. The hot start and time-lagged
ensemble methods also provided improvements to the forecast product.
Given the impact that precipitation forecasts have on winter road
maintenance, consideration should be given to the use of mesoscale models in
operational versions of the MDSS.
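The report does not detail how the time-lagged ensemble was constructed. As a minimal sketch, one common approach combines forecasts from successive model initializations that are all valid at the same time, giving heavier weight to the more recent runs; the function, weighting scheme, and numbers below are assumptions for illustration, not the MDSS configuration.

```python
import math
from typing import Sequence

def time_lagged_ensemble(forecasts: Sequence[float],
                         lag_hours: Sequence[float],
                         e_folding: float = 6.0) -> float:
    """Weighted mean of forecasts all valid at the same time, from runs
    initialized lag_hours before the newest run (lag 0 = newest).
    Older runs receive exponentially smaller weights (assumed scheme)."""
    weights = [math.exp(-lag / e_folding) for lag in lag_hours]
    return sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)

# Illustrative example: 3-h precipitation amounts (mm) from the newest run
# and two progressively older runs valid at the same forecast time
print(time_lagged_ensemble(forecasts=[2.4, 1.8, 3.1], lag_hours=[0, 3, 6]))
```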