Improving Demand Forecasts for an Electrical Utility

Electrical load demand is measured as a time series, and forecasting consists of predicting the expected demand at some hourly point in the future from recent load demand and weather patterns observed just prior to the present time. In today’s competitive power markets, electricity is bought and sold as a commodity at market prices, so over- or under-contracting errors can dramatically increase costs by forcing the utility to sell or buy power at a loss on the balancing market.

Consequently, forecasting customer load demand has become integral to the planning and operational activities of electrical utilities. Because the financial penalties for forecast errors are so high, research that reduces the Mean Absolute Percent Error (MAPE) by even a fraction of one percent has merit.
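As a reference point for the discussion that follows, MAPE is the mean of the absolute forecast errors expressed as percentages of the actual values. A minimal sketch (the load figures below are illustrative, not from the study):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percent Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Example: each hourly forecast misses a flat 1000 MW load by 10 MW (1%)
print(mape([1000, 1000, 1000], [1010, 990, 1010]))  # → 1.0
```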


For a 24-hour forecast horizon, AlgoTactica achieved a 0.18% reduction in annual MAPE by designing specialized time series filters. These filters are convolved with the time series of an environmental variable commonly used as an input to neural-network load prediction models. Industry experts estimate that a 1% decrease in MAPE for a utility with a 1-gigawatt peak load can produce $300,000 in annual savings. The utility in this study has a peak load exceeding 20 gigawatts, so the annual savings from a 0.18% reduction in MAPE can be roughly estimated as $300,000 × 0.18 × 20 = $1,080,000.
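The savings estimate above, together with an illustrative stand-in for the filtering step, can be sketched as follows. The actual AlgoTactica filter designs are not specified here, so a simple 5-hour moving-average kernel and a synthetic temperature series are used purely as placeholders to show what "convolving a filter with an environmental time series" means mechanically:

```python
import numpy as np

# Back-of-envelope savings estimate from the study:
# $300,000 per 1% of MAPE per GW of peak load, 0.18% reduction, 20 GW peak.
savings_per_pct_per_gw = 300_000
mape_reduction_pct = 0.18
peak_load_gw = 20
annual_savings = savings_per_pct_per_gw * mape_reduction_pct * peak_load_gw
print(f"Estimated annual savings: ${annual_savings:,.0f}")  # → $1,080,000

# Illustrative stand-in for the (unspecified) AlgoTactica filters:
# convolve a synthetic hourly temperature series with a low-pass kernel.
rng = np.random.default_rng(0)
hours = np.arange(24 * 7)
temperature = 10 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
kernel = np.ones(5) / 5                       # hypothetical 5-hour moving average
smoothed = np.convolve(temperature, kernel, mode="same")
```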


At the start of this study, two neural networks were designed for load prediction. Inputs to the first were a baseline feature set normally used in load forecasting; the second used the same features, except that the environmental component had been processed by the AlgoTactica filters. For each network, hidden-layer sizes from 20 to 100 neurons were trained for a 24-hour forecast, using an early-stopping strategy.
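A sweep of this kind can be sketched with scikit-learn's `MLPRegressor`, which supports early stopping on a held-out validation fraction. The study does not name its network library or exact feature layout, so the feature matrix below (24 hourly load lags plus one environmental variable) and the coarse size grid are illustrative assumptions only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Hypothetical feature matrix: 24 hourly load lags plus one weather feature,
# predicting load 24 hours ahead. Shapes and the target rule are synthetic.
X = rng.normal(size=(2000, 25))
y = X[:, :24].mean(axis=1) + 0.3 * X[:, 24] + rng.normal(0, 0.05, 2000)

best_size, best_score = None, -np.inf
for size in range(20, 101, 20):          # coarse sweep of hidden-layer widths
    net = MLPRegressor(hidden_layer_sizes=(size,),
                       early_stopping=True,        # hold out data to stop early
                       validation_fraction=0.1,
                       max_iter=500, random_state=0)
    net.fit(X, y)
    score = net.score(X, y)              # R^2 on the training data
    if score > best_score:
        best_size, best_score = size, score
print(best_size, round(best_score, 3))
```

In practice the winning size would be chosen on a separate validation set, and the final comparison made on unseen test data, as the study describes.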

When verified against previously unseen test data, the best final designs yielded a MAPE of 1.47% with a 63-node hidden layer for the feature set processed by the AlgoTactica filters, versus 1.65% with 98 nodes for the baseline. The AlgoTactica set therefore achieves greater accuracy while requiring a less complex network.

The PDF of Absolute Errors shows the distribution of errors in a 24-hour forecast for each of the 63- and 98-node neural networks, based on a common one-year testing data set. The PDF Difference curve is the result of subtracting the 98-node network's error distribution from that of the 63-node network that used the AlgoTactica filters. It reveals that for absolute errors in the range of 0 to 180, our feature set produces a significantly higher percentage of these smaller values than the baseline set, while for the range of 180 to 950 the baseline set produces a much higher percentage of the larger values. Overall, then, the feature set engineered with the AlgoTactica filters produces smaller prediction errors.

The MAPE Values by Day of Week graph shows the MAPE obtained for a 24-hour prediction from the validation data set, for each day of the week, averaged over a 12-month period. Although MAPE varies with the forecast day, our feature set always produces lower values than the baseline set.
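The two diagnostics described above are straightforward to reproduce for any pair of models. The sketch below uses synthetic hourly errors (the real study's error series are not given here) to show the mechanics: a binned density difference over the error range, and percent errors grouped by weekday:

```python
import numpy as np

rng = np.random.default_rng(1)
n_hours = 365 * 24

# Hypothetical hourly absolute forecast errors (MW) for the two networks.
err_filtered = np.abs(rng.normal(0, 120, n_hours))   # 63-node, filtered features
err_baseline = np.abs(rng.normal(0, 150, n_hours))   # 98-node, baseline features

# PDF Difference: baseline density subtracted from the filtered-feature density.
bins = np.linspace(0, 950, 20)
pdf_f, _ = np.histogram(err_filtered, bins=bins, density=True)
pdf_b, _ = np.histogram(err_baseline, bins=bins, density=True)
pdf_diff = pdf_f - pdf_b      # positive bins: filtered set has more such errors

# MAPE by day of week: group hourly percent errors by weekday and average.
actual = rng.uniform(8000, 20000, n_hours)           # synthetic actual load (MW)
pct_err = 100 * err_filtered / actual
weekday = (np.arange(n_hours) // 24) % 7
mape_by_day = [pct_err[weekday == d].mean() for d in range(7)]
```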
