How to prepare for the future by aggregating forecasts

Gilles Stoltz, CNRS Research Professor of Economics and Decision Sciences - August 25th, 2016

Our ongoing machine-learning research at HEC Paris helps managers paint a clearer vision of the future. By programming a computer to aggregate lots of different forecasting methods – of which there are many – managers are no longer faced with the difficult decision of which one to choose.


Gilles Stoltz is a researcher for CNRS at the GREGHEC laboratory. In addition, he is a professor at HEC Paris, where he teaches the foundations of statistics in the bachelor (...)


Whether used to predict weather patterns, determine how exchange rates might fluctuate, or in which direction consumer preference might evolve, accurate forecasts are key to success in business. In many industries, sufficiently large companies hire data scientists to provide such predictions while others buy forecasts as a service. 

But while decision-makers would often like their forecasting experts to come up with one single accurate prediction, in reality even a single expert can think of several forecasting methods, each of which usually requires tuning and some technical choices. This means companies are spoiled for choice when deciding which method to use. 

Such methods include traditional statistical models, whereby past data is modeled to come up with predictions. Then there are machine learning approaches, where an algorithm automatically uncovers hidden patterns in the data without being told directly where to look. Within machine learning there are also different methodologies (for example 'random forests', which combine many decision trees, or 'deep learning', which uses multi-layered transformations of the data). Managers, when faced with a whole cloud of forecasts and what they sometimes see as too much choice, perceive this as a problem. 

But there is a solution. As in life outside of business, diversity can be an opportunity: instead of selecting one expert or one forecasting technique at all costs, there are ways to aggregate the cloud of prognoses into a single meta-forecast. This aggregation (itself a form of machine learning) can be conducted by computers and is both automated and safe.  

At HEC Paris we have developed forecast aggregation tools to help businesses make decisions. So far we have used these in various industries: to forecast exchange rates at a monthly horizon and with a macro-economic perspective [1]; to forecast electricity consumption [2] with France's largest electricity provider, EDF; to forecast oil production (ongoing); and even to forecast air quality [3]. 

So how does it work? First, we design the automatic processing of expert forecasts: we create formulae and algorithms that associate each expert with a weight and let the weights vary over time depending on the past performance of each individual. Once programmed, a machine then uses the algorithms to perform the desired aggregation automatically, as a black box, without requiring human supervision. 
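To make the idea concrete, here is a minimal sketch of one standard weighting scheme of this kind, the exponentially weighted average forecaster: each expert's weight decays exponentially with its cumulative past error. The function name, the squared-error loss, and the learning rate `eta` are illustrative assumptions, not the exact algorithms tuned in the studies cited below.

```python
import numpy as np

def aggregate_forecasts(expert_forecasts, outcomes, eta=0.5):
    """Exponentially weighted average aggregation (illustrative sketch).

    expert_forecasts: array of shape (T, K), forecasts of K experts over T rounds
    outcomes: array of shape (T,), the values actually observed
    Returns the sequence of T aggregated (meta-)forecasts.
    """
    T, K = expert_forecasts.shape
    weights = np.ones(K) / K              # start from uniform weights
    cum_loss = np.zeros(K)                # cumulative loss of each expert
    aggregated = np.empty(T)
    for t in range(T):
        # meta-forecast: weighted average of the experts' current forecasts
        aggregated[t] = weights @ expert_forecasts[t]
        # update each expert's cumulative squared-error loss...
        cum_loss += (expert_forecasts[t] - outcomes[t]) ** 2
        # ...and re-weight: poorly performing experts decay exponentially
        weights = np.exp(-eta * cum_loss)
        weights /= weights.sum()
    return aggregated
```

With two experts, one always right and one always wrong, the first meta-forecast is their plain average, but the weights quickly concentrate on the accurate expert.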

This way of aggregating expert forecasts performs well both in practice (i.e. when used for practical purposes) and in theory, and comes with what we call 'theoretical guarantees of performance', which is what makes it so safe. At HEC, as well as at other research centers around the world, we have produced aggregation techniques that perform almost as well as, say, the best expert, and even the best fixed (or linear) combination of experts. The important thing is to gather forecasts from a multitude of experts, which we believe is beneficial, as it increases the chance that at least one of them will be good.

Of course, aggregating forecasts does have its limitations. Firstly, constructing or obtaining expert forecasts that actually exhibit diversity is not always an easy task, as experts are often clones of each other and predict similar tendencies. Secondly, this black-box aggregation of expert forecasts from various sources (human experts, statistical models, machine learning approaches) does not attempt at all to model the underlying phenomenon, in strong contrast to more classical statistical or econometric approaches. With aggregation, all efforts are put into forecasting performance and nothing is invested in modeling. Of course, this works fine if the final decision-maker only wants to make good decisions and not to understand her/his environment!  
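The guarantees mentioned above are usually stated as regret bounds. A typical textbook form, for the exponentially weighted average of $K$ experts over $T$ rounds with a bounded convex loss $\ell$ (a standard statement in this literature, not necessarily the exact bound of the papers cited below), reads:

\[
\frac{1}{T}\sum_{t=1}^{T} \ell(\hat y_t, y_t)
\;\leq\;
\min_{k=1,\dots,K} \frac{1}{T}\sum_{t=1}^{T} \ell(f_{k,t}, y_t)
\;+\; O\!\left(\sqrt{\frac{\ln K}{T}}\right),
\]

where $\hat y_t$ is the aggregated forecast, $f_{k,t}$ the forecast of expert $k$, and $y_t$ the observed outcome at round $t$. The extra term vanishes as $T$ grows, which is the precise sense in which the aggregation performs "almost as well as the best expert", whichever expert that turns out to be.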


As we continue to develop this aggregation solution at HEC, on both the theoretical and practical sides, we have begun approaching various companies for R&D contracts and/or proofs of concept. Our aim is to look at business problems even more closely related to the ones studied at HEC Paris, such as forecasting sales volumes to manage the supply chain. Perhaps our work will help put decision-makers' minds at ease when they face what can be an overwhelming choice of forecasting options. 


[1] Christophe Amat, Tomasz Michalski, and Gilles Stoltz, Fundamentals and exchange rate forecastability with machine learning methods, 2015.

[2] Marie Devaine, Pierre Gaillard, Yannig Goude, and Gilles Stoltz, Forecasting electricity consumption by aggregating specialized experts; a review of the sequential aggregation of specialized experts, with an application to Slovakian and French country-wide one-day-ahead (half-)hourly predictions, Machine Learning, 90(2):231-260, 2013.

[3] Vivien Mallet, Gilles Stoltz, and Boris Mauricette, Ozone ensemble forecast with machine learning algorithms, Journal of Geophysical Research, 114, D05307, 2009.