Social scientists often do not have the benefit of being able to run or replicate experiments in order to generate new data and end up re-using the same datasets when evaluating the explanatory power of new models compared to older models. As pointed out by White (2000), such sequential testing of models on a fixed amount of data leads to the problem known as ‘data snooping’, in which the initially small probability of a poor model appearing good by random chance gets amplified by repeated testing. Ignoring this effect, or naively selecting the best-fitting model without further testing, can often lead to incorrectly identifying as ‘best’ a model that in fact has no real predictive power on the data. In order to help avoid this problem, White proposes a ‘Reality Check’ procedure which tests the null hypothesis that no model in a given collection outperforms a given benchmark model.
Hansen et al. (2011) offer a generalisation of the reality check, in the form of the Model Confidence Set (MCS) approach, which identifies the subset of models which have equal predictive power on some data and has the benefit of not requiring an a priori benchmark model. Starting with the full collection of models, the procedure successively eliminates the worst performing model until the null of equal predictive ability is no longer rejected at a given confidence level. The surviving subset of models makes up the MCS at that confidence level. The flexibility of the MCS procedure has made it very popular as a way of evaluating the forecasting ability of multivariate GARCH models of volatility, leading to numerous comparison exercises with relatively large collections of models (such as 600 models in Liu et al. (2015)) compared on a small range of data sets such as the S&P 500 index (for example in Neumann and Skiadopoulos (2013) or Wilhelmsson (2013)).
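The elimination logic described above can be sketched in a few lines. The sketch below is a simplified illustration only: it assumes per-period losses arrive as a (T, M) array, uses a plain i.i.d. bootstrap in place of the block bootstrap used in practice, and takes the ‘max’ form of the equal-predictive-ability statistic; the function name and all defaults are assumptions, not the authors’ implementation.

```python
import numpy as np

def mcs_eliminate(losses, alpha=0.10, n_boot=500, seed=0):
    """Simplified sketch of the MCS elimination procedure of
    Hansen et al. (2011): repeatedly test equal predictive ability
    and drop the worst model until the null is no longer rejected.

    losses : (T, M) array of per-period losses for M models.
    Returns the indices of the surviving models (the MCS sketch).
    """
    rng = np.random.default_rng(seed)
    T = losses.shape[0]
    surviving = list(range(losses.shape[1]))
    while len(surviving) > 1:
        L = losses[:, surviving]                     # (T, m) survivor losses
        dbar = L.mean(axis=0) - L.mean()             # loss relative to the set average
        # i.i.d. bootstrap (an illustrative stand-in for the block bootstrap)
        idx = rng.integers(0, T, size=(n_boot, T))
        boot_means = L[idx].mean(axis=1)             # (n_boot, m)
        boot_dbar = boot_means - boot_means.mean(axis=1, keepdims=True)
        zeta = boot_dbar - dbar                      # centred bootstrap deviations
        var = (zeta ** 2).mean(axis=0)               # bootstrap variance of dbar
        t = dbar / np.sqrt(var)                      # t-like statistics
        t_max = t.max()
        boot_max = (zeta / np.sqrt(var)).max(axis=1)
        p = (boot_max >= t_max).mean()               # bootstrap p-value of the max stat
        if p >= alpha:
            break                                    # equal predictive ability not rejected
        surviving.pop(int(np.argmax(t)))             # eliminate the worst model
    return surviving
```

On simulated losses where one model is clearly inferior, the inferior model is eliminated first and the remaining, statistically indistinguishable models survive as the confidence set.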
In spite of its flexibility and popularity, this practice of running large comparison exercises highlights two drawbacks of the MCS approach. The first is that because the elimination process starts with the full collection of models and gradually shrinks it down to the subset of models that forms the MCS, it is not possible to add extra models to a collection ex post without having to rerun the MCS procedure on the entire, larger, collection. It is therefore not possible to simply update the confidence set with a few more candidate models, something that was possible with White’s Reality Check, and which one might argue is desirable in view of the many parallel exercises carried out using similar specifications on the same volatility data.
A second, related, drawback of the elimination process is that its time complexity and memory requirements increase rapidly with the size of the model collection, thus making it cumbersome to analyse large model collections.
This paper proposes an alternative approach to obtaining the MCS of a collection of models which preserves the attractiveness and flexibility of the methodology while addressing the two potential drawbacks mentioned above. The intuition is that the iterative process used to find the MCS can be reversed: rather than starting with the full collection of models and shrinking it down to the subset of models that forms the MCS, the collection initially consists of only two models and the MCS is gradually updated as models are added to the collection. The MCS is obtained once all the models in the collection have been processed. Growing the collection of models rather than shrinking it naturally allows further models to be added to the collection at a later point in time. Furthermore, because only the deviations of the new model with respect to the existing collection need to be calculated and stored at each iteration, this reduces the time complexity of the MCS procedure from O(M³) to O(M²) and the memory requirement from O(M²) to O(M). This is confirmed by a Monte Carlo analysis carried out in order to validate this updating approach.
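To make the growing-collection intuition concrete, the sketch below keeps only per-model summary statistics (one mean and one standard error per model, plus a pooled loss series), so the stored state grows linearly in the number of models and each new model costs one pass over the data. To be clear, the class name, the naive z-score cut-off and the inclusion rule are illustrative assumptions, not the updating algorithm developed in the paper.

```python
import numpy as np

class GrowingMCS:
    """Illustrative sketch of a growing model collection: models are
    added one at a time and only O(M)-sized summaries are retained,
    instead of an O(M^2) matrix of pairwise loss differentials.
    The test is a simplified per-model z-score stand-in, not the
    bootstrap test of Hansen et al. (2011)."""

    def __init__(self, threshold=2.0):
        self.threshold = threshold   # illustrative cut-off on the t-like statistic
        self.mean_loss = []          # per-model average loss: O(M) numbers
        self.se = []                 # per-model std. error of the mean: O(M) numbers
        self.pooled = None           # running sum of the loss series: O(T) numbers
        self.m = 0                   # number of models added so far

    def add_model(self, losses):
        """Add one model's (T,) loss series; O(T) work, O(1) extra storage."""
        losses = np.asarray(losses, float)
        self.mean_loss.append(losses.mean())
        self.se.append(losses.std(ddof=1) / np.sqrt(losses.size))
        self.pooled = losses if self.pooled is None else self.pooled + losses
        self.m += 1

    def confidence_set(self):
        """Indices of models whose mean loss is not significantly above
        the collection average (naive rule, for illustration only)."""
        grand = self.pooled.mean() / self.m
        t = (np.array(self.mean_loss) - grand) / np.array(self.se)
        return [i for i, ti in enumerate(t) if ti < self.threshold]
```

Because the state is just these running summaries, further candidate models can be appended later and the confidence set recomputed without reprocessing the whole collection, which is the ex post updating property motivated above.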