On using predictive-ability tests in the selection of time-series prediction models: A Monte Carlo evaluation

Author(s)
Mauro Costantini, Robert Kunst
Abstract

To select a forecast model among competing models, researchers often use ex-ante prediction experiments over training samples. Following Diebold and Mariano (1995), forecasters routinely evaluate the relative performance of competing models with accuracy tests and may base their selection on test significance in addition to comparing forecast errors. Using extensive Monte Carlo analysis, we investigated whether this practice favors simpler models over more complex ones without yielding gains in forecast accuracy. We simulated the autoregressive moving-average model, the self-exciting threshold autoregressive model, and the vector autoregressive model. We considered two variants of the Diebold–Mariano test, the test by Giacomini and White (2006), the F-test by Clark and McCracken (2001), the Akaike information criterion, and a pure training-sample evaluation. The findings showed some accuracy gains in small samples when applying accuracy tests, particularly for the Clark–McCracken and bootstrapped Diebold–Mariano tests. Evidence against this testing procedure dominated, however, and training-sample evaluations without accuracy tests performed best in many cases.
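As a rough illustration of the kind of accuracy test the study evaluates, the following is a minimal Python sketch of the Diebold–Mariano statistic under squared-error loss. The function name and the toy error series are hypothetical; this is not the authors' simulation code, and the bootstrapped variant examined in the paper is not shown.

import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano test of equal predictive accuracy.

    e1, e2 : forecast-error series from two competing models
    h      : forecast horizon; the long-run variance uses a
             rectangular window of h-1 autocovariances
    Returns the DM statistic and its two-sided normal p-value.
    """
    d = e1**2 - e2**2          # loss differential under squared-error loss
    T = d.size
    dbar = d.mean()
    dc = d - dbar
    # long-run variance of dbar: gamma_0 + 2 * sum_{k=1}^{h-1} gamma_k
    lrv = dc @ dc / T
    for k in range(1, h):
        lrv += 2 * (dc[k:] @ dc[:-k]) / T
    dm = dbar / np.sqrt(lrv / T)
    p = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, p

# Toy usage: model 2's errors have larger variance, so model 1 should win.
rng = np.random.default_rng(0)
e1 = rng.normal(size=200)
e2 = rng.normal(scale=1.2, size=200)
print(diebold_mariano(e1, e2))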

Organisation(s)
Department of Economics
External organisation(s)
Università degli Studi dell’Aquila
Journal
International Journal of Forecasting
Volume
37
Pages
445-460
No. of pages
16
ISSN
0169-2070
DOI
https://doi.org/10.1016/j.ijforecast.2020.06.010
Publication date
08-2020
Peer reviewed
Yes
Austrian Fields of Science 2012
502025 Econometrics
ASJC Scopus subject areas
Business and International Management
Portal url
https://ucris.univie.ac.at/portal/en/publications/on-using-predictiveability-tests-in-the-selection-of-timeseries-prediction-models-a-monte-carlo-evaluation(bafc254f-6329-48a8-ae8d-447d77652f14).html