Explaining the results of the M3 forecasting competition
UNSPECIFIED (2001) Explaining the results of the M3 forecasting competition. INTERNATIONAL JOURNAL OF FORECASTING, 17 (4). pp. 550-554. ISSN 0169-2070.
Abstract
Makridakis and Hibon (2000) summarize four main implications of the latest forecasting competition, which we paraphrase as: (a) 'simple methods do best'; (b) 'the accuracy measure matters'; (c) 'pooling helps'; and (d) 'the evaluation horizon matters'. We applaud the detailed empirical investigations and are unsurprised by their summary, but are surprised by the assertion that 'the strong empirical evidence, however, has been ignored by theoretical statisticians'. Having published two books and more than a dozen papers across a wide range of journals, which inter alia analyze their four points, we refute the claim that the issue is being 'ignored', and doubt the implicit suggestion of hostility by the profession.
What must be the relationship between the world to be forecast and the models with which we forecast for conditions (a)-(d) not to hold? The research summarized in Clements and Hendry (1998b, 1999) (henceforth CH98 and CH99) shows that in weakly stationary processes, a congruent, encompassing model in-sample will dominate in forecasting at all horizons. When the data generating process (DGP) is complicated, as is likely in economics, then so too will be the dominant model, subject to possible losses from parameter estimation (CH98, ch. 12). Causal variables will dominate non-causal (CH99, ch. 1), forecast accuracy will deteriorate as the horizon increases, and there will be no forecast-accuracy gains from pooling forecasts across methods or models: indeed, pooling refutes encompassing. These are perhaps the 'optimality' claims that Makridakis and Hibon (2000) correctly doubt are empirically relevant.
The results of the forecasting competitions are manifestly at odds with such strong 'theoretical predictions'. This discrepancy between theory and practice (noted by, e.g., Fildes & Makridakis, 1995), and the systematic mis-forecasting and forecast failure that has periodically blighted macroeconomics, stimulated the research summarized in CH98 and CH99. The 'textbook' paradigm discussed in the previous paragraph offers no explanation for observed forecast failures, although they have sometimes been attributed to 'mis-specified models', 'poor methods', 'inaccurate data', 'incorrect estimation', 'data-based model selection' and so on, without those claims being proved: our research demonstrates the lack of foundation for such 'explanations'.
The reason that (a)-(d) hold in practice is that economies are non-stationary and evolving processes which are not reducible to stationarity by differencing, thereby generating moments that are non-constant over time.
Modern economies are regularly subject to major institutional, political, financial, legal, fashion, and technological changes which manifest themselves as structural breaks in models relative to the underlying DGP. Models are far from being facsimiles of the DGP, and even if they closely resembled it in-sample, unanticipated structural change could seriously reduce their usefulness for forecasting. Our research suggests that models which are relatively robust to, or adapt rapidly to, structural change are most likely to be successful in forecasting. Specifically, shifts in deterministic terms appear to be especially injurious to forecasting, and to be a primary factor underlying systematic forecast failure, as they cause a shift in the model's forecast mean relative to the data mean. Other breaks are surprisingly difficult to detect and have relatively benign effects on forecasts (see Hendry & Doornik, 1997; Hendry, 2000). The remaining potential sources of forecast failure, ranging from model mis-specification, a lack of parsimony (including failure to impose restrictions such as unit roots and cointegration), and inaccurate forecast-origin data, through to inefficient estimation, may all exacerbate forecast failure, but generally just play supporting roles.
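The claim above, that a shift in a model's deterministic terms produces systematic forecast failure while an adaptive 'no-change' predictor recovers within a period, can be illustrated with a minimal simulation. This is a hypothetical sketch in our own notation, not code from the paper: the series, break point, and both forecasters are our own illustrative choices.

```python
import random

random.seed(0)

# A series around a deterministic mean that jumps from 0 to 5 at t = 100
# (an illustrative mean shift, standing in for a structural break).
n, break_t, shift = 200, 100, 5.0
y = [(shift if t >= break_t else 0.0) + random.gauss(0, 1) for t in range(n)]

# Forecaster A: the pre-break sample mean, i.e. an in-sample 'causal' model
# whose deterministic term is wrong after the shift.
mean_pre = sum(y[:break_t]) / break_t

def msfe(forecasts, actuals):
    """Mean squared forecast error over a hold-out period."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

# Evaluate both forecasters one step ahead over the post-break period.
post = list(range(break_t, n))
msfe_mean = msfe([mean_pre] * len(post), [y[t] for t in post])
# Forecaster B: a 'robust' no-change forecast, y_hat[t] = y[t-1],
# which adapts to the new mean after a single large error.
msfe_rw = msfe([y[t - 1] for t in post], [y[t] for t in post])

print(f"MSFE, fixed-mean model : {msfe_mean:.2f}")
print(f"MSFE, no-change model  : {msfe_rw:.2f}")
```

The fixed-mean model mis-forecasts systematically after the break (its errors are centred on the size of the shift), whereas the no-change predictor pays once at the break and then tracks the new mean, consistent with the relative success of simple, adaptive methods in the competitions.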
Item Type: Journal Article
Subjects: H Social Sciences > HC Economic History and Conditions; H Social Sciences > HD Industries. Land use. Labor > HD28 Management. Industrial Management
Journal or Publication Title: INTERNATIONAL JOURNAL OF FORECASTING
Publisher: ELSEVIER SCIENCE BV
ISSN: 0169-2070
Official Date: October 2001
Volume: 17
Number: 4
Number of Pages: 5
Page Range: pp. 550-554
Publication Status: Published
Data sourced from Thomson Reuters' Web of Knowledge