How would an historian do time-series econometrics? The TL;DR version

A couple of weeks ago I finished a paper required for a Master's degree at the University of Melbourne. If you want to read the whole thing, you can do so here. But the point of this post is to boil it down to something you can chew through in a few minutes.

One of the big issues I see with contemporary econometrics is that its practitioners don't read widely from other empirical fields, such as statistics and machine learning. Recent papers by Hal Varian, Guido Imbens and Susan Athey have started to bridge this gap, but there is still much that econometricians can learn from other fields. My paper is one such attempt at bridging that gap.

The paper proposes an answer to the following simple question: how much history should I include in a time-series econometric model? Too much history and I end up estimating parameters that were useful over history but are not useful today; too little and I risk failing to learn from important history. Implicit in estimating any time-series model is the assumption that something can be learned from the past. Yet the world changes over time, and a model estimated in one world can be useless in another.

Many prominent researchers have done a lot of work to try to capture a changing world. A common approach (Koop and Korobilis; Cogley and Sargent) is to build sophisticated time-varying-parameter models. Another method, most closely associated with the work of James Hamilton, is a regime-switching framework, in which the economic model flips between states, each state with its own set of parameters.

Don't get me wrong, I love these models; they're statistically principled and they work well. But they're extremely difficult to build well, even for smart people. And it's possible that we can do better by letting technology guide our beliefs about how the world is changing.

My paper illustrates a method for estimating analogy weights: weights that tell us how “similar” various periods of history are to today. These weights have two applications. First, because they give weight to analogous periods of history, the researcher does not need to choose a data window on which to estimate their model. They simply estimate their model using the usual methods, specifying a weighting vector. Because of this weighting vector, the model shifts markedly as the state of the world shifts, improving predictive performance. Second, the weights tell the researcher when the world is very different from all periods of history. This should ring alarm bells: DO NOT USE YOUR MODEL. Using a model estimated under entirely different data conditions is dangerous.
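
To make the "just specify a weighting vector" idea concrete, here is a minimal sketch in R, using weighted least squares as the simplest possible case. The data frame `history`, the predictor names and the `analogy_weights` vector are illustrative stand-ins rather than objects from the paper; the next code sketch shows one way the weights themselves can be produced.

```r
# Minimal sketch: estimate "as usual", but pass an analogy-weight vector.
# `history` holds the outcome y and predictors x1, x2 for each past period;
# `analogy_weights` is one non-negative weight per row (illustrative names).
unweighted_fit <- lm(y ~ x1 + x2, data = history)
weighted_fit   <- lm(y ~ x1 + x2, data = history, weights = analogy_weights)

# If the raw similarities underlying the weights are uniformly tiny, no period
# of history resembles today: the alarm-bell case where the model should not
# be trusted.
summary(analogy_weights)
```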

This all sounds great, but how do I cook up these analogy weights? The trick was getting a deep understanding of how the Random Forest works (I won't explain it here, but you should read the original Breiman paper). The Random Forest is a very popular tool from machine learning. As a byproduct of fitting a Random Forest, the researcher is rewarded with a proximity matrix: a matrix that describes how “proximate” each data point is to every other data point in the dataset. Proximity here is not based on a metric; instead, it is the similarity of any two periods of history in terms of all the characteristics that help predict the outcome variable. It is therefore a similarity measure that is relatively robust to the inclusion of new X variables that have no systematic relationship with the outcome variable.
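
For concreteness, here is a rough sketch of how such proximities can be extracted with the `randomForest` package in R. The data frame `dat` and the convention that its last row is "today" are illustrative assumptions, not the code from the paper.

```r
library(randomForest)

# `dat` contains the outcome y and the predictor variables for each period,
# with the most recent period in the last row (illustrative setup).
rf <- randomForest(y ~ ., data = dat, ntree = 1000, proximity = TRUE)

# rf$proximity[i, j] is the share of trees in which periods i and j land in
# the same terminal node: a supervised similarity, not a metric distance.
prox <- rf$proximity

today <- nrow(dat)
raw_weights <- prox[today, -today]                  # similarity of today to each past period
analogy_weights <- raw_weights / sum(raw_weights)   # normalise to sum to one
```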

So what do I find?

Using analogy weights tends to improve the predictive performance of time-series models.

I ran a horse race using a small VAR system in three specifications: the first was completely vanilla; the second was vanilla but used my analogy weights; the third was a sophisticated time-varying-parameter model. Unfortunately, I set up the comparison in such a way that the sophisticated model is not directly comparable to the other two (I used posterior predictive checking, not true out-of-sample checking). I'll have to go back and redo this bit to get better comparisons. Still, the weighted model performed extremely well in pure out-of-sample prediction, on some measures beating the sophisticated model (which was assessed in-sample).
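
As a rough illustration of the weighted "vanilla" specification: a VAR(p) can be estimated equation by equation with ordinary least squares, so the analogy weights can be passed straight into `lm()`. The matrix `Y`, the lag order and the weight bookkeeping below are illustrative assumptions, not the paper's exact setup.

```r
# Sketch: an analogy-weighted VAR(p), estimated equation by equation.
# `Y` is a T x 3 matrix of the three series; `analogy_weights` has one
# element per row of Y. All names and the lag order are illustrative.
p <- 2
k <- ncol(Y)

Z     <- embed(Y, p + 1)      # row t holds (y_t, y_{t-1}, ..., y_{t-p})
y_now <- Z[, 1:k]             # left-hand-side values
X_lag <- Z[, -(1:k)]          # stacked lags, the VAR regressors

w <- analogy_weights[(p + 1):nrow(Y)]   # drop weights for the initial lost rows

# One weighted least-squares regression per equation of the system
weighted_var <- lapply(seq_len(k), function(i) lm(y_now[, i] ~ X_lag, weights = w))
```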

I also back-tested a portfolio volatility model that uses analogy weights, for a small stock portfolio over the Global Financial Crisis. The analogy-weighted volatility model outperformed both the common CCC-GARCH model and the state-of-the-art DCC-GARCH model. Most of the relative gains for the weighted model came at the beginning and end of the crisis, when the weighted model was first quicker to recognise the regime shift, and then quicker to resume tempered volatility forecasts afterwards.
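
For flavour, here is a heavily simplified sketch of that kind of comparison in R. The analogy-weighted forecast is reduced to a weighted sample covariance of past returns (a simplification of the paper's model), and the DCC(1,1)-GARCH(1,1) benchmark uses the rugarch and rmgarch packages. The `returns` matrix, the equal portfolio weights and the exact specifications are illustrative assumptions.

```r
library(rugarch)
library(rmgarch)

# Analogy-weighted covariance forecast: a weighted sample covariance of past
# returns, with more weight on periods that resemble today.
weighted_cov <- cov.wt(returns, wt = analogy_weights)$cov

# DCC(1,1)-GARCH(1,1) benchmark
uspec <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                    mean.model     = list(armaOrder = c(0, 0)))
dcc_spec <- dccspec(uspec = multispec(replicate(ncol(returns), uspec)),
                    dccOrder = c(1, 1), distribution = "mvnorm")
dcc_fit  <- dccfit(dcc_spec, data = returns)
dcc_cov  <- rcov(dccforecast(dcc_fit, n.ahead = 1))[[1]][, , 1]   # 1-step-ahead covariance

# One-step-ahead portfolio volatility under each model, for equal weights
w_port <- rep(1 / ncol(returns), ncol(returns))
sqrt(t(w_port) %*% weighted_cov %*% w_port)
sqrt(t(w_port) %*% dcc_cov %*% w_port)
```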

These results don't establish analogy weighting as a clear winner, but they do suggest that Random Forest analogy weights are at least worth exploring.

 

2 Comments

  1. Matt Cowgill said,

    November 7, 2015 @ 5:00 am

    Hi Jim,
    Really interesting, cheers!
    Did you take down your code from Github that you used for that post about estimating quarterly value added in WA? I’m still trying to teach myself R and wanted to walk myself through the code again, but couldn’t find it…

  2. khakieconomist said,

    November 8, 2015 @ 11:06 am

    Hey Matt –

    The link should be to my dropbox, which worked for me.

    http://khakieconomist.com/2013/06/wa-in-recession/

    That said, if you want to do some nowcasting of GSP, I’d probably recommend the approach here: https://github.com/khakieconomics/nowcasting_in_stan I’ve learned a lot since that post!

    Cheers,
    Jim
