Friday, March 23, 2012

An Overview of VAR Modelling

Judging by the posted comments and the emails I've received, there's no doubt that my various posts on different aspects of VAR modelling have been quite popular.

Many followers of this blog will therefore be interested in a recent working paper by Helmut Luetkepohl. The paper is simply titled, "Vector Autoregressive Models", and it provides an excellent overview by one of the leading figures in the field.

You can download the paper from here.


© 2012, David E. Giles

12 comments:

  1. Dear Prof. Giles,
    Thank you for this good reference and your very helpful blog.
    I have a question related to the non-normality of VAR residuals. Luetkepohl says that normality "is not a necessary condition for validity of many statistical procedures related to VARs", but he doesn't give details. Is normality a necessary condition for Granger causality tests and impulse response functions?

    ReplyDelete
    Replies
    1. Thanks for the comment. Normality of the errors is not needed when testing for Granger causality or when generating impulse response functions. The assumption of normality is used, however, in constructing the Likelihood Function when applying the Johansen procedure to test for cointegration.

      Delete
    2. Hi, I have enjoyed your blog tremendously. In regard to the comment above, I would like to ask whether the Johansen test is robust to non-normality, as a paper by Gonzalo (1994) seems to suggest it is. Also, a simple question: if, say, the cointegration test indicates the existence of 1 cointegrating equation but Granger causality testing indicates no causality, does that mean the inference from the cointegration test is wrong?

      Thanks,
      kiff

      Delete
    3. Thanks for the comments. Re. non-normality, the answer depends on whether you are talking about departures from normality in terms of skewness or kurtosis, and on whether you are using the trace test or the max. eigenvalue test.

      Here's a good reference:
      http://www.calstatela.edu/faculty/klai/KLPaper/OBES93Au.pdf

      Regarding your second question, it's hard to say where the conflict in the results may be coming from. It could be due to a relatively small sample size; maybe one or more structural breaks; mis-specification of the form of the trend in the cointegration testing; etc.

      Delete
    4. Hi Dave,
      You say: "Normality of the errors is not needed when testing for Granger causality or when generating impulse response functions." Could you please point to an article/book/study that justifies the non-importance of normality in both 1. Granger causality and 2. Impulse Response Functions? Thank you.

      Also, I want to thank you for your excellent blog; it has taught me a great deal. I am a regular follower.

      Delete
    5. Dave Giles: "Normality of the errors is not needed when testing for Granger causality or when generating impulse response functions."

      Any reference article or paper? Thank you very much in advance.
      What is the basic idea behind non-importance of normality in Granger causality and impulse response functions?

      Delete
    6. Thanks for the comment. You don't need normality of the errors when applying ANY test that has only an asymptotic (large T) justification. This applies to tests for GC. The impulse response functions simply track out the dynamic responses of the model to shocks from one or more of the variables. The distribution of the error term is irrelevant for the mean responses, as long as the errors have a zero mean - that's all that is assumed in getting the IRFs. The confidence bands around the IRFs are only asymptotically valid, so the comment above regarding testing also applies here.
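      To see why (a minimal sketch, not from the original exchange; the coefficient values are purely illustrative): for a VAR(1), y_t = A y_{t-1} + e_t, the impulse responses are just the powers of the coefficient matrix applied to a unit shock, and nothing about the error distribution, beyond its zero mean, enters the calculation:

```python
# Impulse responses of a bivariate VAR(1): y_t = A @ y_{t-1} + e_t.
# The response at horizon h to a unit shock in variable j is column j of A^h;
# no assumption about the error distribution (beyond a zero mean) is used.

def mat_mult(A, B):
    """Multiply two 2x2 matrices stored as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def impulse_responses(A, horizons):
    """Return [I, A, A^2, ...]: responses of y_{t+h} to a unit shock at t."""
    irf = [[[1.0, 0.0], [0.0, 1.0]]]  # horizon 0: the shock itself
    for _ in range(horizons):
        irf.append(mat_mult(A, irf[-1]))
    return irf

A = [[0.5, 0.1],
     [0.2, 0.3]]  # illustrative stationary VAR(1) coefficient matrix

irf = impulse_responses(A, 3)
# e.g., the response of y1 to a unit shock in y2 after 2 periods is irf[2][0][1]
```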

      Delete
  2. Hi, Dear Sir, it is very kind of you to have established this blog; it is very helpful with econometric problems. I have a few points of confusion about VAR models:
    1) Is it necessary for a VAR that all the variables have the same order of integration, or can we use a mix of variables, as with ARDL modelling?
    2) I ran a VAR model where all the variables were non-stationary in levels but stationary in first differences, so I used all of the variables in logs and first differences: d(LGDP), d(Lemp), d(Linvestment). I then generated the accumulated impulse responses with the Cholesky decomposition option in EViews. The table of accumulated responses shows the coefficients with standard errors, but the coefficients are very small. Is this because I used the growth-rate form of all the variables?
    3) Can I treat these values as elasticities, or do I need to transform them? If so, what would be the transformation?
    Regards.

    ReplyDelete
    Replies
    1. Syed - thanks for the comment and questions.

      (1) It depends on what you are using the VAR model for. If it's for testing for Granger causality, then you should fit the model in the (log) levels of the data (no differencing) when using the Toda-Yamamoto testing procedure. However, if you are interested in the impulse responses, then you need to transform each variable so that it is stationary before estimating the model.

      (2) Yes this is probably the reason.

      (3) No, they are not elasticities. You need to go back to the coefficient estimates to get the elasticity estimates.

      Delete
    2. Hi Dr. Giles,

      Are you sure that the IRF is supposed to be done on transformed variables? Suppose we have two I(1) variables (say, in log-levels). Whenever I compute the IRF for a VAR in first differences, the IRF and its confidence bands collapse to 0.000 after about 6 periods. However, none of my VECM-based IRFs go to zero (over 20 periods). Then, if I estimate a levels VAR with the I(1) data, the IRF looks very similar to the VECM IRF, which makes me think we're supposed to do the IRF in levels. Also, in R we have to feed the levels formulation of the VECM into the irf function, otherwise it won't work; for example, we have to do irf(vec2var(ca.jo(data))). This also makes me think we're supposed to do the IRF over the levels VAR.

      Delete
    3. I would like to add to my previous post that the IRFs generated by a VECM are almost indistinguishable from the IRFs generated by a VAR in levels. However, if IRF is done with VAR in differences, the results are unrecognizable when contrasting them with the VECM results. Both variables are I(1) and the rank of the cointegrating matrix is 1 (p < 0.01). Confirmed with both EViews and R.
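      The contrast can be sketched as follows (an illustrative example, not from the comment above): in a cointegrated bivariate system y_t = A y_{t-1} + e_t with A = I + alpha * beta' (one unit root, one cointegrating vector), the levels IRF A^h converges to a nonzero long-run response, while the IRF implied for the differences, A^h - A^(h-1), dies out:

```python
# Why levels/VECM IRFs need not die out while differenced-VAR IRFs do:
# take a cointegrated bivariate system y_t = A @ y_{t-1} + e_t with
# A = I + alpha @ beta' (one unit root, one cointegrating relation).

def mat_mult(A, B):
    """Multiply two 2x2 matrices stored as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, h):
    """Return A^h for a 2x2 matrix."""
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(h):
        P = mat_mult(P, A)
    return P

# alpha = [-0.2, 0.1], beta = [1, -1]  =>  A has eigenvalues 1.0 and 0.7
A = [[0.8, 0.2],
     [0.1, 0.9]]

levels_irf_50 = mat_pow(A, 50)  # converges to [[1/3, 2/3], [1/3, 2/3]], not 0
A49 = mat_pow(A, 49)
diff_irf_50 = [[levels_irf_50[i][j] - A49[i][j] for j in range(2)]
               for i in range(2)]  # converges to the zero matrix
```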

      Delete
    4. Sorry .. One more addition. I have found two papers [1][2] that use transformed variables (I(1) variables that are now I(0)) to do IRF and one paper [3] that uses log-levels to do IRF. Paper [4] doesn't explicitly state what they use, but since they found a cointegrating relationship in all cases I'm guessing they just used a VECM based IRF which is pretty much the same as a VAR-levels IRF.

      I think the latter is better when doing a study where cointegration is involved because it wouldn't make sense to do a levels-IRF whenever we detect cointegration and use a VECM (i.e., an IRF analysis that's indistinguishable from IRF over a VAR in levels) and then a differences-IRF whenever we don't detect cointegration.

      [1] Siliverstovs and Duong (2006), "On the Role of Stock Market for Real Economic Activity: Evidence for Europe"
      [2] Mun (2005), "Contagion and impulse response of international stock markets around the 9–11 terrorist attacks"
      [3] Filis (2010), "Macro economy, stock market and oil prices: Do meaningful relationships exist among their cyclical fluctuations?"
      [4] Kim and Rousseau (2012), "Credit buildups and the stock market in four East Asian economies"

      Delete