Information matrix test


In econometrics, the information matrix test is used to determine whether a regression model is misspecified. The test was developed by Halbert White,[1] who observed that in a correctly specified model, under standard regularity assumptions, the Fisher information matrix can be expressed in either of two ways: as the expected outer product of the gradient of the log-likelihood function, or as the negative of its expected Hessian matrix. Consider a linear model $y = X\beta + u$, where the errors $u$ are assumed to be distributed $N(0, \sigma^2 I)$. If the parameters $\beta$ and $\sigma^2$ are stacked in the vector $\theta^T = [\beta^T \;\; \sigma^2]$, the resulting log-likelihood function is

\[ \ell(\theta) = -\frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2}(y - X\beta)^T(y - X\beta) \]
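As an illustrative numerical check (not part of White's exposition), the log-likelihood above can be evaluated on simulated data and maximized by ordinary least squares together with $\hat\sigma^2 = \text{RSS}/n$. The sample size, design matrix, and parameter values below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear model y = X beta + u, u ~ N(0, sigma2 * I).
# n, beta, sigma2 are illustrative values, not from the text.
n, beta, sigma2 = 500, np.array([1.0, -2.0]), 1.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

def loglik(b, s2, X, y):
    """l(theta) = -(n/2) log s2 - (1/(2 s2)) (y - Xb)'(y - Xb),
    omitting the additive constant -(n/2) log(2 pi)."""
    resid = y - X @ b
    return -0.5 * len(y) * np.log(s2) - resid @ resid / (2 * s2)

# MLE: beta_hat is the OLS estimate, sigma2_hat = RSS / n.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / n

# The MLE should dominate any other parameter value, including the truth.
assert loglik(beta_hat, sigma2_hat, X, y) >= loglik(beta, sigma2, X, y)
```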

The information matrix can then be expressed as

\[ I(\theta) = E\!\left[\left(\frac{\partial \ell(\theta)}{\partial \theta}\right)\left(\frac{\partial \ell(\theta)}{\partial \theta}\right)^T\right] \]

that is, the expected value of the outer product of the gradient, or score. Second, it can be written as the negative of the expected value of the Hessian matrix of the log-likelihood function:

\[ I(\theta) = -E\!\left[\frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta^T}\right] \]
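The equality of the two forms can be illustrated by Monte Carlo for the normal linear model. The sketch below uses the analytic score and Hessian of the log-likelihood above, evaluated at the true parameters over repeated draws of the error vector; the design and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative design: fixed X, true theta = (beta, sigma2).
n, beta, sigma2 = 50, np.array([1.0, -2.0]), 2.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])

def score(u):
    """Gradient of l(theta): d l/d beta = X'u / s2,
    d l/d s2 = -n/(2 s2) + u'u / (2 s2^2)."""
    g_beta = X.T @ u / sigma2
    g_s2 = -n / (2 * sigma2) + u @ u / (2 * sigma2**2)
    return np.append(g_beta, g_s2)

def neg_hessian(u):
    """-d^2 l / d theta d theta', from the analytic second derivatives."""
    r = len(beta)
    H = np.zeros((r + 1, r + 1))
    H[:r, :r] = X.T @ X / sigma2
    H[:r, r] = H[r, :r] = X.T @ u / sigma2**2
    H[r, r] = -n / (2 * sigma2**2) + u @ u / sigma2**3
    return H

# Monte Carlo expectations over the error distribution u ~ N(0, sigma2 I).
reps = 20000
opg = np.zeros((3, 3))   # estimates E[score score']
hes = np.zeros((3, 3))   # estimates -E[Hessian]
for _ in range(reps):
    u = rng.normal(scale=np.sqrt(sigma2), size=n)
    s = score(u)
    opg += np.outer(s, s) / reps
    hes += neg_hessian(u) / reps

# Both should agree with the analytic information matrix,
# block-diag( X'X / sigma2 , n / (2 sigma2^2) ).
info = np.zeros((3, 3))
info[:2, :2] = X.T @ X / sigma2
info[2, 2] = n / (2 * sigma2**2)
assert np.allclose(opg, info, rtol=0.1, atol=0.5)
assert np.allclose(hes, info, rtol=0.05, atol=0.5)
```

Under misspecification (for example, heteroskedastic errors) the two averages would diverge, which is what the test exploits.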

If the model is correctly specified, both expressions should be equal. Combining the equivalent forms observation by observation, with $\ell_i(\theta)$ denoting the contribution of the $i$-th observation to the log-likelihood, yields

\[ \Delta(\theta) = \sum_{i=1}^n \left[ \frac{\partial^2 \ell_i(\theta)}{\partial \theta \, \partial \theta^T} + \left(\frac{\partial \ell_i(\theta)}{\partial \theta}\right)\left(\frac{\partial \ell_i(\theta)}{\partial \theta}\right)^T \right] \]

where $\Delta(\theta)$ is an $(r \times r)$ random matrix and $r$ is the number of parameters. White showed that the elements of $n^{-1/2} \Delta(\hat\theta)$, where $\hat\theta$ is the MLE, are asymptotically normally distributed with zero means when the model is correctly specified.[2] In small samples, however, the test generally performs poorly.[3]
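The construction of $\Delta(\hat\theta)$ for the linear model can be sketched as follows, summing per-observation Hessians and score outer products at the MLE; the simulated data and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Correctly specified model (illustrative values, not from the text).
n, beta, sigma2 = 2000, np.array([0.5, 1.0]), 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

# MLE: OLS coefficients and sigma2_hat = RSS / n.
b, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ b
s2 = u @ u / n

r = len(b) + 1                       # number of parameters
Delta = np.zeros((r, r))
for i in range(n):
    xi, ui = X[i], u[i]
    # Per-observation score d l_i / d theta at the MLE.
    s = np.append(xi * ui / s2, -1 / (2 * s2) + ui**2 / (2 * s2**2))
    # Per-observation Hessian d^2 l_i / d theta d theta'.
    H = np.zeros((r, r))
    H[:-1, :-1] = -np.outer(xi, xi) / s2
    H[:-1, -1] = H[-1, :-1] = -xi * ui / s2**2
    H[-1, -1] = 1 / (2 * s2**2) - ui**2 / s2**3
    Delta += H + np.outer(s, s)

# Under correct specification the elements of n**-0.5 * Delta are
# asymptotically mean-zero normal, so Delta / n should be near zero.
print(np.abs(Delta / n).max())
```

A full implementation would go on to form a quadratic-form statistic from these elements and their estimated asymptotic covariance; this sketch stops at the indicator matrix itself.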

References

  1. White, Halbert (1982). "Maximum Likelihood Estimation of Misspecified Models". Econometrica. 50 (1): 1–25. doi:10.2307/1912526. JSTOR 1912526.
  2. Godfrey, L. G. (1988). Misspecification Tests in Econometrics. Cambridge University Press. pp. 35–37. ISBN 0-521-26616-5.
  3. Orme, Chris (1990). "The Small-Sample Performance of the Information-Matrix Test". Journal of Econometrics. 46 (3): 309–331. doi:10.1016/0304-4076(90)90012-I.

Further reading