When I posted my nerdy rant about the importance of going for efficiency, I mentioned my research with my friend Rodolfo Cermeno on GARCH in panels. However, in the published version, Badi made us cut all the simulation exercises.
Just this morning, though, I found a draft of the paper with the simulations still included.
We first compared OLS to our MLE panel GARCH estimator when the conditional variance was correctly specified, and we found that
"when comparing the OLS and MLE estimators (for the mean equation), we find that the MLE outperforms the OLS estimator in terms of bias, precision and mean squared error. In every sample, the MLE estimator has a MSE smaller than the OLS estimator by at least a factor of 4 when ρ = .25 and at least a factor of 5 when ρ = .5"
Here ρ is the ARCH(1) coefficient, i.e. the coefficient on the lagged squared error in the conditional variance equation.
We then compared OLS to a mis-specified panel GARCH estimator and found that, for the conditional mean coefficients, the MSE was still lower with the mis-specified GARCH estimator by at least a factor of 2.5.
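Our actual simulation code isn't reproduced here, but a minimal Monte Carlo sketch of the kind of comparison described above might look like the following. Everything in it is an illustrative assumption on my part, not the paper's design: a slope-only mean equation, a pure ARCH(1) conditional variance, and arbitrary parameter values and panel dimensions. The point is just the mechanic: simulate panel data with ARCH errors, estimate the mean slope by pooled OLS and by an MLE that models the conditional variance, and compare MSEs across replications.

```python
# Illustrative Monte Carlo sketch (not the paper's code): OLS vs. MLE for the
# mean slope in a panel with ARCH(1) errors. All parameter values are made up.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T, reps = 20, 50, 100            # cross-sections, time periods, MC draws
beta, omega, rho = 1.0, 0.5, 0.25   # true slope, ARCH intercept, ARCH(1) coef

def simulate():
    """Draw one panel: y = beta*x + e, with e following ARCH(1)."""
    x = rng.normal(size=(N, T))
    e = np.zeros((N, T))
    h = np.full(N, omega / (1 - rho))     # start at the unconditional variance
    for t in range(T):
        e[:, t] = rng.normal(size=N) * np.sqrt(h)
        h = omega + rho * e[:, t] ** 2    # ARCH(1) conditional variance update
    return x, beta * x + e

def neg_loglik(params, x, y):
    """Gaussian negative log-likelihood with an ARCH(1) variance."""
    b, w, r = params
    e = y - b * x
    h = np.empty_like(e)
    h[:, 0] = w / (1 - r)                 # initialize at unconditional variance
    h[:, 1:] = w + r * e[:, :-1] ** 2
    return 0.5 * np.sum(np.log(h) + e ** 2 / h)

ols_err, mle_err = [], []
for _ in range(reps):
    x, y = simulate()
    b_ols = np.sum(x * y) / np.sum(x * x)  # pooled OLS slope (no intercept)
    res = minimize(neg_loglik, [b_ols, 0.3, 0.1], args=(x, y),
                   bounds=[(None, None), (1e-4, None), (0.0, 0.99)])
    ols_err.append((b_ols - beta) ** 2)
    mle_err.append((res.x[0] - beta) ** 2)

ratio = np.mean(ols_err) / np.mean(mle_err)
print("MSE ratio OLS/MLE:", ratio)
```

The ratio this toy setup produces will not match the factors of 4 or 5 quoted above, since those depend on the paper's actual design; the sketch only shows why a gain appears at all: the MLE implicitly downweights high-variance observations, while OLS weights them all equally.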
Those are pretty big MSE improvements!
When the variance reductions are this big and the bias induced in the conditional mean by a mis-specified conditional variance model is small, OLS with asymptotically correct standard errors is kind of a dumb way to conduct research.