Since you only have 3 parameters and a supercomputer at hand, perhaps the best approach to finding the maximizing parameters is either a simple brute-force random search or simulated annealing. I was thinking this before I got to the point in the article where you briefly mentioned SA.
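To make the suggestion concrete, here is a minimal sketch of both approaches over a 3-parameter space. The objective here is a stand-in (a smooth function with a known minimum); in a real run you would replace `neg_log_likelihood` with the negative log likelihood of the model evaluated on the observed series:

```python
import math
import random

def neg_log_likelihood(theta):
    # Placeholder objective with its minimum at (1.0, -2.0, 0.5).
    # A real application would compute the model's negative log
    # likelihood over the observed time series here.
    a, b, c = theta
    return (a - 1.0) ** 2 + (b + 2.0) ** 2 + (c - 0.5) ** 2

def random_search(bounds, n_iter=20000, seed=0):
    # Brute-force search: sample parameter vectors uniformly inside
    # the given box and keep the best one seen.
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(n_iter):
        theta = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        val = neg_log_likelihood(theta)
        if val < best_val:
            best, best_val = theta, val
    return best, best_val

def simulated_annealing(x0, step=0.5, n_iter=20000, t0=1.0, seed=0):
    # Gaussian proposal moves; worse moves are accepted with
    # probability exp(-delta / temperature) under a 1/i cooling schedule.
    rng = random.Random(seed)
    x, fx = x0, neg_log_likelihood(x0)
    best, best_val = x, fx
    for i in range(1, n_iter + 1):
        temp = t0 / i
        cand = tuple(xi + rng.gauss(0.0, step) for xi in x)
        fc = neg_log_likelihood(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_val:
                best, best_val = x, fx
    return best, best_val
```

With only 3 parameters the box random search alone is often enough to land near the optimum; SA is mainly useful when the likelihood surface is multimodal and you want the search to climb out of local basins early on.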

I think there’s another issue too. Even if you found the true maximum of the likelihood, and the stochastic process you’re simulating is stationary (and meets other criteria needed for a limit theorem), the length of the time series needed for convergence may be much greater than 1000. The fact that the ML estimates are consistent and asymptotically normal does not mean that a time horizon of 1000 is enough to estimate the parameters. (In addition, my bet is that the confidence intervals given by these programs assume normality of the estimates, with the Hessian of the log likelihood as sample Fisher information, although that’s just a guess).

For example, suppose X_t is a stochastic process that is 2nd-order stationary with suitably decaying autocorrelations. An ergodic theorem tells you that you can estimate the ensemble mean mu = E[X_t] by the time average \frac{1}{T}\sum_{t = 1}^{T} X_t, but it tells you nothing about the rate of convergence. To know what T would be enough to give a good estimate, you would have to estimate the decay rate of the autocorrelations of X_t.
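A quick way to see this numerically is to estimate the integrated autocorrelation time tau = 1 + 2*sum_k rho(k): the variance of the time average is inflated by roughly a factor of tau relative to i.i.d. sampling, so the effective sample size is T / tau. A sketch using an AR(1) process as the test case (any stationary model could be substituted):

```python
import random

def ar1_series(phi, n, sigma=1.0, seed=1):
    # AR(1): X_t = phi * X_{t-1} + eps_t, stationary for |phi| < 1,
    # with geometrically decaying autocorrelations rho(k) = phi^k.
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def autocorr(xs, k):
    # Sample lag-k autocorrelation.
    n = len(xs)
    m = sum(xs) / n
    var = sum((v - m) ** 2 for v in xs) / n
    cov = sum((xs[t] - m) * (xs[t + k] - m) for t in range(n - k)) / n
    return cov / var

def integrated_autocorr_time(xs, max_lag=100):
    # tau = 1 + 2 * sum_k rho(k); effective sample size is T / tau.
    return 1.0 + 2.0 * sum(autocorr(xs, k) for k in range(1, max_lag + 1))
```

For phi = 0.9 the theoretical value is tau = (1 + phi) / (1 - phi) = 19, so a series of length T = 1000 carries only about 50 effectively independent observations of the mean.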

You’d have to do something similar here to see if T = 1000 is enough even if you have the correct MLE. However, your estimator, the ML estimator for the parameters, is far more complicated than a simple time average. Not only is its distribution not known, it itself needs to be computed numerically, so investigating its convergence towards the true values can be theoretically daunting, and possibly numerically impractical.
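One numerically feasible check, at least for models cheap enough to simulate, is a Monte Carlo study: generate many series of length T from known parameters, refit each one, and look at the spread of the estimates as T grows. A sketch for AR(1), where the conditional MLE for phi reduces to least squares of X_t on X_{t-1}:

```python
import random
import statistics

def simulate_ar1(phi, n, rng):
    # Simulate X_t = phi * X_{t-1} + eps_t with standard normal noise.
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def fit_phi(xs):
    # Conditional (Gaussian) MLE for AR(1): least-squares slope
    # of X_t regressed on X_{t-1}.
    num = sum(xs[t] * xs[t - 1] for t in range(1, len(xs)))
    den = sum(v * v for v in xs[:-1])
    return num / den

def estimator_spread(phi, T, n_rep=200, seed=2):
    # Standard deviation of the estimator across n_rep simulated
    # series of length T from the true parameter.
    rng = random.Random(seed)
    ests = [fit_phi(simulate_ar1(phi, T, rng)) for _ in range(n_rep)]
    return statistics.stdev(ests)
```

The empirical spread at the T you actually have (e.g. 1000) tells you directly whether the estimator has converged enough to be useful, without needing its distribution in closed form; the cost is one refit per replicate, which is exactly what makes this impractical when each MLE itself requires an expensive numerical search.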

Regards,

Edger
