# MLE vs Expectation-Maximization to estimate time-changing parameters in a state-space model

by Fr1   Last Updated August 13, 2019 19:19 PM

Suppose I have a generic model in state-space form described as

$$x_{t+1}=\phi_{t} x_{t}+w_{t+1}\epsilon_{t+1}$$ $$y_{t}=H_{t}x_{t}+v_{t}e_{t+1}$$

where both $$e_{t+1}$$ and $$\epsilon_{t+1}$$ are iid orthogonal white noise. Notice that all the parameters in $$\phi_{t}, H_{t}$$, as well as in $$Var(w_{t+1})$$ and $$Var(v_{t})$$, are allowed to be time-varying. Suppose I know all the time-changing parameters, so I don't have to estimate them, except for $$H_{t}$$ and $$v_{t}$$, which are the time-changing parameters to be estimated.
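For concreteness, a minimal simulation sketch of this model with scalar states might look as follows; the particular paths chosen for $$\phi_{t}$$, $$H_{t}$$, $$w_{t}$$, $$v_{t}$$ are illustrative assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200

# Illustrative (assumed) time-varying parameters
phi = np.full(T, 0.9)                       # phi_t
H = 1.0 + 0.1 * np.sin(np.arange(T) / 10)   # H_t (unknown in the question)
w = np.full(T, 0.5)                         # scale of the state noise, w_{t+1}
v = np.full(T, 0.3)                         # scale of the observation noise, v_t

x = np.zeros(T)
y = np.zeros(T)
x[0] = rng.normal()
for t in range(T):
    # observation equation: y_t = H_t x_t + v_t e_{t+1}
    y[t] = H[t] * x[t] + v[t] * rng.normal()
    # state equation: x_{t+1} = phi_t x_t + w_{t+1} eps_{t+1}
    if t + 1 < T:
        x[t + 1] = phi[t] * x[t] + w[t + 1] * rng.normal()
```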

Generally, in the literature, I see that the Expectation-Maximization algorithm is used to estimate the time-changing unknown parameters (a procedure like this, p. 17), which involves updating the estimates of the time-varying matrices. However, why, at least theoretically (setting numerical issues aside), can't I use plain MLE and define the estimated time-changing parameters as the set of matrices $$H_{t}$$ and $$v_{t}$$, for $$t=1,\dots,T$$ (where T is my sample size), that maximizes the likelihood? Is there any theoretical counterargument against doing this? I am interested in a theoretical counterargument, not a numerical one.
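The likelihood that such a direct MLE would maximize can be evaluated with a Kalman filter via the prediction-error decomposition. A minimal sketch for the scalar case (the parameter arrays and the synthetic data below are hypothetical placeholders, just to make the function runnable):

```python
import numpy as np

def kalman_loglik(y, phi, H, w, v, x0_mean=0.0, x0_var=1.0):
    """Gaussian log-likelihood of the scalar state-space model
        x_{t+1} = phi_t x_t + w_{t+1} eps_{t+1},   y_t = H_t x_t + v_t e_{t+1},
    computed via the Kalman filter prediction-error decomposition.
    phi, H, w, v are arrays of length T (one value per time step)."""
    T = len(y)
    m, P = x0_mean, x0_var                      # prior mean/variance of x_0
    ll = 0.0
    for t in range(T):
        # one-step prediction error and its variance
        F = H[t] ** 2 * P + v[t] ** 2
        e = y[t] - H[t] * m
        ll += -0.5 * (np.log(2 * np.pi * F) + e ** 2 / F)
        # measurement update
        K = P * H[t] / F
        m = m + K * e
        P = (1.0 - K * H[t]) * P
        # time update
        if t + 1 < T:
            m = phi[t] * m
            P = phi[t] ** 2 * P + w[t + 1] ** 2
    return ll

# Hypothetical usage with placeholder parameters and data
T = 50
rng = np.random.default_rng(1)
phi = np.full(T, 0.8)
H = np.full(T, 1.0)
w = np.full(T, 0.5)
v = np.full(T, 0.3)
y = rng.normal(size=T)
ll = kalman_loglik(y, phi, H, w, v)
```

A direct MLE as described in the question would maximize `kalman_loglik` jointly over all entries of `H` and `v`, i.e. over a parameter vector whose dimension grows with T.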

Thanks

