
T_t unknown (kfas issue, closed, 10 comments)

helske commented on July 19, 2024
T_t unknown


Comments (10)

helske commented on July 19, 2024

Yes, you can include unknown parameters in T. Just build a corresponding updatefn for fitSSM, just like for other model parameters. So something like this:

updatefn <- function(model, pars) {
 model$T[1,1,1] <- pars[1]
 model 
}

fit <- fitSSM(model = model, updatefn = updatefn, inits = inits)

If the corresponding state is diffuse, you might want to switch to non-diffuse initialization (i.e. set model$P1inf to zero and add some prior variances to model$P1), or you can use the argument marginal = TRUE in fitSSM or the logLik function, which computes the marginal likelihood as in Francke et al.
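For example, a minimal sketch of that switch (the element indices here are illustrative and depend on which state corresponds to the estimated T element):

# drop the diffuse prior for the affected state and give it a proper, vague prior instead
model$P1inf[1, 1] <- 0
model$P1[1, 1] <- 100

# or keep the diffuse initialization and maximise the marginal likelihood
fit <- fitSSM(model = model, updatefn = updatefn, inits = inits, marginal = TRUE)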


yuanyuan2319 commented on July 19, 2024

Hello Helske,

Thank you!
I tried to define T_t and the updatefn in different ways, but I still get an error message. Mostly I got a message like:

Error in model["T"][1, 1, 1] <- pars[3] : incorrect number of subscripts

I then made a simple reproducible example as follows. Could you please point out where I went wrong?

return <- seq(1, 10.5, by=0.5)
syn.return <- seq(3.1, 8.8, by=0.3)
dat <- data.frame(return, syn.return)

n <- nrow(dat)
Zt <- array(data = as.vector(rbind(dat$return, rep(1, n))), dim = c(1,2,n))
Ht <- matrix(NA)

Tt <- matrix(c(NA,0,0,1), nrow = 2, ncol = 2)  
#Tt <- array(data = c(NA,0,0,1), dim = c(2,2,n))

Rt <- matrix(c(1,0), nrow = 2, ncol = 1)
Qt <- matrix(NA)
a1 <- matrix(c(1,0), nrow = 2, ncol = 1)
P1 <- matrix(0, nrow = 2, ncol = 2)
P1inf <- diag(2)

syn.return <- dat$syn.return
model <- SSModel(syn.return ~ -1 + 
                        SSMcustom(Z = Zt, T = Tt, R = Rt, Q = Qt, a1 = a1, P1 = P1, P1inf = P1inf),
                      H = Ht)
update <- function(model, pars){
  model["Q"] <- pars[1]
  model["H"] <- pars[2]
  model["T"][1,1,1] <- pars[3]
  model
}

fit <- fitSSM(model, update, inits = c(0, 0, 1), method = "BFGS", marginal = TRUE)
out <- KFS(fit$model)

Thank you in advance for your time!


helske commented on July 19, 2024

Oops, in my example (and yours) the arguments pars and model are in the wrong order in the function signature; the parameters should come first and then the model object.
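Applied to the example above, the corrected update function would look roughly like this (parameters first, then the model; the T element is updated via model$T as in the first snippet):

update <- function(pars, model){
  model["Q"] <- pars[1]
  model["H"] <- pars[2]
  model$T[1, 1, 1] <- pars[3]
  model
}

fit <- fitSSM(model, updatefn = update, inits = c(0, 0, 1), method = "BFGS", marginal = TRUE)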


yuanyuan2319 commented on July 19, 2024

Hi Helske,

I am using "KFAS: Exponential Family State Space Models in R" and I just found out I didn't get the same answer when replicating your example.

I replicate the example on page 5. And before I get the same estimation for parameters, that MLE are 9.5 and 4.3.

Now I get 2.3 and 1.4. Then I just notice exp(2.3) = 9.5 and exp(1.4) = 4.3.

In addition, if I add an updatefn as:

update_gaussian <- function(pars, model){
  model$H[1] <- exp(pars[1])
  model$Q[1] <- exp(pars[2])
  model
}

I get the same 2.3 and 1.4.

So did you change the package code, and is exp(pars) now the default setting?

Sometimes I get negative variance estimates in my project; if exp(pars) is the default setting, I then don't need to worry about negative values.

Thank you a lot!


helske commented on July 19, 2024

The default updatefn in fitSSM uses the exp(pars) parameterization, but with your own custom updatefn you need to take care of that yourself, i.e. the parameters are passed to your function as is, without any transformations.
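As a rough sketch of the difference (assuming, as in the paper's example, that only H and Q contain NA values to be estimated):

# default updatefn: the optimiser works on the log scale and exp(pars) is
# plugged into the NA elements of H and Q
fit <- fitSSM(model, inits = c(0, 0), method = "BFGS")
fit$optim.out$par      # estimates on the log scale, e.g. 2.3 and 1.4
fit$model$H[1]         # variance on the original scale
fit$model$Q[1]         # variance on the original scale

# with a custom updatefn the parameters arrive untransformed, so any
# exp() (or other) transformation has to be written out explicitly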


yuanyuan2319 commented on July 19, 2024

Hello Helske,

I want to ask two further questions.

In your KFAS tutorial there is an example of a Gaussian state space model where the constant "v" in the state equation is specified as a state variable, and the last value v_T is then taken as the estimate.

I then tried to set "v" as a parameter and 1 as the state variable. The results for the other state variables look visually the same as with your approach.

What I want to ask is: is there typically a difference between these two approaches in the results for the other state variables?

What is the particular reason that you set "v" as a state variable rather than a parameter?

In my own example, when I set "v" as a constant or as a random walk, I basically get the same results for all state variables, so I am wondering whether that is because of this approach.

Thank you.


yuanyuan2319 commented on July 19, 2024

The second question is a small one:

When I set x <- exp(pars[1]), or x <- pars[1] together with a check function restricting x > 0, I get different estimates of x and different likelihood values.

Shouldn't these two be the same, since I didn't really change anything?


helske commented on July 19, 2024

I am not sure if I understand what you are asking. Do you mean that instead of defining the constant drift term as a time-invariant (Q = 0) state with some prior distribution for the first time instance, you instead fix the state at 1 and then estimate the coefficient in the corresponding element of T? The issue with the latter approach is that after estimating the coefficient, your parameter is assumed to be known and fixed, whereas the former approach incorporates the uncertainty of the drift estimation (via the prior distribution N(a1, P1) or P1inf) into the modelling, leading to more realistic inference. If I am right about what you mean, you should see some differences between the approaches if you check, for example, the filtered or smoothed variances of the mean.
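For illustration only, a rough sketch of the two specifications with a simulated univariate series y and a state vector (level, drift); the exact matrices depend on your model:

library(KFAS)

set.seed(1)
y <- cumsum(rnorm(100)) + 0.2 * (1:100)  # hypothetical random walk with drift

# Specification 1: drift as a time-invariant state; its uncertainty enters via
# the (diffuse) prior and is carried through filtering and smoothing
model_state <- SSModel(y ~ -1 +
  SSMcustom(Z = matrix(c(1, 0), 1, 2),
            T = matrix(c(1, 1, 0, 1), 2, 2, byrow = TRUE),
            R = matrix(c(1, 0), 2, 1),
            Q = matrix(NA),
            P1inf = diag(2)),
  H = matrix(NA))

# Specification 2: second state fixed at 1, drift estimated as the NA element
# of T via updatefn; after estimation it is treated as a known constant
model_coef <- SSModel(y ~ -1 +
  SSMcustom(Z = matrix(c(1, 0), 1, 2),
            T = matrix(c(1, NA, 0, 1), 2, 2, byrow = TRUE),
            R = matrix(c(1, 0), 2, 1),
            Q = matrix(NA),
            a1 = matrix(c(0, 1), 2, 1),
            P1 = matrix(0, 2, 2),
            P1inf = diag(c(1, 0))),
  H = matrix(NA))

The smoothed state estimates from the two specifications should look very similar, but in the second one the estimated drift is plugged in as a fixed number, so the state variances will not reflect its uncertainty.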

For your second question, again I would need to see your reproducible example code to really understand what the issue is. Do you supply checkfn and updatefn correctly to the fitSSM function? I personally haven't used the check functionality much, so it could contain a bug as well.


yuanyuan2319 commented on July 19, 2024

Hi Helske,

Yes, that's exactly what I mean. I tried both approaches and I couldn't tell the difference between them visually from the plots of the state estimates; maybe I should check the numbers. Another concern is that the state equation I use is an AR(1) process. If it is more realistic to set the constant drift as a state variable, should I set the coefficients as states too? It seems that, except for the error variance, all other terms are updated through time.

As for the second one, how do you usually deal with the case where parameters need to be positive? Just set T <- exp(pars[1])?

I also noticed that in the newest KFAS version on GitHub, the output variables from the KFS() function are not complete; for example, t and epshat are missing.

Thanks.


helske commented on July 19, 2024

The state estimates should be identical with the two approaches, but their variances should not be.

You can't set the AR coefficients as state variables too, unless you move to nonlinear models, which are outside the scope of KFAS.

What do you mean by the output variables not being complete? By default not everything is computed; you can change the behaviour via the function arguments, see ?KFS.
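For instance, a small sketch requesting disturbance smoothing explicitly, so that epshat is included in the output:

out <- KFS(fit$model,
           filtering = c("state", "mean"),
           smoothing = c("state", "mean", "disturbance"))
out$epshat   # smoothed observation disturbances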

PS. Please make a separate issue for each problem you are facing, and please try to include reproducible code.

