
Comments (7)

kyr-pol commented on May 28, 2024

Hi @jaztsong ,

I haven't worked with GPytorch or pytorch, so I wouldn't know if some specific detail is wrong with your implementation, but the logic seems ok. A more general comment: I think what you are doing here is the unscented transform, which is a reasonable alternative to moment matching, but it might make policy optimisation harder (in our version for the gradients of the cost w.r.t. the controller's parameters we back propagate through the moment matching equations).
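For reference, the unscented transform mentioned above can be sketched in a few lines of numpy. This is a generic illustration (not code from this repo): a Gaussian is summarised by 2n+1 deterministic sigma points, which are pushed through the nonlinearity and re-averaged to estimate the output mean and covariance.

```python
import numpy as np

def unscented_transform(f, mu, Sigma, kappa=1.0):
    """Propagate N(mu, Sigma) through f via sigma points (plain UT sketch)."""
    n = mu.shape[0]
    L = np.linalg.cholesky((n + kappa) * Sigma)
    # 2n + 1 sigma points: the mean plus symmetric offsets along sqrt(Sigma)
    points = np.vstack([mu, mu + L.T, mu - L.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Y = np.array([f(p) for p in points])
    mean = w @ Y
    cov = (w[:, None] * (Y - mean)).T @ (Y - mean)
    return mean, cov

# Example: push a 2-D Gaussian through an elementwise nonlinearity.
mu = np.array([0.5, -0.2])
Sigma = 0.01 * np.eye(2)
m, S = unscented_transform(np.sin, mu, Sigma)
```

For a small input covariance like this, `m` stays close to `np.sin(mu)`; the weighted spread of the sigma points is what supplies the predictive covariance `S`.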

Now for your problem, how did you compare the sampling results to moment matching? Did you rewrite moment matching in pytorch? If so, did you test against our version? Otherwise, if you are comparing to our implementation of predict_on_noisy_input show us a MWE, which in this case I guess should have a small dataset and a GP model defined both in gpflow and gpytorch with the same hyperparameters. Then we should verify that on normal inputs (single points, not probabilistic) their predictions are the same and after that we'd look for a given noisy test point whether the predicted mean, variance and input-output covariance are similar (preferably with a large number of samples just to be sure).
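The three quantities to compare (predicted mean, variance, and input-output covariance under a noisy input) can each be estimated by plain Monte Carlo. A minimal sketch with a stand-in predictor `f` (hypothetical, not a real GP; in an actual check you would call the gpflow and gpytorch models' prediction functions here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a GP posterior mean (hypothetical; replace with model calls).
def f(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1]

mu = np.array([0.3, -0.1])           # mean of the noisy test input
S = np.diag([0.05, 0.02])            # its covariance
X = rng.multivariate_normal(mu, S, size=200_000)
Y = f(X)

mc_mean = Y.mean()                   # predicted mean
mc_var = Y.var()                     # predicted variance
# Input-output covariance Cov[x, f(x)], the third quantity to compare.
mc_inout = ((X - mu) * (Y - Y.mean())[:, None]).mean(axis=0)
```

With a large sample count these estimates are accurate to a few decimal places, which is enough to tell a genuine implementation discrepancy apart from Monte Carlo noise.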

from pilco.

jaztsong commented on May 28, 2024

@kyr-pol Thanks for the response.

I'm only using mgpr for the dynamics model, not for the controller, so the gradients of the controller's parameters don't travel through mgpr.

Currently I'm testing with tests/test_prediction.py by comparing the sampling results against the matlab code. The mean is not too bad, with a relative error around 10-30%, but the variance and input-output covariance can be way off the mark.


kyr-pol commented on May 28, 2024

What are your GP kernel's hyperparameters, and what is the covariance of the initial state?
For very small covariances the estimates with both methods should converge to the predicted values for mean and variance from the standard GP equations. If one or both of them don't, then you know what's wrong. If they do, but when the initial covariance increases the estimates diverge (and increasing the number of samples doesn't help) it could be just moment matching failing (after all it is an approximation).
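The convergence check described above is easy to script. A hedged sketch (stand-in function instead of a GP posterior mean, hypothetical values): shrink the input variance and watch the sampled estimate collapse onto the deterministic prediction, then grow it and watch the two diverge.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(3.0 * x)        # stand-in for the GP posterior mean

mu, n = 0.4, 100_000
means = {}
for var in (1e-6, 1e-2, 1.0):
    x = rng.normal(mu, np.sqrt(var), size=n)
    # For a tiny input covariance the sampled estimate should match the
    # deterministic prediction f(mu); growing divergence as var increases
    # is the regime where moment matching is only an approximation.
    means[var] = f(x).mean()
```

If the sampled estimate fails to match `f(mu)` even at the smallest variance, the implementation itself is suspect rather than the approximation.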


jaztsong commented on May 28, 2024

Thanks. I will run more experiments to compare the outputs.

One more issue I'd like to consult you on: I have had zero success getting a good policy in the examples (inv_double_pendulum.py and inverted_pendulum.py) in this repo (using all your code).

The only change I made was to replace the mujoco-based environments with roboschool-based ones. They are supposed to be identical to each other.

Any thoughts?


kyr-pol commented on May 28, 2024

So you are using the code from the examples folder, just with roboschool environments instead of the mujoco-based gym ones? If so, I would expect it to be a matter of a different representation of states, different scaling and so on, so you would have to set a different initial state, goal state, etc. Maybe the notebook in the examples folder can help: Hyperparameter setting and troubleshooting tips.ipynb. Good luck!


jaztsong commented on May 28, 2024

@kyr-pol Thank you for your suggestion. I did check the state definitions of roboschool vs gym, and they look like they are meant to be identical, but I don't know why it didn't work. I will definitely go check the hyperparameter settings.

One additional quick question: if I use a sampling approach, how does the backpropagation flow? Does it treat the sampled points as constant tensors?


kyr-pol commented on May 28, 2024

Hey, I'm not sure how pytorch, or tensorflow for that matter, would implement backpropagation with sampling, but yes, I'd expect the samples to be constant; after all, they don't change and are not optimised.
I recently came across a paper that uses numerical quadrature for propagating uncertainty in PILCO and the authors calculate gradients too: Numerical Quadrature for Probabilistic Policy Search. I hope that helps!
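One standard way this works in practice is the reparameterisation trick (this is how e.g. torch.distributions' rsample behaves, though the sketch below is plain numpy, not the repo's code): the noise samples eps are drawn once and held constant, while gradients flow through the distribution parameters that transform them.

```python
import numpy as np

rng = np.random.default_rng(2)
eps = rng.standard_normal(100_000)    # the sampled points: fixed constants

def expected_f(mu, sigma):
    # Reparameterised samples x = mu + sigma * eps: eps never changes, so
    # gradients of the objective flow through mu and sigma only.
    x = mu + sigma * eps
    return np.mean(x ** 2)

# Finite-difference gradient w.r.t. mu; analytically d/dmu E[x^2] = 2 * mu.
h = 1e-5
g = (expected_f(1.0 + h, 0.5) - expected_f(1.0 - h, 0.5)) / (2 * h)
```

Because the same eps is reused for both evaluations, the gradient estimate is deterministic given the samples, which matches the intuition above that the samples themselves are constants and not optimised.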

