Comments (7)
Hi @jaztsong ,
I haven't worked with GPyTorch or PyTorch, so I can't tell whether some specific detail of your implementation is wrong, but the logic seems OK. A more general comment: I think what you are doing here is the unscented transform, which is a reasonable alternative to moment matching, but it might make policy optimisation harder (in our version, for the gradients of the cost w.r.t. the controller's parameters, we backpropagate through the moment matching equations).
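For reference, here is a minimal numpy sketch of the unscented transform I mean (the sigma-point weights and the `kappa` scaling are generic textbook choices, not necessarily what your code uses):

```python
import numpy as np

def unscented_transform(f, mu, cov, kappa=1.0):
    """Propagate a Gaussian N(mu, cov) through f via sigma points."""
    n = len(mu)
    # Matrix square root of the scaled covariance
    L = np.linalg.cholesky((n + kappa) * cov)
    # 2n + 1 sigma points: the mean plus symmetric offsets along each column of L
    points = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    ys = np.array([f(p) for p in points])
    mean = w @ ys                        # weighted mean of propagated points
    diff = ys - mean
    var = (w[:, None] * diff).T @ diff   # weighted output covariance
    return mean, var
```

For a linear map the transform is exact, which makes a handy sanity check before trusting it on a GP mean function.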
Now for your problem, how did you compare the sampling results to moment matching? Did you rewrite moment matching in pytorch? If so, did you test against our version? Otherwise, if you are comparing to our implementation of predict_on_noisy_input, show us an MWE, which in this case I guess should have a small dataset and a GP model defined in both gpflow and gpytorch with the same hyperparameters. Then we should verify that on normal inputs (single points, not probabilistic) their predictions are the same, and after that we'd check, for a given noisy test point, whether the predicted mean, variance and input-output covariance are similar (preferably with a large number of samples, just to be sure).
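For the comparison itself, the Monte Carlo reference values could be computed along these lines (a numpy sketch with a stand-in `predict_mean` in place of a real GP's predictive mean; it only propagates the mean function, so a full comparison would also add the expected predictive variance of the GP):

```python
import numpy as np

def mc_moments(predict_mean, mu, cov, n_samples=100_000, seed=0):
    """Monte Carlo estimates of predictive mean, variance and
    input-output covariance for a noisy input x ~ N(mu, cov)."""
    rng = np.random.default_rng(seed)
    xs = rng.multivariate_normal(mu, cov, size=n_samples)
    ys = np.array([predict_mean(x) for x in xs])
    m = ys.mean()
    v = ys.var()
    # input-output covariance Cov[x, f(x)]
    c = ((xs - mu) * (ys - m)[:, None]).mean(axis=0)
    return m, v, c
```

With a large enough sample count these estimates should bracket whatever moment matching returns, up to the approximation error of moment matching itself.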
from pilco.
@kyr-pol Thanks for the response.
I'm only using mgpr for the dynamics model, not for the controller so far, so the gradients of the controller's parameters don't flow through mgpr.
Currently I'm testing with tests/test_prediction.py, comparing the sampling results against the MATLAB code. The mean is not too bad, with a relative error of around 10-30%, but the variance and input-output covariance can be way off the mark.
What are your GP kernel's hyperparameters, and what is the covariance of the initial state?
For very small covariances, the estimates from both methods should converge to the mean and variance predicted by the standard GP equations. If one or both of them don't, then you know what's wrong. If they do, but the estimates diverge as the initial covariance increases (and increasing the number of samples doesn't help), it could just be moment matching failing (after all, it is an approximation).
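That sanity check could look roughly like this (a numpy sketch with a toy nonlinearity standing in for the GP predictive mean; the point is just that shrinking the input covariance should drive the noisy-input prediction toward the deterministic one):

```python
import numpy as np

def mc_predict(f, mu, cov, n=200_000, seed=1):
    """Noisy-input prediction of a scalar function f via sampling."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(mu, np.sqrt(cov), size=n)
    ys = f(xs)
    return ys.mean(), ys.var()

# As the input variance shrinks, the noisy-input mean should approach
# the deterministic prediction f(mu), and the predictive variance -> 0.
mu = 0.7
results = {cov: mc_predict(np.sin, mu, cov) for cov in (1.0, 1e-2, 1e-4)}
```

If the sampling estimate fails even this test, the bug is in the sampling code rather than in moment matching.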
Thanks. I will run more experiments to compare the outputs.
One more issue I need to consult your wisdom on: I have had zero success getting a good policy in the examples in this repo (inv_double_pendulum.py and inverted_pendulum.py), using all your code. The only change I made was to replace the MuJoCo-based environments with Roboschool-based ones, which are supposed to be identical to each other. Any thoughts?
So you are using the code from the examples folder, just with roboschool environments instead of gym ones? If so, I would expect it to be a matter of a different representation of the states, different scaling, etc., so you would have to set a different initial state, goal state and so on. Maybe the notebook in the examples folder can help: Hyperparameter setting and troubleshooting tips.ipynb. Good luck!
@kyr-pol Thank you for your suggestion. I did check the state definitions of roboschool vs gym, and they look like they are meant to be identical, but I don't know why it didn't work. I will definitely go check the hyperparameter settings.
One additional quick question: if I use a sampling approach, how does backpropagation flow? Does it treat the sampled points as constant tensors?
Hey, I'm not sure how pytorch, or tensorflow for that matter, would implement backpropagation with sampling, but yes, I'd expect the samples to be constant; after all, they don't change and are not optimised.
I recently came across a paper that uses numerical quadrature for propagating uncertainty in PILCO and the authors calculate gradients too: Numerical Quadrature for Probabilistic Policy Search. I hope that helps!
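To illustrate what "samples are constant" would mean in practice, here is a numpy sketch of the reparameterisation idea (my assumption about how a pytorch implementation would behave, not something from your code): with x = mu + sigma * eps, the base samples eps are held fixed, while gradients flow through mu and sigma, so the Monte Carlo estimate is a smooth, differentiable function of the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(100_000)   # fixed base samples, never optimised

def expected_cost(mu, sigma):
    """Monte Carlo estimate of E[x^2] with x = mu + sigma * eps."""
    x = mu + sigma * eps             # gradients would flow through mu and sigma
    return np.mean(x ** 2)

# Because eps is fixed, a finite-difference gradient of the estimator
# is well defined and matches the analytic value d/dmu E[x^2] = 2*mu
# (up to Monte Carlo error).
mu, sigma, h = 1.0, 0.5, 1e-5
grad_mu = (expected_cost(mu + h, sigma) - expected_cost(mu - h, sigma)) / (2 * h)
```

In pytorch terms, eps would be a plain tensor with no gradient, while mu and sigma are the optimised parameters.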