khrylx / transform2act

[ICLR 2022 Oral] Official PyTorch Implementation of "Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design".

Home Page: https://sites.google.com/view/transform2act

License: MIT License

Topics: reinforcement-learning, agent-design, robot-design, iclr2022

transform2act's Introduction

Transform2Act

This repo contains the official implementation of our paper:

Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design
Ye Yuan, Yuda Song, Zhengyi Luo, Wen Sun, Kris Kitani
ICLR 2022 (Oral)
website | paper

Installation

Environment

  • Tested OS: macOS, Linux
  • Python >= 3.7
  • PyTorch == 1.8.0

Dependencies:

  1. Install PyTorch 1.8.0 with the correct CUDA version.
  2. Install the dependencies:
    pip install -r requirements.txt
    
  3. Install torch-geometric with the correct CUDA and PyTorch versions (change the CUDA and TORCH variables below):
    CUDA=cu102
    TORCH=1.8.0
    pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
    pip install torch-sparse==0.6.12 -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
    pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
    pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
    pip install torch-geometric==1.6.1
    
  4. Install mujoco-py following the instructions here.
  5. Set the following environment variable to avoid problems with multiprocess trajectory sampling:
    export OMP_NUM_THREADS=1
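
Once these steps are complete, a quick sanity check can confirm that the core dependencies import and that the pinned versions match (a minimal sketch; nothing in it is specific to this repo):

    import torch
    import torch_geometric
    import mujoco_py  # requires the MuJoCo install from step 4

    # Expected: torch 1.8.0 and torch-geometric 1.6.1, per the pins above.
    print(torch.__version__, torch_geometric.__version__)
    print(torch.cuda.is_available())  # should be True for GPU training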
    

Pretrained Models

  • You can download pretrained models from Google Drive or BaiduYun (password: 2x3q).
  • Once the transform2act_models.zip file is downloaded, unzip it under the results folder of this repo:
    mkdir results
    unzip transform2act_models.zip -d results
    
    Note that the pretrained models directly correspond to the config files in design_opt/cfg.

Training

You can train your own models using the provided configs in design_opt/cfg:

python design_opt/train.py --cfg hopper --gpu 0

You can replace hopper with {ant, gap, swimmer} to train other environments. Here is the correspondence between the configs and the environments in the paper: hopper - 2D Locomotion, ant - 3D Locomotion, swimmer - Swimmer, and gap - Gap Crosser.
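
If you want to train all four environments sequentially, a small launcher along these lines should work (a sketch; it assumes only the --cfg and --gpu flags shown above):

    import subprocess

    # Configs shipped in design_opt/cfg and their environments in the paper.
    CFGS = {
        'hopper': '2D Locomotion',
        'ant': '3D Locomotion',
        'swimmer': 'Swimmer',
        'gap': 'Gap Crosser',
    }

    for cfg, env in CFGS.items():
        print(f'Training {cfg} ({env})')
        subprocess.run(['python', 'design_opt/train.py', '--cfg', cfg, '--gpu', '0'],
                       check=True)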

Visualization

If you have a display, run the following command to visualize the pretrained model for the hopper:

python design_opt/eval.py --cfg hopper

Again, you can replace hopper with {ant, gap, swimmer} to visualize other environments.

You can also save the visualization into a video by using --save_video:

python design_opt/eval.py --cfg hopper --save_video

This will produce a video out/videos/hopper.mp4.

Citation

If you find our work useful in your research, please cite our paper Transform2Act:

@inproceedings{yuan2022transform2act,
  title={Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design},
  author={Yuan, Ye and Song, Yuda and Luo, Zhengyi and Sun, Wen and Kitani, Kris},
  booktitle={International Conference on Learning Representations},
  year={2022}
}

License

Please see the license for further details.

transform2act's People

Contributors

khrylx


transform2act's Issues

Some errors in the code

  1. There is no create_dirs argument in the Config class's __init__ function.

    cfg = Config(args.cfg, args.tmp, create_dirs=not (args.render or args.epoch != '0'))

  2. When num_threads >= 2, the code gets stuck at the following line:

    out[x_ind] = torch.addmm(b, x[x_ind], W.t())

    Multiprocessing might be the cause. When num_threads=1 and the Queue is not used, everything works fine. I set up a conda environment following the instructions in the README. Have you encountered this bug?
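
The hang with num_threads >= 2 is consistent with the OMP_NUM_THREADS=1 requirement from the installation notes above; if the shell export is easy to forget, the variable can also be set defensively in code (a sketch, not part of this repo):

    import os

    # Must run before torch/numpy are imported, or the OpenMP thread pools
    # inside the sampling workers can deadlock.
    os.environ['OMP_NUM_THREADS'] = '1'

    import torch  # imported after setting the env var on purpose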

The configuration is correct, but training does not run

Hello, I'm having some trouble trying to run this project. Below are my hardware and my configured environment.

NVIDIA GeForce RTX 3080 Ti; since it is a 30-series graphics card, the minimum usable CUDA version is CUDA 11.

[Screenshot 1]

python=3.7.0, torch=1.8.0+cu111; torch.cuda.is_available() returns True.

[Screenshot 2]

torch-geometric=2.1.0 (with matching CUDA and PyTorch versions) and mujoco_py=2.1 can both be imported successfully.

[Image]
[Image]
[Screenshot 6]

When running python design_opt/train.py --cfg hopper --gpu 0, the program reports no errors, but there is also no response.
[Screenshot 3]

The log file is empty.
[Screenshot 4]

The tensorboard interface also has no data.
[Screenshot 5]

Additionally, I have previously run this successfully with PyTorch 1.8.0+cu102 on another server.
Is this project unable to use CUDA 11.1 for training?

I can't use the pretrained model due to a state_dict error

python design_opt/eval.py --cfg swimmer --save_video

loading model from checkpoint: results/swimmer/models/best.p
Traceback (most recent call last):
  File "/home/dzyao/shiyuexin/shiyan/transform2act/design_opt/eval.py", line 28, in <module>
    agent = Transform2ActAgent(cfg=cfg, dtype=dtype, device=device, seed=cfg.seed, num_threads=1, training=False, checkpoint=epoch)
  File "/home/dzyao/shiyuexin/shiyan/transform2act/design_opt/agents/transform2act_agent.py", line 38, in __init__
    self.load_checkpoint(checkpoint)
  File "/home/dzyao/shiyuexin/shiyan/transform2act/design_opt/agents/transform2act_agent.py", line 162, in load_checkpoint
    self.policy_net.load_state_dict(model_cp['policy_dict'])
  File "/home/dzyao/anaconda3/envs/pytorch2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Transform2ActPolicy:
Missing key(s) in state_dict: "skel_gnn.gconv_layers.0.lin_rel.weight", "skel_gnn.gconv_layers.0.lin_rel.bias", "skel_gnn.gconv_layers.0.lin_root.weight", "skel_gnn.gconv_layers.1.lin_rel.weight", "skel_gnn.gconv_layers.1.lin_rel.bias", "skel_gnn.gconv_layers.1.lin_root.weight", "skel_gnn.gconv_layers.2.lin_rel.weight", "skel_gnn.gconv_layers.2.lin_rel.bias", "skel_gnn.gconv_layers.2.lin_root.weight", "attr_gnn.gconv_layers.0.lin_rel.weight", "attr_gnn.gconv_layers.0.lin_rel.bias", "attr_gnn.gconv_layers.0.lin_root.weight", "attr_gnn.gconv_layers.1.lin_rel.weight", "attr_gnn.gconv_layers.1.lin_rel.bias", "attr_gnn.gconv_layers.1.lin_root.weight", "attr_gnn.gconv_layers.2.lin_rel.weight", "attr_gnn.gconv_layers.2.lin_rel.bias", "attr_gnn.gconv_layers.2.lin_root.weight", "control_gnn.gconv_layers.0.lin_rel.weight", "control_gnn.gconv_layers.0.lin_rel.bias", "control_gnn.gconv_layers.0.lin_root.weight", "control_gnn.gconv_layers.1.lin_rel.weight", "control_gnn.gconv_layers.1.lin_rel.bias", "control_gnn.gconv_layers.1.lin_root.weight", "control_gnn.gconv_layers.2.lin_rel.weight", "control_gnn.gconv_layers.2.lin_rel.bias", "control_gnn.gconv_layers.2.lin_root.weight".
Unexpected key(s) in state_dict: "skel_gnn.gconv_layers.0.lin_l.weight", "skel_gnn.gconv_layers.0.lin_l.bias", "skel_gnn.gconv_layers.0.lin_r.weight", "skel_gnn.gconv_layers.1.lin_l.weight", "skel_gnn.gconv_layers.1.lin_l.bias", "skel_gnn.gconv_layers.1.lin_r.weight", "skel_gnn.gconv_layers.2.lin_l.weight", "skel_gnn.gconv_layers.2.lin_l.bias", "skel_gnn.gconv_layers.2.lin_r.weight", "attr_gnn.gconv_layers.0.lin_l.weight", "attr_gnn.gconv_layers.0.lin_l.bias", "attr_gnn.gconv_layers.0.lin_r.weight", "attr_gnn.gconv_layers.1.lin_l.weight", "attr_gnn.gconv_layers.1.lin_l.bias", "attr_gnn.gconv_layers.1.lin_r.weight", "attr_gnn.gconv_layers.2.lin_l.weight", "attr_gnn.gconv_layers.2.lin_l.bias", "attr_gnn.gconv_layers.2.lin_r.weight", "control_gnn.gconv_layers.0.lin_l.weight", "control_gnn.gconv_layers.0.lin_l.bias", "control_gnn.gconv_layers.0.lin_r.weight", "control_gnn.gconv_layers.1.lin_l.weight", "control_gnn.gconv_layers.1.lin_l.bias", "control_gnn.gconv_layers.1.lin_r.weight", "control_gnn.gconv_layers.2.lin_l.weight", "control_gnn.gconv_layers.2.lin_l.bias", "control_gnn.gconv_layers.2.lin_r.weight".
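
The missing/unexpected key pairs indicate that the checkpoint was saved with a torch-geometric release whose GraphConv parameters were named lin_l/lin_r, while the installed release expects lin_rel/lin_root. Installing torch-geometric==1.6.1 as pinned in the README should avoid the mismatch; alternatively, the checkpoint keys can be remapped. A sketch, under the assumption that the checkpoint loads with torch.load (swap in pickle if the repo stores plain pickle files):

    import torch

    ckpt_path = 'results/swimmer/models/best.p'
    model_cp = torch.load(ckpt_path, map_location='cpu')

    # Rename the old GraphConv parameter names to the ones the installed
    # torch-geometric expects, per the error message above.
    model_cp['policy_dict'] = {
        k.replace('.lin_l.', '.lin_rel.').replace('.lin_r.', '.lin_root.'): v
        for k, v in model_cp['policy_dict'].items()
    }

    # Write to a new file so the original checkpoint is preserved.
    torch.save(model_cp, ckpt_path + '.fixed')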
