
legged_gym's Introduction

Isaac Gym Environments for Legged Robots

This repository provides the environment used to train ANYmal (and other robots) to walk on rough terrain using NVIDIA's Isaac Gym. It includes all components needed for sim-to-real transfer: actuator network, friction & mass randomization, noisy observations and random pushes during training.

Maintainer: Nikita Rudin
Affiliation: Robotic Systems Lab, ETH Zurich
Contact: [email protected]


🔔 Announcement (09.01.2024)

With the shift from Isaac Gym to Isaac Sim at NVIDIA, we have migrated all the environments from this work to Orbit. Following this migration, this repository will receive limited updates and support. We encourage all users to migrate to the new framework for their applications.

Information about this work's locomotion-related tasks in Orbit is available here.


Useful Links

Project website: https://leggedrobotics.github.io/legged_gym/
Paper: https://arxiv.org/abs/2109.11978

Installation

  1. Create a new Python virtual env with Python 3.6, 3.7 or 3.8 (3.8 recommended)
  2. Install PyTorch 1.10 with CUDA 11.3:
    • pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
  3. Install Isaac Gym
    • Download and install Isaac Gym Preview 3 (Preview 2 will not work!) from https://developer.nvidia.com/isaac-gym
    • cd isaacgym/python && pip install -e .
    • Try running an example cd examples && python 1080_balls_of_solitude.py
    • For troubleshooting check the docs: isaacgym/docs/index.html
  4. Install rsl_rl (PPO implementation)
    • Clone https://github.com/leggedrobotics/rsl_rl
    • cd rsl_rl && pip install -e .
  5. Install legged_gym
    • Clone this repository
    • cd legged_gym && pip install -e .
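
A quick way to verify the installation is a short import check (a minimal sketch; note that isaacgym must be imported before torch, otherwise it raises an ImportError):

    # sanity_check.py -- isaacgym must be imported before torch
    import isaacgym  # noqa: F401
    import torch

    print(torch.__version__)          # expect 1.10.0+cu113
    print(torch.cuda.is_available())  # expect True on a CUDA machine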

Code Structure

  1. Each environment is defined by an env file (legged_robot.py) and a config file (legged_robot_config.py). The config file contains two classes: one with all the environment parameters (LeggedRobotCfg) and one with the training parameters (LeggedRobotCfgPPO).
  2. Both env and config classes use inheritance.
  3. Each non-zero reward scale specified in the cfg adds a reward function of the corresponding name (_reward_<name>) to the list of terms that are summed to obtain the total reward.
  4. Tasks must be registered using task_registry.register(name, EnvClass, EnvConfig, TrainConfig). This is done in envs/__init__.py, but can also be done from outside of this repository.
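
As a minimal sketch of items 1 and 4 above (the my_robot names are hypothetical placeholders, not files shipped with the repo), registering a custom task looks like this:

    # Hypothetical registration of a custom task, mirroring envs/__init__.py
    from legged_gym.utils.task_registry import task_registry
    from legged_gym.envs.base.legged_robot import LeggedRobot
    from my_robot_config import MyRobotCfg, MyRobotCfgPPO  # hypothetical module

    # name, env class, env cfg instance, train cfg instance
    task_registry.register("my_robot", LeggedRobot, MyRobotCfg(), MyRobotCfgPPO())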

Usage

  1. Train:
    python legged_gym/scripts/train.py --task=anymal_c_flat
    • To run on CPU add the following arguments: --sim_device=cpu, --rl_device=cpu (sim on CPU and rl on GPU is possible).
    • To run headless (no rendering) add --headless.
    • Important: To improve performance, once the training starts press 'v' to stop the rendering. You can then enable it again later to check the progress.
    • The trained policy is saved in legged_gym/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt, where <experiment_name> and <run_name> are defined in the train config.
    • The following command line arguments override the values set in the config files:
    • --task TASK: Task name.
    • --resume: Resume training from a checkpoint
    • --experiment_name EXPERIMENT_NAME: Name of the experiment to run or load.
    • --run_name RUN_NAME: Name of the run.
    • --load_run LOAD_RUN: Name of the run to load when resume=True. If -1: will load the last run.
    • --checkpoint CHECKPOINT: Saved model checkpoint number. If -1: will load the last checkpoint.
    • --num_envs NUM_ENVS: Number of environments to create.
    • --seed SEED: Random seed.
    • --max_iterations MAX_ITERATIONS: Maximum number of training iterations.
  2. Play a trained policy:
    python legged_gym/scripts/play.py --task=anymal_c_flat
    • By default, the loaded policy is the last model of the last run of the experiment folder.
    • Other runs/model iterations can be selected by setting load_run and checkpoint in the train config.
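
For example (illustrative values only), resuming the most recent run headlessly and then replaying it:

    python legged_gym/scripts/train.py --task=anymal_c_flat --resume --load_run=-1 --checkpoint=-1 --headless
    python legged_gym/scripts/play.py --task=anymal_c_flat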

Adding a new environment

The base environment legged_robot implements a rough terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and has no reward scales.

  1. Add a new folder to envs/ with <your_env>_config.py, which inherits from existing environment cfgs.
  2. If adding a new robot:
    • Add the corresponding assets to resources/.
    • In cfg set the asset path, define body names, default_joint_positions and PD gains. Specify the desired train_cfg and the name of the environment (python class).
    • In train_cfg set experiment_name and run_name
  3. (If needed) implement your environment in <your_env>.py, inherit from an existing environment, overwrite the desired functions and/or add your reward functions.
  4. Register your env in legged_gym/envs/__init__.py.
  5. Modify/tune other parameters in your cfg and cfg_train as needed. To remove a reward, set its scale to zero. Do not modify the parameters of other envs!
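
As a minimal sketch of steps 1-2 above (all names, paths and gains are hypothetical placeholders), a <your_env>_config.py could look like:

    # Hypothetical minimal config for a new robot.
    from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO

    class MyRobotCfg(LeggedRobotCfg):
        class asset(LeggedRobotCfg.asset):
            file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/my_robot/urdf/my_robot.urdf'
            foot_name = 'FOOT'  # substring used to find the feet bodies

        class init_state(LeggedRobotCfg.init_state):
            pos = [0.0, 0.0, 0.5]  # x, y, z [m]
            default_joint_angles = {'haa_joint': 0.0, 'hfe_joint': 0.4, 'kfe_joint': -0.8}

        class control(LeggedRobotCfg.control):
            stiffness = {'joint': 40.0}  # P gains [N*m/rad]
            damping = {'joint': 1.0}     # D gains [N*m*s/rad]

    class MyRobotCfgPPO(LeggedRobotCfgPPO):
        class runner(LeggedRobotCfgPPO.runner):
            experiment_name = 'my_robot'
            run_name = 'baseline'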

Troubleshooting

  1. If you get the following error: ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory, run: sudo apt install libpython3.8. You may also need to set export LD_LIBRARY_PATH=/path/to/libpython/directory, or for conda users export LD_LIBRARY_PATH=/path/to/conda/envs/your_env/lib (replace /path/to/ with the corresponding path).

Known Issues

  1. The contact forces reported by net_contact_force_tensor are unreliable when simulating on GPU with a triangle mesh terrain. A workaround is to use force sensors, but the forces are propagated through the sensors of consecutive bodies, resulting in undesirable behaviour. However, for a legged robot it is possible to add sensors to the feet/end effector only and get the expected results. When using the force sensors, make sure to exclude gravity from the reported forces with sensor_options.enable_forward_dynamics_forces = False. Example:
    sensor_pose = gymapi.Transform()
    for name in feet_names:
        sensor_options = gymapi.ForceSensorProperties()
        sensor_options.enable_forward_dynamics_forces = False # for example gravity
        sensor_options.enable_constraint_solver_forces = True # for example contacts
        sensor_options.use_world_frame = True # report forces in world frame (easier to get vertical components)
        index = self.gym.find_asset_rigid_body_index(robot_asset, name)
        self.gym.create_asset_force_sensor(robot_asset, index, sensor_pose, sensor_options)
    (...)

    sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
    self.gym.refresh_force_sensor_tensor(self.sim)
    force_sensor_readings = gymtorch.wrap_tensor(sensor_tensor)
    self.sensor_forces = force_sensor_readings.view(self.num_envs, 4, 6)[..., :3]
    (...)

    self.gym.refresh_force_sensor_tensor(self.sim)
    contact = self.sensor_forces[:, :, 2] > 1.

legged_gym's People

Contributors

mayankm96, nikitardn, sun-ge, thkkk, tomlankhorst, xerus


legged_gym's Issues

torch==1.10.0+cu113 appears incompatible with isaacgym v1.0 preview 3

OS Version: Ubuntu 21.04
Nvidia Driver: 495
Graphics: GTX 1660 Ti
Pytorch: PyTorch version 1.10.1+cu102

The README from GitHub advises using
pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

This causes all the isaacgym examples to fail, whereas the default version of PyTorch (torch>=1.8.0, torchvision>=0.9.0) works OK.
Alternatively, PyTorch 1.10.1 works fully in Isaac Gym but only partly in legged_gym (only anymal_c_flat works in GPU mode).
Presumably this causes an incompatibility between legged_gym and terrains?

Can you specify which PyTorch version is required for full compatibility?

Cannot access rsl_rl repository on BitBucket

Step 4 of the Installation requires access to the rsl_rl repository.

Install rsl_rl (PPO implementation)
Clone https://bitbucket.org/leggedrobotics/rsl_rl/src/master/
cd rsl_rl && git checkout develop && pip install -e .

However navigating to the URL gives an error:

We can't let you see this page
To access this page, you may need to log in with another account. You can also return to the previous page or go back to your dashboard.

rsl_rl is also not visible from the main Robotic Systems Lab BitBucket page. Is the repository not publicly available?

[Question] Troubles with play.py script after training

Hi,

I am encountering issues with the play.py script after training my robot. During training, the robot moves as expected, but when I attempt to use the trained network with play.py, the robot barely moves and remains mostly stationary. This issue is not limited to any specific robot model; I have experienced it with both anymal and a1.

Currently, I am running everything on the CPU. Could this be the reason for the problem? Are there any specific parameters that I need to adjust for play.py to work correctly with my trained network?

Any recommendations or advice would be greatly appreciated. Thank you in advance.

Best regards,

Louis

Warning: failed to preload PhysX libs

By following the installation procedure, when I tried to run an example (cd examples && python 1080_balls_of_solitude.py) in step 3, it failed:
(eth) C:\Users\MSI-NB\Desktop\ETH Massive Parallel DRL\ETH_anymal\isaacgym\python\examples>python 1080_balls_of_solitude.py
*** Warning: failed to preload PhysX libs
Traceback (most recent call last):
  File "1080_balls_of_solitude.py", line 25, in <module>
    from isaacgym import gymutil
  File "c:\users\msi-nb\desktop\eth massive parallel drl\eth_anymal\isaacgym\python\isaacgym\__init__.py", line 5, in <module>
    from isaacgym import gymapi
  File "c:\users\msi-nb\desktop\eth massive parallel drl\eth_anymal\isaacgym\python\isaacgym\gymapi.py", line 104, in <module>
    _import_active_version()
  File "c:\users\msi-nb\desktop\eth massive parallel drl\eth_anymal\isaacgym\python\isaacgym\gymapi.py", line 101, in _import_active_version
    raise RuntimeError("No gym module found for the active version of Python (%d.%d)" % (major, minor))
RuntimeError: No gym module found for the active version of Python (3.8)

So I tried to ignore it and continued with the remaining steps. After installing all the required packages as instructed, I tried to run 'python train.py --task=anymal_c_flat'. Unfortunately, the same error occurred, as follows:
(eth) C:\Users\MSI-NB\Desktop\ETH Massive Parallel DRL\ETH_anymal\legged_gym-master\legged_gym\scripts>python train.py --task=anymal_c_flat --sim_device=cpu --rl_device=cpu --num_envs=4
*** Warning: failed to preload PhysX libs
Traceback (most recent call last):
  File "train.py", line 35, in <module>
    import isaacgym
  File "c:\users\msi-nb\desktop\eth massive parallel drl\eth_anymal\isaacgym\python\isaacgym\__init__.py", line 5, in <module>
    from isaacgym import gymapi
  File "c:\users\msi-nb\desktop\eth massive parallel drl\eth_anymal\isaacgym\python\isaacgym\gymapi.py", line 104, in <module>
    _import_active_version()
  File "c:\users\msi-nb\desktop\eth massive parallel drl\eth_anymal\isaacgym\python\isaacgym\gymapi.py", line 101, in _import_active_version
    raise RuntimeError("No gym module found for the active version of Python (%d.%d)" % (major, minor))
RuntimeError: No gym module found for the active version of Python (3.8)

Does anyone have any idea how to fix this problem?

  • OS: Windows 10 x86_64
  • GPU: RTX 3070 Laptop
  • CUDA: 11.3.109
  • Nvidia Driver: 512.15
  • Torch Version: 1.10.0+cu113

Self Collisions

Hi,
When we test our trained policy on uneven terrain, we find that the legs cross each other when we command the robot to turn around. Then we realized that self-collision is turned off in the config files for uneven terrain, both in envs/anymal_c/mixed_terrains and in envs/a1. Interestingly, on flat ground self-collision is enabled. Does self-collision cause problems when training on uneven terrain? How can we avoid the legs crossing each other when commanding the robot to turn around?
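
For reference, the flag in question lives in the asset section of the cfg (its comment, quoted below, documents the counter-intuitive convention); a minimal sketch of re-enabling self-collisions in a rough-terrain cfg:

    class asset(AnymalCRoughCfg.asset):
        self_collisions = 0  # 1 to disable, 0 to enable...bitwise filter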

RuntimeError: Error building extension 'gymtorch'

I'm trying to execute scripts/train.py, but it reports an error like the one below.

[screenshot: Screenshot from 2023-01-09 12-00-03]

and my PyTorch versions are all as you specified:

torch==1.10.0+cu113
torchvision==0.11.1+cu113
torchaudio==0.10.0+cu113

I have no idea what is happening from this error.
Can anyone help me?

Terrain Type error

Hi there, when I select the terrain type mesh_type = 'heightfield',
it raises the following error:

AttributeError: module 'isaacgym.gymapi' has no attribute 'HeightFieldProperties'

Do you have any idea how to solve this error?

Thanks!

RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)

(legged_gym) leo@leo-pc:~/legged_gym$ python legged_gym/scripts/train.py --task=anymal_c_flat
Importing module 'gym_38' (/home/leo/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/leo/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.10.0+cu113
Device count 1
/home/leo/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/leo/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
Emitting ninja build file /home/leo/.cache/torch_extensions/py38_cu113/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
Setting seed: 1
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
/home/leo/miniforge3/envs/legged_gym/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Traceback (most recent call last):
  File "legged_gym/scripts/train.py", line 47, in <module>
    train(args)
  File "legged_gym/scripts/train.py", line 41, in train
    env, env_cfg = task_registry.make_env(name=args.task, args=args)
  File "/home/leo/legged_gym/legged_gym/utils/task_registry.py", line 97, in make_env
    env = task_class( cfg=env_cfg,
  File "/home/leo/legged_gym/legged_gym/envs/anymal_c/anymal.py", line 49, in __init__
    super().__init__(cfg, sim_params, physics_engine, sim_device, headless)
  File "/home/leo/legged_gym/legged_gym/envs/base/legged_robot.py", line 71, in __init__
    super().__init__(self.cfg, sim_params, physics_engine, sim_device, headless)
  File "/home/leo/legged_gym/legged_gym/envs/base/base_task.py", line 84, in __init__
    self.create_sim()
  File "/home/leo/legged_gym/legged_gym/envs/base/legged_robot.py", line 244, in create_sim
    self._create_envs()
  File "/home/leo/legged_gym/legged_gym/envs/base/legged_robot.py", line 675, in _create_envs
    pos[:2] += torch_rand_float(-1., 1., (2,1), device=self.device).squeeze(1)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)

Getting cudaImportExternalMemory when trying to add camera sensor to a1 robot

As I tried to place the camera on the Unitree A1 robot, it always gives me an error on the line gym.create_camera_sensor(env, camera_props), stating: [Error] [carb.gym.plugin] cudaImportExternalMemory failed on rgbImage buffer with error 999

This is the same error I get when I try running isaacgym/python/examples/interop_torch.py.

Could you help me with this? It's been a month since I asked for a solution on the NVIDIA Developer forums (https://forums.developer.nvidia.com/t/cudaimportexternalmemory-failed-on-rgbimage/212944), but no one has answered my query yet.

[Question] How to obtain the position of AnyMal

Hi!

I've been using this environment and I want to get the position/location of the ANYmal as the ground truth.

I checked #31 and used _rb_states = gym.acquire_rigid_body_state_tensor(sim) to get all the information about the current scene. However, I can hardly get the name of each rigid body (and the returned tensor also contains the information of obstacles), so I don't know how to locate my robot.

Do you have any idea?
Thanks!
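
One common approach (a minimal sketch, assuming a single actor per environment, as in the default envs) is to read the base state from the actor root state tensor, which holds one 13-value row per actor (position, quaternion, linear velocity, angular velocity):

    from isaacgym import gymtorch

    root_tensor = gym.acquire_actor_root_state_tensor(sim)
    root_states = gymtorch.wrap_tensor(root_tensor)  # shape (num_actors, 13)
    gym.refresh_actor_root_state_tensor(sim)
    base_pos = root_states[:, 0:3]   # x, y, z of each robot base
    base_quat = root_states[:, 3:7]  # orientation quaternion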

Random ValueError

Hi

I have been training with a custom robot based on the a1 example. I repeatedly get the following error a random number of seconds into the training:

Traceback (most recent call last):
  File "legged_gym/scripts/train.py", line 47, in <module>
    train(args)
  File "legged_gym/scripts/train.py", line 43, in train
    ppo_runner.learn(num_learning_iterations=train_cfg.runner.max_iterations, init_at_random_ep_len=True)
  File "/home/pr_admin/repos/rsl_rl/rsl_rl/runners/on_policy_runner.py", line 107, in learn
    actions = self.alg.act(obs, critic_obs)
  File "/home/pr_admin/repos/rsl_rl/rsl_rl/algorithms/ppo.py", line 94, in act
    self.transition.actions = self.actor_critic.act(obs).detach()
  File "/home/pr_admin/repos/rsl_rl/rsl_rl/modules/actor_critic.py", line 124, in act
    self.update_distribution(observations)
  File "/home/pr_admin/repos/rsl_rl/rsl_rl/modules/actor_critic.py", line 121, in update_distribution
    self.distribution = Normal(mean, mean*0. + self.std)
  File "/home/pr_admin/.local/lib/python3.6/site-packages/torch/distributions/normal.py", line 50, in __init__
    super(Normal, self).__init__(batch_shape, validate_args=validate_args)
  File "/home/pr_admin/.local/lib/python3.6/site-packages/torch/distributions/distribution.py", line 56, in __init__
    f"Expected parameter {param} "
ValueError: Expected parameter loc (Tensor of shape (4096, 12)) of distribution Normal(loc: torch.Size([4096, 12]), scale: torch.Size([4096, 12])) to satisfy the constraint Real(), but found invalid values:
tensor([[-0.3947, -0.7582,  0.0545,  ..., -0.0636, -0.6433, -0.7390],
        [ 0.8675,  2.4356,  0.0706,  ...,  1.3473,  0.9501,  0.3461],
        [ 0.1058, -2.1669,  0.2811,  ...,  0.1533, -0.2502,  0.6426],
        ...,
        [ 0.3339, -0.1643, -0.0863,  ...,  0.4542,  0.7566, -1.9923],
        [-0.5428, -1.2139, -0.6498,  ...,  0.0080,  1.8390,  0.1338],
        [-0.3889, -0.3290,  0.1571,  ..., -0.0942, -1.7548,  0.1372]],
       device='cuda:0')

Any idea where it could come from? Memory is OK, and it does not happen with anymal or a1.

Thanks!

How to add more assets in the envs?

I have tried to add some balls and cubes to the anymal_c_flat env,
so I modified anymal_c_flat_config.py as below:

class AnymalCFlatCfg( AnymalCRoughCfg ):
...     
     # add ball_file and cube_file in asset
     class asset( AnymalCRoughCfg.asset ):
        self_collisions = 0 # 1 to disable, 0 to enable...bitwise filter
        ball_file = "{LEGGED_GYM_ROOT_DIR}/resources/objects/ball.urdf"
        cube_file = "{LEGGED_GYM_ROOT_DIR}/resources/objects/cube.urdf"

and I created a new robot class inherited from LeggedRobot and modified the _create_envs function:

def _create_envs(self):
    """ Creates environments:
         1. loads the robot URDF/MJCF asset,
         2. For each environment
            2.1 creates the environment, 
            2.2 calls DOF and Rigid shape properties callbacks,
            2.3 create actor with these properties and add them to the env
         3. Store indices of different bodies of the robot
    """
    asset_path = self.cfg.asset.file.format(LEGGED_GYM_ROOT_DIR=LEGGED_GYM_ROOT_DIR)
    ball_asset_path = self.cfg.asset.ball_file.format(LEGGED_GYM_ROOT_DIR=LEGGED_GYM_ROOT_DIR)
    cube_asset_path = self.cfg.asset.cube_file.format(LEGGED_GYM_ROOT_DIR=LEGGED_GYM_ROOT_DIR)
    asset_root = os.path.dirname(asset_path)
    asset_file = os.path.basename(asset_path)


    objects_root = os.path.dirname(ball_asset_path)
    ball_file = os.path.basename(ball_asset_path)
    cube_file = os.path.basename(cube_asset_path)


    # assets use default AssetOptions
    ball_asset = self.gym.load_asset(self.sim, objects_root, ball_file, gymapi.AssetOptions())
    cube_asset = self.gym.load_asset(self.sim, objects_root, cube_file, gymapi.AssetOptions())
    object_asset_spacing = 0.05


    asset_options = gymapi.AssetOptions()
    asset_options.default_dof_drive_mode = self.cfg.asset.default_dof_drive_mode
    asset_options.collapse_fixed_joints = self.cfg.asset.collapse_fixed_joints
    asset_options.replace_cylinder_with_capsule = self.cfg.asset.replace_cylinder_with_capsule
    asset_options.flip_visual_attachments = self.cfg.asset.flip_visual_attachments
    asset_options.fix_base_link = self.cfg.asset.fix_base_link
    asset_options.density = self.cfg.asset.density
    asset_options.angular_damping = self.cfg.asset.angular_damping
    asset_options.linear_damping = self.cfg.asset.linear_damping
    asset_options.max_angular_velocity = self.cfg.asset.max_angular_velocity
    asset_options.max_linear_velocity = self.cfg.asset.max_linear_velocity
    asset_options.armature = self.cfg.asset.armature
    asset_options.thickness = self.cfg.asset.thickness
    asset_options.disable_gravity = self.cfg.asset.disable_gravity


    robot_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
    self.num_dof = self.gym.get_asset_dof_count(robot_asset)
    self.num_bodies = self.gym.get_asset_rigid_body_count(robot_asset)
    dof_props_asset = self.gym.get_asset_dof_properties(robot_asset)
    rigid_shape_props_asset = self.gym.get_asset_rigid_shape_properties(robot_asset)


    # save body names from the asset
    body_names = self.gym.get_asset_rigid_body_names(robot_asset)
    # print(body_names) 
    # ['base', 'LF_HIP', 'LF_THIGH', 'LF_SHANK', 'LF_FOOT', 'LH_HIP', 
    # 'LH_THIGH', 'LH_SHANK', 'LH_FOOT', 'RF_HIP', 'RF_THIGH', 
    # 'RF_SHANK', 'RF_FOOT', 'RH_HIP', 'RH_THIGH', 'RH_SHANK', 'RH_FOOT']
    self.dof_names = self.gym.get_asset_dof_names(robot_asset)
    self.num_bodies = len(body_names)
    self.num_dofs = len(self.dof_names)
    feet_names = [s for s in body_names if self.cfg.asset.foot_name in s]
    # print(feet_names) # ['LF_FOOT', 'LH_FOOT', 'RF_FOOT', 'RH_FOOT']
    penalized_contact_names = []
    for name in self.cfg.asset.penalize_contacts_on:
        penalized_contact_names.extend([s for s in body_names if name in s])
    termination_contact_names = []
    for name in self.cfg.asset.terminate_after_contacts_on:
        termination_contact_names.extend([s for s in body_names if name in s])


    base_init_state_list = self.cfg.init_state.pos + self.cfg.init_state.rot + self.cfg.init_state.lin_vel + self.cfg.init_state.ang_vel
    self.base_init_state = to_torch(base_init_state_list, device=self.device, requires_grad=False)
    start_pose = gymapi.Transform()
    start_pose.p = gymapi.Vec3(*self.base_init_state[:3])


    self._get_env_origins()
    env_lower = gymapi.Vec3(0., 0., 0.)
    env_upper = gymapi.Vec3(0., 0., 0.)
    self.actor_handles = []
    self.envs = []
    for i in range(self.num_envs):
        # create env instance
        env_handle = self.gym.create_env(self.sim, env_lower, env_upper, int(np.sqrt(self.num_envs)))
        
        # randomly add ball and cube
        num_objects = 5
        for j in range(num_objects):
            c = 0.5 + 0.5 * np.random.random(3)
            color = gymapi.Vec3(c[0], c[1], c[2])
            object_asset = ball_asset if i%2==0 else cube_asset


            pose = gymapi.Transform()
            pose.r = gymapi.Quat(0, 0, 0, 1)
            pose.p = gymapi.Vec3(object_asset_spacing*j, object_asset_spacing*j, 0.0)


            ahandle = self.gym.create_actor(env_handle, object_asset, pose, None, i, 0)
            self.gym.set_rigid_body_color(env_handle, ahandle, 0, gymapi.MESH_VISUAL_AND_COLLISION, color)


        
        pos = self.env_origins[i].clone()
        pos[:2] += torch_rand_float(-1., 1., (2,1), device=self.device).squeeze(1)
        start_pose.p = gymapi.Vec3(*pos)
            
        rigid_shape_props = self._process_rigid_shape_props(rigid_shape_props_asset, i)
        self.gym.set_asset_rigid_shape_properties(robot_asset, rigid_shape_props)
        actor_handle = self.gym.create_actor(env_handle, robot_asset, start_pose, self.cfg.asset.name, i, self.cfg.asset.self_collisions, 0)
        dof_props = self._process_dof_props(dof_props_asset, i)
        self.gym.set_actor_dof_properties(env_handle, actor_handle, dof_props)
        body_props = self.gym.get_actor_rigid_body_properties(env_handle, actor_handle)
        body_props = self._process_rigid_body_props(body_props, i)
        self.gym.set_actor_rigid_body_properties(env_handle, actor_handle, body_props, recomputeInertia=True)
        self.envs.append(env_handle)
        self.actor_handles.append(actor_handle)


    self.feet_indices = torch.zeros(len(feet_names), dtype=torch.long, device=self.device, requires_grad=False)
    for i in range(len(feet_names)):
        self.feet_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], feet_names[i])


    self.penalised_contact_indices = torch.zeros(len(penalized_contact_names), dtype=torch.long, device=self.device, requires_grad=False)
    for i in range(len(penalized_contact_names)):
        self.penalised_contact_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], penalized_contact_names[i])


    self.termination_contact_indices = torch.zeros(len(termination_contact_names), dtype=torch.long, device=self.device, requires_grad=False)
    for i in range(len(termination_contact_names)):
        self.termination_contact_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], termination_contact_names[i])

but it gets an error in _init_buffers().
What is the right way to add more assets?
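
A minimal sketch of one likely culprit (an assumption, not a confirmed fix): _init_buffers() views the root state tensor assuming one actor per environment, so with the extra objects the views need to account for the additional rows, e.g.:

    # Assumption: num_objects objects are created before the robot in each env,
    # so the robot is the last actor and the tensor has num_objects + 1 rows per env.
    actor_root_state = self.gym.acquire_actor_root_state_tensor(self.sim)
    all_root_states = gymtorch.wrap_tensor(actor_root_state)
    all_root_states = all_root_states.view(self.num_envs, num_objects + 1, 13)
    self.root_states = all_root_states[:, -1, :]  # robot root states only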

Configuration files and hyperparameter tuning

I see that you have used Python classes for config files. Is there any reason you choose Python classes over YAML files?

Also, given that you used Python classes, how did you perform the grid search on the parameters? I found that the nested class structure makes it messier to iterate over and get the attributes of the parameters that I want to search over. If you have the code doing the grid search, can you please share that?

Adding Observation History to Policy

Hi, thanks for the amazing repo. I am able to train and deploy policies on my Go1 robot without any issues.

I wanted to explore the idea of maintaining an observation_history tensor, passing it through an encoder, and adding the encoder output to the current observation to improve the performance of the policy.

For example, my self.obs_buf with the observation-history latent latent_obs added looks like this:

            latent_obs = self.obs_encoder(self.obs_storage).to(self.device)
            self.obs_buf = torch.cat(( 
                                        self.base_ang_vel,
                                        self.pitch.unsqueeze(1),
                                        self.roll.unsqueeze(1),
                                        self.commands[:,0].unsqueeze(1),
                                        self.commands[:,1].unsqueeze(1),
                                        self.commands[:,2].unsqueeze(1),
                                        (self.dof_pos - self.default_dof_pos),
                                        self.dof_vel,
                                        self.actions,
                                        latent_obs,
                                        ),dim=-1)

           # add the current obs to the buffer and remove the oldest one

To make debugging easy, I substituted latent_obs = self.obs_encoder(self.obs_storage).to(self.device) with
latent_obs = torch.randn((*correct_size_here), requires_grad=True, device = self.device, dtype=torch.float)

The primary change from the original implementation is that self.obs_buf should now have grad, and I verified that it does with pdb.set_trace().

When the training starts, loss.backward() works only the first time and throws the following error as soon as it's called for the second time in the for loop (NOTE: this is when self.obs_buf has grad):

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

Using retain_graph=True works without any errors, but the iterations are very slow, so it isn't a viable option.
I also tried adding obs_batch.detach_() and critic_obs_batch.detach_() after self.optimizer.step(), but it still doesn't work.

Any help or suggestions on how to fix this would be greatly appreciated.
Thank you in advance.
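
A minimal sketch of one workaround (an assumption: the encoder is not meant to be trained through the PPO loss): detach the latent before it enters obs_buf, so the rollout buffer stores no autograd graph. Training the encoder jointly would instead require recomputing its forward pass inside the update loop on stored raw observations.

    # Detach so stored observations carry no graph into the second backward().
    latent_obs = self.obs_encoder(self.obs_storage).to(self.device).detach()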

Hello, I want to load a ball or a door object in the legged gym with task a1

Describe the bug
I want to load a ball or a door object in legged_gym with task a1, but I always run into the problem "an illegal memory access was encountered".

The full traceback is as follows:
Traceback (most recent call last):
  File "legged_gym/scripts/train.py", line 21, in <module>
    train(args)
  File "legged_gym/scripts/train.py", line 15, in train
    ppo_runner, train_cfg = task_registry.make_alg_runner(env=env, name=args.task, args=args)
  File "/home/zxw/下载/IsaacGym_Preview_4_Package/legged_gym/legged_gym/utils/task_registry.py", line 147, in make_alg_runner
    runner = OnPolicyRunner(env, train_cfg_dict, log_dir, device=args.rl_device)
  File "/home/zxw/JD/parkour/rsl_rl/rsl_rl/runners/on_policy_runner.py", line 81, in __init__
    _, _ = self.env.reset()
  File "/home/zxw/下载/IsaacGym_Preview_4_Package/legged_gym/legged_gym/envs/base/base_task.py", line 84, in reset
    self.reset_idx(torch.arange(self.num_envs, device=self.device))
  File "/home/zxw/下载/IsaacGym_Preview_4_Package/legged_gym/legged_gym/envs/base/legged_robot.py", line 128, in reset_idx
    self._reset_root_states(env_ids)
  File "/home/zxw/下载/IsaacGym_Preview_4_Package/legged_gym/legged_gym/envs/base/legged_robot.py", line 390, in _reset_root_states
    self.root_states[env_ids] = self.base_init_state
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Could you provide a simple demo? Thanks a lot!

Best~

Why is up_axis=gymapi.UP_AXIS_Z not set?

Hi,

From the isaacgym docs, the up axis is set to the z-axis:

sim_params = gymapi.SimParams()
sim_params.up_axis = gymapi.UP_AXIS_Z
sim_params.gravity = gymapi.Vec3(0.0, 0.0, -9.8)

But it seems this variable is not set in this repo, yet it trains well. Did I miss something?

Thanks!

My URDF robot keeps flying away, do you have any tips?

I'm trying to train my custom quadruped robot using this library,
so I followed what you suggest in the README and it works so far.

But after I do the following (I set get_args --task, default: my_custom):

python train.py

The simulation shows the robot flying away ([GIF in original issue]).

Here are my collision images ([screenshot in original issue]).

And this is my URDF file
https://github.com/miercat0424/Custom_URDF-/blob/main/Quadruped_1.urdf

I think I need to tune the inertial settings, but I don't have any idea how.
Do you have any ideas or tips about making the URDF?

[Error] [carb.gym.plugin] Gym cuda error: invalid resource handle: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 6137

/usr/bin/env /home/PJLAB/geyuhong/anaconda3/envs/policydissect/bin/python /home/PJLAB/geyuhong/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 39341 -- /home/PJLAB/geyuhong/policydissect/play/play_anymal.py
*** Warning: failed to preload CUDA lib
*** Warning: failed to preload PhysX libs
Importing module 'gym_37' (/home/PJLAB/geyuhong/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_37.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/PJLAB/geyuhong/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.10.0+cu113
Device count 1
/home/PJLAB/geyuhong/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/PJLAB/geyuhong/.cache/torch_extensions/py37_cu113 as PyTorch extensions root...
Emitting ninja build file /home/PJLAB/geyuhong/.cache/torch_extensions/py37_cu113/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
pygame 2.3.0 (SDL 2.24.2, Python 3.7.16)
Hello from the pygame community. https://www.pygame.org/contribute.html
Setting seed: 1
Not connected to PVD
/buildAgent/work/99bede84aa0a52c2/source/physx/src/gpu/PxPhysXGpuModuleLoader.cpp (148) : internal error : libcuda.so!

[Warning] [carb.gym.plugin] Failed to create a PhysX CUDA Context Manager. Falling back to CPU.
Physics Engine: PhysX
Physics Device: cpu
GPU Pipeline: disabled
Backend TkAgg is interactive backend. Turning interactive mode on.
/home/PJLAB/geyuhong/anaconda3/envs/policydissect/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
[Error] [carb.gym.plugin] Gym cuda error: invalid resource handle: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 6137
[Error] [carb.gym.plugin] Must enable GPU pipeline to use state tensors
Traceback (most recent call last):
  File "/home/PJLAB/geyuhong/anaconda3/envs/policydissect/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/PJLAB/geyuhong/anaconda3/envs/policydissect/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/PJLAB/geyuhong/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/home/PJLAB/geyuhong/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/home/PJLAB/geyuhong/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/home/PJLAB/geyuhong/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 322, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/home/PJLAB/geyuhong/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 136, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/home/PJLAB/geyuhong/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/home/PJLAB/geyuhong/policydissect/play/play_anymal.py", line 46, in <module>
    play_anymal(args, activation_func=activation, map=forward_anymal, model_name="anymal_forward", parkour=False)
  File "/home/PJLAB/geyuhong/policydissect/policydissect/utils/isaacgym_utils.py", line 351, in play_anymal
    env, _ = task_registry.make_env(name=args.task, args=args, env_cfg=env_cfg)
  File "/home/PJLAB/geyuhong/policydissect/policydissect/legged_gym/utils/task_registry.py", line 103, in make_env
    headless=args.headless
  File "/home/PJLAB/geyuhong/policydissect/policydissect/legged_gym/envs/anymal_c/anymal.py", line 51, in __init__
    super().__init__(cfg, sim_params, physics_engine, sim_device, headless)
  File "/home/PJLAB/geyuhong/policydissect/policydissect/legged_gym/envs/base/legged_robot.py", line 87, in __init__
    self._init_buffers()
  File "/home/PJLAB/geyuhong/policydissect/policydissect/legged_gym/envs/anymal_c/anymal.py", line 65, in _init_buffers
    super()._init_buffers()
  File "/home/PJLAB/geyuhong/policydissect/policydissect/legged_gym/envs/base/legged_robot.py", line 572, in _init_buffers
    self.noise_scale_vec = self._get_noise_scale_vec(self.cfg)
  File "/home/PJLAB/geyuhong/policydissect/policydissect/legged_gym/envs/anymal_c/anymal.py", line 126, in _get_noise_scale_vec
    noise_vec = torch.zeros_like(self.obs_buf[0])
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

In /legged_gym/envs/base/legged_robot.py, line 488, I get '[Error] [carb.gym.plugin] Gym cuda error: invalid resource handle: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 6137', and after running line 489, I get '[Error] [carb.gym.plugin] Must enable GPU pipeline to use state tensors'.
These two errors prevent me from creating tensors on 'cuda'. I don't know how to solve this problem.
Besides, I ran the cases in /isaacgym/python/examples/ and found that I get the same error whenever a case calls 'self.gym.refresh_dof_state_tensor(self.sim)' and 'self.gym.refresh_actor_root_state_tensor(self.sim)', e.g., franka_nut_bolt_ik_osc.py and franka_cube_ik_osc.py.

Issue with training a robot with a passive joint

I plan to add a joint, like a bar on top of the Anymal, which makes it like a cart-pole. How can I exclude this joint from the action space, i.e., let it run like a passive roller without a motor?

Code migration

How can I transfer the trained controller to a real quadruped A1 robot?

Unexpected low performance

Describe the bug
I used the original training script to train anymal_c_flat as well as anymal_c_rough. However, 5000 iterations still result in a low mean reward (~20).

To Reproduce
Steps to reproduce the behavior:

  1. Execute python issacgym_anymal/scripts/train.py --task=anymal_c_flat
  2. The result shows low performance

Expected behavior
The ANYmal is expected to move forward efficiently, instead of wandering around more or less randomly.

System (please complete the following information):

  • OS: Ubuntu 20.04
  • GPU: RTX4090
  • CUDA: 11.6
  • Isaac Gym Preview 4

Additional context
Do I need to change the experiment setting? Or what can I do?

There Is No develop Branch & Some Suggestions

  • In the README file:
    cd rsl_rl && git checkout develop && pip install -e .
    but there is no develop branch in the rsl_rl.
    cd legged_gym && git checkout develop && pip install -e .
    same, there is no develop branch in this (legged_gym) repository
  • Besides, I think the pull request from sheim is reasonable: there is duplicated code, packages=find_packages(), which can make pip install -e . fail. It would be great if the author/maintainer could merge the pull request.
  • "Try running an example python examples/1080_balls_of_solitude.py" should be changed to "cd examples" and then "python 1080_balls_of_solitude.py".

Unable to specify GPU device on multi-GPU setup

Describe the bug
Unable to specify the GPU device to use on multi-GPU setup

To Reproduce
Steps to reproduce the behavior:

  1. Execute python train.py --graphics_device_id=0 --task=a1
  2. On a separate terminal, execute python train.py --graphics_device_id=1 --task=a1
  3. Observe that for both terminals, the selected GPU device is still cuda:0.

Expected behavior
Selected GPU device should show cuda:0 and cuda:1 on the different terminals.

System (please complete the following information):

  • Commit: 9ddda29
  • OS: Ubuntu 18.04
  • GPU: 4x A5000
  • CUDA: 11.4
  • GPU Driver: 470.82.01

Training does not start

The training does not start, although all previous steps succeeded. What could be the reason? ([screenshot in original issue])

It just gets stuck there for more than 10 minutes. I checked the CPU and GPU occupancy; it is obvious that training has not started.

obs, extras = self.env.get_observations() ValueError: too many values to unpack (expected 2)

Expected behavior
When I run the example, there is an error:
File "/home/hu/ana/rsl_rl/rsl_rl/runners/on_policy_runner.py", line 29, in __init__
obs, extras = self.env.get_observations()
ValueError: too many values to unpack (expected 2)

Here are the details:
(rl-go2) hu@hu-desktop:~/ana/legged_gym-master$ python3 legged_gym/scripts/train.py --task=anymal_c_flat
Importing module 'gym_38' (/home/hu/ana/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/hu/ana/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.10.0+cu113
Device count 1
/home/hu/ana/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/hu/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
Emitting ninja build file /home/hu/.cache/torch_extensions/py38_cu113/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
Setting seed: 1
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
/home/hu/.local/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Traceback (most recent call last):
  File "legged_gym/scripts/train.py", line 47, in <module>
    train(args)
  File "legged_gym/scripts/train.py", line 42, in train
    ppo_runner, train_cfg = task_registry.make_alg_runner(env=env, name=args.task, args=args)
  File "/home/hu/ana/legged_gym-master/legged_gym/utils/task_registry.py", line 147, in make_alg_runner
    runner = OnPolicyRunner(env, train_cfg_dict, log_dir, device=args.rl_device)
  File "/home/hu/ana/rsl_rl/rsl_rl/runners/on_policy_runner.py", line 29, in __init__
    obs, extras = self.env.get_observations()
ValueError: too many values to unpack (expected 2)

System (please complete the following information):

  • OS: Ubuntu 20.04
  • GPU: GeForce RTX 2060
  • CUDA: 11.0
  • GPU Driver: 525.147.05

How are episode rewards calculated?

I tried to add a reward term with a bias of 0.5, i.e., it should always be larger than 0.5, but the value is still close to zero at the first iteration.

[Error] [carb.gym.plugin] Failed to parse URDF file

The provided URDF files cannot be parsed. I am using Isaac Gym Preview 3 (version 1.0rc3), if that helps.
For all models, it says it cannot parse the provided color string.

System:

  • Commit: 0548121
  • OS: Ubuntu 20.04
  • GPU: Quadro RTX 5000
  • CUDA: 11.4
  • GPU Driver: 470.103.01

Out of memory when training

The training command does not work on my laptop if --sim_device=cuda; it works if I use --sim_device=cpu.
I tried using only 1 environment, but nothing seems to change.

OS Version: Ubuntu 21.04
Nvidia Driver: 470.82.00
Graphics: RTX 3060 Laptop
Pytorch: 1.10.0+cu113

(issac) cxx@cxx:~/Documents/Isaac/legged_gym$ python legged_gym/scripts/train.py --task=anymal_c_flat --num_envs 1 --sim_device=cuda --rl_device=cuda
Importing module 'gym_38' (/home/cxx/Documents/Isaac/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/cxx/Documents/Isaac/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.10.0+cu113
Device count 1
/home/cxx/Documents/Isaac/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/cxx/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
Emitting ninja build file /home/cxx/.cache/torch_extensions/py38_cu113/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
Setting seed: 1
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
/home/cxx/anaconda3/envs/issac/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
[Error] [carb.gym.plugin] Gym cuda error: out of memory: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 1718
[Error] [carb.gym.plugin] Gym cuda error: invalid resource handle: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 6003
[Error] [carb.gym.plugin] Gym cuda error: out of memory: ../../../source/plugins/carb/gym/impl/Gym/GymPhysXCuda.cu: 991
[Error] [carb.gym.plugin] Gym cuda error: invalid resource handle: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 5859
Traceback (most recent call last):
  File "legged_gym/scripts/train.py", line 47, in <module>
    train(args)
  File "legged_gym/scripts/train.py", line 41, in train
    env, env_cfg = task_registry.make_env(name=args.task, args=args)
  File "/home/cxx/Documents/Isaac/legged_gym/legged_gym/utils/task_registry.py", line 97, in make_env
    env = task_class(   cfg=env_cfg,
  File "/home/cxx/Documents/Isaac/legged_gym/legged_gym/envs/anymal_c/anymal.py", line 49, in __init__
    super().__init__(cfg, sim_params, physics_engine, sim_device, headless)
  File "/home/cxx/Documents/Isaac/legged_gym/legged_gym/envs/base/legged_robot.py", line 75, in __init__
    self._init_buffers()
  File "/home/cxx/Documents/Isaac/legged_gym/legged_gym/envs/anymal_c/anymal.py", line 63, in _init_buffers
    super()._init_buffers()
  File "/home/cxx/Documents/Isaac/legged_gym/legged_gym/envs/base/legged_robot.py", line 505, in _init_buffers
    self.gravity_vec = to_torch(get_axis_params(-1., self.up_axis_idx), device=self.device).repeat((self.num_envs, 1))
  File "/home/cxx/Documents/Isaac/isaacgym/python/isaacgym/torch_utils.py", line 16, in to_torch
    return torch.tensor(x, dtype=dtype, device=device, requires_grad=requires_grad)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

To make an interactive environment upon the given ones

I am planning to make an interactive environment just like isaacgym/python/examples/projectiles.py, where with the click of a mouse button we can throw projectiles toward the robots (in my case the Unitree A1 robot). I tried including the steps in the _create_envs method in legged_robot.py, but I could only include this much of the code:

self.proj_env = self.gym.create_env(self.sim, env_lower, env_upper, int(np.sqrt(self.num_envs)))
proj_asset_options = gymapi.AssetOptions()
proj_asset_options.density = 0.01
proj_asset = self.gym.create_sphere(self.sim, 0.05, proj_asset_options)
self.projectiles = []
for i in range(self.num_projectiles):
    proj_pose = gymapi.Transform()
    proj_pose.p = gymapi.Vec3(i * 0.5, 1.0, 10)
    proj_pose.r = gymapi.Quat(0, 0, 0, 1)

    # create actors which will collide with actors in any environment
    proj_handle = self.gym.create_actor(self.proj_env, proj_asset, proj_pose, "projectile_" + str(i), -1, 0)

    # set each projectile to a different, random color
    proj_color = 0.5 + 0.5 * np.random.random(3)
    self.gym.set_rigid_body_color(self.proj_env, proj_handle, 0, gymapi.MESH_VISUAL_AND_COLLISION, gymapi.Vec3(proj_color[0], proj_color[1], proj_color[2]))
    self.projectiles.append(proj_handle)

Now when I include the next part, that is:

proj_index = 0
for event in self.gym.query_viewer_action_events(self.viewer):
    if event.action == 'space_shoot' or event.action == 'mouse_shoot' and event.value > 0:
        pos = self.gym.get_viewer_mouse_position(self.viewer)
        cam_pose = self.gym.get_viewer_camera_transform(self.viewer, self.proj_env)
        cam_fwd = cam_pose.r.rotate(gymapi.Vec3(0, 0, 1))
        spawn = cam_pose.p
        speed = 25
        vel = cam_fwd * speed
        ang_vel = 1.57 - 3.141 * np.random.random(3)

        proj_handle = self.projectiles[proj_index]
        state = self.gym.get_actor_rigid_body_states(self.proj_env, proj_handle, gymapi.STATE_NONE)
        state['pose']['p'].fill((spawn.x, spawn.y, spawn.z))
        state['pose']['r'].fill((0, 0, 0, 1))
        state['vel']['linear'].fill((vel.x, vel.y, vel.z))
        state['vel']['angular'].fill((ang_vel[0], ang_vel[1], ang_vel[2]))
        self.gym.set_actor_rigid_body_states(self.proj_env, proj_handle, state, gymapi.STATE_ALL)
        proj_index = (proj_index + 1) % len(self.projectiles)

I get an AttributeError, as self.viewer is not part of the LeggedRobot class.
I then tried to put the second part into the step() method, and I get an Isaac Gym error: [Error] [carb.gym.plugin] Function GymGetActorRigidBodyStates cannot be used with the GPU pipeline after simulation starts. Please use the tensor API if possible. See docs/programming/tensors.html for more info.
I learned from the Isaac Gym docs that gym.get_actor_rigid_body_states can be used with the CPU pipeline but not with the GPU one.
Is there a way I can do the whole interactive simulation via the GPU?
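
A minimal sketch of the tensor-API equivalent (assumptions: proj_global_index is the projectile's global actor index and device is the sim device, both illustrative; spawn and vel are as in the snippet above):

    import torch
    from isaacgym import gymtorch

    root_tensor = gym.acquire_actor_root_state_tensor(sim)
    root_states = gymtorch.wrap_tensor(root_tensor)  # (num_actors, 13)

    # write pose (0:3 position, 3:7 quaternion) and velocity (7:10 linear, 10:13 angular)
    root_states[proj_global_index, 0:3] = torch.tensor([spawn.x, spawn.y, spawn.z], device=device)
    root_states[proj_global_index, 7:10] = torch.tensor([vel.x, vel.y, vel.z], device=device)

    proj_idx = torch.tensor([proj_global_index], dtype=torch.int32, device=device)
    gym.set_actor_root_state_tensor_indexed(sim, gymtorch.unwrap_tensor(root_states),
                                            gymtorch.unwrap_tensor(proj_idx), 1)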

Using trained weights on real-robot: Minimal Example

Help required

Hi Dear Legged Gym Team,

Could you provide a small minimal example of using the trained weights with a real robot and not the GYM or simulator? Or perhaps point towards a repository or example that has done so.

Thanks for your contributions ^_^

Terrain and A1/Cassie only working on CPU

OS Version: Ubuntu 21.04
Nvidia Driver: 495
Graphics: GTX 1660 Ti
Pytorch: PyTorch version 1.10.1+cu102

Hi, I tried anymal_c_flat and it works fine on a GTX 1660 Ti using nvidia-driver-495.
When I try to run anymal_c_rough, it only works on the CPU pipeline; otherwise the terminal says "killed".
Cassie works on the CPU pipeline (python3 train.py --task=cassie --num_envs=900 --sim_device=cpu)
but will not let me run with rl_device=cuda.

How do I get it all running on the GPU, or is my GPU not advanced enough?

Sim-to-real Implementation

Hello,

I am wondering if you have any guidelines for sim-to-real deployment.
I am planning to deploy the learned policy on the Unitree A1 robot.

Thanks in advance!

Clarification on `armature` and `thickness` Parameters in `asset` Class

Hello,

I was reading the following code and came across two parameters: armature and thickness. Being relatively new to working with the underlying simulator, IsaacGym, I found myself in need of some clarification on these parameters, especially since I couldn't find substantial information on them online.

Here is the segment of the code I'm referring to:

    class asset(PrefixProto, cli=False):
        # ... (other parts of the code)
        armature = 0.
        thickness = 0.01

I have a few questions that I'm hoping you might be able to assist with:

  1. Armature

    • Could you please elaborate on what the armature parameter signifies within this class?
    • How is this parameter utilized in the simulation process?
    • Why is the default value set to 0.0? How does this choice impact the simulation?
  2. Thickness

    • Similarly, could you elucidate what the thickness parameter represents?
    • How does changing this value influence the behavior or performance of the simulation?
    • Why is the default value set to 0.01? How does this choice impact the simulation?

I would highly appreciate any assistance or insights you can provide to help me understand these parameters better. Please excuse me if my questions seem too basic; I'm still navigating my way around this simulator and I'm eager to learn more.

Thank you very much for your time!

Failed to train unitree aliengo robot using legged_gym

In the official aliengo URDF, the {FR,FL,RR,RL} thigh joints are defined as continuous. When we used this URDF to train the policy, the robot produced unstable actions and failed to stand up.
Does legged_gym support training aliengo (with continuous joints)?

Fail to support different envs

Hello! I am currently attempting to train robots in different environments using a set of networks, but I have encountered an issue where it seems that isaacgym does not support creating more than one simulation instance per process. The following code and error message provide a brief explanation:

def train(args):
    env, env_cfg = task_registry.make_env(name=args.task, args=args) # args.task = 'anymal_c_flat'
    args_ = copy.deepcopy(args)
    args_.task = 'cassie'
    env_, env_cfg_ = task_registry.make_env(name=args_.task, args=args_)

And after running it, the console shows:

/buildAgent/work/99bede84aa0a52c2/source/foundation/FdFoundation.cpp (166) : invalid operation : Foundation object exists already. Only one instance per process can be created.

How does the privileged observation work?

Thanks for your great contribution!

I notice that you use the privileged observation as the critic observation for asymmetric training in PPO, but you haven't mentioned this in the paper. Could you please explain this part more clearly?

Plus, I notice that in other works by your team the privileged observation is used for distillation, so that it can be reconstructed by the student policy. Are the two privileged observations the same? If so, how does it work?

How to train the actuator network?

Thank you for the great work.

I am wondering how I can train the actuator network for my Unitree robot.
Do you have any repo/code that you used to obtain the actuator network for the ANYmal C (which is in resources)?

Thank you.

How do I specify a certain terrain

I see that there are terrain setting parameters in the legged_robot_config.py file.

    class terrain:
        selected = False # select a unique terrain type and pass all arguments
        terrain_kwargs = None # Dict of arguments for selected terrain

How do I set the terrain_kwargs parameter, and which parameters are currently supported? Could you give me an example?
This would help me a lot, thank you.
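
A minimal sketch of a possible config (an assumption: the 'type' key names a generator function from isaacgym.terrain_utils, e.g. random_uniform_terrain, and the remaining keys are forwarded as that generator's keyword arguments; the exact keys depend on the generator's signature):

    class terrain(LeggedRobotCfg.terrain):
        mesh_type = 'trimesh'
        selected = True       # use the selected terrain type everywhere
        terrain_kwargs = {
            'type': 'random_uniform_terrain',  # assumed generator name
            'min_height': -0.05,               # [m]
            'max_height': 0.05,                # [m]
            'step': 0.005,                     # [m] height discretization
        }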
