
Decision Intelligence Platform for Autonomous Driving simulation.

Home Page: https://opendilab.github.io/DI-drive/

License: Apache License 2.0

Python 99.76% Shell 0.24%
reinforcement-learning imitation-learning autodrive carla pytorch autonomous-driving metadrive


DI-drive


Introduction


DI-drive is an open-source Decision Intelligence Platform for Autonomous Driving simulation. DI-drive applies different simulators, datasets, and cases to Decision Intelligence training and testing for Autonomous Driving policies. It aims to

  • run Imitation Learning, Reinforcement Learning, GAIL, etc. on a single platform with a simple, unified entry point
  • apply Decision Intelligence to any part of the driving simulation
  • support the input and output formats of most driving simulators
  • run designed driving cases and scenarios

and, most importantly, to put all of these together!

DI-drive uses DI-engine, a Reinforcement Learning platform, to build most of its running modules and demos. DI-drive currently supports Carla, an open-source Autonomous Driving simulator, and MetaDrive, a simulator providing diverse driving scenarios for generalizable Reinforcement Learning. DI-drive is an application platform under OpenDILab.


Visualization of Carla driving in DI-drive


Installation

DI-drive runs with Python >= 3.5 and DI-engine >= 0.3.1 (PyTorch is required by DI-engine). You can install DI-drive from the source code:

git clone https://github.com/opendilab/DI-drive.git
cd DI-drive
pip install -e .

DI-engine and PyTorch will be installed automatically.

In addition, at least one of the two simulators, Carla and MetaDrive, needs to be installed to run DI-drive. MetaDrive can easily be installed via pip. If a Carla server is used for simulation, users also need to install the Carla Python API. You can use either one of them or both. Make sure to modify the activated simulators in core.__init__.py to avoid import errors.
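Since only the simulators activated in core.__init__.py are imported, one defensive pattern for deciding what to activate is to probe which simulator packages are importable. This is an illustrative sketch, not DI-drive's actual code; only the pip package names `carla` and `metadrive` are taken from the text above:

```python
def detect_simulators():
    """Return the subset of supported simulator packages importable here.

    Illustrative helper, not part of DI-drive.
    """
    available = []
    for name in ('carla', 'metadrive'):
        try:
            __import__(name)  # succeeds only if the package is installed
            available.append(name)
        except ImportError:
            pass
    return available
```

For example, `detect_simulators()` returns `['metadrive']` in an environment where only MetaDrive was pip-installed.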

Please refer to the installation guide for details about the installation of DI-drive.

Quick Start

Carla

Users can check the Carla installation and watch the visualization by running an 'auto' policy in the provided town map. You need to start a Carla server first and change the Carla host and port in auto_run.py to yours. Then run:

cd demo/auto_run
python auto_run.py
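For reference, the host and port settings to change usually sit in the demo's config dict. The key names below are assumptions modeled on DI-drive-style configs (the `[start, stop, step]` port-range form appears elsewhere in this page); check auto_run.py for the exact ones:

```python
# Hypothetical shape of the override in auto_run.py; key names may differ.
main_config = dict(
    server=[dict(
        carla_host='localhost',        # replace with your Carla server's IP
        carla_ports=[9000, 9002, 2],   # assumed [start, stop, step] port range
    )],
)
```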

MetaDrive

After installation of MetaDrive, you can start an RL training in MetaDrive Macro Environment by running the following code:

cd demo/metadrive
python macro_env_dqn_train.py

We provide detailed guidance for IL and RL experiments in all simulators, as well as quick runs of existing policies for beginners, in our documentation. Please refer to it if you have further questions.

Model Zoo

Imitation Learning

Reinforcement Learning

Other Method

DI-drive Casezoo

DI-drive Casezoo is a scenario set for training and testing Autonomous Driving policies in a simulator. Casezoo combines data collected from real vehicles with Shanghai Lingang road license test scenarios. Casezoo supports both evaluation and training, which makes simulation closer to real driving.

Please see casezoo instruction for details about Casezoo.

File Structure

DI-drive
|-- .gitignore
|-- .style.yapf
|-- CHANGELOG
|-- LICENSE
|-- README.md
|-- format.sh
|-- setup.py
|-- core
|   |-- data
|   |   |-- base_collector.py
|   |   |-- benchmark_dataset_saver.py
|   |   |-- bev_vae_dataset.py
|   |   |-- carla_benchmark_collector.py
|   |   |-- cict_dataset.py
|   |   |-- cilrs_dataset.py
|   |   |-- lbc_dataset.py
|   |   |-- benchmark
|   |   |-- casezoo
|   |   |-- srunner
|   |-- envs
|   |   |-- base_drive_env.py
|   |   |-- drive_env_wrapper.py
|   |   |-- md_macro_env.py
|   |   |-- md_traj_env.py
|   |   |-- scenario_carla_env.py
|   |   |-- simple_carla_env.py
|   |-- eval
|   |   |-- base_evaluator.py
|   |   |-- carla_benchmark_evaluator.py
|   |   |-- serial_evaluator.py
|   |   |-- single_carla_evaluator.py
|   |-- models
|   |   |-- bev_speed_model.py
|   |   |-- cilrs_model.py
|   |   |-- common_model.py
|   |   |-- lbc_model.py
|   |   |-- model_wrappers.py
|   |   |-- mpc_controller.py
|   |   |-- pid_controller.py
|   |   |-- vae_model.py
|   |   |-- vehicle_controller.py
|   |-- policy
|   |   |-- traj_policy
|   |   |-- auto_policy.py
|   |   |-- base_carla_policy.py
|   |   |-- cilrs_policy.py
|   |   |-- lbc_policy.py
|   |-- simulators
|   |   |-- base_simulator.py
|   |   |-- carla_data_provider.py
|   |   |-- carla_scenario_simulator.py
|   |   |-- carla_simulator.py
|   |   |-- fake_simulator.py
|   |   |-- srunner
|   |-- utils
|       |-- data_utils
|       |-- env_utils
|       |-- learner_utils
|       |-- model_utils
|       |-- others
|       |-- planner
|       |-- simulator_utils
|-- demo
|   |-- auto_run
|   |-- cict
|   |-- cilrs
|   |-- implicit
|   |-- latent_rl
|   |-- lbc
|   |-- metadrive
|   |-- simple_rl
|-- docs
|   |-- casezoo_instruction.md
|   |-- figs
|   |-- source

Join and Contribute

We appreciate all contributions to improve DI-drive, in both algorithms and system design. Welcome to the OpenDILab community! Scan the QR code to add us on WeChat:

qr

Or you can reach us via Slack or email ([email protected]).

License

DI-drive is released under the Apache 2.0 license.

Citation

@misc{didrive,
    title={{DI-drive: OpenDILab} Decision Intelligence platform for Autonomous Driving simulation},
    author={DI-drive Contributors},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/opendilab/DI-drive}},
    year={2021},
}


di-drive's Issues

Hi, I cannot run auto_run_case.py [CutIn]

When I try to run the command:

Run single scenario

python auto_run_case.py --host [CARLA HOST] --port [CARLA_PORT] --scenario [SCENARIO_NAME]

I have configured it like this:
parser = argparse.ArgumentParser(description=description, formatter_class=RawTextHelpFormatter)
parser.add_argument('--route', help='Run a route as a scenario (input:(route_file,scenario_file,[route id]))', nargs='+', type=str)
parser.add_argument('--scenario', default='CutIn', help='Run a single scenario (input: scenario name)', type=str)

parser.add_argument('--host', default='localhost', help='IP of the host server (default: localhost)')
parser.add_argument('--port', default=9000, help='TCP port to listen to (default: 9000)', type=int)
parser.add_argument('--tm-port', default=None, help='Port to use for the TrafficManager (default: None)', type=int)

Automatically core dumped at server

When I finished installing DI-drive and ran auto_run.py, the server core dumped within a few seconds.
I also tried other Python files, and they all caused this problem. I also tried the examples provided in Carla/PythonAPI/examples, and that problem didn't arise.

I used the latest DI-drive and DI-Engine 3.1
Python version: 3.7
torch version: 1.8.0
Ubuntu 22.04

Traceback (most recent call last):
  File "/home/yang/DI-drive/demo/auto_run/auto_run.py", line 80, in <module>
    main(main_config)
  File "/home/yang/DI-drive/demo/auto_run/auto_run.py", line 75, in main
    evaluator.eval(cfg.policy.eval.evaluator.reset_param)
  File "/home/yang/DI-drive/core/eval/single_carla_evaluator.py", line 74, in eval
    obs = self._env.reset(**reset_param)
  File "/home/yang/DI-drive/core/envs/drive_env_wrapper.py", line 57, in reset
    obs = {'birdview': birdview, 'vehicle_state': obs['vehicle_state']}
KeyError: 'vehicle_state'
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 2623 (sensor.other.collision) 
terminate called without an active exception
Aborted (core dumped)

No such file or directory: '/home/DI-drive/didrive/lib/python3.8/site-packages/core/data/benchmark/corl2017/099/full_Town01.txt'

Hi,
I am stuck with this error when I run the training file for RL training like this:

python3 simple_rl_train.py -p ddpg

/home/DI-drive/didrive/lib/python3.8/site-packages/treevalue/tree/integration/torch.py:21: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
register_for_torch(TreeValue)
/home/DI-drive/didrive/lib/python3.8/site-packages/treevalue/tree/integration/torch.py:22: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
register_for_torch(FastTreeValue)
[07-04 10:02:39] WARNING If you want to use numba to default_helper.py:450
speed up segment tree, please
install numba first
/home/DI-drive/didrive/lib/python3.8/site-packages/gym/envs/registration.py:440: UserWarning: WARN: The registry.env_specs property along with EnvSpecTree is deprecated. Please use registry directly as a dictionary instead.
logger.warn(
[ENV] Register environments: ['SimpleCarla-v1', 'ScenarioCarla-v1'].
/home/DI-drive/didrive/lib/python3.8/site-packages/gym/core.py:329: DeprecationWarning: WARN: Initializing wrapper in old step API which returns one bool instead of two. It is recommended to set new_step_api=True to use new step API. This will be the default behaviour in future.
deprecation(
Traceback (most recent call last):
  File "simple_rl_train.py", line 243, in <module>
    main(args)
  File "simple_rl_train.py", line 153, in main
    collector_env = SyncSubprocessEnvManager(
  File "/home/DI-drive/didrive/lib/python3.8/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 79, in __init__
    super().__init__(env_fn, cfg)
  File "/home/DI-drive/didrive/lib/python3.8/site-packages/ding/envs/env_manager/base_env_manager.py", line 135, in __init__
    self._env_ref = self._env_fn0
  File "simple_rl_train.py", line 35, in wrapped_continuous_env
    return BenchmarkEnvWrapper(ContinuousEnvWrapper(env), wrapper_cfg)
  File "/home/DI-drive/didrive/lib/python3.8/site-packages/core/envs/drive_env_wrapper.py", line 153, in __init__
    pose_pairs = read_pose_txt(benchmark_dir, poses_txt)
  File "/home/DI-drive/didrive/lib/python3.8/site-packages/core/data/benchmark/benchmark_utils.py", line 32, in read_pose_txt
    pose_pairs = pairs_file.read_text().strip().split('\n')
  File "/usr/lib/python3.8/pathlib.py", line 1236, in read_text
    with self.open(mode='r', encoding=encoding, errors=errors) as f:
  File "/usr/lib/python3.8/pathlib.py", line 1222, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
  File "/usr/lib/python3.8/pathlib.py", line 1078, in _opener
    return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/DI-drive/didrive/lib/python3.8/site-packages/core/data/benchmark/corl2017/099/full_Town01.txt'

How can I resolve this?

Also, I want to know: does this support multi-agent scenarios?

Could not resolve host: gitlab.bj.sensetime.com

When I try to install DI-drive following the instruction:
git clone https://gitlab.bj.sensetime.com/open-XLab/cell/xad.git
it gives me an error like this:

Cloning into 'xad'...
fatal: unable to access 'https://gitlab.bj.sensetime.com/open-XLab/cell/xad.git/':
Could not resolve host: gitlab.bj.sensetime.com

Wondering how to solve this problem? Thanks!

BeV Speed RL Pretrain Model

ISSUE TEMPLATE

Hi DI-Drive team,
I appreciate your work and am trying to use this platform. Could you provide any access to your pretrained models of the BEV Speed RL? Thank you very much.

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • system worker bug
    • system utils bug
    • code design/refactor
    • documentation request
    • new feature request
  • I have visited the readme and doc
  • I have searched through the issue tracker and pr tracker
  • I have mentioned version numbers, operating system and environment, where applicable:
    import core, torch, sys
    print(core.__version__, torch.__version__, sys.version, sys.platform)

Collect less frames in each episode

Hi, I am collecting data for the Implicit Affordance model. I noticed that each episode usually collects thousands of frames. I wonder if there's a way to downsample the number of frames collected in each episode, for example to only about a hundred frames per episode? Thank you!
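Independent of whatever option the collector itself may expose, one simple approach (a sketch, not a DI-drive API) is to subsample the frames after an episode is collected, keeping an evenly spaced subset:

```python
def subsample(frames, max_frames=100):
    """Keep at most max_frames frames, evenly spaced across the episode.

    Illustrative helper, not part of DI-drive's collector.
    """
    if len(frames) <= max_frames:
        return list(frames)
    step = len(frames) / max_frames          # fractional stride between kept frames
    return [frames[int(i * step)] for i in range(max_frames)]
```

For example, `subsample(episode_frames, 100)` keeps roughly one frame in twenty from a 2000-frame episode.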

Unstable AutoMPCPolicy

ISSUE TEMPLATE

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • system worker bug
    • system utils bug
    • code design/refactor
    • documentation request
    • new feature request
  • I have visited the readme and doc
  • I have searched through the issue tracker and pr tracker
  • I have mentioned version numbers, operating system and environment, where applicable:
    import core, torch, sys
    print(core.__version__, torch.__version__, sys.version, sys.platform)

0.3.4 1.12.1+cu102 3.8.16 (default, Mar 2 2023, 03:21:46)
[GCC 11.2.0] linux

I was trying the AutoMPCPolicy with the script python auto_run.py.
I noticed the agent is very unstable compared to AutoPIDPolicy; the MPC agent hits the wall easily. I was wondering if you have seen this issue?

The weather is not randomized following the input config of the init function

ISSUE TEMPLATE

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • system worker bug
    • system utils bug
    • code design/refactor
    • documentation request
    • new feature request
  • I have visited the readme and doc
  • I have searched through the issue tracker and pr tracker
  • I have mentioned version numbers, operating system and environment, where applicable:
    import core, torch, sys
    print(core.__version__, torch.__version__, sys.version, sys.platform)

In the following code in carla_simulator.py, it seems that the weather is not randomized following the input config of the init function.

Screenshot from 2021-12-07 21-37-30

You can see that the config specifies a subset of the weathers.

Screenshot from 2021-12-07 21-39-27
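The expected behavior, as described in this issue, is to sample only from the configured subset of weathers. A minimal sketch of that (illustrative preset names, not the actual carla_simulator.py code; Carla defines its own weather presets):

```python
import random

def pick_weather(configured_weathers):
    """Sample a weather only from the user-configured subset,
    not from the full preset list (illustrative sketch)."""
    return random.choice(configured_weathers)

chosen = pick_weather(['ClearNoon', 'WetNoon'])  # hypothetical preset names
```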

ModuleNotFoundError: No module named 'ding'

In the documentation for End-to-End Model-Free Reinforcement Learning for Urban Driving using Implicit Affordances, when I try to run python collect_data.py, there's an error in the import section:
from ding.utils import EasyTimer
ModuleNotFoundError: No module named 'ding'
Is that because the ding dir wasn't added into the Github repository? Could you help with that? Thanks!

Hi, I have another error when running simple_rl_train.py

Before I run it, I configure:
collector_env_num=2,
evaluator_env_num=1,
dict(carla_host='localhost', carla_ports=[9000, 9006, 2]),
and start three Carla servers on ports 9000, 9002, and 9004.

Then I get the following error:

assert self._closed, "please first close the env manager"

How can I fix it?
Thanks!
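Assuming carla_ports is a [start, stop, step] range (which matches the earlier quick-start config style), the value above already expands to the three servers described, so the port list itself looks consistent. A quick sanity check:

```python
# Expand the assumed [start, stop, step] port range from the config above.
start, stop, step = [9000, 9006, 2]
ports = list(range(start, stop, step))
print(ports)  # [9000, 9002, 9004]
```

If the ports match the running servers, the "please first close the env manager" assertion may instead indicate the env manager is being launched while a previous instance is still open.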

Attribution Error when running simple_rl_train.py

ISSUE TEMPLATE

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • system worker bug
    • system utils bug
    • code design/refactor
    • documentation request
    • new feature request
  • I have visited the readme and doc
  • I have searched through the issue tracker and pr tracker
  • I have mentioned version numbers, operating system and environment, where applicable:
    import core, torch, sys
    print(core.__version__, torch.__version__, sys.version, sys.platform)

I installed the latest version of DI-drive with DI-engine (version 0.3.1) and Carla (0.9.10) according to the instructions. No error was raised when running "python -c "import carla"". However, when I ran simple_rl_train.py, the program raised an error:

Traceback (most recent call last):
  File "simple_rl_train.py", line 243, in <module>
    main(args)
  File "simple_rl_train.py", line 181, in main
    cfg.policy.eval.evaluator, evaluate_env, policy.eval_mode, tb_logger, exp_name=cfg.exp_name
  File "/home/zck7060/DI-drive/core/eval/serial_evaluator.py", line 59, in __init__
    super().__init__(cfg, env, policy, tb_logger=tb_logger, exp_name=exp_name, instance_name=instance_name)
  File "/home/zck7060/DI-drive/core/eval/base_evaluator.py", line 49, in __init__
    self.env = env
  File "/home/zck7060/DI-drive/core/eval/serial_evaluator.py", line 76, in env
    self._env_manager.launch()
  File "/home/zck7060/anaconda3/envs/didrive/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 333, in launch
    self._create_state()
  File "/home/zck7060/anaconda3/envs/didrive/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 250, in _create_state
    shape = obs_space.shape
AttributeError: 'NoneType' object has no attribute 'shape'

Before running the program, I had started the Carla server with ports 9000 and 9002.

I also tried other versions of DI-engine, like 0.4.0, but got the same error. Any idea of how this error occurs is appreciated!

CICT Training failed using the default setting

ISSUE TEMPLATE

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • system worker bug
    • system utils bug
    • code design/refactor
    • documentation request
    • new feature request
  • I have visited the readme and doc
  • I have searched through the issue tracker and pr tracker
  • I have mentioned version numbers, operating system and environment, where applicable:
    import core, torch, sys
    print(core.__version__, torch.__version__, sys.version, sys.platform)

Hi, I'm using the DI-drive repo to train my own CICT model. I collected data and trained the GAN model using the default parameter settings, but the discriminator loss is always 0, which causes the generator to stop learning after several iterations. I'm wondering whether the provided pretrained model was trained using the default settings? If not, can you provide a reference for the parameter settings? Thanks very much.

Unable to find libfuse when run "auto_run.py"

I tried to run "auto_run.py" on Windows and got the following issue:

Traceback (most recent call last):
  File "simple_rl_train.py", line 9, in <module>
    from core.envs import SimpleCarlaEnv, BenchmarkEnvWrapper
  File "d:\carla\di-drive\core\envs\__init__.py", line 24, in <module>
    from .md_macro_env import MetaDriveMacroEnv
  File "d:\carla\di-drive\core\envs\md_macro_env.py", line 12, in <module>
    from core.utils.simulator_utils.md_utils.discrete_policy import DiscreteMetaAction
  File "d:\carla\di-drive\core\utils\simulator_utils\md_utils\discrete_policy.py", line 5, in <module>
    from metadrive.component.vehicle_module.PID_controller import PIDController
  File "C:\Users\James Chen\anaconda3\envs\carla_1\lib\site-packages\metadrive\__init__.py", line 11, in <module>
    module = loader.find_module(module_name).load_module(module_name)
  File "C:\Users\James Chen\anaconda3\envs\carla_1\lib\site-packages\metadrive\cli.py", line 14, in <module>
    from metadrive.mnt import mount
  File "C:\Users\James Chen\anaconda3\envs\carla_1\lib\site-packages\metadrive\mnt.py", line 7, in <module>
    from fuse import FUSE, FuseOSError, Operations
  File "C:\Users\James Chen\anaconda3\envs\carla_1\lib\site-packages\fuse.py", line 115, in <module>
    raise EnvironmentError('Unable to find libfuse')
OSError: Unable to find libfuse

I am wondering whether the problem is caused by Windows, and whether there are any solutions? The solutions I can find online are all based on Linux.

Thanks!

Missing folder/File

Hi,
There is no file or folder named "ding", but this import is used in the rl_train and rl_test Python code.
How do I get these files?

from ding.policy import DQNPolicy, PPOPolicy, TD3Policy, SACPolicy, DDPGPolicy

Thanks

How to train the simple RL

Hi, when I train the simple RL, I start Carla in one terminal with "./CarlaUE4.sh --carla-world-port=9000"
and run "python demo/simple_rl/dqn_train.py" in another. I have installed DI-engine and Carla, and set the PYTHONPATH.

And I got:
ERROR:root:VEC_ENV_MANAGER: env 1 reset error
ERROR:root:
Env Process Reset Exception:
File "xxxxx/anaconda2/envs/di_drive/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 382, in _reset
reset_fn()
File "/home/xxxx/anaconda2/envs/di_drive/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 371, in reset_fn
raise Exception("env reset timeout") # Leave it to retry_wrapper to try again
Exception('env reset timeout')
ERROR:root:VEC_ENV_MANAGER: env 3 reset error
ERROR:root:
Env Process Reset Exception:
File "/home/xxxx/anaconda2/envs/di_drive/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 382, in _reset
reset_fn()
File "/home/xxxxx/anaconda2/envs/di_drive/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 371, in reset_fn
raise Exception("env reset timeout") # Leave it to retry_wrapper to try again
Exception('env reset timeout')
ERROR:root:VEC_ENV_MANAGER: env 4 reset error
ERROR:root:

How do I fix this problem? Did I forget to set some parameters?
Thank you very much
