
Compositional Autonomous Car Verification

This repository contains the code for the experiments reported in the paper "Compositional Learning and Verification of Neural Network Controllers", which was accepted for publication at the International Conference on Embedded Software (EMSOFT), 2021.

Installation

The training code is implemented in PyTorch, Keras, and TensorFlow. For consistency, we recommend installing the same versions of these libraries that we used:

pip install tensorflow==1.15rc2 keras==2.3.1 torch==1.3.0
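
As an optional sanity check, the installed versions can be confirmed from a Python shell:

# Optional: confirm that the expected library versions are installed.
import tensorflow as tf
import keras
import torch

print("tensorflow:", tf.__version__)  # expected 1.15.x
print("keras:", keras.__version__)    # expected 2.3.1
print("torch:", torch.__version__)    # expected 1.3.0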

Verisig can be downloaded from https://github.com/Verisig/verisig and installed according to the instructions in that repository.

Simulator

The F1/10 simulator is in the simulator folder, in Car.py. This simulator builds on our prior work on modeling and simulating the F1/10 car: https://github.com/rivapp/autonomous_car_verification. All neural network controllers and mode predictors are also stored in that folder as yml files. Specifically, the controller filenames begin with tanh (the LiDAR-feedback controllers are 64x64 and the state-feedback controllers are 16x16), whereas the remaining yml files are the mode predictors.
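
To get a feel for how the networks are stored, a controller yml file can be inspected directly. The following is only a sketch: it assumes the files are plain YAML describing the network's weights and activations, and the filename is illustrative (use one of the tanh files in the simulator folder):

# Minimal sketch: inspect a controller yml file from the simulator folder.
# Requires PyYAML; the filename below is hypothetical.
import yaml

with open("simulator/tanh64x64.yml") as f:
    net = yaml.safe_load(f)

print(type(net))       # typically a dict describing weights/offsets/activations
print(list(net)[:10])  # top-level keys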

To reproduce the simulation figures, use the plot_trajectories scripts. For example, to reproduce Figure 7a, run

python plot_trajectories_7a.py

Similarly, example trajectories for the state-feedback system can be plotted using

python plot_trajectories_simple.py

Controller Training

The training code for the individual controllers is in the controller_training folder. The code is based on the TD3 reinforcement learning algorithm (https://github.com/sfujim/TD3). To run the algorithm, run

python train.py

To indicate which track to use, look at lines 108-141 as an example and set the turn variable accordingly. If you would like to train a state-feedback controller, set the state_feedback variable to True on line 103. If you would like to change the control neural network architecture, look at the Actor class in TD3.py for an example. All controllers are stored in the models folder (or, for state-feedback controllers, the models_state_feedback folder).
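
For orientation, a control network with the architecture described above looks roughly like the following PyTorch module. This is a sketch in the spirit of TD3's Actor, not the exact class in TD3.py; hidden=64 corresponds to the 64x64 LiDAR-feedback controllers and hidden=16 to the 16x16 state-feedback ones:

import torch
import torch.nn as nn

class Actor(nn.Module):
    # Sketch of a tanh control network (not the actual Actor in TD3.py).
    def __init__(self, state_dim, action_dim, max_action, hidden=64):
        super().__init__()
        self.l1 = nn.Linear(state_dim, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, action_dim)
        self.max_action = max_action

    def forward(self, state):
        a = torch.tanh(self.l1(state))
        a = torch.tanh(self.l2(a))
        return self.max_action * torch.tanh(self.l3(a))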

To reproduce Figure 6, use the train_store_rewards.py script, which is similar to train.py except it also stores the history of rewards as training progresses. Run it in the same manner:

python train_store_reward.py --seed 0

where we used seeds 0-4 to train 5 separate controllers. After each controller is trained, a pickle file with the reward history is stored in the same folder. To plot Figure 6, put all pickle files in the same folder (e.g., reward_history) and run the provided plot_rewards_lidar.py script (or, for state feedback, plot_rewards_state.py) as follows:

python plot_rewards_lidar.py reward_history
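
For reference, the aggregation performed by these plotting scripts can be approximated as follows. This is only a sketch: it assumes each pickle file contains a list of episode rewards (the exact format is whatever train_store_rewards.py saved), and the file pattern is illustrative:

# Sketch: aggregate reward histories across seeds and plot mean +/- std.
import glob
import pickle
import numpy as np
import matplotlib.pyplot as plt

histories = []
for path in sorted(glob.glob("reward_history/*.pkl")):  # extension is illustrative
    with open(path, "rb") as f:
        histories.append(np.asarray(pickle.load(f)))

length = min(len(h) for h in histories)
rewards = np.stack([h[:length] for h in histories])      # seeds x episodes

mean, std = rewards.mean(axis=0), rewards.std(axis=0)
plt.plot(mean)
plt.fill_between(np.arange(length), mean - std, mean + std, alpha=0.3)
plt.xlabel("episode")
plt.ylabel("reward")
plt.show()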

Mode Predictor Training

The training code for the mode predictor is in the mode_predictor_training directory.

Since the mode predictor is trained using supervised learning, we first need to produce a training dataset using generate_data.py. The script also generates some points annotated with the car's position, which can be used for evaluation (a minimal sketch of the supervised training step is given after the commands below).

For convenience, there is also a script called plot_test_points.py, which plots the annotated positions.

To generate the data and train the mode predictors, run

python training_data/generate_data.py
python train_little.py
python train_big.py
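
train_little.py and train_big.py fit classifiers of different sizes on the generated dataset. As mentioned above, here is a minimal Keras sketch of such a supervised mode predictor; the layer sizes, the number of LiDAR rays, the number of modes, and the random placeholder data are all illustrative, not the actual architectures in those scripts:

# Sketch: a small supervised mode predictor mapping LiDAR scans to modes.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

num_rays, num_modes = 21, 4             # illustrative sizes
X = np.random.rand(1000, num_rays)      # placeholder for the generated scans
y = np.random.randint(num_modes, size=1000)  # placeholder mode labels

model = Sequential([
    Dense(32, activation="tanh", input_shape=(num_rays,)),
    Dense(32, activation="tanh"),
    Dense(num_modes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, to_categorical(y, num_modes), epochs=10, batch_size=64)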

Verification

The verification code is in the verification folder.

Plant Model

The F1/10 car's hybrid system model can be found in the plant_models folder. The models are stored as Python dictionaries and saved as pickle files. To modify the models (e.g., to use a different number of LiDAR rays or to change the car dynamics), look at the writeDynamics.py and writeCompTransitions.py files that are used to generate the pickle files (the second file performs the composition between the F1/10 hybrid system and the neural network hybrid system(s)).
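
The saved dictionaries can also be inspected directly. The following sketch simply loads one pickle file and lists its top-level keys; the filename is hypothetical, so substitute one of the actual files in plant_models:

# Sketch: load and inspect one of the plant model dictionaries.
import pickle

with open("plant_models/dynamics.pickle", "rb") as f:  # hypothetical filename
    plant = pickle.load(f)

print(type(plant))          # a Python dict describing the hybrid system
for key in list(plant)[:10]:
    print(key)              # the exact keys depend on the model format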

90-degree turn verification

The verification code for the 90-degree turns is in the square_right_turn and square_left_turn folders (for right and left turns, respectively). Note that this code assumes Verisig is installed in a verisig folder in the top directory. To run a single verification instance, use the right_turn.py and left_turn.py scripts, e.g., for a right turn (from the square_right_turn directory):

python right_turn.py

The initial conditions for the single instance are set in lines 874-880. To run all 1000 instances for a given turn, use the right_turn_multithreaded.py and left_turn_multithreaded.py scripts. The scripts are currently set up to use 60 cores; change the num_cores variable to set how many cores the scripts should use (the parallelization pattern is sketched after the command below). For example, to run the full right-turn verification, run (from the square_right_turn folder):

python right_turn_multithreaded.py
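
The multithreaded scripts essentially distribute the 1000 initial conditions over a pool of worker processes. The following is a rough sketch of that pattern, not the actual code: verify_instance and the list of initial conditions are placeholders for the logic that assembles and runs one Verisig instance per initial set:

# Sketch of the parallelization pattern used by the *_multithreaded.py scripts.
from multiprocessing import Pool

num_cores = 60                      # adjust to your machine

def verify_instance(initial_condition):
    # Placeholder: in the real scripts this builds the Verisig model for
    # the given initial set and runs the verification instance.
    ...

if __name__ == "__main__":
    initial_conditions = [...]      # placeholder for the 1000 initial sets
    with Pool(num_cores) as pool:
        results = pool.map(verify_instance, initial_conditions)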

Warning: Verifying each turn requires running 1000 verification instances, which demands significant computational resources. Please consult Table 1 in the paper for an estimate of the total run-time before proceeding with this step.

120-degree turn verification

The verification code for the 120-degree turns is in the sharp_right_turn and sharp_left_turn folders (for right and left turns, respectively). Note that this code assumes Verisig is installed in a verisig folder in the top directory. To run a single verification instance, use the sharp_right_turn.py and sharp_left_turn.py scripts, e.g., for a right turn (from the sharp_right_turn directory):

python sharp_right_turn.py

The initial conditions for the single instance are set in lines 871-877. To run all 1000 instances for a given turn, use the sharp_right_turn_multithreaded.py and sharp_left_turn_multithreaded.py scripts. The scripts are currently set up to use 60 cores; change the num_cores variable to set how many cores the scripts should use. For example, to run the full right-turn verification, run (from the sharp_right_turn folder):

python sharp_right_turn_multithreaded.py

Warning: Verifying each turn requires running 1000 verification instances, which demands significant computational resources. Please consult Table 1 in the paper for an estimate of the total run-time before proceeding with this step.
