
This project is a fork of acomcmxcvi/end-to-end-conditional-imitation-driving.


End-to-end Conditional Imitation Driving

Autonomous driving research (Part I)

The purpose of this project is to create a system that can collect data, prepare a dataset, and train a neural network for end-to-end conditional imitation learning, as a basis for research into autonomous driving concepts.

The system is built for the Carla simulator.

Video

The video linked below shows a short demo of the results, a brief overview of the purpose of data augmentation, and a comparison of two models in specific situations: one trained on raw data and one trained on the same data with augmentation applied.

https://www.youtube.com/watch?v=LoXPs6NShLI

Usage

Packages

Python: 3.7.4
Carla simulator: 0.9.8
Keras: 2.2.4
Tensorflow: 1.14.0
Imgaug: 0.4.0
OpenCV: 3.4.7

Steps

The whole process, from collecting data to training the neural network, is divided into several steps:

  1. Collecting data (colecting_data.py):
    Collecting data consists of taking an image from the automobile’s camera together with the corresponding steering angle and appending the pair to a list. Collected data is stored in batches of 256 samples. During collection, the Carla autopilot drives the automobile. Data can be collected across multiple sessions.
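The collection loop above can be sketched as follows. This is not the actual code of colecting_data.py; the `Collector` class, file naming, and pickle format are illustrative assumptions, with only the 256-sample batch size taken from the description.

```python
import os
import pickle
import tempfile

BATCH_SIZE = 256  # samples per saved batch, as described above


class Collector:
    """Accumulates (image, steering_angle) pairs and saves them in batches."""

    def __init__(self, out_dir=None):
        self.out_dir = out_dir or tempfile.gettempdir()
        self.samples = []
        self.batch_index = 0

    def add(self, image, steering_angle):
        # Append one camera frame with its steering angle; flush a full batch.
        self.samples.append((image, steering_angle))
        if len(self.samples) == BATCH_SIZE:
            self.flush()

    def flush(self):
        # Persist the current batch to disk and start a new one.
        path = os.path.join(self.out_dir, "batch_%03d.p" % self.batch_index)
        with open(path, "wb") as f:
            pickle.dump(self.samples, f)
        self.batch_index += 1
        self.samples = []
```

In the simulator, `add` would be called once per camera tick while the autopilot drives; multiple sessions simply continue the batch numbering.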

  2. Overview of collected data (show_colected_data.py):
    An overview of the raw data across all batches of one session.

  3. Adding high-level control to data (add_high_level_control.py):
    After collecting data, the high-level control information must be added, i.e., the command that led the automobile to the corresponding action. First, we choose the sessions to annotate. A high-level control is added by pressing the ‘O’ key to select the start frame from which the control is valid and the ‘P’ key to select the end frame up to which it is valid, after which the specific control is chosen. After going through all batches of all sessions, the program saves a file with the high-level controls of all sessions.
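The result of this annotation step can be sketched as turning the selected (start, end, control) ranges into one control label per frame. The control names and codes below are hypothetical (the actual encoding in add_high_level_control.py may differ); only the start/end-range workflow comes from the description.

```python
# Hypothetical control codes; the project's actual encoding may differ.
FOLLOW, LEFT, RIGHT, STRAIGHT = 0, 1, 2, 3


def expand_controls(intervals, n_frames, default=FOLLOW):
    """Turn (start_frame, end_frame, control) ranges, as picked with the
    'O' and 'P' keys, into one control label per frame (ranges inclusive)."""
    labels = [default] * n_frames
    for start, end, control in intervals:
        for frame in range(start, end + 1):
            labels[frame] = control
    return labels


# Frames 2..4 were annotated as a left turn; the rest keep the default.
labels = expand_controls([(2, 4, LEFT)], n_frames=6)
```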

  4. Creating the dataset for training (data_to_dataset.py):
    After defining high-level controls for all data we want to use for training, a training dataset with a suitable distribution of the data must be created. The data is also shuffled, and batches of 256 samples are created in the form X^j = {images, high-level control}, Y^j = {steering angle}, where j is the batch index.
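The batch construction can be sketched like this. The helper names and the one-hot control encoding are assumptions; the shuffle, the 256-sample batch size, and the X^j/Y^j split come from the description.

```python
import random

BATCH_SIZE = 256


def one_hot(control, n_controls=4):
    # Encode the high-level control as a one-hot vector (assumed encoding).
    v = [0.0] * n_controls
    v[control] = 1.0
    return v


def make_batches(samples, batch_size=BATCH_SIZE, seed=0):
    """samples: list of (image, control, steering_angle).
    Returns batches (X_j, Y_j) with X_j = (images, one-hot controls)
    and Y_j = steering angles."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # mix the data before batching
    batches = []
    for i in range(0, len(samples) - batch_size + 1, batch_size):
        chunk = samples[i:i + batch_size]
        X = ([img for img, _, _ in chunk],
             [one_hot(c) for _, c, _ in chunk])
        Y = [angle for _, _, angle in chunk]
        batches.append((X, Y))
    return batches
```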

  5. Architecture of the model (functional_conditional_end_to_end_keras_model.py):
    The base architecture is the one from End-to-End Learning for Self-Driving Cars (https://arxiv.org/pdf/1604.07316.pdf), with certain changes: an additional input for the high-level control.
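The key change, conditioning on the high-level control, can be sketched as a toy NumPy forward pass. The 1164-dim feature size and the 100/50/1 dense widths follow the NVIDIA paper (the 10-unit layer is omitted for brevity); concatenating the one-hot control with the convolutional features before the fully connected layers is an assumption about where the extra input enters the project's Keras model, and all weights here are random.

```python
import numpy as np

rng = np.random.default_rng(0)


def dense(x, w, b):
    # Fully connected layer with tanh activation.
    return np.tanh(x @ w + b)


features = rng.standard_normal(1164)      # stand-in for the conv-stack output
control = np.array([0.0, 1.0, 0.0, 0.0])  # one-hot high-level control, e.g. "left"

x = np.concatenate([features, control])   # the project's extra input
w1, b1 = 0.01 * rng.standard_normal((1168, 100)), np.zeros(100)
w2, b2 = 0.01 * rng.standard_normal((100, 50)), np.zeros(50)
w3, b3 = 0.01 * rng.standard_normal((50, 1)), np.zeros(1)

steering = dense(dense(dense(x, w1, b1), w2, b2), w3, b3)  # one steering value
```

The final tanh keeps the predicted steering in [-1, 1], a common convention for normalized steering angles.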

  6. Model training (train_model_batches_control.py):
    The model is trained on the batches produced in the previous step. This approach was chosen so that data augmentation can be applied easily and quickly, and to avoid memory issues.
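The project augments with imgaug; as a minimal stand-in, here is a sketch of the kind of on-the-fly, per-batch augmentation this setup enables (a random brightness shift, with images assumed to be float arrays in [0, 1]). The function name and the specific augmentation are illustrative, not the project's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)


def augment_batch(images):
    """Apply a random brightness shift to each image in a batch,
    keeping values in [0, 1]. Called per 256-sample batch before
    each training step, so the full dataset never sits in memory."""
    shifts = rng.uniform(-0.2, 0.2, size=(len(images), 1, 1, 1))
    return np.clip(images + shifts, 0.0, 1.0)
```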

  7. Model testing (test_model_in_CARLA.py):
    For testing, it is necessary to choose the model we want to test and the Carla city in which to test it. The arrow keys on the keyboard issue the high-level control commands.

Models

The code ships with two models: one trained on the raw data and the other trained on the same data with augmentation applied.

Both models were trained for 10 epochs on a relatively small dataset of approximately 190,000 frames divided into 745 batches. The Adam optimizer with a learning rate of 1e-4 was used.
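A quick sanity check on these numbers, assuming the 256-sample batch size from step 1; the stated 745 batches is slightly above the whole-dataset quotient, which is consistent with the frame count being approximate and with batches being formed per session.

```python
frames = 190_000      # approximate dataset size
batch_size = 256      # samples per batch, from the data-collection step
full_batches = frames // batch_size  # 742 whole batches from one pass
```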

It is important to underline that the raw data was collected in a sunny environment, without clouds or rain. The effect of data augmentation is most noticeable in a changed environment with clouds and rain, where the model trained on raw data does not cope well, while the model trained on the same data with augmentation copes much better.

