
Learn to Race Banner

Discord

This repository is the Learn to Race Challenge Submission template and Starter kit! Clone the repository to compete now!

This repository contains:

  • Documentation on how to submit your models to the leaderboard
  • Information on best practices, how we evaluate your agent, etc.
  • Starter code for you to get started!


Competition Overview

The Learn to Race Challenge is an opportunity for researchers and machine learning enthusiasts to test their skills by developing autonomous agents that can adhere to safety specifications in high-speed racing. Racing demands that each vehicle drive at its physical limits, with barely any margin for safety, where any infraction could lead to catastrophic failure. Given this inherent tension, we envision autonomous racing to serve as a particularly challenging proving ground for safe learning algorithms.

Competition Stages

The challenge consists of two stages:

  • In Stage 1, participants will train their models locally and then submit model checkpoints to AIcrowd for evaluation on Thruxton Circuit, which is included in the Learn-to-Race environment. Each team will be able to submit agents to the evaluation service with a limit of 1 successful submission every 24 hours. The top 10 teams on the leaderboard will enter Stage 2.

  • In Stage 2, participants will submit their models (with checkpoints) to AIcrowd for training on an unseen track for a time budget of one hour, during which the number of safety infractions will be accumulated as one of the evaluation metrics. After the one-hour 'practice' period, the agent will be evaluated on the unseen track. Each team may submit up to three times for this stage, and the best results will be used for the final ranking. This is intended to give participants a chance to deal with bugs or submission errors.

Getting Started

  1. Sign up to join the competition on the AIcrowd website.
  2. Download the Arrival Autonomous Racing Simulator from this link.
  3. Fork this starter kit repository. You can use this link to create a fork.
  4. Clone your forked repo and start developing your autonomous racing agent.
  5. Develop your autonomous racing agents following the template in how to write your own agent section.
  6. Submit your trained models to AIcrowd GitLab for evaluation (full instructions below). The automated evaluation setup will evaluate the submissions on the racetrack and report the metrics on the leaderboard of the competition.

How to write your own agent?

We recommend that you place the code for all your agents in the agents directory (though it is not mandatory). You should implement the

  • select_action
  • register_reset
  • training (needed only in Stage 2)
  • load_model (needed only in Stage 2)
  • save_model (needed only in Stage 2)

methods as specified in the BaseAgent class. We recommend implementing the training, load_model, and save_model methods even for Stage 1, so that your code is ready for Stage 2 evaluations.

Please refer to the BaseAgent class for the input/output interfaces.
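A minimal skeleton, assuming the method names above (the exact signatures and observation/action formats live in agents/base.py, so treat this purely as a sketch):

    # agents/my_agent.py -- illustrative sketch; check agents/base.py for the
    # exact signatures and observation/action formats.
    from agents.base import BaseAgent

    class MyAgent(BaseAgent):
        def select_action(self, obs):
            # Map the current observation to a control action.
            raise NotImplementedError

        def register_reset(self, obs):
            # Called with the first observation of a new episode; return the first action.
            return self.select_action(obs)

        def training(self, env):
            # Stage 2 only: run your training loop against the provided environment.
            pass

        def load_model(self, path):
            # Stage 2 only: restore model weights from a checkpoint.
            pass

        def save_model(self, path):
            # Stage 2 only: write model weights to a checkpoint.
            pass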

Update the SubmissionConfig in config.py to use your new agent class instead of the SACAgent.
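For example (a sketch only; the attribute name below is an assumption, so mirror whatever config.py currently does for SACAgent):

    # config.py (sketch): register your agent class for the submission.
    from agents.my_agent import MyAgent  # hypothetical module/class names

    class SubmissionConfig:
        agent = MyAgent  # previously SACAgent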

How to start participating?

Setup

  1. Add your SSH key to AIcrowd GitLab

You can add your SSH key to your GitLab account by going to your profile settings here. If you do not have an SSH key, you will first need to generate one.

  2. Fork the repository. You can use this link to create a fork.

  3. Clone the repository

    git clone git@gitlab.aicrowd.com:<YOUR_AICROWD_USERNAME>/l2r-starter-kit.git
    
  4. Install competition-specific dependencies!

    cd l2r-starter-kit
    pip install -r requirements.txt
    
  5. Try out the SAC agent by running python rollout.py. You should start the simulator first by running bash <simulator_path>/ArrivalSim-linux-0.7.1.188691/LinuxNoEditor/ArrivalSim.sh -openGL. You can also check out the random agent implementation for minimal reference code.

  6. Write your own agent as described in the How to write your own agent section.

  7. Make a submission as described in the How to make a submission section.

How do I specify my software runtime / dependencies?

We accept submissions with custom runtimes, so you don't need to worry about which libraries or frameworks to pick.

The configuration files typically include requirements.txt (PyPI packages), apt.txt (apt packages), or even your own Dockerfile.

You can find detailed information about this in the 👉 runtime.md file.
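For instance, a minimal runtime might only declare a few pinned packages (illustrative names and versions; list whatever your agent actually needs):

    # requirements.txt
    torch==1.10.2
    numpy==1.21.5

    # apt.txt
    libgl1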

What should my code structure be like?

Please follow the example structure in the starter kit. The different files and directories have the following meaning:

.
├── aicrowd.json           # Submission meta information - like your username
├── apt.txt                # Packages to be installed inside docker image
├── requirements.txt       # Python packages to be installed
├── rollout.py             # Entrypoint to test your code locally (DO NOT EDIT, will be replaced during evaluation)
├── config.py              # File containing env, simulator and submission configuration
├── l2r/                   # Directory containing L2R env specific scripts
├── evaluator/             # Helper scripts for local evaluation (will be ignored during evaluation)
├── racetracks/            # L2R racetrack data (DO NOT EDIT, will be replaced during evaluation)
├── utility/               # Helper scripts to simplify submission flow
└── agents/                # Place your agent-related code here
    ├── base.py            # Code for base agent
    └── <my_agent>.py      # IMPORTANT: Your agent code

Finally, you must specify an AIcrowd submission JSON in aicrowd.json to be scored!

The aicrowd.json of each submission should contain the following content:

{
  "challenge_id": "learn-to-race-autonomous-racing-virtual-challenge",
  "authors": ["your-aicrowd-username"],
  "description": "(optional) description about your awesome agent",
  "external_dataset_used": false
}

This JSON is used to map your submission to the challenge - so please remember to use the correct challenge_id as specified above.

How to make a submission?

👉 submission.md

Best of Luck 🎉 🎉

Other Concepts

Evaluation Metrics

  • Success Rate: Each race track is partitioned into a fixed number of segments and the success rate is calculated as the number of successfully completed segments over the total number of segments. If the agent fails at a certain segment, it will respawn stationarily at the beginning of the next segment. If the agent successfully completes a segment, it will continue on to the next segment carrying over the current speed.
  • Average Speed: Average speed is defined as the total distance traveled divided by total time (a sketch of these ratios follows this list).
  • Number of Safety Infractions (Stage 2 ONLY): The number of safety infractions is accumulated during the 1-hour 'practice' period in Stage 2 of the competition. The agent is considered to have incurred a safety infraction if two wheels of the vehicle leave the drivable area, the vehicle collides with an object, or the vehicle does not progress for a number of steps (e.g. when stuck). In Learn-to-Race, the agent is considered to have failed upon any safety infraction.
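The first two metrics reduce to simple ratios; an illustrative computation (not the official evaluator) is:

    # Illustrative only -- the official evaluator computes these internally.
    def success_rate(completed_segments: int, total_segments: int) -> float:
        return completed_segments / total_segments

    def average_speed(total_distance_m: float, total_time_s: float) -> float:
        return total_distance_m / total_time_s  # metres per second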

Ranking Criteria

  • In Stage 1, the submissions will first be ranked on success rate, and then submissions with the same success rate will be ranked on average speed.
  • In Stage 2, the submissions will first be ranked on success rate, and then submissions with the same success rate will be ranked on a weighted sum of the total number of safety infractions and the average speed.
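Conceptually, the Stage 1 ordering above amounts to sorting on a (success rate, average speed) tuple. The Stage 2 weights are not published, so the weight in the sketch below is purely a placeholder:

    # Illustrative ranking keys; sort ascending, so smaller is better.
    def stage1_key(result):
        return (-result["success_rate"], -result["average_speed"])

    def stage2_key(result, weight=1.0):  # `weight` is a hypothetical placeholder
        return (-result["success_rate"],
                weight * result["safety_infractions"] - result["average_speed"])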

Time constraints

  • To prevent the participants from achieving a high success rate by driving very slowly, the maximum episode length will be set based on an average speed of 30km/h. The evaluation will terminate if the maximum episode length is reached and metrics will be computed based on performance up till that point.
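In other words, the episode budget is roughly the time needed to cover the track at an average of 30 km/h; a minimal sketch of that bound (the evaluator's exact cutoff rule may differ):

    # Hypothetical illustration of the 30 km/h episode budget.
    def max_episode_seconds(track_length_m: float, min_avg_speed_kmh: float = 30.0) -> float:
        return track_length_m / (min_avg_speed_kmh * 1000.0 / 3600.0)

    # e.g. a ~3.8 km circuit gives a budget of roughly 456 seconds.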

Local Evaluation

  • Participants can run the evaluation protocol for their agent locally, with or without the constraints imposed by the Challenge, to benchmark their agents privately.
  • Remember to start the simulator first, by executing bash <simulator_path>/ArrivalSim-linux-0.7.1.188691/LinuxNoEditor/ArrivalSim.sh -openGL.
  • Participants can familiarize themselves with the code base by trying out the random agent, as a minimal example, by running python rollout.py.
  • Once the select_action method in your agent class is implemented, you should be able to execute the evaluation_routine method in rollout.py.
  • Write your training procedure in the training method of your agent class; you can then execute the training_routine method in rollout.py (see the sketch below).
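Putting the last two points together, a hypothetical local driver might look like this (assuming rollout.py exposes the routines named above as importable callables; check rollout.py for the real entry points and signatures):

    # Hypothetical local run -- verify the actual names/signatures in rollout.py.
    from config import SubmissionConfig
    import rollout

    agent = SubmissionConfig.agent()      # the agent class you registered in config.py
    rollout.evaluation_routine(agent)     # Stage 1-style local evaluation
    # rollout.training_routine(agent)     # Stage 2-style practice/training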

Contributing

πŸ™ You can share your solutions or any other baselines by contributing directly to this repository by opening merge request.

  • Add your implementation as agents/<your_agent>.py.
  • Test it out using python rollout.py.
  • Add any documentation for your approach at the top of your file.
  • Import it in config.py.
  • Create a merge request! 🎉🎉🎉


📎 Important links

💪 Challenge Page: https://www.aicrowd.com/challenges/learn-to-race-autonomous-racing-virtual-challenge

🗣️ Discussion Forum: https://www.aicrowd.com/challenges/learn-to-race-autonomous-racing-virtual-challenge/discussion

🏆 Leaderboard: https://www.aicrowd.com/challenges/learn-to-race-autonomous-racing-virtual-challenge/leaderboards
