
AutoBots Official Repository

This repository is the official implementation of the AutoBots architectures. We include support for the following datasets: nuScenes, Argoverse, TrajNet++, and the Interaction Dataset.

Visit our webpage for more information.

Getting Started

  1. Create a Python 3.7 environment. I use Miniconda3 and create the environment with conda create --name AutoBots python=3.7
  2. Run pip install -r requirements.txt

That should be it!

Setup the datasets

Follow the instructions in the READMEs of each dataset (found in datasets).

Training an AutoBot model

All experiments were performed locally on a single GTX 1080Ti. The trained models will be saved in results/{Dataset}/{exp_name}.

nuScenes

Training AutoBot-Ego on nuScenes while using the raw road segments in the map:

python train.py --exp-id test --seed 1 --dataset Nuscenes --model-type Autobot-Ego --num-modes 10 --hidden-size 128 --num-encoder-layers 2 --num-decoder-layers 2 --dropout 0.1 --entropy-weight 40.0 --kl-weight 20.0 --use-FDEADE-aux-loss True --use-map-lanes True --tx-hidden-size 384 --batch-size 64 --learning-rate 0.00075 --learning-rate-sched 10 20 30 40 50 --dataset-path /path/to/root/of/nuscenes_h5_files

Training AutoBot-Ego on nuScenes while using the Birds-eye-view image of the road network:

python train.py --exp-id test --seed 1 --dataset Nuscenes --model-type Autobot-Ego --num-modes 10 --hidden-size 128 --num-encoder-layers 2 --num-decoder-layers 2 --dropout 0.1 --entropy-weight 40.0 --kl-weight 20.0 --use-FDEADE-aux-loss True --use-map-image True --tx-hidden-size 384 --batch-size 64 --learning-rate 0.00075 --learning-rate-sched 10 20 30 40 50 --dataset-path /path/to/root/of/nuscenes_h5_files

Training AutoBot-Joint on nuScenes while using the raw road segments in the map:

python train.py --exp-id test --seed 1 --dataset Nuscenes --model-type Autobot-Joint --num-modes 10 --hidden-size 128 --num-encoder-layers 2 --num-decoder-layers 2 --dropout 0.1 --entropy-weight 40.0 --kl-weight 20.0 --use-FDEADE-aux-loss True --use-map-lanes True --tx-hidden-size 384 --batch-size 64 --learning-rate 0.00075 --learning-rate-sched 10 20 30 40 50 --dataset-path /path/to/root/of/nuscenes_h5_files

Argoverse

Training AutoBot-Ego on Argoverse while using the raw road segments in the map:

python train.py --exp-id test --seed 1 --dataset Argoverse --model-type Autobot-Ego --num-modes 6 --hidden-size 128 --num-encoder-layers 2 --num-decoder-layers 2 --dropout 0.1 --entropy-weight 40.0 --kl-weight 20.0 --use-FDEADE-aux-loss True --use-map-lanes True --tx-hidden-size 384 --batch-size 64 --learning-rate 0.00075 --learning-rate-sched 10 20 30 40 50 --dataset-path /path/to/root/of/argoverse_h5_files

TrajNet++

Training AutoBot-Joint on TrajNet++:

python train.py --exp-id test --seed 1 --dataset trajnet++ --model-type Autobot-Joint --num-modes 6 --hidden-size 128 --num-encoder-layers 2 --num-decoder-layers 2 --dropout 0.1 --entropy-weight 40.0 --kl-weight 20.0 --use-FDEADE-aux-loss True --tx-hidden-size 384 --batch-size 64 --learning-rate 0.00075 --learning-rate-sched 10 20 30 40 50 --dataset-path /path/to/root/of/npy_files

Interaction-Dataset

Training AutoBot-Joint on the Interaction-Dataset while using the raw road segments in the map:

python train.py --exp-id test --seed 1 --dataset interaction-dataset --model-type Autobot-Joint --num-modes 6 --hidden-size 128 --num-encoder-layers 2 --num-decoder-layers 2 --dropout 0.1 --entropy-weight 40.0 --kl-weight 20.0 --use-FDEADE-aux-loss True --tx-hidden-size 384 --batch-size 64 --learning-rate 0.00075 --learning-rate-sched 10 20 30 40 50 --dataset-path /path/to/root/of/interaction_dataset_h5_files

Evaluating an AutoBot model

For all experiments, you can evaluate the trained model on the validation dataset by running (adjust --dataset-path to the dataset the model was trained on):

python evaluate.py --dataset-path /path/to/root/of/interaction_dataset_h5_files --models-path results/{Dataset}/{exp_name}/{model_epoch}.pth --batch-size 64

Note that the batch-size may need to be reduced for the Interaction-dataset since evaluation is performed on all agent scenes.
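
For example, evaluating on the Interaction-dataset with a smaller batch size (the value 16 below is just an illustrative choice):

python evaluate.py --dataset-path /path/to/root/of/interaction_dataset_h5_files --models-path results/interaction-dataset/{exp_name}/{model_epoch}.pth --batch-size 16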

Extra scripts

We also provide extra scripts that can be used to prepare submissions for the nuScenes, Argoverse, and Interaction-Dataset evaluation servers.

For nuScenes:

python useful_scripts/generate_nuscene_results.py --dataset-path /path/to/root/of/nuscenes_h5_files --models-path results/Nuscenes/{exp_name}/{model_epoch}.pth 

For Argoverse:

python useful_scripts/generate_argoverse_test.py --dataset-path /path/to/root/of/argoverse_h5_files --models-path results/Argoverse/{exp_name}/{model_epoch}.pth 

For the Interaction-Dataset:

python useful_scripts/generate_indst_test.py --dataset-path /path/to/root/of/interaction_dataset_h5_files --models-path results/interaction-dataset/{exp_name}/{model_epoch}.pth 

Reference

If you use this repository, please cite our work:

@inproceedings{
  girgis2022latent,
  title={Latent Variable Sequential Set Transformers for Joint Multi-Agent Motion Prediction},
  author={Roger Girgis and Florian Golemo and Felipe Codevilla and Martin Weiss and Jim Aldon D'Souza and Samira Ebrahimi Kahou and Felix Heide and Christopher Pal},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=Dup_dDqkZC5}
}

Contributors

fgolemo, roggirg


Issues

Bug in computing marginal errors and joint errors

Hey, thanks for open-sourcing the code.

I have a few questions regarding the computation of the marginal and joint errors in train.py.

  1. Why is the agent masking different for computing the marginal error and the joint error? Is there a specific reason for that?

When computing the marginal error you take agents_in[:, -1, :, -1] (essentially masking using only the last time step of the input trajectory for each agent), whereas when computing the joint error you take agents_gt[:, :, :, -1], which masks over all time steps of the output trajectory.

I think masking based on the agents_gt trajectory is the way to go, unless you have a specific reason to do otherwise?
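
To make the two masking schemes concrete, here is a minimal sketch with made-up tensor shapes (the layout [batch, time, agents, features] with an availability flag in the last channel is an assumption based on the indexing quoted above, not the repository's actual shapes):

import torch

B, T_in, T_out, M, F = 8, 4, 12, 6, 5
agents_in = torch.rand(B, T_in, M, F)    # observed trajectories, availability flag in the last channel
agents_gt = torch.rand(B, T_out, M, F)   # ground-truth future trajectories

marginal_mask = agents_in[:, -1, :, -1]  # [B, M]: agents present at the last observed timestep
joint_mask = agents_gt[:, :, :, -1]      # [B, T_out, M]: presence at every future timestep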

Have a great day:)

Performance of AutoBots on ETH-UCY

Thank you for open-sourcing your great work!

We are currently looking for transformer-based forecasting models that work well on the ETH-UCY dataset. Do you know how AutoBots performs on the ETH-UCY dataset?

We tried to adapt your TrajNet++ dataloader to ETH-UCY and experimented with some hyperparameters. The results we have obtained so far are, however, not very competitive. In particular, AutoBots seems quite prone to overfitting under the standard ETH-UCY data split. Is this expected? Do you have any suggestions for tailoring AutoBots to the ETH-UCY dataset?

Do all agents in a scene share the same mode classification probability?

First of all, thanks for sharing the work to the community!

  1. The dimensions of mode_probs returned by AutoBot-Joint inference are (batch_size, num_modes), which does not include the number of agents in the scene (M):

pred_obs, mode_probs = self.autobot_model(ego_in, agents_in, map_lanes, agent_types)

This means all agents share the same mode classification probability and mode ordering; is that reasonable?

  2. Small oversight in the nuScenes joint training command-line arguments in README.md:
    --use-map-image True should be changed to --use-map-lanes True.

Question about which setting achieves best result on nuScenes

Hi, thank you first for your contribution to trajectory prediction!

I'm wondering which setting you chose to achieve the best results you report on nuScenes.

I've tried the three training settings you provide in the README (Ego and Joint using road segments, Ego using the map image), but the best of them only achieves about 1.6 minADE_5. I'm not sure whether there are additional modifications to the hyperparameters beyond what is set in the repo.

Again, thank you for your open source, which helps me a lot!

Question about negative loss

Hi, I am very interested in your work! There's a question that baffles me:

I found that the NLL loss is negative when training AutoBots. Is this a normal phenomenon? Why would the NLL loss be negative?
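
For context, a negative NLL is mathematically possible for continuous outputs, since a probability density can exceed 1. A minimal numeric sketch, assuming a Gaussian likelihood with a small standard deviation:

import math

sigma = 0.1
p = 1.0 / (sigma * math.sqrt(2.0 * math.pi))  # Gaussian density at its mean: ~3.99 > 1
nll = -math.log(p)                            # ~ -1.38, i.e. a negative NLL
print(nll)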

Model weights for NuScenes

Hi there,

Could you share the pre-trained model weights for the nuScenes dataset?
Thanks in advance!

Maarten

The choice of "ego_idx" in INTERACTION dataset

Hi @roggirg,
I have a question about the role of ego_idx in the INTERACTION dataset. I noticed that you generate ego_idx by random sampling in https://github.com/roggirg/AutoBots/blob/3a61ad9f80603f99ab1e9cc535acccf72c5d6bea/datasets/interaction_dataset/dataset.py#L164.
Why is an ego_idx necessary? Is ego_idx only used as the coordinate origin?
Will different choices of ego_idx affect the model's performance on the same case (i.e., the same sample)?
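
If ego_idx is indeed just a coordinate origin, the recentring presumably looks roughly like the sketch below (made-up names and shapes, not the repository's code):

import numpy as np

# Toy trajectories: [num_agents, timesteps, (x, y)] -- shapes are purely illustrative.
agents = np.random.rand(5, 10, 2)

ego_idx = np.random.randint(agents.shape[0])  # randomly sampled "ego" agent
origin = agents[ego_idx, -1]                  # last observed position of that agent
agents_centered = agents - origin             # every agent expressed in the ego frame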

Thanks a lot! Looking forward to your reply.

create_h5_nusc.py

When I run python create_h5_nusc.py ****, I get the following error:

return self.data.get('sample_annotation', self.inst_sample_to_ann[(sample_token, instance_token)])
KeyError: ('faf2ea71b30941329a3c3f3866cec714', '4d87aaf2d82549969f1550607ef46a63')

Odd Results of Joint Version on Nuscenes

Hi there. I've followed the instructions in the README to train a joint version of AutoBots on nuScenes, but I found that the metrics on the "val" split go up during training. Is this a normal phenomenon? The attached picture showed the corresponding results.

Lack of global information in interaction experiment

Hi,

I just noticed that the global information is removed in the interaction-dataset experiment. After doing this, the agents cannot get the actual social information about the other agents, because the remaining data is only from each agent's local perspective. Have you figured out why the model performs even worse after adding the global position? :)

Pretrained models

Dear Authors,

Would you mind releasing the pre-trained models for the TrajNet++ dataset?

Thanks

about FDE calculation

Thanks for sharing the work!
The FDE calculation uses the last timestep of the ground-truth trajectory, but that timestep may not exist.

fde_loss = torch.norm((pred[:, -1, :, :2].transpose(0, 1) - data[:, -1, :2].unsqueeze(1)), 2, dim=-1)

Will this have a bad effect?
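
One possible guard, sketched here under the assumption that the last feature channel of data is an availability flag (this is not the repository's implementation), would be to gate the FDE by that flag:

import torch

# Toy shapes matching the indexing in the quoted line (assumptions for illustration):
# pred: [num_modes, T, batch, 2], data: [batch, T, 3] with an availability flag in the last channel.
K, T, B = 6, 12, 4
pred = torch.rand(K, T, B, 2)
data = torch.cat([torch.rand(B, T, 2), torch.randint(0, 2, (B, T, 1)).float()], dim=-1)

final_mask = data[:, -1, -1]  # [batch]: 1 if the last ground-truth step exists
fde_all = torch.norm(pred[:, -1, :, :2].transpose(0, 1) - data[:, -1, :2].unsqueeze(1), 2, dim=-1)  # [batch, num_modes]
fde_loss = fde_all * final_mask.unsqueeze(1)  # masked agents contribute zero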

How to get the result without map

Thank you for your excellent work!
I have a simple question: if I want to train the model without map information, do I just need to change the option --use-map-lanes from True to False?
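
For reference, that change applied to the nuScenes AutoBot-Ego command from the README would look like this (whether flipping the flag alone is sufficient is exactly the question above):

python train.py --exp-id test --seed 1 --dataset Nuscenes --model-type Autobot-Ego --num-modes 10 --hidden-size 128 --num-encoder-layers 2 --num-decoder-layers 2 --dropout 0.1 --entropy-weight 40.0 --kl-weight 20.0 --use-FDEADE-aux-loss True --use-map-lanes False --tx-hidden-size 384 --batch-size 64 --learning-rate 0.00075 --learning-rate-sched 10 20 30 40 50 --dataset-path /path/to/root/of/nuscenes_h5_files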

Thank you very much!

Confusion about training objective

Thanks for your excellent work!
Regarding the training objective: the approximating posterior of the latent variable is p_old(z | y, x1:t). The paper says it can be computed because the latent variable z is discrete: p_old(z | y, x1:t) = p_old(z | x1:t) * p_old(y | z, x1:t) / p_old(y | x1:t). When calculating the prior p_old(z | x1:t), the original code has "priors = modes_pred.detach().cpu().numpy()", but when calculating p_old(y | z, x1:t), why is detach() not applied as well?
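
For reference, the discrete-posterior computation being discussed is just Bayes' rule over the K modes; a minimal sketch with made-up tensors (not the repository's code):

import torch

K = 4                                         # number of discrete latent modes
prior = torch.softmax(torch.randn(K), dim=0)  # p_old(z | x1:t)
likelihood = torch.rand(K)                    # p_old(y | z, x1:t), one value per mode
posterior = prior * likelihood                # numerator of Bayes' rule
posterior = posterior / posterior.sum()       # normalise by p_old(y | x1:t) = sum over modes
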
Thanks for your patience!

Question on KL div loss

Hi there,
Thank you so much for sharing this fabulous work!

I'm a bit confused by the KL divergence loss calculation.
As far as I understand, post_pr holds the posterior probability values rather than log-probabilities. Since KLDivLoss expects log-probabilities as its input, should we actually pass in the log of the posterior?
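
For reference, the PyTorch convention being referred to: torch.nn.KLDivLoss expects its input as log-probabilities and, by default, its target as probabilities. A minimal standalone sketch (tensor shapes are illustrative):

import torch
import torch.nn as nn

kl = nn.KLDivLoss(reduction="batchmean")
pred_log_probs = torch.log_softmax(torch.randn(8, 10), dim=-1)  # input: log-probabilities
target_probs = torch.softmax(torch.randn(8, 10), dim=-1)        # target: probabilities (log_target=False by default)
loss = kl(pred_log_probs, target_probs)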

Set transformer in Argoverse

Hi guys,

Your work is amazing. I am trying to use my own version of the Set Transformer on the Argoverse Motion Forecasting dataset with a minimal map representation and a multimodal decoder, quite similar to your work. In my case, however, the inputs are the observed trajectories of the agents (at least the AGENT and the AV) and plausible goal points for the AGENT (the most important agent in the scene). How would you modify your pipeline if you only had past trajectories and goal points instead of the rasterized image? Should these goal points be the seed vectors?

Error while building Argoverse h5py files

Hi,
Thank you for releasing your work to the open source community!
I am trying to replicate results on the Argoverse dataset. While building h5py files I run into this error:

$ python datasets/argoverse/create_h5_argo.py --raw-dataset-path ~/datasets/argoverse/ --split-name train --output-h5-path data/argoverse/
Number of files: 205942
0
Traceback (most recent call last):
  File "datasets/argoverse/create_h5_argo.py", line 193, in <module>
    temp_lane_centerlines[:len(lane_centerlines)] = lane_centerlines
ValueError: could not broadcast input array from shape (0) into shape (0,10,3) 

This is happening when the script cannot find lane centerlines associated with the current query_bbox. I tried patching this by creating dummy data:

if not len(lane_centerlines):
    # No centerlines found near the current query_bbox: fall back to a dummy (10, 3) centerline.
    lane_centerlines.append(np.concatenate((np.zeros((10, 2)), np.ones((10, 1))), axis=-1))

But this doesn't give good results after training (Val minADE: 1.621 minFDE: 3.636).
Is there a better way to handle the errors that crop up while building the dataset? Or do I need to look into changing the training setup?

Thank you!

Cannot find output_npys in train.zip for TrajNet++ dataset

Thanks for your great work on autonomous driving! I ran into some problems when trying to use the TrajNet++ dataset, and I am looking forward to hearing from you.

As for the TrajNet++ Setup:
"Download the train.zip dataset from here.
Extract the data
Then run:
python create_data_npys.py --raw-dataset-path /path/to/synth_data/ --output-npy-path /path/to/output_npys"

After extracting the data from train.zip, I didn't find anything named output_npys; I only got "synth_data" and "real_data", so I cannot access the dataset.
