nachiket92 / pgp

Code for "Multimodal Trajectory Prediction Conditioned on Lane-Graph Traversals," CoRL 2021.

Home Page: https://proceedings.mlr.press/v164/deo22a.html

License: MIT License

Python 100.00%
trajectory-prediction pytorch autonomous-vehicles nuscenes

pgp's Introduction


Multimodal Trajectory Prediction Conditioned on Lane-Graph Traversals

This repository contains code for "Multimodal trajectory prediction conditioned on lane-graph traversals" by Nachiket Deo, Eric M. Wolff and Oscar Beijbom, presented at CoRL 2021.

@inproceedings{deo2021multimodal,
  title={Multimodal Trajectory Prediction Conditioned on Lane-Graph Traversals},
  author={Deo, Nachiket and Wolff, Eric and Beijbom, Oscar},
  booktitle={5th Annual Conference on Robot Learning},
  year={2021}
}

Note: While I'm one of the authors of the paper, this is an independent re-implementation of the original code developed during an internship at Motional. The code follows the implementation details in the paper. Hope this helps! -Nachiket

Installation

  1. Clone this repository

  2. Set up a new conda environment

conda create --name pgp python=3.7
  3. Install dependencies

conda activate pgp

# nuScenes devkit
pip install nuscenes-devkit

# PyTorch: the code has been tested with PyTorch 1.7.1 and CUDA 10.1, but should work with newer versions
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch

# Additional utilities
pip install ray
pip install psutil
pip install positional-encodings==5.0.0
pip install imageio
pip install tensorboard
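
As an optional sanity check (not part of the original instructions), you can verify that the environment sees PyTorch and the GPU before moving on:

# Minimal environment check; the expected version follows the README above.
import torch

print(torch.__version__)          # expected: 1.7.1 (or a newer version)
print(torch.cuda.is_available())  # True if the CUDA toolkit matches your driver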

Dataset

  1. Download the nuScenes dataset. For this project, we only need the following:

    • Metadata for the Trainval split (v1.0)
    • Map expansion pack (v1.3)
  2. Organize the nuScenes root directory as follows

└── nuScenes/
    ├── maps/
    |   ├── basemaps/
    |   ├── expansion/
    |   ├── prediction/
    |   ├── 36092f0b03a857c6a3403e25b4b7aab3.png
    |   ├── 37819e65e09e5547b8a3ceaefba56bb2.png
    |   ├── 53992ee3023e5494b90c316c183be829.png
    |   └── 93406b464a165eaba6d9de76ca09f5da.png
    └── v1.0-trainval
        ├── attribute.json
        ├── calibrated_sensor.json
        ...
        └── visibility.json         
  3. Run the following script to extract pre-processed data. This speeds up training significantly.
python preprocess.py -c configs/preprocess_nuscenes.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/data

Inference

You can download the trained model weights using this link.

To evaluate on the nuScenes val set, run the following script. This will generate a text file with evaluation metrics in the specified output directory. The results should match the benchmark entry on Eval.ai.

python evaluate.py -c configs/pgp_gatx2_lvm_traversal.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/data -o path/to/output/directory -w path/to/trained/weights

To visualize predictions, run the following script. This will generate GIFs for a set of instance tokens (track IDs) from nuScenes val in the specified output directory.

python visualize.py -c configs/pgp_gatx2_lvm_traversal.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/data -o path/to/output/directory -w path/to/trained/weights

Training

To train the model from scratch, run

python train.py -c configs/pgp_gatx2_lvm_traversal.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/data -o path/to/output/directory -n 100

The training script will save training checkpoints and TensorBoard logs in the output directory.

To launch TensorBoard, run

tensorboard --logdir=path/to/output/directory/tensorboard_logs


pgp's Issues

Test set.

Thanks for sharing the code. I was wondering whether the numbers reported here in the repo and in the paper are for the validation set or the test set?

lane node is query, not key&value, right?

Hi,
Thanks for your contribution. I noticed that the lane nodes are transformed into queries and the motion features into keys and values, which is the opposite of the paper's description. Is that a deliberate change?
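
For reference, here is a minimal, illustrative sketch of the pattern described above (not the repository's implementation; the shapes and sizes are made up), with lane-node features as queries and agent motion features as keys and values:

# Illustrative only: lane nodes attend to agent motion features, so each
# lane node gathers context from nearby agents.
import torch
import torch.nn as nn

emb_size = 32                                # hypothetical embedding size
attn = nn.MultiheadAttention(emb_size, num_heads=1)

lane_nodes = torch.randn(164, 1, emb_size)   # (num_nodes, batch, emb)
agent_feats = torch.randn(10, 1, emb_size)   # (num_agents, batch, emb)

# Queries from lane nodes; keys and values from agents.
updated_nodes, _ = attn(lane_nodes, agent_feats, agent_feats)
print(updated_nodes.shape)                   # torch.Size([164, 1, 32])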

real time testing

Hi, which data file should be changed if we want to try on a real-time scenario?

About Visualization of the Lane Graph

Hi, thank you so much for your great work and for open-sourcing it.
I'm interested in the visualization of the lane graph. Could you please provide a short tutorial on that?
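
In the meantime, here is a rough sketch of one way to draw the graph (not an official tutorial). It assumes node_feats is an array of centerline poses per lane node and edges is a list of (source, destination) index pairs; both names are hypothetical, so adapt them to the outputs of get_map_representation.

# Hypothetical lane-graph plot: gray polylines for lane-node centerlines,
# dashed lines connecting the end of one node to the start of its successor.
import numpy as np
import matplotlib.pyplot as plt

def plot_lane_graph(node_feats: np.ndarray, edges):
    """node_feats: (num_nodes, poses_per_node, 2); edges: [(src, dst), ...]"""
    for node in node_feats:
        plt.plot(node[:, 0], node[:, 1], color='gray', linewidth=1)
    for src, dst in edges:
        xs = [node_feats[src, -1, 0], node_feats[dst, 0, 0]]
        ys = [node_feats[src, -1, 1], node_feats[dst, 0, 1]]
        plt.plot(xs, ys, color='steelblue', linestyle='--', linewidth=0.5)
    plt.axis('equal')
    plt.show()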

Questions about GPU utilization

Hi, thank you so much for your great work and for open-sourcing it. I am very interested in your research, including your previous papers.

When I tried to run the source code of PGP, I found that GPU utilization was very low, basically never above 10%, which made training extremely slow.

I'm using an NVIDIA RTX 2080 GPU, with much less compute than the platform used in your paper. I'm not sure if the low utilization is due to GPU memory, so I took the liberty of asking this question. Thank you very much!


Diversity metrics

Could you provide the code for the diversity metrics described in Tables 4 and 5?

Thanks!

Latent var only experiments error

I want to train the model in latent-variable-only mode.

Following issue #13, I changed the train config, but I got this exception.

Do you know what the problem is?

======
Loading NuScenes tables for version v1.0-trainval...
23 category,
8 attribute,
4 visibility,
64386 instance,
12 sensor,
10200 calibrated_sensor,
2631083 ego_pose,
68 log,
850 scene,
34149 sample,
2631083 sample_data,
1166187 sample_annotation,
4 map,
Done loading in 20.600 seconds.

Reverse indexing ...
Done reverse indexing in 5.2 seconds.

Epoch (1/100)
Traceback (most recent call last):
  File "train.py", line 38, in <module>
    trainer.train(num_epochs=int(args.num_epochs), output_dir=args.output_dir)
  File "/media/han/289e8c3e-12db-47da-8b89-15b58bef567d/home/han/prediction_256/PGP/train_eval/trainer.py", line 109, in train
    train_epoch_metrics = self.run_epoch('train', self.tr_dl)
  File "/media/han/289e8c3e-12db-47da-8b89-15b58bef567d/home/han/prediction_256/PGP/train_eval/trainer.py", line 153, in run_epoch
    predictions = self.model(data['inputs'])
  File "/home/han/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/han/289e8c3e-12db-47da-8b89-15b58bef567d/home/han/prediction_256/PGP/models/model.py", line 35, in forward
    outputs = self.decoder(agg_encoding)
  File "/home/han/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/han/289e8c3e-12db-47da-8b89-15b58bef567d/home/han/prediction_256/PGP/models/decoders/lvm.py", line 57, in forward
    raise Exception('Expected ' + str(self.num_samples) + 'encodings for each train/val data')
Exception: Expected 1000encodings for each train/val data

Thank you in advance.

Offroad rate

Thanks for the great work and the repository! Can the author provide codes to calculate Offroad rate? Thank you!
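
Until official code is available, one hedged way to approximate the off-road rate with the nuScenes map expansion API is to count predicted waypoints that do not lie on the drivable area. The sketch below assumes the trajectory has already been transformed into the global frame of the corresponding map:

# Approximate off-road rate: fraction of predicted waypoints outside the
# drivable area. Not the authors' metric code.
import numpy as np
from nuscenes.map_expansion.map_api import NuScenesMap

def offroad_rate(nusc_map: NuScenesMap, traj_global: np.ndarray) -> float:
    """traj_global: (num_points, 2) waypoints in the global map frame."""
    offroad = 0
    for x, y in traj_global:
        layers = nusc_map.layers_on_point(x, y)
        if not layers.get('drivable_area'):  # empty token => not on drivable area
            offroad += 1
    return offroad / len(traj_global)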

Issue of the trained model weights

Thank you very much for this work!
I clicked on the link to download the trained model parameters, but the webpage says "Can't access item. The organization that owns this item doesn't allow you to access it."
May I ask if it is possible to update the link, or could you make a copy and send it to [email protected]? I would greatly appreciate it. Looking forward to your reply.

visualize_graph

Hello, dear author. I want to know how to use the visualize_graph method in NuScenesGraphs.py. Thank you for sharing your code.

Is it really necessary to get indefinite future trajectory in get_visited_edges()?

Hi!

I found that you retrieve an indefinite future trajectory while generating the ground-truth node sequence and look-up table of visited edges for training behavior cloning:

fut_xy = self.helper.get_future_for_agent(i_t, s_t, 300, True)

As an experiment, I modified the line above to:
fut_xy = self.helper.get_future_for_agent(i_t, s_t, 6, True)
Nothing changed except the time range: I set the length of the future trajectory to 6 seconds, which equals the prediction horizon (t_f in preprocess_nuscenes.yml).

After this modification, I re-preprocessed the dataset and re-trained the model.
Here are the metrics I got on the nuScenes validation set with 6 s of ground-truth traversal:

MinADE_5: 1.29
MinADE_10: 0.95
MinFDE_1: 7.26
MissRate_5: 0.52
MissRate_10: 0.34

Encountered some error when evaluating

Hello, thank you for the great work.
I have some problems when I try to evaluate the metrics.
I am using CUDA 11.0 and PyTorch 1.7.1:
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch

I downloaded the trained weights.

I load the weights as
-w /media/ee904/Data_stored/PGP/PGP_lr-scheduler/archive/data.pkl
and then an error pops up; I think it's about the weight format.
Can you help me? Or do I need to use an older version of CUDA?

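For context, a file saved with torch.save is itself a zip archive, so the downloaded checkpoint should be passed to -w whole rather than extracting archive/data.pkl from it. A minimal sketch (the file name here is hypothetical):

# Load the checkpoint file as downloaded, without unpacking it.
import torch

checkpoint = torch.load('path/to/downloaded_checkpoint.tar', map_location='cpu')
print(list(checkpoint.keys()))  # e.g. model and optimizer state dicts, if present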

RuntimeError when training

Thanks for your work! I'm interested in this excellent work, but I got errors when I tried to rerun it:
"RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [36, 32]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True)."
Do you know how to deal with it? Thanks!

about preprocessed code

When I run the third step,

3. Run the following script to extract pre-processed data. This speeds up training significantly.

python preprocess.py -c configs/preprocess_nuscenes.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/data

it fails with a segmentation fault. I checked that

from positional_encodings import PositionalEncodingPermute2D

results in "segmentation fault (core dumped)", but importing positional_encodings on the command line works fine.
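
One possible cause worth checking: in positional-encodings 5.x (the version pinned in this README), the PyTorch classes were moved into a torch-specific submodule, partly so that importing them does not also pull in TensorFlow. A minimal sketch, assuming positional-encodings==5.0.0:

# Import from the torch-specific submodule in positional-encodings >= 5.0.
import torch
from positional_encodings.torch_encodings import PositionalEncodingPermute2D

pos_enc = PositionalEncodingPermute2D(32)  # channels of the feature map
feats = torch.zeros(1, 32, 50, 50)         # (batch, channels, H, W)
print(pos_enc(feats).shape)                # torch.Size([1, 32, 50, 50])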

Errors when training

Thanks for your excellent work!

Unfortunately, when I run the training code, I get some errors:

[libprotobuf FATAL google/protobuf/stubs/common.cc:83] This program was compiled against version 3.9.2 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.20.1).  Contact the program author for an update.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "bazel-out/k8-opt/bin/tensorflow/core/framework/tensor_shape.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  This program was compiled against version 3.9.2 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.20.1).  Contact the program author for an update.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "bazel-out/k8-opt/bin/tensorflow/core/framework/tensor_shape.pb.cc".)
*** SIGABRT received at time=1652294159 on cpu 2 ***
PC: @     0x7f98057c6e87  (unknown)  raise
    @     0x7f9805b8b980  (unknown)  (unknown)
    @ 0x736177206d617267  (unknown)  (unknown)
[2022-05-11 20:35:59,564 E 25630 25630] logging.cc:325: *** SIGABRT received at time=1652294159 on cpu 2 ***
[2022-05-11 20:35:59,564 E 25630 25630] logging.cc:325: PC: @     0x7f98057c6e87  (unknown)  raise
[2022-05-11 20:35:59,566 E 25630 25630] logging.cc:325:     @     0x7f9805b8b980  (unknown)  (unknown)
[2022-05-11 20:35:59,569 E 25630 25630] logging.cc:325:     @ 0x736177206d617267  (unknown)  (unknown)
Fatal Python error: Aborted

Stack (most recent call first):
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 1043 in create_module
  File "<frozen importlib._bootstrap>", line 583 in module_from_spec
  File "<frozen importlib._bootstrap>", line 670 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 967 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 983 in _find_and_load
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 64 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 728 in exec_module
  File "<frozen importlib._bootstrap>", line 677 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 967 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 983 in _find_and_load
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1035 in _handle_fromlist
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorflow/python/pywrap_tfe.py", line 28 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 728 in exec_module
  File "<frozen importlib._bootstrap>", line 677 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 967 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 983 in _find_and_load
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1035 in _handle_fromlist
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorflow/python/eager/context.py", line 35 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 728 in exec_module
  File "<frozen importlib._bootstrap>", line 677 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 967 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 983 in _find_and_load
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1035 in _handle_fromlist
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorflow/python/__init__.py", line 40 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 728 in exec_module
  File "<frozen importlib._bootstrap>", line 677 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 967 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 983 in _find_and_load
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 953 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 983 in _find_and_load
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorflow/__init__.py", line 41 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 728 in exec_module
  File "<frozen importlib._bootstrap>", line 677 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 967 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 983 in _find_and_load
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorboard/compat/__init__.py", line 45 in tf
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorboard/lazy.py", line 50 in load_once
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorboard/lazy.py", line 97 in wrapper
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorboard/lazy.py", line 65 in __getattr__
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/tensorboard/summary/writer/event_file_writer.py", line 72 in __init__
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 62 in __init__
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 252 in _get_file_writer
  File "/media/work2/lishijie/miniconda/envs/pgp/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 221 in __init__
  File "train.py", line 33 in <module>
Aborted (core dumped)

However, I configured the environment according to the README, and if I directly install protobuf==3.9.2, other conflicts prevent the code from running.

My environment is:

Package                      Version
---------------------------- -------------------
absl-py                      1.0.0
aiosignal                    1.2.0
argon2-cffi                  21.3.0
argon2-cffi-bindings         21.2.0
astunparse                   1.6.3
attrs                        21.4.0
backcall                     0.2.0
beautifulsoup4               4.11.1
bleach                       5.0.0
cached-property              1.5.2
cachetools                   5.0.0
certifi                      2021.10.8
cffi                         1.15.0
charset-normalizer           2.0.12
click                        8.1.3
cycler                       0.11.0
debugpy                      1.6.0
decorator                    5.1.1
defusedxml                   0.7.1
descartes                    1.1.0
distlib                      0.3.4
entrypoints                  0.4
fastjsonschema               2.15.3
filelock                     3.6.0
fire                         0.4.0
flatbuffers                  2.0
fonttools                    4.33.3
frozenlist                   1.3.0
gast                         0.3.3
google-auth                  2.6.6
google-auth-oauthlib         0.4.6
google-pasta                 0.2.0
grpcio                       1.43.0
h5py                         2.10.0
idna                         3.3
importlib-metadata           4.11.3
importlib-resources          5.7.1
ipykernel                    6.13.0
ipython                      7.33.0
ipython-genutils             0.2.0
ipywidgets                   7.7.0
jedi                         0.18.1
Jinja2                       3.1.2
joblib                       1.1.0
jsonschema                   4.5.1
jupyter                      1.0.0
jupyter-client               7.3.1
jupyter-console              6.4.3
jupyter-core                 4.10.0
jupyterlab-pygments          0.2.2
jupyterlab-widgets           1.1.0
keras                        2.8.0
Keras-Preprocessing          1.1.2
kiwisolver                   1.4.2
libclang                     14.0.1
Markdown                     3.3.7
MarkupSafe                   2.1.1
matplotlib                   3.5.2
matplotlib-inline            0.1.3
mistune                      0.8.4
mkl-fft                      1.3.1
mkl-random                   1.2.2
mkl-service                  2.4.0
msgpack                      1.0.3
nbclient                     0.6.3
nbconvert                    6.5.0
nbformat                     5.4.0
nest-asyncio                 1.5.5
notebook                     6.4.11
numpy                        1.18.5
nuscenes-devkit              1.1.9
oauthlib                     3.2.0
opencv-python                4.5.5.64
opt-einsum                   3.3.0
packaging                    21.3
pandocfilters                1.5.0
parso                        0.8.3
pexpect                      4.8.0
pickleshare                  0.7.5
Pillow                       9.1.0
pip                          21.2.2
platformdirs                 2.5.2
positional-encodings         4.0.0
prometheus-client            0.14.1
prompt-toolkit               3.0.29
protobuf                     3.20.1
psutil                       5.9.0
ptyprocess                   0.7.0
pyasn1                       0.4.8
pyasn1-modules               0.2.8
pycocotools                  2.0.4
pycparser                    2.21
Pygments                     2.12.0
pyparsing                    3.0.9
pyquaternion                 0.9.9
pyrsistent                   0.18.1
python-dateutil              2.8.2
PyYAML                       6.0
pyzmq                        22.3.0
qtconsole                    5.3.0
QtPy                         2.1.0
ray                          1.12.0
requests                     2.27.1
requests-oauthlib            1.3.1
rsa                          4.8
scikit-learn                 1.0.2
scipy                        1.4.1
Send2Trash                   1.8.0
setuptools                   61.2.0
Shapely                      1.8.2
six                          1.16.0
soupsieve                    2.3.2.post1
tensorboard                  2.8.0
tensorboard-data-server      0.6.1
tensorboard-plugin-wit       1.8.1
tensorflow                   2.3.0
tensorflow-estimator         2.3.0
tensorflow-gpu               2.3.0
tensorflow-io-gcs-filesystem 0.25.0
termcolor                    1.1.0
terminado                    0.13.3
tf-estimator-nightly         2.8.0.dev2021122109
threadpoolctl                3.1.0
tinycss2                     1.1.1
torch                        1.11.0
torchaudio                   0.7.0a0+a853dff
torchvision                  0.8.0a0
tornado                      6.1
tqdm                         4.64.0
traitlets                    5.2.0
typing_extensions            4.2.0
urllib3                      1.26.9
virtualenv                   20.14.1
wcwidth                      0.2.5
webencodings                 0.5.1
Werkzeug                     2.1.2
wheel                        0.37.1
widgetsnbextension           3.6.0
wrapt                        1.14.1
zipp                         3.8.0

Do you know how to solve this issue? Thanks!

Training from scratch and model configuration

Thanks for the great work and the repository!
I have trained the model from scratch and it yields similar results (a bit worse, but almost the same). Shouldn't we pre-train for 100 epochs and then fine-tune for another 100, as stated in the paper? If so, I think it would be good to state this in the README.
It would also be good to indicate how to combine the different encoders/aggregators/decoders to reproduce the entries on the benchmark, or which configurations are possible at all, as some aggregator outputs would not match some decoders - maybe by providing different .yml files?
Thank you!

Error when trying to preprocess data

This error occurs in the multiprocessing library: AttributeError: module 'multiprocessing.reduction' has no attribute 'dump'. Does anyone know why this keeps happening?


Have you ever tested this model on a real vehicle? & How can I change from single agent prediction to multi agent prediction?

Hi, thank you so much for sharing great work.

I have some questions.

  1. Have you ever tested this model on a real vehicle?
    If a model trained using data sampled at 2 Hz is loaded on a real test vehicle, real-time performance cannot be guaranteed. Is there any recommended method to solve this problem?

  2. How can I change from single agent prediction to multi agent prediction?
    I wonder if it can be easily changed or if another process is required.
    If not, do you have any plans to do this?

Thanks for your time.

Inconsistency in KMeans clustering result

Hi, I was trying to retrain PGP, but I ran into an issue with scikit-learn's KMeans implementation. Sometimes, when the model tries to compute the Ward distances, it throws a broadcast exception for dists = wts * centroid_dists + np.diag(np.inf * np.ones(len(cluster_counts))) because the shapes of wts and centroid_dists differ.

The root cause seems to be that cluster_lbls and cluster_ctrs are inconsistent, so calling np.unique() on the cluster labels returns the wrong cluster_cnts. In scikit-learn's documentation, I noticed the following:

cluster_centers_ : ndarray of shape (n_clusters, n_features)
    Coordinates of cluster centers. If the algorithm stops before fully converging (see tol and max_iter), these will not be consistent with labels_.

May I ask how should I handle this exception?
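
One hedged workaround (not from the repository): recompute the labels directly from the returned centers so that the two are consistent before counting cluster sizes, for example:

# If KMeans stops before converging, labels_ can disagree with
# cluster_centers_; re-assigning each point to its nearest center restores
# consistency. The data here is a placeholder.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

data = np.random.rand(100, 2)
km = KMeans(n_clusters=10).fit(data)

labels = pairwise_distances_argmin(data, km.cluster_centers_)
cluster_counts = np.bincount(labels, minlength=km.n_clusters)
assert len(cluster_counts) == len(km.cluster_centers_)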

Encounter the RuntimeError when running train.py

Hello, these days I've tried this code many times, but when I run train.py it always crashes at the beginning.
I followed all the instructions in your README.md, so I'm wondering whether there is anything else I need to set up.

I ran this command with the original .yml file, unmodified.
Hope you can help me, many thanks!
python train.py -c configs/pgp_gatx2_lvm_traversal.yml -r /media/ee904/Data_stored/data/Nuscenes -d /media/ee904/Data_stored/data/Nuscenes/preprocess -o /media/ee904/Data_stored/PGP/results -n 100


Difference between input feature dimensions in the code and the paper

Hi,

Thank you for your contribution.
When I dove deeper into the source code, I found a difference between this implementation and the paper.
According to the paper, the lane node features include (x, y, theta, I_stop_line, I_ped_crossing), which has dimension 5. At the same time, the agent trajectories mentioned in the paper should have dimension 6.

However, in get_map_representation in nuScenes_graphs.py, each lane node feature has dimension 6. Meanwhile, the representation of each surrounding agent trajectory has dimension 5 instead of 6. What is the reason for this change?

Thank you.

Latent var only experiments

Thanks for sharing the great work!

I was wondering how to run the latent-variable-only (LV) decoder from Table 3 (decoder ablations) of the PGP paper. Is there a configuration for it?

Thanks in advance,
Zhen

download of the trained model weight

Thanks for your work!
I can't download the trained model weights via the link you offered because of network restrictions. Could you offer another download channel? I would be very grateful.

Multimodality

Amazing work. How do you implement the multimodality in your decoder?
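
As a rough schematic of the sample-and-cluster idea described in the paper (simplified, not the repository's decoder code): many trajectories are decoded from sampled graph traversals combined with latent-variable samples, and the samples are then clustered into K modes, with mode probabilities given by cluster membership:

# Schematic only: cluster a large set of sampled trajectories into K modes.
import numpy as np
from sklearn.cluster import KMeans

num_samples, horizon, K = 1000, 12, 10
sampled_trajs = np.random.randn(num_samples, horizon, 2)    # placeholder samples

km = KMeans(n_clusters=K).fit(sampled_trajs.reshape(num_samples, -1))
modes = km.cluster_centers_.reshape(K, horizon, 2)          # K representative trajectories
probs = np.bincount(km.labels_, minlength=K) / num_samples  # mode probabilities
print(modes.shape, probs.sum())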

Difference in the paper and code implementation about agents encoding

Hi

First of all, thank you for sharing the code, which is very helpful to the community!

While reading your code, I found a major difference between the code implementation and the paper: the encoding process for surrounding agents.

  • The paper
    The raw features of both vehicles and pedestrians are s_t^i, which include an indicator I^i with value 1 for a pedestrian and 0 for a vehicle.


You use the same GRU to encode both vehicles and pedestrians, so there are 3 GRUs in total: one for lane nodes, one for the target agent, and one for surrounding agents.

image

but the code implementation is different
  • Code
    There is no indicator in the raw feature of vehicles and pedestrians
    And you use 2 independent GRUs to encode vehicles and pedestrians:

# Surrounding agent encoder
self.nbr_pedestrian_emb = nn.Linear(args['nbr_feat_size'], args['nbr_emb_size'])
self.nbr_pedestrian_enc = nn.GRU(args['nbr_emb_size'], args['nbr_enc_size'], batch_first=True)
self.nbr_vehicle_emb = nn.Linear(args['nbr_feat_size'], args['nbr_emb_size'])
self.nbr_vehicle_enc = nn.GRU(args['nbr_emb_size'], args['nbr_enc_size'], batch_first=True)

and then concatenate them:

nbr_encodings = torch.cat((nbr_vehicle_enc, nbr_ped_enc), dim=1)

So there are 4 GRUs in total: one for lane nodes, one for the target agent, one for surrounding vehicles, and one for surrounding pedestrians.


These are two different encoding methods; have you tried both? And what is the reason for choosing the latter?

Best regards
