sgnet.pytorch's Introduction

PyTorch Implementation of Stepwise Goal-Driven Networks for Trajectory Prediction (RA-L/ICRA 2022)

Installation

Cloning

We use part of the dataloader in Trajectron++, so we include Trajectron++ as a submodule.

git clone --recurse-submodules git@github.com:ChuhuaW/SGNet.pytorch.git

Environment

  • Install the conda environment from the yml file:
conda env create --file SGNet_env.yml

Data

  • ETH/UCY: We follow Trajectron++ to preprocess the data splits for the ETH and UCY datasets in this repository. Please refer to their repository for instructions. After the data is generated, create symlinks from the dataset path to ./data:
ln -s path/to/dataset/ ./data/

Training

Stochastic prediction

  • Training on JAAD dataset:
cd SGNet.pytorch
python tools/jaad/train_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset JAAD --model SGNet_CVAE
  • Training on PIE dataset:
cd SGNet.pytorch
python tools/pie/train_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset PIE --model SGNet_CVAE
  • Training on ETH/UCY dataset:
cd SGNet.pytorch
python tools/ethucy/train_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset ETH --model SGNet_CVAE
python tools/ethucy/train_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset HOTEL --model SGNet_CVAE
python tools/ethucy/train_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset UNIV --model SGNet_CVAE
python tools/ethucy/train_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset ZARA1 --model SGNet_CVAE
python tools/ethucy/train_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset ZARA2 --model SGNet_CVAE

Deterministic prediction

  • Training on JAAD dataset:
cd SGNet.pytorch
python tools/jaad/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset JAAD --model SGNet
  • Training on PIE dataset:
cd SGNet.pytorch
python tools/pie/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset PIE --model SGNet
  • Training on ETH/UCY dataset:
cd SGNet.pytorch
python tools/ethucy/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ETH --model SGNet
python tools/ethucy/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset HOTEL --model SGNet
python tools/ethucy/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset UNIV --model SGNet
python tools/ethucy/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ZARA1 --model SGNet
python tools/ethucy/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ZARA2 --model SGNet

Evaluation

Stochastic prediction

  • Evaluating on JAAD dataset:
cd SGNet.pytorch
python tools/jaad/eval_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset JAAD --model SGNet_CVAE --checkpoint path/to/checkpoint
  • Evaluating on PIE dataset:
cd SGNet.pytorch
python tools/pie/eval_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset PIE --model SGNet_CVAE --checkpoint path/to/checkpoint
  • Evaluating on ETH/UCY dataset:
cd SGNet.pytorch
python tools/ethucy/eval_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset ETH --model SGNet_CVAE --checkpoint path/to/checkpoint
python tools/ethucy/eval_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset HOTEL --model SGNet_CVAE --checkpoint path/to/checkpoint
python tools/ethucy/eval_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset UNIV --model SGNet_CVAE --checkpoint path/to/checkpoint
python tools/ethucy/eval_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset ZARA1 --model SGNet_CVAE --checkpoint path/to/checkpoint
python tools/ethucy/eval_cvae.py --gpu $CUDA_VISIBLE_DEVICES --dataset ZARA2 --model SGNet_CVAE --checkpoint path/to/checkpoint

Deterministic prediction

cd SGNet.pytorch
python tools/ethucy/eval_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ETH --model SGNet --checkpoint path/to/checkpoint
python tools/ethucy/eval_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset HOTEL --model SGNet --checkpoint path/to/checkpoint
python tools/ethucy/eval_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset UNIV --model SGNet --checkpoint path/to/checkpoint
python tools/ethucy/eval_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ZARA1 --model SGNet --checkpoint path/to/checkpoint
python tools/ethucy/eval_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ZARA2 --model SGNet --checkpoint path/to/checkpoint

JAAD/PIE checkpoints

Citation

@ARTICLE{9691856,
  author={Wang, Chuhua and Wang, Yuchen and Xu, Mingze and Crandall, David J.},
  journal={IEEE Robotics and Automation Letters}, 
  title={Stepwise Goal-Driven Networks for Trajectory Prediction}, 
  year={2022}}
  • Ranked 3rd on the nuScenes prediction task at the 6th AI Driving Olympics, ICRA 2021

The source code and pretrained models will be made available. Stay tuned.

sgnet.pytorch's People

Contributors

chuhuaw, xumingze0308


sgnet.pytorch's Issues

Big gap between my reproduction and paper's results in JAAD

Hi,
I wonder why there is a big gap between my reproduction and the paper's results on the JAAD dataset.

Here are my results compared to the paper's results:
Deterministic

my reproduction: MSE_05: 806.880521; MSE_10: 2894.095019; MSE_15: 6527.214091;
paper:  MSE_05: 82 MSE_10: 328 MSE_15: 1049 

CVAE

my reproduction: MSE_05: 95.9; MSE_10: 274.0; MSE_15: 617.9
paper:  MSE_05: 37 MSE_10: 86 MSE_15: 197 

My config exactly follows the code's default settings:

lr=5e-04
bbox_type='cxcywh'
dropout=0.0

Looking forward to your reply, thanks a lot @ChuhuaW

SGNet on custom data

Hello,
I would like to use the deterministic framework on datasets other than the default ones (JAAD, PIE, ETH, UCY).

The dataset at my disposal is a CSV with the following features:
['Position_X (m)', 'Position_Y (m)',
'Velocity_X (m/s)', 'Velocity_Y (m/s)',
'Acceleration_X (m/s^2)', 'Acceleration_Y (g)',
'Yaw Angle (rad)', 'Yaw Rate (rad/s)',
'Lateral Offset Left (m)', 'Lateral Offset Right (m)',
'Curvature Left (1)', 'Curvature Right (1)',
'Curvature Derivative Left (1)', 'Curvature Derivative Right (1)',
'Heading Angle Left (rad)']

I was wondering if you had any hints on how to process this data in a way compatible with the SGNet.

At the moment I am trying to recreate a structure similar to the ETH-processed dataset:
(input_x, input_x_st, target_y, target_y_st, first_history_index, scene_name, timestep)
My intuition is that by recreating (input_x, input_x_st, target_y), and perhaps modifying the input layers of SGNet, it should work. However, I cannot understand what "input_x_st" is; any insight into this structure would be useful.

In general, any suggestion, even one unrelated to this approach, will be really appreciated.
Thank you
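
For reference, a minimal sketch of how such tuples might be assembled from a CSV like the one above. The function name, the horizon lengths, and the standardization scheme (offsets relative to the last observed frame, as in Trajectron++-style preprocessing) are assumptions for illustration, not taken from this repository:

import numpy as np
import pandas as pd

OBS_LEN, PRED_LEN = 8, 12  # ETH/UCY-style observation/prediction horizons (assumed)

def make_samples(csv_path, scene_name="custom_scene"):
    df = pd.read_csv(csv_path)
    traj = df[["Position_X (m)", "Position_Y (m)"]].to_numpy(dtype=np.float32)
    samples = []
    for t in range(OBS_LEN, len(traj) - PRED_LEN + 1):
        input_x = traj[t - OBS_LEN:t]        # (OBS_LEN, 2) absolute positions
        target_y = traj[t:t + PRED_LEN]      # (PRED_LEN, 2) absolute positions
        origin = input_x[-1]                 # last observed position
        input_x_st = input_x - origin        # assumed meaning of "_st": standardized/relative
        target_y_st = target_y - origin
        first_history_index = 0              # full history available for this agent
        samples.append((input_x, input_x_st, target_y, target_y_st,
                        first_history_index, scene_name, t))
    return samples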

Model Selection based on test results?

Hi there!

First of all thanks for your great work!
The code you provided is very easy to read and use, that's awesome!

After working with your code, I have some questions regarding the model selection when training SGNet/SGNet_CVAE.

Every epoch has the structure of (train -> val -> test).

  1. For the CVAE models the validation loss is used to schedule the learning rate.
    But for the deterministic models (train_deterministic.py), the line of code where the LR-scheduler is called is commented out.
    Is there a reason for doing that or was this rather a mistake when uploading the code?

  2. Model selection when training an ML model is usually done by selecting the model with the lowest validation loss and then testing this selected model to get the final result.
    For the SGNet models (CVAE and deterministic) you test the model after every epoch and select the model based on the best test result. For JAAD/PIE that is MSE_15 and for ETH/UCY that is ADE_12.
    I have two more questions regarding this workflow:
    a) What is the reason for not using the validation loss for model selection, as is mostly done in the literature?
    b) When testing the model after every epoch, why is the final metric (MSE/ADE) used for model selection and not the test loss?

Thanks in advance
Phillip
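
For context, a generic sketch of the selection scheme the questions above describe as standard practice: the LR scheduler is stepped on validation loss and the checkpoint is chosen on validation loss, with the test set evaluated once at the end. This is not this repository's code; the loss, optimizer settings, and helper names are illustrative assumptions:

import torch

def fit_with_val_selection(model, train_loader, val_loader, num_epochs=50, ckpt="best_val.pth"):
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=5)
    best_val = float("inf")
    for epoch in range(num_epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), y)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(torch.nn.functional.mse_loss(model(x), y).item()
                           for x, y in val_loader) / len(val_loader)
        scheduler.step(val_loss)            # LR schedule driven by validation loss
        if val_loss < best_val:             # select the checkpoint on validation loss
            best_val = val_loss
            torch.save(model.state_dict(), ckpt)
    return best_val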

Visualization is not ideal (JAAD)

Thank you so much for releasing this awesome project.

I have trained the deterministic trajectories using the modified script.
200 epochs, MSE_05: 85.124462; MSE_10: 333.395368; MSE_15: 1057.618436. Then I implemented the visualization.

  1. red is the observed trajectory (center of bbox)
  2. green is the target trajectory (center of bbox)
  3. blue is the predicted trajectory (center of bbox)

I found that the predicted results seem to have a large error, and I want to know if I am wrong somewhere. Thanks!
[visualization screenshots attached to the issue]
code is here test_vis.zip
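
For anyone reproducing this kind of plot, a minimal matplotlib sketch of the color convention described above (observed in red, ground truth in green, prediction in blue). This is not the poster's test_vis code; the function and its arguments are assumptions, with each trajectory given as an array of bbox centers in image coordinates:

import matplotlib.pyplot as plt

def plot_bbox_centers(observed, target, predicted, image=None):
    """Each argument: (T, 2) array of bbox centers (cx, cy) in image coordinates."""
    if image is not None:
        plt.imshow(image)            # imshow already places the origin at the top-left
    else:
        plt.gca().invert_yaxis()     # match image coordinates (y grows downward)
    plt.plot(observed[:, 0], observed[:, 1], "r.-", label="observed")
    plt.plot(target[:, 0], target[:, 1], "g.-", label="ground truth")
    plt.plot(predicted[:, 0], predicted[:, 1], "b.-", label="predicted")
    plt.legend()
    plt.show()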

Additional Features for JAAD/PIE

Hi!

I want to improve the results.

Do you think adding more features from the JAAD/PIE datasets could improve the metrics? If so, which ones?

Also, do you think adding "interaction" or "scene context" features (which are not given in the dataset, so I would have to produce them myself) would improve the metrics? I think for "scene context" I would need a convolutional module, right?

Also, please suggest features that are NOT included in the datasets but that I could extract myself and that might improve the metrics. I was thinking of optical flow, or the acceleration or velocity of the target pedestrians, or any other feature.

ETH/UCY visualization code

Hi,

Can you please provide the code you used in your paper to visualize ETH/UCY scenes with trajectories?

Questions on PIE

For the PIE Dataset the input is a tensor of the form:

  • [Batch Size] [Dec Size = 15] [ 4 ]

And the output goal tensor is of the form:

  • [ Batch Size ] [ 45 ?? ] [ Dec Size = 15 ] [ 4 ]

I have the following questions:

  1. In the input, [ 0 ] [ 0 : 14 ] [ 4 ] would correspond to the 14 frames leading up to the current frame, correct?
  2. In the output goal tensor, where does the 4th dimension come from? If we are predicting 45 frames into the future, why is it not of size [ Batch Size ] [ 45 ?? ] [ 4 ] ?
  3. Is the output goal tensor of the form cxcywh ?

Failed to reproduce results and Bugs in related Trajectron++ code

I found that your implementation uses an old version of the Trajectron++ code with bugs in the data preprocessing (see StanfordASL/Trajectron-plus-plus#53).
I tried to reproduce the reported results using the latest Trajectron++ code with that bug fixed. The following are the results I achieved:

    ADE/FDE (reproduced)    ADE/FDE (reported)
eth:   0.47/0.77 ------------> 0.35/0.65
hotel: 0.20/0.38 ------------> 0.12/0.24
univ:  0.33/0.62 ------------> 0.20/0.42
zara1: 0.18/0.32 ------------> 0.12/0.24
zara2: 0.15/0.28 ------------> 0.10/0.21

I used your implementation with the default configs to reproduce the results. Could you confirm whether I did anything wrong in reproducing them?

training on nuScenes dataset

Hi,
Thank you very much for making the code available.
I wanted to get results on nuScenes dataset after the Trajectron++ bug mentioned in #19 is fixed.
If possible, could you share the dataloader, training interfaces etc. for nuScenes?

Why subtract the bbox at t from the future bboxes of the prediction window in the "get_target" function of "jaad_data_layer.py"?

This piece of your code:

target[i,:,:] = np.asarray(session[target_start:target_start+predict_length,:] -
                           session[target_start-1:target_start,:])

inside the function get_target of file jaad_data_layer.py

I can see that you are subtracting the bbox at the current time t from all the future bboxes that fall inside the prediction window.

But why?

Why can't we just use the future bboxes of the prediction window as they are?
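
For concreteness, a toy NumPy example of what that subtraction computes (the box values below are made up, not from the dataset): each row of target becomes the future box expressed as an offset from the last observed box.

import numpy as np

# Toy cxcywh boxes: last observed box at time t, then three future boxes.
session = np.array([[100., 200., 40., 80.],   # t   (last observed)
                    [102., 201., 40., 80.],   # t+1
                    [105., 203., 41., 81.],   # t+2
                    [109., 206., 42., 82.]])  # t+3
target_start, predict_length = 1, 3
target = session[target_start:target_start + predict_length, :] \
         - session[target_start - 1:target_start, :]
# target == [[2., 1., 0., 0.], [5., 3., 1., 1.], [9., 6., 2., 2.]]
# i.e. each future box is encoded as an offset from the last observed box.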

Choosing epochs based on test results?

Hello,

Thank you very much for your contribution. I have a question about the validation and test process. From the code, it seems like the best epoch is chosen based on test loss, while it should be based on validation loss.
Could you clarify? I am confused about this. Thanks.

Deterministic scripts appear to be missing

In your readme, you write that deterministic scripts are available, but these seem to be missing from the current repo. For example, train_deterministic.py is not available under the tools/jaad folder.

Can you please upload working code with all the scripts available, or as many as possible? Thanks.

Cannot reproduce deterministic results

Hi, I have tried to reproduce your results by running SGNet.pytorch/tools/ethucy/train_deterministic.py and SGNet.pytorch/tools/ethucy/eval_deterministic.py. I didn't change anything except adding some model saving code from your previous commits in train_deterministic.py.

Can you help have a look at whether anything is wrong here?

Here are my training arguments:
python tools/ethucy/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ${dset_name} --model SGNet ${args}
where
dset_name=ETH,args='--lr=0.0005 --dropout=0.5 --sigma=1.5'
dset_name=HOTEL,args='--lr=0.0001 --dropout=0.3'
dset_name=UNIV,args='--lr=0.0001'
dset_name=ZARA1,args='--lr=0.0001'
dset_name=ZARA2,args='--lr=0.0001'

Here are my training results:

ETH: ADE_08: 0.543764; FDE_08: 0.981109; ADE_12: 0.816298; FDE_12: 1.603263;
HOTEL: ADE_08: 0.251949; FDE_08: 0.487048; ADE_12: 0.406558; FDE_12: 0.865508;
UNIV: ADE_08: 0.405024; FDE_08: 0.795781; ADE_12: 0.647388; FDE_12: 1.345341;
ZARA1: ADE_08: 0.235853; FDE_08: 0.470461; ADE_12: 0.381671; FDE_12: 0.803334;
ZARA2: ADE_08: 0.188649; FDE_08: 0.383418; ADE_12: 0.311853; FDE_12: 0.669926;

Here are the training outputs from my terminal (e.g. Zara1):

ZARA1
ZARA1
ZARA1
Number of validation samples: 41
Number of test samples: 19
Train Epoch: 1 	 Goal loss: 13.3936	 Decoder loss: 11.0987	 Total: 24.4923
ADE_08: 0.294679;  FDE_08: 0.551690;  ADE_12: 0.447580;   FDE_12: 0.887748

Saving checkpoints: metric_epoch_001_loss_0.4476.pth
Train Epoch: 2 	 Goal loss: 8.3851	 Decoder loss: 7.7700	 Total: 16.1551
ADE_08: 0.380591;  FDE_08: 0.738910;  ADE_12: 0.574689;   FDE_12: 1.084587

Train Epoch: 3 	 Goal loss: 7.4418	 Decoder loss: 7.2957	 Total: 14.7374
ADE_08: 0.273209;  FDE_08: 0.530974;  ADE_12: 0.424851;   FDE_12: 0.850812

Saving checkpoints: metric_epoch_003_loss_0.4249.pth
Train Epoch: 4 	 Goal loss: 7.1369	 Decoder loss: 7.0551	 Total: 14.1920
ADE_08: 0.286998;  FDE_08: 0.565734;  ADE_12: 0.450437;   FDE_12: 0.906875

Train Epoch: 5 	 Goal loss: 6.9545	 Decoder loss: 6.9175	 Total: 13.8720
ADE_08: 0.260329;  FDE_08: 0.512583;  ADE_12: 0.414379;   FDE_12: 0.856634

Saving checkpoints: metric_epoch_005_loss_0.4144.pth
Train Epoch: 6 	 Goal loss: 6.7755	 Decoder loss: 6.7289	 Total: 13.5045
ADE_08: 0.290368;  FDE_08: 0.585993;  ADE_12: 0.480131;   FDE_12: 1.037864

Train Epoch: 7 	 Goal loss: 6.6577	 Decoder loss: 6.6200	 Total: 13.2777
ADE_08: 0.258839;  FDE_08: 0.510076;  ADE_12: 0.415547;   FDE_12: 0.869290

Train Epoch: 8 	 Goal loss: 6.5346	 Decoder loss: 6.4946	 Total: 13.0292
ADE_08: 0.246034;  FDE_08: 0.491056;  ADE_12: 0.397619;   FDE_12: 0.835792

Saving checkpoints: metric_epoch_008_loss_0.3976.pth
Train Epoch: 9 	 Goal loss: 6.4372	 Decoder loss: 6.4017	 Total: 12.8388
ADE_08: 0.251146;  FDE_08: 0.509857;  ADE_12: 0.417938;   FDE_12: 0.909648

Train Epoch: 10 	 Goal loss: 6.3593	 Decoder loss: 6.3158	 Total: 12.6751
ADE_08: 0.253486;  FDE_08: 0.506422;  ADE_12: 0.410926;   FDE_12: 0.867227

Train Epoch: 11 	 Goal loss: 6.2701	 Decoder loss: 6.2177	 Total: 12.4878
ADE_08: 0.278813;  FDE_08: 0.553365;  ADE_12: 0.449940;   FDE_12: 0.942615

Train Epoch: 12 	 Goal loss: 6.2296	 Decoder loss: 6.2038	 Total: 12.4334
ADE_08: 0.245076;  FDE_08: 0.489589;  ADE_12: 0.398719;   FDE_12: 0.846358

Train Epoch: 13 	 Goal loss: 6.1766	 Decoder loss: 6.1378	 Total: 12.3144
ADE_08: 0.261642;  FDE_08: 0.502992;  ADE_12: 0.413555;   FDE_12: 0.856336

Train Epoch: 14 	 Goal loss: 6.1079	 Decoder loss: 6.0661	 Total: 12.1739
ADE_08: 0.253986;  FDE_08: 0.501157;  ADE_12: 0.408927;   FDE_12: 0.858842

Train Epoch: 15 	 Goal loss: 6.0482	 Decoder loss: 6.0074	 Total: 12.0556
ADE_08: 0.237412;  FDE_08: 0.472600;  ADE_12: 0.385304;   FDE_12: 0.816462

Saving checkpoints: metric_epoch_015_loss_0.3853.pth
Train Epoch: 16 	 Goal loss: 5.9789	 Decoder loss: 5.9207	 Total: 11.8996
ADE_08: 0.248604;  FDE_08: 0.494847;  ADE_12: 0.403149;   FDE_12: 0.851186

Train Epoch: 17 	 Goal loss: 5.9886	 Decoder loss: 5.9620	 Total: 11.9507
ADE_08: 0.264590;  FDE_08: 0.521281;  ADE_12: 0.425393;   FDE_12: 0.888515

Train Epoch: 18 	 Goal loss: 5.8995	 Decoder loss: 5.8390	 Total: 11.7384
ADE_08: 0.249374;  FDE_08: 0.483068;  ADE_12: 0.396050;   FDE_12: 0.820998

Train Epoch: 19 	 Goal loss: 5.8871	 Decoder loss: 5.8269	 Total: 11.7140
ADE_08: 0.241822;  FDE_08: 0.488735;  ADE_12: 0.399218;   FDE_12: 0.861516

Train Epoch: 20 	 Goal loss: 5.8296	 Decoder loss: 5.7692	 Total: 11.5988
ADE_08: 0.238986;  FDE_08: 0.467400;  ADE_12: 0.382569;   FDE_12: 0.800729

Saving checkpoints: metric_epoch_020_loss_0.3826.pth
Train Epoch: 21 	 Goal loss: 5.8035	 Decoder loss: 5.7227	 Total: 11.5262
ADE_08: 0.246278;  FDE_08: 0.498420;  ADE_12: 0.406117;   FDE_12: 0.873861

Train Epoch: 22 	 Goal loss: 5.7840	 Decoder loss: 5.7143	 Total: 11.4984
ADE_08: 0.245556;  FDE_08: 0.489192;  ADE_12: 0.399299;   FDE_12: 0.848396

Train Epoch: 23 	 Goal loss: 5.7622	 Decoder loss: 5.6884	 Total: 11.4506
ADE_08: 0.241803;  FDE_08: 0.473674;  ADE_12: 0.386678;   FDE_12: 0.807247

Train Epoch: 24 	 Goal loss: 5.7139	 Decoder loss: 5.6287	 Total: 11.3426
ADE_08: 0.237854;  FDE_08: 0.473140;  ADE_12: 0.384558;   FDE_12: 0.809680

Train Epoch: 25 	 Goal loss: 5.6809	 Decoder loss: 5.5965	 Total: 11.2774
ADE_08: 0.243205;  FDE_08: 0.483125;  ADE_12: 0.392962;   FDE_12: 0.827676

Train Epoch: 26 	 Goal loss: 5.6746	 Decoder loss: 5.5734	 Total: 11.2479
ADE_08: 0.241132;  FDE_08: 0.483847;  ADE_12: 0.393726;   FDE_12: 0.837666

Train Epoch: 27 	 Goal loss: 5.6426	 Decoder loss: 5.5449	 Total: 11.1875
ADE_08: 0.238708;  FDE_08: 0.473718;  ADE_12: 0.385182;   FDE_12: 0.809509

Train Epoch: 28 	 Goal loss: 5.5965	 Decoder loss: 5.4837	 Total: 11.0802
ADE_08: 0.245108;  FDE_08: 0.486398;  ADE_12: 0.396431;   FDE_12: 0.837428

Train Epoch: 29 	 Goal loss: 5.6095	 Decoder loss: 5.5072	 Total: 11.1167
ADE_08: 0.244368;  FDE_08: 0.483016;  ADE_12: 0.391393;   FDE_12: 0.814580

Train Epoch: 30 	 Goal loss: 5.5618	 Decoder loss: 5.4448	 Total: 11.0066
ADE_08: 0.242074;  FDE_08: 0.474495;  ADE_12: 0.387724;   FDE_12: 0.810403

Train Epoch: 31 	 Goal loss: 5.5415	 Decoder loss: 5.4163	 Total: 10.9578
ADE_08: 0.235853;  FDE_08: 0.470461;  ADE_12: 0.381671;   FDE_12: 0.803334

Saving checkpoints: metric_epoch_031_loss_0.3817.pth
Train Epoch: 32 	 Goal loss: 5.5443	 Decoder loss: 5.4107	 Total: 10.9550
ADE_08: 0.244977;  FDE_08: 0.486834;  ADE_12: 0.394192;   FDE_12: 0.823241

Train Epoch: 33 	 Goal loss: 5.4947	 Decoder loss: 5.3558	 Total: 10.8505
ADE_08: 0.240755;  FDE_08: 0.478803;  ADE_12: 0.389154;   FDE_12: 0.818785

Train Epoch: 34 	 Goal loss: 5.4762	 Decoder loss: 5.3309	 Total: 10.8071
ADE_08: 0.244053;  FDE_08: 0.485707;  ADE_12: 0.394861;   FDE_12: 0.832431

Train Epoch: 35 	 Goal loss: 5.4659	 Decoder loss: 5.3150	 Total: 10.7809
ADE_08: 0.245577;  FDE_08: 0.487264;  ADE_12: 0.395676;   FDE_12: 0.829373

Train Epoch: 36 	 Goal loss: 5.4495	 Decoder loss: 5.2897	 Total: 10.7392
ADE_08: 0.239127;  FDE_08: 0.477790;  ADE_12: 0.386703;   FDE_12: 0.811449

Train Epoch: 37 	 Goal loss: 5.4346	 Decoder loss: 5.2822	 Total: 10.7167
ADE_08: 0.248550;  FDE_08: 0.500743;  ADE_12: 0.406487;   FDE_12: 0.865554

Train Epoch: 38 	 Goal loss: 5.3894	 Decoder loss: 5.2266	 Total: 10.6161
ADE_08: 0.258613;  FDE_08: 0.504668;  ADE_12: 0.411209;   FDE_12: 0.851276

Train Epoch: 39 	 Goal loss: 5.3827	 Decoder loss: 5.2078	 Total: 10.5905
ADE_08: 0.251402;  FDE_08: 0.500842;  ADE_12: 0.406572;   FDE_12: 0.854904

Train Epoch: 40 	 Goal loss: 5.3578	 Decoder loss: 5.1803	 Total: 10.5381
ADE_08: 0.252681;  FDE_08: 0.500086;  ADE_12: 0.407249;   FDE_12: 0.855631

Train Epoch: 41 	 Goal loss: 5.3440	 Decoder loss: 5.1498	 Total: 10.4937
ADE_08: 0.244587;  FDE_08: 0.492789;  ADE_12: 0.399140;   FDE_12: 0.845556

Train Epoch: 42 	 Goal loss: 5.3255	 Decoder loss: 5.1329	 Total: 10.4584
ADE_08: 0.240022;  FDE_08: 0.479255;  ADE_12: 0.389297;   FDE_12: 0.821116

Train Epoch: 43 	 Goal loss: 5.3073	 Decoder loss: 5.1165	 Total: 10.4238
ADE_08: 0.243913;  FDE_08: 0.486718;  ADE_12: 0.395276;   FDE_12: 0.832791

Train Epoch: 44 	 Goal loss: 5.2892	 Decoder loss: 5.0951	 Total: 10.3842
ADE_08: 0.246676;  FDE_08: 0.489490;  ADE_12: 0.398171;   FDE_12: 0.835557

Train Epoch: 45 	 Goal loss: 5.2664	 Decoder loss: 5.0586	 Total: 10.3250
ADE_08: 0.249650;  FDE_08: 0.498006;  ADE_12: 0.402356;   FDE_12: 0.839605

Train Epoch: 46 	 Goal loss: 5.2504	 Decoder loss: 5.0400	 Total: 10.2904
ADE_08: 0.246008;  FDE_08: 0.496606;  ADE_12: 0.402466;   FDE_12: 0.855231

Train Epoch: 47 	 Goal loss: 5.2392	 Decoder loss: 5.0044	 Total: 10.2437
ADE_08: 0.248422;  FDE_08: 0.500811;  ADE_12: 0.406097;   FDE_12: 0.862268

Train Epoch: 48 	 Goal loss: 5.2152	 Decoder loss: 4.9837	 Total: 10.1989
ADE_08: 0.247908;  FDE_08: 0.505135;  ADE_12: 0.409776;   FDE_12: 0.880500

Train Epoch: 49 	 Goal loss: 5.1839	 Decoder loss: 4.9462	 Total: 10.1302
ADE_08: 0.243334;  FDE_08: 0.492921;  ADE_12: 0.399223;   FDE_12: 0.851002

Train Epoch: 50 	 Goal loss: 5.1815	 Decoder loss: 4.9328	 Total: 10.1143
ADE_08: 0.247995;  FDE_08: 0.501935;  ADE_12: 0.406349;   FDE_12: 0.864324

paper code

First of all, thank you for such wonderful research. May I ask when you will release the code of the paper?

CPU RAM usage increasing for every epoch

Hi!

First of all, I really enjoyed your paper. Unfortunately, I've been trying to train your model on PIE, but I always get OOM errors. I didn't change anything other than commenting out the lines importing and using the nuScenes toolkit.

My computer only has 16 GB of RAM and 2 GPUs with 11 GB of VRAM each. I reduced the batch size from 128 to 16 and the number of workers from 8 to 2. However, after step 216/2701 in epoch 1, the code starts consuming approximately 6 MB of memory per iteration and keeps doing so until I run out of memory. Your code seems right in terms of avoiding appending losses to lists without calling "loss.item()". I really don't know what's going on; I would appreciate any advice on this issue. Thanks.
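
For anyone debugging a similar leak, a generic PyTorch illustration of the pattern referred to above (not this repository's code; the helpers passed in are placeholders): appending the loss tensor itself retains the autograd graph each iteration, while .item() stores only a float.

import torch

def train_epoch(model, loader, criterion, optimizer):
    losses = []
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        # losses.append(loss)        # would retain the computation graph -> memory grows
        losses.append(loss.item())   # stores only a plain Python float
    return sum(losses) / len(losses)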

Slightly weird validation curve

Hi guys,

First of all, thanks for your great contribution! I am training the deterministic model on JAAD from scratch. I am getting results close to the published ones; however, the validation loss curve looks a bit strange:

  • First 5-7 epochs: the model does not improve on validation loss at all (it even slightly loses performance), stagnating at around 3.3-3.5.
  • Epochs 7-9: sudden drop in validation loss to about 2.0.
  • Epoch 9 - end (50): Validation loss creeps down to about 1.65.

Did you guys also experience this stagnating loss curve at the start, after which a sudden drop occurred? And if so, do you have an explanation for this?

Thanks in advance for answering the question. Cheers, Seger

Pretrained model

Hello, due to hardware limitations, training takes me a very long time. Could you share the pretrained model?

Paper question: 1.5 second MSE of PIE on JAAD dataset

Hi,

I got another question regarding the paper (I hope you do not mind me dropping it here): the performance of the PIE_traj method on JAAD for the 1.5-second prediction horizon is indicated as 1280 MSE. However, according to the PIE authors themselves, the performance is 1248 MSE for this particular prediction horizon. The performance mentioned in your paper for all other prediction horizons and metrics (CMSE & CFMSE) on JAAD matches the values reported by the PIE authors. Is there a particular reason why you changed the 1.5-second MSE? Reproducibility or typo?

Looking forward to your response. Cheers, Seger

Deterministic Training of PIE/JAAD

Hi, I managed to modify your code and train the deterministic model, but the result is not as good as reported in your paper.
I got 39.8 / 149.3 / 489.5 after 50 epochs for the PIE dataset, and 88.4 / 357.3 / 1153.4 after 50 epochs, while your paper reports much better results than this. I think I may be missing some hyperparameters for training. Could you give some suggestions for deterministic training on JAAD/PIE to reproduce your results?

when training

Hello, when I try to run the code, this error occurs: there is no module named "nuscenes.prediction". I found it used in lib/utils/eval_utils.py: from nuscenes.prediction import convert_local_coords_to_global. I would like to know whether this module is part of the installed package environment. It is not in this folder, and I did not find the module in the installed packages. Thanks!
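
If that import refers to the public nuScenes devkit (an assumption; the module is not bundled with this repository), it can typically be installed with:

pip install nuscenes-devkit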
