
opendrivelab / uniad

3.1K 34.0 336.0 9.83 MB

[CVPR'23 Best Paper Award] Planning-oriented Autonomous Driving

License: Apache License 2.0

Python 99.37% Shell 0.42% Dockerfile 0.21%
autonomous-driving end-to-end-autonomous-driving motion-prediction multi-object-tracking perception-prediction-planning bev-segmentation motion-planning occupancy-prediction autonomous-driving-framework

uniad's People

Contributors

chonghaosima, eltociear, faikit, hli2020, ilnehc, wljungbergh, yihanhu, ytep-zhi


uniad's Issues

About detections

Very cool, this is great work!

From the homepage demo video, I notice the model only detects vehicles. Does the end-to-end model also detect traffic lights and traffic signs?

Some questions about the open-loop evaluation framework.

Congratulations on your great work, and thanks for sharing the code.

In the paper "Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes", the authors design an MLP-based method that takes only ego-vehicle state information (e.g., past trajectory, velocity) as input and directly outputs the future trajectory of the ego vehicle, without using any perception or prediction information such as camera images or LiDAR.
Surprisingly, such a simple method achieves state-of-the-art end-to-end planning performance on the nuScenes dataset, reducing the average L2 error by about 30%.
They conclude that the current open-loop evaluation scheme for end-to-end autonomous driving on nuScenes may need to be rethought.

What do you think of this experiment? https://github.com/E2E-AD/AD-MLP
Is there a problem with their experimental results, or do we really need a new open-loop/closed-loop evaluation framework?
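
For context, a minimal sketch of such an ego-state-only planner might look like the following. This is only an illustration, not the AD-MLP authors' code; the input features, layer widths, and horizon lengths are assumptions.

```python
# Hedged sketch of an ego-state-only MLP planner (dimensions are illustrative).
import torch
import torch.nn as nn

class EgoStateMLPPlanner(nn.Module):
    def __init__(self, past_steps=4, future_steps=6, hidden=512):
        super().__init__()
        # Input: flattened past (x, y) positions plus velocity, acceleration, yaw rate.
        in_dim = past_steps * 2 + 3
        self.future_steps = future_steps
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, future_steps * 2),  # future (x, y) waypoints
        )

    def forward(self, past_xy, ego_state):
        # past_xy: (B, past_steps, 2); ego_state: (B, 3)
        x = torch.cat([past_xy.flatten(1), ego_state], dim=1)
        return self.mlp(x).view(-1, self.future_steps, 2)

# e.g. planner = EgoStateMLPPlanner(); traj = planner(torch.randn(8, 4, 2), torch.randn(8, 3))
```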

RuntimeError: cusolver error: CUSOLVER_STATUS_INTERNAL_ERROR, when calling `cusolverDnCreate(handle)`

Exception has occurred: RuntimeError
cusolver error: CUSOLVER_STATUS_INTERNAL_ERROR, when calling cusolverDnCreate(handle)
File "/workspaces/UniAD-main/projects/mmdet3d_plugin/uniad/detectors/uniad_track.py", line 270, in velo_update
g2l_r = torch.linalg.inv(l2g_r2).type(torch.float)
File "/workspaces/UniAD-main/projects/mmdet3d_plugin/uniad/detectors/uniad_track.py", line 643, in _forward_single_frame_inference
ref_pts = self.velo_update(
File "/workspaces/UniAD-main/projects/mmdet3d_plugin/uniad/detectors/uniad_track.py", line 748, in simple_test_track
frame_res = self._forward_single_frame_inference(
File "/workspaces/UniAD-main/projects/mmdet3d_plugin/uniad/detectors/uniad_e2e.py", line 292, in forward_test
result_track = self.simple_test_track(img, l2g_t, l2g_r_mat, img_metas, timestamp)
File "/workspaces/UniAD-main/projects/mmdet3d_plugin/uniad/detectors/uniad_e2e.py", line 83, in forward
return self.forward_test(**kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 799, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/workspaces/UniAD-main/projects/mmdet3d_plugin/uniad/apis/test.py", line 90, in custom_multi_gpu_test
result = model(return_loss=False, rescale=True, **data)
File "/workspaces/UniAD-main/tools/test.py", line 231, in main
outputs = custom_multi_gpu_test(model, data_loader, args.tmpdir,
File "/workspaces/UniAD-main/tools/test.py", line 261, in
main()
RuntimeError: cusolver error: CUSOLVER_STATUS_INTERNAL_ERROR, when calling cusolverDnCreate(handle)
The detection results of this project look amazing, so I wanted to reproduce them myself. I set up the project environment with Docker. After the project was running, this error occurred while iterating over the dataloader and running model inference. Strangely, before the error appeared, the code had already completed inference on the first batch from the dataloader and produced results, but the error occurs when iterating to the second batch. I have been stuck on this for a long time; I would be very grateful for any pointers.
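
One commonly tried workaround for this class of cuSOLVER failure is to fall back to a CPU inverse when the CUDA solver errors out. The sketch below is a hedged suggestion, not an official fix; `safe_inverse` is a hypothetical helper.

```python
# Hedged workaround sketch: retry torch.linalg.inv on CPU if cuSOLVER fails.
import torch

def safe_inverse(mat: torch.Tensor) -> torch.Tensor:
    """Invert `mat`, falling back to a CPU inverse if the CUDA solver fails."""
    try:
        return torch.linalg.inv(mat)
    except RuntimeError:
        # cuSOLVER internal errors are often driver/CUDA version mismatches
        # rather than bad input; the CPU path usually still succeeds.
        return torch.linalg.inv(mat.cpu()).to(mat.device)

# e.g. in velo_update: g2l_r = safe_inverse(l2g_r2).type(torch.float)
```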

File Not Found Error

This may be a bug in the code. I tried to figure it out but could not find the cause. The image path looks broken:
FileNotFoundError: img file does not exist: data/nuscenes/./data/nuscenes/samples/CAM_FRONT/n008-2018-08-01-15-16-36-0400__CAM_FRONT__1533151603512404.jpg

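The duplicated "data/nuscenes/./data/nuscenes/..." prefix suggests the info file already stores paths relative to the repository root, which then get joined with the data root a second time. Below is a hedged illustration of a defensive normalization, not the repository's actual fix; `normalize_img_path` is a hypothetical helper.

```python
# Hedged sketch: avoid 'data/nuscenes/./data/nuscenes/...'-style duplicated prefixes.
import os

def normalize_img_path(data_root: str, filename: str) -> str:
    filename = filename.lstrip("./")
    if filename.startswith(data_root.rstrip("/")):
        return filename                      # already includes the data root
    return os.path.join(data_root, filename)

print(normalize_img_path("data/nuscenes/", "./data/nuscenes/samples/CAM_FRONT/x.jpg"))
# -> data/nuscenes/samples/CAM_FRONT/x.jpg
```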

How to generate ground truth labels for occupancy prediction?

Thanks for your great work!

As demonstrated in the paper, the occupancy head's final output is a binary segmentation indicating whether each BEV grid cell is free or occupied. The original nuScenes dataset does not provide occupancy labels. Is there any documentation that explains how the ground-truth labels for occupancy prediction are generated?
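
For intuition, one way such a binary BEV occupancy label could be rasterized from box annotations is sketched below. This is only an illustration of the idea, not UniAD's actual label-generation pipeline; the grid extent and resolution are assumptions.

```python
# Hedged sketch: rasterize oriented BEV boxes into a binary occupancy grid.
import numpy as np

def rasterize_occupancy(boxes_xywl_yaw, grid=200, pc_range=51.2):
    """boxes_xywl_yaw: (N, 5) array of (cx, cy, w, l, yaw) in ego/BEV metres."""
    occ = np.zeros((grid, grid), dtype=np.uint8)
    res = 2 * pc_range / grid                       # metres per cell
    ys, xs = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    # Cell centres in metres, with the ego vehicle at the grid centre.
    cx_m = (xs + 0.5) * res - pc_range
    cy_m = (ys + 0.5) * res - pc_range
    for cx, cy, w, l, yaw in boxes_xywl_yaw:
        # Transform cell centres into the box frame and test the rectangle.
        dx, dy = cx_m - cx, cy_m - cy
        lon = dx * np.cos(yaw) + dy * np.sin(yaw)
        lat = -dx * np.sin(yaw) + dy * np.cos(yaw)
        occ |= ((np.abs(lon) <= l / 2) & (np.abs(lat) <= w / 2)).astype(np.uint8)
    return occ  # 1 = occupied, 0 = free
```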

FileNotFoundError: img file does not exist: data/nuscenes/samples/CAM_FRONT/n015-2018-07-11-11-54-16+0800__CAM_FRONT__1531281439762460.jpg

My environment is a standalone Ubuntu machine with an NVIDIA RTX 3080 Ti.

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ ./tools/uniad_dist_eval.sh ./projects/configs/stage1_track_map/base_track_map.py ./ckpts/uniad_base_track_map.pth 1
projects.mmdet3d_plugin
======
Loading NuScenes tables for version v1.0-trainval...
23 category,
8 attribute,
4 visibility,
64386 instance,
12 sensor,
10200 calibrated_sensor,
2631083 ego_pose,
68 log,
850 scene,
34149 sample,
2631083 sample_data,
1166187 sample_annotation,
4 map,
Done loading in 19.026 seconds.
======
Reverse indexing ...
Done reverse indexing in 3.4 seconds.
======
load checkpoint from local path: ./ckpts/uniad_base_track_map.pth
2023-07-06 19:35:38,816 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.0.conv2 is upgraded to version 2.
2023-07-06 19:35:38,819 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.1.conv2 is upgraded to version 2.
2023-07-06 19:35:38,821 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.2.conv2 is upgraded to version 2.
2023-07-06 19:35:38,823 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.3.conv2 is upgraded to version 2.
2023-07-06 19:35:38,825 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.4.conv2 is upgraded to version 2.
2023-07-06 19:35:38,828 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.5.conv2 is upgraded to version 2.
2023-07-06 19:35:38,830 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.6.conv2 is upgraded to version 2.
2023-07-06 19:35:38,832 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.7.conv2 is upgraded to version 2.
2023-07-06 19:35:38,835 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.8.conv2 is upgraded to version 2.
2023-07-06 19:35:38,837 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.9.conv2 is upgraded to version 2.
2023-07-06 19:35:38,839 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.10.conv2 is upgraded to version 2.
2023-07-06 19:35:38,842 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.11.conv2 is upgraded to version 2.
2023-07-06 19:35:38,844 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.12.conv2 is upgraded to version 2.
2023-07-06 19:35:38,846 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.13.conv2 is upgraded to version 2.
2023-07-06 19:35:38,849 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.14.conv2 is upgraded to version 2.
2023-07-06 19:35:38,851 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.15.conv2 is upgraded to version 2.
2023-07-06 19:35:38,853 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.16.conv2 is upgraded to version 2.
2023-07-06 19:35:38,855 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.17.conv2 is upgraded to version 2.
2023-07-06 19:35:38,858 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.18.conv2 is upgraded to version 2.
2023-07-06 19:35:38,860 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.19.conv2 is upgraded to version 2.
2023-07-06 19:35:38,862 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.20.conv2 is upgraded to version 2.
2023-07-06 19:35:38,864 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.21.conv2 is upgraded to version 2.
2023-07-06 19:35:38,866 - root - INFO - ModulatedDeformConvPack img_backbone.layer3.22.conv2 is upgraded to version 2.
2023-07-06 19:35:38,869 - root - INFO - ModulatedDeformConvPack img_backbone.layer4.0.conv2 is upgraded to version 2.
2023-07-06 19:35:38,874 - root - INFO - ModulatedDeformConvPack img_backbone.layer4.1.conv2 is upgraded to version 2.
2023-07-06 19:35:38,878 - root - INFO - ModulatedDeformConvPack img_backbone.layer4.2.conv2 is upgraded to version 2.
The model and loaded state dict do not match exactly

unexpected key in source state_dict: bbox_size_fc.weight, bbox_size_fc.bias, occ_head.bev_light_proj.conv_layers.0.conv.weight, occ_head.bev_light_proj.conv_layers.0.bn.weight, occ_head.bev_light_proj.conv_layers.0.bn.bias, occ_head.bev_light_proj.conv_layers.0.bn.running_mean, occ_head.bev_light_proj.conv_layers.0.bn.running_var, occ_head.bev_light_proj.conv_layers.0.bn.num_batches_tracked, occ_head.bev_light_proj.conv_layers.1.conv.weight, occ_head.bev_light_proj.conv_layers.1.bn.weight, occ_head.bev_light_proj.conv_layers.1.bn.bias, occ_head.bev_light_proj.conv_layers.1.bn.running_mean, occ_head.bev_light_proj.conv_layers.1.bn.running_var, occ_head.bev_light_proj.conv_layers.1.bn.num_batches_tracked, occ_head.bev_light_proj.conv_layers.2.conv.weight, occ_head.bev_light_proj.conv_layers.2.bn.weight, occ_head.bev_light_proj.conv_layers.2.bn.bias, occ_head.bev_light_proj.conv_layers.2.bn.running_mean, occ_head.bev_light_proj.conv_layers.2.bn.running_var, occ_head.bev_light_proj.conv_layers.2.bn.num_batches_tracked, occ_head.bev_light_proj.conv_layers.3.weight, occ_head.bev_light_proj.conv_layers.3.bias, occ_head.base_downscale.0.layers.conv_down_project.weight, occ_head.base_downscale.0.layers.abn_down_project.0.weight, occ_head.base_downscale.0.layers.abn_down_project.0.bias, occ_head.base_downscale.0.layers.abn_down_project.0.running_mean, occ_head.base_downscale.0.layers.abn_down_project.0.running_var, occ_head.base_downscale.0.layers.abn_down_project.0.num_batches_tracked, occ_head.base_downscale.0.layers.conv.weight, occ_head.base_downscale.0.layers.abn.0.weight, occ_head.base_downscale.0.layers.abn.0.bias, occ_head.base_downscale.0.layers.abn.0.running_mean, occ_head.base_downscale.0.layers.abn.0.running_var, occ_head.base_downscale.0.layers.abn.0.num_batches_tracked, occ_head.base_downscale.0.layers.conv_up_project.weight, occ_head.base_downscale.0.layers.abn_up_project.0.weight, occ_head.base_downscale.0.layers.abn_up_project.0.bias, occ_head.base_downscale.0.layers.abn_up_project.0.running_mean, occ_head.base_downscale.0.layers.abn_up_project.0.running_var, occ_head.base_downscale.0.layers.abn_up_project.0.num_batches_tracked, occ_head.base_downscale.0.projection.conv_skip_proj.weight, occ_head.base_downscale.0.projection.bn_skip_proj.weight, occ_head.base_downscale.0.projection.bn_skip_proj.bias, occ_head.base_downscale.0.projection.bn_skip_proj.running_mean, occ_head.base_downscale.0.projection.bn_skip_proj.running_var, occ_head.base_downscale.0.projection.bn_skip_proj.num_batches_tracked, occ_head.base_downscale.1.layers.conv_down_project.weight, occ_head.base_downscale.1.layers.abn_down_project.0.weight, occ_head.base_downscale.1.layers.abn_down_project.0.bias, occ_head.base_downscale.1.layers.abn_down_project.0.running_mean, occ_head.base_downscale.1.layers.abn_down_project.0.running_var, occ_head.base_downscale.1.layers.abn_down_project.0.num_batches_tracked, occ_head.base_downscale.1.layers.conv.weight, occ_head.base_downscale.1.layers.abn.0.weight, occ_head.base_downscale.1.layers.abn.0.bias, occ_head.base_downscale.1.layers.abn.0.running_mean, occ_head.base_downscale.1.layers.abn.0.running_var, occ_head.base_downscale.1.layers.abn.0.num_batches_tracked, occ_head.base_downscale.1.layers.conv_up_project.weight, occ_head.base_downscale.1.layers.abn_up_project.0.weight, occ_head.base_downscale.1.layers.abn_up_project.0.bias, occ_head.base_downscale.1.layers.abn_up_project.0.running_mean, occ_head.base_downscale.1.layers.abn_up_project.0.running_var, 
occ_head.base_downscale.1.layers.abn_up_project.0.num_batches_tracked, occ_head.base_downscale.1.projection.conv_skip_proj.weight, occ_head.base_downscale.1.projection.bn_skip_proj.weight, occ_head.base_downscale.1.projection.bn_skip_proj.bias, occ_head.base_downscale.1.projection.bn_skip_proj.running_mean, occ_head.base_downscale.1.projection.bn_skip_proj.running_var, occ_head.base_downscale.1.projection.bn_skip_proj.num_batches_tracked, occ_head.transformer_decoder.layers.0.attentions.0.attn.in_proj_weight, occ_head.transformer_decoder.layers.0.attentions.0.attn.in_proj_bias, occ_head.transformer_decoder.layers.0.attentions.0.attn.out_proj.weight, occ_head.transformer_decoder.layers.0.attentions.0.attn.out_proj.bias, occ_head.transformer_decoder.layers.0.attentions.1.attn.in_proj_weight, occ_head.transformer_decoder.layers.0.attentions.1.attn.in_proj_bias, occ_head.transformer_decoder.layers.0.attentions.1.attn.out_proj.weight, occ_head.transformer_decoder.layers.0.attentions.1.attn.out_proj.bias, occ_head.transformer_decoder.layers.0.ffns.0.layers.0.0.weight, occ_head.transformer_decoder.layers.0.ffns.0.layers.0.0.bias, occ_head.transformer_decoder.layers.0.ffns.0.layers.1.weight, occ_head.transformer_decoder.layers.0.ffns.0.layers.1.bias, occ_head.transformer_decoder.layers.0.norms.0.weight, occ_head.transformer_decoder.layers.0.norms.0.bias, occ_head.transformer_decoder.layers.0.norms.1.weight, occ_head.transformer_decoder.layers.0.norms.1.bias, occ_head.transformer_decoder.layers.0.norms.2.weight, occ_head.transformer_decoder.layers.0.norms.2.bias, occ_head.transformer_decoder.layers.1.attentions.0.attn.in_proj_weight, occ_head.transformer_decoder.layers.1.attentions.0.attn.in_proj_bias, occ_head.transformer_decoder.layers.1.attentions.0.attn.out_proj.weight, occ_head.transformer_decoder.layers.1.attentions.0.attn.out_proj.bias, occ_head.transformer_decoder.layers.1.attentions.1.attn.in_proj_weight, occ_head.transformer_decoder.layers.1.attentions.1.attn.in_proj_bias, occ_head.transformer_decoder.layers.1.attentions.1.attn.out_proj.weight, occ_head.transformer_decoder.layers.1.attentions.1.attn.out_proj.bias, occ_head.transformer_decoder.layers.1.ffns.0.layers.0.0.weight, occ_head.transformer_decoder.layers.1.ffns.0.layers.0.0.bias, occ_head.transformer_decoder.layers.1.ffns.0.layers.1.weight, occ_head.transformer_decoder.layers.1.ffns.0.layers.1.bias, occ_head.transformer_decoder.layers.1.norms.0.weight, occ_head.transformer_decoder.layers.1.norms.0.bias, occ_head.transformer_decoder.layers.1.norms.1.weight, occ_head.transformer_decoder.layers.1.norms.1.bias, occ_head.transformer_decoder.layers.1.norms.2.weight, occ_head.transformer_decoder.layers.1.norms.2.bias, occ_head.transformer_decoder.layers.2.attentions.0.attn.in_proj_weight, occ_head.transformer_decoder.layers.2.attentions.0.attn.in_proj_bias, occ_head.transformer_decoder.layers.2.attentions.0.attn.out_proj.weight, occ_head.transformer_decoder.layers.2.attentions.0.attn.out_proj.bias, occ_head.transformer_decoder.layers.2.attentions.1.attn.in_proj_weight, occ_head.transformer_decoder.layers.2.attentions.1.attn.in_proj_bias, occ_head.transformer_decoder.layers.2.attentions.1.attn.out_proj.weight, occ_head.transformer_decoder.layers.2.attentions.1.attn.out_proj.bias, occ_head.transformer_decoder.layers.2.ffns.0.layers.0.0.weight, occ_head.transformer_decoder.layers.2.ffns.0.layers.0.0.bias, occ_head.transformer_decoder.layers.2.ffns.0.layers.1.weight, occ_head.transformer_decoder.layers.2.ffns.0.layers.1.bias, 
occ_head.transformer_decoder.layers.2.norms.0.weight, occ_head.transformer_decoder.layers.2.norms.0.bias, occ_head.transformer_decoder.layers.2.norms.1.weight, occ_head.transformer_decoder.layers.2.norms.1.bias, occ_head.transformer_decoder.layers.2.norms.2.weight, occ_head.transformer_decoder.layers.2.norms.2.bias, occ_head.transformer_decoder.layers.3.attentions.0.attn.in_proj_weight, occ_head.transformer_decoder.layers.3.attentions.0.attn.in_proj_bias, occ_head.transformer_decoder.layers.3.attentions.0.attn.out_proj.weight, occ_head.transformer_decoder.layers.3.attentions.0.attn.out_proj.bias, occ_head.transformer_decoder.layers.3.attentions.1.attn.in_proj_weight, occ_head.transformer_decoder.layers.3.attentions.1.attn.in_proj_bias, occ_head.transformer_decoder.layers.3.attentions.1.attn.out_proj.weight, occ_head.transformer_decoder.layers.3.attentions.1.attn.out_proj.bias, occ_head.transformer_decoder.layers.3.ffns.0.layers.0.0.weight, occ_head.transformer_decoder.layers.3.ffns.0.layers.0.0.bias, occ_head.transformer_decoder.layers.3.ffns.0.layers.1.weight, occ_head.transformer_decoder.layers.3.ffns.0.layers.1.bias, occ_head.transformer_decoder.layers.3.norms.0.weight, occ_head.transformer_decoder.layers.3.norms.0.bias, occ_head.transformer_decoder.layers.3.norms.1.weight, occ_head.transformer_decoder.layers.3.norms.1.bias, occ_head.transformer_decoder.layers.3.norms.2.weight, occ_head.transformer_decoder.layers.3.norms.2.bias, occ_head.transformer_decoder.layers.4.attentions.0.attn.in_proj_weight, occ_head.transformer_decoder.layers.4.attentions.0.attn.in_proj_bias, occ_head.transformer_decoder.layers.4.attentions.0.attn.out_proj.weight, occ_head.transformer_decoder.layers.4.attentions.0.attn.out_proj.bias, occ_head.transformer_decoder.layers.4.attentions.1.attn.in_proj_weight, occ_head.transformer_decoder.layers.4.attentions.1.attn.in_proj_bias, occ_head.transformer_decoder.layers.4.attentions.1.attn.out_proj.weight, occ_head.transformer_decoder.layers.4.attentions.1.attn.out_proj.bias, occ_head.transformer_decoder.layers.4.ffns.0.layers.0.0.weight, occ_head.transformer_decoder.layers.4.ffns.0.layers.0.0.bias, occ_head.transformer_decoder.layers.4.ffns.0.layers.1.weight, occ_head.transformer_decoder.layers.4.ffns.0.layers.1.bias, occ_head.transformer_decoder.layers.4.norms.0.weight, occ_head.transformer_decoder.layers.4.norms.0.bias, occ_head.transformer_decoder.layers.4.norms.1.weight, occ_head.transformer_decoder.layers.4.norms.1.bias, occ_head.transformer_decoder.layers.4.norms.2.weight, occ_head.transformer_decoder.layers.4.norms.2.bias, occ_head.transformer_decoder.post_norm.weight, occ_head.transformer_decoder.post_norm.bias, occ_head.temporal_mlps.0.layers.0.weight, occ_head.temporal_mlps.0.layers.0.bias, occ_head.temporal_mlps.0.layers.1.weight, occ_head.temporal_mlps.0.layers.1.bias, occ_head.temporal_mlps.1.layers.0.weight, occ_head.temporal_mlps.1.layers.0.bias, occ_head.temporal_mlps.1.layers.1.weight, occ_head.temporal_mlps.1.layers.1.bias, occ_head.temporal_mlps.2.layers.0.weight, occ_head.temporal_mlps.2.layers.0.bias, occ_head.temporal_mlps.2.layers.1.weight, occ_head.temporal_mlps.2.layers.1.bias, occ_head.temporal_mlps.3.layers.0.weight, occ_head.temporal_mlps.3.layers.0.bias, occ_head.temporal_mlps.3.layers.1.weight, occ_head.temporal_mlps.3.layers.1.bias, occ_head.temporal_mlps.4.layers.0.weight, occ_head.temporal_mlps.4.layers.0.bias, occ_head.temporal_mlps.4.layers.1.weight, occ_head.temporal_mlps.4.layers.1.bias, 
occ_head.downscale_convs.0.layers.conv_down_project.weight, occ_head.downscale_convs.0.layers.abn_down_project.0.weight, occ_head.downscale_convs.0.layers.abn_down_project.0.bias, occ_head.downscale_convs.0.layers.abn_down_project.0.running_mean, occ_head.downscale_convs.0.layers.abn_down_project.0.running_var, occ_head.downscale_convs.0.layers.abn_down_project.0.num_batches_tracked, occ_head.downscale_convs.0.layers.conv.weight, occ_head.downscale_convs.0.layers.abn.0.weight, occ_head.downscale_convs.0.layers.abn.0.bias, occ_head.downscale_convs.0.layers.abn.0.running_mean, occ_head.downscale_convs.0.layers.abn.0.running_var, occ_head.downscale_convs.0.layers.abn.0.num_batches_tracked, occ_head.downscale_convs.0.layers.conv_up_project.weight, occ_head.downscale_convs.0.layers.abn_up_project.0.weight, occ_head.downscale_convs.0.layers.abn_up_project.0.bias, occ_head.downscale_convs.0.layers.abn_up_project.0.running_mean, occ_head.downscale_convs.0.layers.abn_up_project.0.running_var, occ_head.downscale_convs.0.layers.abn_up_project.0.num_batches_tracked, occ_head.downscale_convs.0.projection.conv_skip_proj.weight, occ_head.downscale_convs.0.projection.bn_skip_proj.weight, occ_head.downscale_convs.0.projection.bn_skip_proj.bias, occ_head.downscale_convs.0.projection.bn_skip_proj.running_mean, occ_head.downscale_convs.0.projection.bn_skip_proj.running_var, occ_head.downscale_convs.0.projection.bn_skip_proj.num_batches_tracked, occ_head.downscale_convs.1.layers.conv_down_project.weight, occ_head.downscale_convs.1.layers.abn_down_project.0.weight, occ_head.downscale_convs.1.layers.abn_down_project.0.bias, occ_head.downscale_convs.1.layers.abn_down_project.0.running_mean, occ_head.downscale_convs.1.layers.abn_down_project.0.running_var, occ_head.downscale_convs.1.layers.abn_down_project.0.num_batches_tracked, occ_head.downscale_convs.1.layers.conv.weight, occ_head.downscale_convs.1.layers.abn.0.weight, occ_head.downscale_convs.1.layers.abn.0.bias, occ_head.downscale_convs.1.layers.abn.0.running_mean, occ_head.downscale_convs.1.layers.abn.0.running_var, occ_head.downscale_convs.1.layers.abn.0.num_batches_tracked, occ_head.downscale_convs.1.layers.conv_up_project.weight, occ_head.downscale_convs.1.layers.abn_up_project.0.weight, occ_head.downscale_convs.1.layers.abn_up_project.0.bias, occ_head.downscale_convs.1.layers.abn_up_project.0.running_mean, occ_head.downscale_convs.1.layers.abn_up_project.0.running_var, occ_head.downscale_convs.1.layers.abn_up_project.0.num_batches_tracked, occ_head.downscale_convs.1.projection.conv_skip_proj.weight, occ_head.downscale_convs.1.projection.bn_skip_proj.weight, occ_head.downscale_convs.1.projection.bn_skip_proj.bias, occ_head.downscale_convs.1.projection.bn_skip_proj.running_mean, occ_head.downscale_convs.1.projection.bn_skip_proj.running_var, occ_head.downscale_convs.1.projection.bn_skip_proj.num_batches_tracked, occ_head.downscale_convs.2.layers.conv_down_project.weight, occ_head.downscale_convs.2.layers.abn_down_project.0.weight, occ_head.downscale_convs.2.layers.abn_down_project.0.bias, occ_head.downscale_convs.2.layers.abn_down_project.0.running_mean, occ_head.downscale_convs.2.layers.abn_down_project.0.running_var, occ_head.downscale_convs.2.layers.abn_down_project.0.num_batches_tracked, occ_head.downscale_convs.2.layers.conv.weight, occ_head.downscale_convs.2.layers.abn.0.weight, occ_head.downscale_convs.2.layers.abn.0.bias, occ_head.downscale_convs.2.layers.abn.0.running_mean, occ_head.downscale_convs.2.layers.abn.0.running_var, 
occ_head.downscale_convs.2.layers.abn.0.num_batches_tracked, occ_head.downscale_convs.2.layers.conv_up_project.weight, occ_head.downscale_convs.2.layers.abn_up_project.0.weight, occ_head.downscale_convs.2.layers.abn_up_project.0.bias, occ_head.downscale_convs.2.layers.abn_up_project.0.running_mean, occ_head.downscale_convs.2.layers.abn_up_project.0.running_var, occ_head.downscale_convs.2.layers.abn_up_project.0.num_batches_tracked, occ_head.downscale_convs.2.projection.conv_skip_proj.weight, occ_head.downscale_convs.2.projection.bn_skip_proj.weight, occ_head.downscale_convs.2.projection.bn_skip_proj.bias, occ_head.downscale_convs.2.projection.bn_skip_proj.running_mean, occ_head.downscale_convs.2.projection.bn_skip_proj.running_var, occ_head.downscale_convs.2.projection.bn_skip_proj.num_batches_tracked, occ_head.downscale_convs.3.layers.conv_down_project.weight, occ_head.downscale_convs.3.layers.abn_down_project.0.weight, occ_head.downscale_convs.3.layers.abn_down_project.0.bias, occ_head.downscale_convs.3.layers.abn_down_project.0.running_mean, occ_head.downscale_convs.3.layers.abn_down_project.0.running_var, occ_head.downscale_convs.3.layers.abn_down_project.0.num_batches_tracked, occ_head.downscale_convs.3.layers.conv.weight, occ_head.downscale_convs.3.layers.abn.0.weight, occ_head.downscale_convs.3.layers.abn.0.bias, occ_head.downscale_convs.3.layers.abn.0.running_mean, occ_head.downscale_convs.3.layers.abn.0.running_var, occ_head.downscale_convs.3.layers.abn.0.num_batches_tracked, occ_head.downscale_convs.3.layers.conv_up_project.weight, occ_head.downscale_convs.3.layers.abn_up_project.0.weight, occ_head.downscale_convs.3.layers.abn_up_project.0.bias, occ_head.downscale_convs.3.layers.abn_up_project.0.running_mean, occ_head.downscale_convs.3.layers.abn_up_project.0.running_var, occ_head.downscale_convs.3.layers.abn_up_project.0.num_batches_tracked, occ_head.downscale_convs.3.projection.conv_skip_proj.weight, occ_head.downscale_convs.3.projection.bn_skip_proj.weight, occ_head.downscale_convs.3.projection.bn_skip_proj.bias, occ_head.downscale_convs.3.projection.bn_skip_proj.running_mean, occ_head.downscale_convs.3.projection.bn_skip_proj.running_var, occ_head.downscale_convs.3.projection.bn_skip_proj.num_batches_tracked, occ_head.downscale_convs.4.layers.conv_down_project.weight, occ_head.downscale_convs.4.layers.abn_down_project.0.weight, occ_head.downscale_convs.4.layers.abn_down_project.0.bias, occ_head.downscale_convs.4.layers.abn_down_project.0.running_mean, occ_head.downscale_convs.4.layers.abn_down_project.0.running_var, occ_head.downscale_convs.4.layers.abn_down_project.0.num_batches_tracked, occ_head.downscale_convs.4.layers.conv.weight, occ_head.downscale_convs.4.layers.abn.0.weight, occ_head.downscale_convs.4.layers.abn.0.bias, occ_head.downscale_convs.4.layers.abn.0.running_mean, occ_head.downscale_convs.4.layers.abn.0.running_var, occ_head.downscale_convs.4.layers.abn.0.num_batches_tracked, occ_head.downscale_convs.4.layers.conv_up_project.weight, occ_head.downscale_convs.4.layers.abn_up_project.0.weight, occ_head.downscale_convs.4.layers.abn_up_project.0.bias, occ_head.downscale_convs.4.layers.abn_up_project.0.running_mean, occ_head.downscale_convs.4.layers.abn_up_project.0.running_var, occ_head.downscale_convs.4.layers.abn_up_project.0.num_batches_tracked, occ_head.downscale_convs.4.projection.conv_skip_proj.weight, occ_head.downscale_convs.4.projection.bn_skip_proj.weight, occ_head.downscale_convs.4.projection.bn_skip_proj.bias, 
occ_head.downscale_convs.4.projection.bn_skip_proj.running_mean, occ_head.downscale_convs.4.projection.bn_skip_proj.running_var, occ_head.downscale_convs.4.projection.bn_skip_proj.num_batches_tracked, occ_head.upsample_adds.0.upsample_layer.1.weight, occ_head.upsample_adds.0.upsample_layer.2.weight, occ_head.upsample_adds.0.upsample_layer.2.bias, occ_head.upsample_adds.0.upsample_layer.2.running_mean, occ_head.upsample_adds.0.upsample_layer.2.running_var, occ_head.upsample_adds.0.upsample_layer.2.num_batches_tracked, occ_head.upsample_adds.1.upsample_layer.1.weight, occ_head.upsample_adds.1.upsample_layer.2.weight, occ_head.upsample_adds.1.upsample_layer.2.bias, occ_head.upsample_adds.1.upsample_layer.2.running_mean, occ_head.upsample_adds.1.upsample_layer.2.running_var, occ_head.upsample_adds.1.upsample_layer.2.num_batches_tracked, occ_head.upsample_adds.2.upsample_layer.1.weight, occ_head.upsample_adds.2.upsample_layer.2.weight, occ_head.upsample_adds.2.upsample_layer.2.bias, occ_head.upsample_adds.2.upsample_layer.2.running_mean, occ_head.upsample_adds.2.upsample_layer.2.running_var, occ_head.upsample_adds.2.upsample_layer.2.num_batches_tracked, occ_head.upsample_adds.3.upsample_layer.1.weight, occ_head.upsample_adds.3.upsample_layer.2.weight, occ_head.upsample_adds.3.upsample_layer.2.bias, occ_head.upsample_adds.3.upsample_layer.2.running_mean, occ_head.upsample_adds.3.upsample_layer.2.running_var, occ_head.upsample_adds.3.upsample_layer.2.num_batches_tracked, occ_head.upsample_adds.4.upsample_layer.1.weight, occ_head.upsample_adds.4.upsample_layer.2.weight, occ_head.upsample_adds.4.upsample_layer.2.bias, occ_head.upsample_adds.4.upsample_layer.2.running_mean, occ_head.upsample_adds.4.upsample_layer.2.running_var, occ_head.upsample_adds.4.upsample_layer.2.num_batches_tracked, occ_head.dense_decoder.layers.0.conv.1.weight, occ_head.dense_decoder.layers.0.conv.2.weight, occ_head.dense_decoder.layers.0.conv.2.bias, occ_head.dense_decoder.layers.0.conv.2.running_mean, occ_head.dense_decoder.layers.0.conv.2.running_var, occ_head.dense_decoder.layers.0.conv.2.num_batches_tracked, occ_head.dense_decoder.layers.0.conv.4.weight, occ_head.dense_decoder.layers.0.conv.5.weight, occ_head.dense_decoder.layers.0.conv.5.bias, occ_head.dense_decoder.layers.0.conv.5.running_mean, occ_head.dense_decoder.layers.0.conv.5.running_var, occ_head.dense_decoder.layers.0.conv.5.num_batches_tracked, occ_head.dense_decoder.layers.0.up.weight, occ_head.dense_decoder.layers.0.up.bias, occ_head.dense_decoder.layers.1.conv.1.weight, occ_head.dense_decoder.layers.1.conv.2.weight, occ_head.dense_decoder.layers.1.conv.2.bias, occ_head.dense_decoder.layers.1.conv.2.running_mean, occ_head.dense_decoder.layers.1.conv.2.running_var, occ_head.dense_decoder.layers.1.conv.2.num_batches_tracked, occ_head.dense_decoder.layers.1.conv.4.weight, occ_head.dense_decoder.layers.1.conv.5.weight, occ_head.dense_decoder.layers.1.conv.5.bias, occ_head.dense_decoder.layers.1.conv.5.running_mean, occ_head.dense_decoder.layers.1.conv.5.running_var, occ_head.dense_decoder.layers.1.conv.5.num_batches_tracked, occ_head.dense_decoder.layers.1.up.weight, occ_head.dense_decoder.layers.1.up.bias, occ_head.mode_fuser.0.weight, occ_head.mode_fuser.0.bias, occ_head.mode_fuser.1.weight, occ_head.mode_fuser.1.bias, occ_head.multi_query_fuser.0.weight, occ_head.multi_query_fuser.0.bias, occ_head.multi_query_fuser.1.weight, occ_head.multi_query_fuser.1.bias, occ_head.multi_query_fuser.3.weight, occ_head.multi_query_fuser.3.bias, 
occ_head.query_to_occ_feat.layers.0.weight, occ_head.query_to_occ_feat.layers.0.bias, occ_head.query_to_occ_feat.layers.1.weight, occ_head.query_to_occ_feat.layers.1.bias, occ_head.query_to_occ_feat.layers.2.weight, occ_head.query_to_occ_feat.layers.2.bias, motion_head.learnable_motion_query_embedding.weight, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.self_attn.in_proj_weight, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.self_attn.in_proj_bias, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.self_attn.out_proj.weight, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.self_attn.out_proj.bias, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.linear1.weight, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.linear1.bias, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.linear2.weight, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.linear2.bias, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.norm1.weight, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.norm1.bias, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.norm2.weight, motion_head.deep_interact.intention_interaction_layers.interaction_transformer.norm2.bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.self_attn.in_proj_weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.self_attn.in_proj_bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.self_attn.out_proj.weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.self_attn.out_proj.bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.multihead_attn.in_proj_weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.multihead_attn.in_proj_bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.multihead_attn.out_proj.weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.multihead_attn.out_proj.bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.linear1.weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.linear1.bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.linear2.weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.linear2.bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.norm1.weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.norm1.bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.norm2.weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.norm2.bias, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.norm3.weight, motion_head.deep_interact.track_agent_interaction_layers.0.interaction_transformer.norm3.bias, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.self_attn.in_proj_weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.self_attn.in_proj_bias, 
motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.self_attn.out_proj.weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.self_attn.out_proj.bias, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.multihead_attn.in_proj_weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.multihead_attn.in_proj_bias, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.multihead_attn.out_proj.weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.multihead_attn.out_proj.bias, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.linear1.weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.linear1.bias, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.linear2.weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.linear2.bias, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.norm1.weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.norm1.bias, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.norm2.weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.norm2.bias, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.norm3.weight, motion_head.deep_interact.track_agent_interaction_layers.1.interaction_transformer.norm3.bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.self_attn.in_proj_weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.self_attn.in_proj_bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.self_attn.out_proj.weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.self_attn.out_proj.bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.multihead_attn.in_proj_weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.multihead_attn.in_proj_bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.multihead_attn.out_proj.weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.multihead_attn.out_proj.bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.linear1.weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.linear1.bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.linear2.weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.linear2.bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.norm1.weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.norm1.bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.norm2.weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.norm2.bias, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.norm3.weight, motion_head.deep_interact.track_agent_interaction_layers.2.interaction_transformer.norm3.bias, 
motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.self_attn.in_proj_weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.self_attn.in_proj_bias, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.self_attn.out_proj.weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.self_attn.out_proj.bias, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.multihead_attn.in_proj_weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.multihead_attn.in_proj_bias, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.multihead_attn.out_proj.weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.multihead_attn.out_proj.bias, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.linear1.weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.linear1.bias, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.linear2.weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.linear2.bias, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.norm1.weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.norm1.bias, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.norm2.weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.norm2.bias, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.norm3.weight, motion_head.deep_interact.map_interaction_layers.0.interaction_transformer.norm3.bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.self_attn.in_proj_weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.self_attn.in_proj_bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.self_attn.out_proj.weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.self_attn.out_proj.bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.multihead_attn.in_proj_weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.multihead_attn.in_proj_bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.multihead_attn.out_proj.weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.multihead_attn.out_proj.bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.linear1.weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.linear1.bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.linear2.weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.linear2.bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.norm1.weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.norm1.bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.norm2.weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.norm2.bias, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.norm3.weight, motion_head.deep_interact.map_interaction_layers.1.interaction_transformer.norm3.bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.self_attn.in_proj_weight, 
motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.self_attn.in_proj_bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.self_attn.out_proj.weight, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.self_attn.out_proj.bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.multihead_attn.in_proj_weight, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.multihead_attn.in_proj_bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.multihead_attn.out_proj.weight, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.multihead_attn.out_proj.bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.linear1.weight, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.linear1.bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.linear2.weight, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.linear2.bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.norm1.weight, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.norm1.bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.norm2.weight, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.norm2.bias, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.norm3.weight, motion_head.deep_interact.map_interaction_layers.2.interaction_transformer.norm3.bias, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.sampling_offsets.weight, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.sampling_offsets.bias, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.attention_weights.weight, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.attention_weights.bias, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.value_proj.weight, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.value_proj.bias, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.output_proj.0.weight, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.output_proj.0.bias, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.output_proj.1.weight, motion_head.deep_interact.bev_interaction_layers.0.attentions.0.output_proj.1.bias, motion_head.deep_interact.bev_interaction_layers.0.ffns.0.layers.0.0.weight, motion_head.deep_interact.bev_interaction_layers.0.ffns.0.layers.0.0.bias, motion_head.deep_interact.bev_interaction_layers.0.ffns.0.layers.1.weight, motion_head.deep_interact.bev_interaction_layers.0.ffns.0.layers.1.bias, motion_head.deep_interact.bev_interaction_layers.0.norms.0.weight, motion_head.deep_interact.bev_interaction_layers.0.norms.0.bias, motion_head.deep_interact.bev_interaction_layers.0.norms.1.weight, motion_head.deep_interact.bev_interaction_layers.0.norms.1.bias, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.sampling_offsets.weight, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.sampling_offsets.bias, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.attention_weights.weight, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.attention_weights.bias, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.value_proj.weight, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.value_proj.bias, 
motion_head.deep_interact.bev_interaction_layers.1.attentions.0.output_proj.0.weight, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.output_proj.0.bias, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.output_proj.1.weight, motion_head.deep_interact.bev_interaction_layers.1.attentions.0.output_proj.1.bias, motion_head.deep_interact.bev_interaction_layers.1.ffns.0.layers.0.0.weight, motion_head.deep_interact.bev_interaction_layers.1.ffns.0.layers.0.0.bias, motion_head.deep_interact.bev_interaction_layers.1.ffns.0.layers.1.weight, motion_head.deep_interact.bev_interaction_layers.1.ffns.0.layers.1.bias, motion_head.deep_interact.bev_interaction_layers.1.norms.0.weight, motion_head.deep_interact.bev_interaction_layers.1.norms.0.bias, motion_head.deep_interact.bev_interaction_layers.1.norms.1.weight, motion_head.deep_interact.bev_interaction_layers.1.norms.1.bias, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.sampling_offsets.weight, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.sampling_offsets.bias, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.attention_weights.weight, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.attention_weights.bias, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.value_proj.weight, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.value_proj.bias, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.output_proj.0.weight, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.output_proj.0.bias, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.output_proj.1.weight, motion_head.deep_interact.bev_interaction_layers.2.attentions.0.output_proj.1.bias, motion_head.deep_interact.bev_interaction_layers.2.ffns.0.layers.0.0.weight, motion_head.deep_interact.bev_interaction_layers.2.ffns.0.layers.0.0.bias, motion_head.deep_interact.bev_interaction_layers.2.ffns.0.layers.1.weight, motion_head.deep_interact.bev_interaction_layers.2.ffns.0.layers.1.bias, motion_head.deep_interact.bev_interaction_layers.2.norms.0.weight, motion_head.deep_interact.bev_interaction_layers.2.norms.0.bias, motion_head.deep_interact.bev_interaction_layers.2.norms.1.weight, motion_head.deep_interact.bev_interaction_layers.2.norms.1.bias, motion_head.deep_interact.static_dynamic_fuser.0.weight, motion_head.deep_interact.static_dynamic_fuser.0.bias, motion_head.deep_interact.static_dynamic_fuser.2.weight, motion_head.deep_interact.static_dynamic_fuser.2.bias, motion_head.deep_interact.dynamic_embed_fuser.0.weight, motion_head.deep_interact.dynamic_embed_fuser.0.bias, motion_head.deep_interact.dynamic_embed_fuser.2.weight, motion_head.deep_interact.dynamic_embed_fuser.2.bias, motion_head.deep_interact.query_fuser.0.weight, motion_head.deep_interact.query_fuser.0.bias, motion_head.deep_interact.query_fuser.2.weight, motion_head.deep_interact.query_fuser.2.bias, motion_head.traj_cls_branches.0.0.weight, motion_head.traj_cls_branches.0.0.bias, motion_head.traj_cls_branches.0.1.weight, motion_head.traj_cls_branches.0.1.bias, motion_head.traj_cls_branches.0.3.weight, motion_head.traj_cls_branches.0.3.bias, motion_head.traj_cls_branches.0.4.weight, motion_head.traj_cls_branches.0.4.bias, motion_head.traj_cls_branches.0.6.weight, motion_head.traj_cls_branches.0.6.bias, motion_head.traj_cls_branches.1.0.weight, motion_head.traj_cls_branches.1.0.bias, motion_head.traj_cls_branches.1.1.weight, motion_head.traj_cls_branches.1.1.bias, 
motion_head.traj_cls_branches.1.3.weight, motion_head.traj_cls_branches.1.3.bias, motion_head.traj_cls_branches.1.4.weight, motion_head.traj_cls_branches.1.4.bias, motion_head.traj_cls_branches.1.6.weight, motion_head.traj_cls_branches.1.6.bias, motion_head.traj_cls_branches.2.0.weight, motion_head.traj_cls_branches.2.0.bias, motion_head.traj_cls_branches.2.1.weight, motion_head.traj_cls_branches.2.1.bias, motion_head.traj_cls_branches.2.3.weight, motion_head.traj_cls_branches.2.3.bias, motion_head.traj_cls_branches.2.4.weight, motion_head.traj_cls_branches.2.4.bias, motion_head.traj_cls_branches.2.6.weight, motion_head.traj_cls_branches.2.6.bias, motion_head.traj_reg_branches.0.0.weight, motion_head.traj_reg_branches.0.0.bias, motion_head.traj_reg_branches.0.2.weight, motion_head.traj_reg_branches.0.2.bias, motion_head.traj_reg_branches.0.4.weight, motion_head.traj_reg_branches.0.4.bias, motion_head.traj_reg_branches.1.0.weight, motion_head.traj_reg_branches.1.0.bias, motion_head.traj_reg_branches.1.2.weight, motion_head.traj_reg_branches.1.2.bias, motion_head.traj_reg_branches.1.4.weight, motion_head.traj_reg_branches.1.4.bias, motion_head.traj_reg_branches.2.0.weight, motion_head.traj_reg_branches.2.0.bias, motion_head.traj_reg_branches.2.2.weight, motion_head.traj_reg_branches.2.2.bias, motion_head.traj_reg_branches.2.4.weight, motion_head.traj_reg_branches.2.4.bias, motion_head.intention_ep_embedding_agent.0.weight, motion_head.intention_ep_embedding_agent.0.bias, motion_head.intention_ep_embedding_agent.2.weight, motion_head.intention_ep_embedding_agent.2.bias, motion_head.intention_ep_embedding_offset.0.weight, motion_head.intention_ep_embedding_offset.0.bias, motion_head.intention_ep_embedding_offset.2.weight, motion_head.intention_ep_embedding_offset.2.bias, motion_head.intention_ep_embedding_ego.0.weight, motion_head.intention_ep_embedding_ego.0.bias, motion_head.intention_ep_embedding_ego.2.weight, motion_head.intention_ep_embedding_ego.2.bias, motion_head.query_embedding_box.0.weight, motion_head.query_embedding_box.0.bias, motion_head.query_embedding_box.2.weight, motion_head.query_embedding_box.2.bias, pts_bbox_head.query_embedding.weight, pts_bbox_head.transformer.reference_points.weight, pts_bbox_head.transformer.reference_points.bias

[                                                  ] 0/6019, elapsed: 0s, ETA:Traceback (most recent call last):
  File "./tools/test.py", line 261, in <module>
    main()
  File "./tools/test.py", line 231, in main
    outputs = custom_multi_gpu_test(model, data_loader, args.tmpdir,
  File "/home/jarvis/coding/pyhome/github.com/meua/UniAD/projects/mmdet3d_plugin/uniad/apis/test.py", line 88, in custom_multi_gpu_test
    for i, data in enumerate(data_loader):
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/jarvis/coding/pyhome/github.com/meua/UniAD/projects/mmdet3d_plugin/datasets/nuscenes_e2e_dataset.py", line 726, in __getitem__
    return self.prepare_test_data(idx)
  File "/home/jarvis/coding/pyhome/github.com/meua/UniAD/projects/mmdet3d_plugin/datasets/nuscenes_e2e_dataset.py", line 254, in prepare_test_data
    example = self.pipeline(input_dict)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmdet/datasets/pipelines/compose.py", line 40, in __call__
    data = t(data)
  File "/home/jarvis/coding/pyhome/github.com/meua/UniAD/projects/mmdet3d_plugin/datasets/pipelines/loading.py", line 53, in __call__
    img = mmcv.imread(img_path, self.color_type)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/image/io.py", line 176, in imread
    check_file_exist(img_or_path,
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/utils/path.py", line 23, in check_file_exist
    raise FileNotFoundError(msg_tmpl.format(filename))
FileNotFoundError: img file does not exist: data/nuscenes/samples/CAM_FRONT/n015-2018-07-11-11-54-16+0800__CAM_FRONT__1531281439762460.jpg

/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 92363) of binary: /home/jarvis/anaconda3/envs/uniad38/bin/python
Traceback (most recent call last):
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
    elastic_launch(
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
***************************************
         ./tools/test.py FAILED        
=======================================
Root Cause:
[0]:
  time: 2023-07-06_19:36:26
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 92363)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"
=======================================
Other Failures:
  <NO_OTHER_FAILURES>
***************************************

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ 
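
A hedged sanity check for this class of error is to verify that the image paths stored in the annotation .pkl actually resolve under the data root before launching evaluation. The sketch below is not part of the repository; the default file name and the "infos"/"cams"/"data_path" keys follow the usual mmdet3d info-file convention and are assumptions here.

```python
# Hedged sketch: confirm that image paths in the info .pkl resolve on disk.
import os
import pickle

def check_image_paths(info_pkl="data/infos/nuscenes_infos_temporal_val.pkl",
                      data_root="data/nuscenes/", limit=100):
    with open(info_pkl, "rb") as f:
        infos = pickle.load(f)["infos"]
    missing = []
    for info in infos[:limit]:
        for cam in info.get("cams", {}).values():
            path = cam["data_path"]
            if not os.path.exists(path) and not os.path.exists(os.path.join(data_root, path)):
                missing.append(path)
    print(f"{len(missing)} missing image paths (checked first {limit} samples)")
    return missing
```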

_pickle.UnpicklingError: pickle data was truncated

Hello, how should I solve the following problem?

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ ./tools/uniad_dist_eval.sh ./projects/configs/stage1_track_map/base_track_map.py ./ckpts/uniad_base_track_map.pth 1
projects.mmdet3d_plugin
Traceback (most recent call last):
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
    return obj_cls(**args)
  File "/home/jarvis/coding/pyhome/github.com/meua/UniAD/projects/mmdet3d_plugin/datasets/nuscenes_e2e_dataset.py", line 78, in __init__
    super().__init__(*args, **kwargs)
  File "/home/jarvis/coding/pyhome/github.com/meua/mmdetection3d/mmdet3d/datasets/nuscenes_dataset.py", line 129, in __init__
    super().__init__(
  File "/home/jarvis/coding/pyhome/github.com/meua/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 64, in __init__
    self.data_infos = self.load_annotations(self.ann_file)
  File "/home/jarvis/coding/pyhome/github.com/meua/UniAD/projects/mmdet3d_plugin/datasets/nuscenes_e2e_dataset.py", line 152, in load_annotations
    data = pickle.loads(self.file_client.get(ann_file))
_pickle.UnpicklingError: pickle data was truncated

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./tools/test.py", line 261, in <module>
    main()
  File "./tools/test.py", line 190, in main
    dataset = build_dataset(cfg.data.test)
  File "/home/jarvis/coding/pyhome/github.com/meua/mmdetection3d/mmdet3d/datasets/builder.py", line 41, in build_dataset
    dataset = build_from_cfg(cfg, DATASETS, default_args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
_pickle.UnpicklingError: NuScenesE2EDataset: pickle data was truncated
/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 70524) of binary: /home/jarvis/anaconda3/envs/uniad38/bin/python
Traceback (most recent call last):
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
    elastic_launch(
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
***************************************
         ./tools/test.py FAILED        
=======================================
Root Cause:
[0]:
  time: 2023-07-05_13:50:17
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 70524)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"
=======================================
Other Failures:
  <NO_OTHER_FAILURES>
***************************************

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$

About the rotation matrix in nuscenes_e2e_dataset.py

I am implementing UniAD on another dataset, but I am not entirely sure I have correctly understood the coordinate transformation at line 499 of nuscenes_e2e_dataset.py:

    l2e_r = info['lidar2ego_rotation']
    l2e_t = info['lidar2ego_translation']
    e2g_r = info['ego2global_rotation']
    e2g_t = info['ego2global_translation']
    l2e_r_mat = Quaternion(l2e_r).rotation_matrix
    e2g_r_mat = Quaternion(e2g_r).rotation_matrix
    l2g_r_mat = l2e_r_mat.T @ e2g_r_mat.T
    l2g_t = l2e_t @ e2g_r_mat.T + e2g_t

If l2e_r and l2e_t are the rotation and translation from the lidar frame to the ego frame (and likewise for the other variables), then the lidar-to-global transform should be obtained by left-multiplying the lidar-to-ego transform by the ego-to-global transform. Based on my understanding, this code should be:

   l2g_r_mat = e2g_r_mat @ l2e_r_mat
   l2g_t = e2g_r_mat@l2e_t + e2g_t

As you can see, the rotation matrix I derive is the transpose (inverse) of the one you compute. What is the reason for taking the transpose of the rotation matrix here? (A runnable comparison of the two conventions is sketched below.)
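For reference, here is a minimal, runnable numpy/pyquaternion sketch of the two conventions being compared. The pose values below are made up purely for illustration; this is not the repository's code, only a check that the row-vector form used in the quoted snippet and the column-vector form differ exactly by a transpose:

    import numpy as np
    from pyquaternion import Quaternion

    # Arbitrary example poses, only to make the sketch runnable.
    l2e_r = Quaternion(axis=[0, 0, 1], angle=0.02)   # lidar -> ego rotation
    e2g_r = Quaternion(axis=[0, 0, 1], angle=1.57)   # ego   -> global rotation
    l2e_t = np.array([0.9, 0.0, 1.8])
    e2g_t = np.array([600.0, 1600.0, 0.0])

    l2e_r_mat = l2e_r.rotation_matrix
    e2g_r_mat = e2g_r.rotation_matrix

    # Column-vector convention: p_g = R_e2g @ (R_l2e @ p_l + t_l2e) + t_e2g
    R_col = e2g_r_mat @ l2e_r_mat
    t_col = e2g_r_mat @ l2e_t + e2g_t

    # Row-vector convention (p_g = p_l @ R + t), which the quoted snippet appears to use.
    R_row = l2e_r_mat.T @ e2g_r_mat.T
    t_row = l2e_t @ e2g_r_mat.T + e2g_t

    # The two rotations are transposes of each other; the translations coincide.
    assert np.allclose(R_row, R_col.T)
    assert np.allclose(t_row, t_col)

So the transpose is consistent with applying the rotation to points stored as row vectors (N x 3 arrays multiplied on the right), rather than an extra inversion.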

Map Element Inconsistency in UniAD Implementation

Thank you for your efforts.

I have noticed a discrepancy regarding the map elements between the description in the paper and their implementation. According to the paper, the online map is categorized into four distinct elements: lanes, drivable areas, dividers, and pedestrian crossings. However, upon examining the code, it appears that the map elements from nuScenes are used instead, and they are classified into three classes: {0: ['road_divider', 'lane_divider'], 1: ['ped_crossing'], 2: ['road_segment', 'lane']}.

Could you please confirm if my understanding is accurate?

ERROR: Can't roll back mmdet3d; was not uninstalled

    gcc -pthread -B /home/jarvis/anaconda3/envs/uniad/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/jarvis/coding/pyhome/github.com/open-mmlab/mmdetection3d/mmdet3d/ops/spconv/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/TH -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/THC -I/include -I/home/jarvis/anaconda3/envs/uniad/include/python3.8 -c mmdet3d/ops/spconv/src/all.cc -o build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/all.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    gcc -pthread -B /home/jarvis/anaconda3/envs/uniad/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/jarvis/coding/pyhome/github.com/open-mmlab/mmdetection3d/mmdet3d/ops/spconv/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/TH -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/THC -I/include -I/home/jarvis/anaconda3/envs/uniad/include/python3.8 -c mmdet3d/ops/spconv/src/indice.cc -o build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/indice.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    /bin/nvcc -DWITH_CUDA -I/home/jarvis/coding/pyhome/github.com/open-mmlab/mmdetection3d/mmdet3d/ops/spconv/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/TH -I/home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages/torch/include/THC -I/include -I/home/jarvis/anaconda3/envs/uniad/include/python3.8 -c mmdet3d/ops/spconv/src/indice_cuda.cu -o build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/indice_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86
    nvcc fatal   : Unsupported gpu architecture 'compute_86'
    error: command '/bin/nvcc' failed with exit code 1
    error: subprocess-exited-with-error
    
    × python setup.py develop did not run successfully.
    │ exit code: 1
    ╰─> See above for output.
    
    note: This error originates from a subprocess, and is likely not a problem with pip.
    full command: /home/jarvis/anaconda3/envs/uniad/bin/python -c '
    exec(compile('"'"''"'"''"'"'
    # This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
    #
    # - It imports setuptools before invoking setup.py, to enable projects that directly
    #   import from `distutils.core` to work with newer packaging standards.
    # - It provides a clear error message when setuptools is not installed.
    # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
    #   setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
    #     manifest_maker: standard file '"'"'-c'"'"' not found".
    # - It generates a shim setup.py, for handling setup.cfg-only projects.
    import os, sys, tokenize
    
    try:
        import setuptools
    except ImportError as error:
        print(
            "ERROR: Can not execute `setup.py` since setuptools is not available in "
            "the build environment.",
            file=sys.stderr,
        )
        sys.exit(1)
    
    __file__ = %r
    sys.argv[0] = __file__
    
    if os.path.exists(__file__):
        filename = __file__
        with tokenize.open(__file__) as f:
            setup_py_code = f.read()
    else:
        filename = "<auto-generated setuptools caller>"
        setup_py_code = "from setuptools import setup; setup()"
    
    exec(compile(setup_py_code, filename, "exec"))
    '"'"''"'"''"'"' % ('"'"'/home/jarvis/coding/pyhome/github.com/open-mmlab/mmdetection3d/setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' develop --no-deps
    cwd: /home/jarvis/coding/pyhome/github.com/open-mmlab/mmdetection3d/
  ERROR: Can't roll back mmdet3d; was not uninstalled
error: subprocess-exited-with-error

× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
(uniad) jarvis@jia:~/coding/pyhome/github.com/open-mmlab/mmdetection3d$ 

steps to reproduce:

git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
git checkout v0.17.1
pip install scipy==1.7.3
pip install scikit-image==0.20.0
pip install -v -e .

Installation Environment Description:

OS: `Linux jia 5.15.0-75-generic #82~20.04.1-Ubuntu SMP Wed Jun 7 19:37:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux`
nvidia info: 
NVIDIA-SMI 530.41.03, Driver Version: 530.41.03, CUDA Version: 12.1 (full nvidia-smi table truncated)
gcc info:
`Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.4.0-1ubuntu1~20.04.1' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1) `

AssertionError: Database version not found: data/nuscenes/v1.0-trainval

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ ./tools/uniad_dist_eval.sh ./projects/configs/stage1_track_map/base_track_map.py ./ckpts/uniad_base_track_map.pth 1
projects.mmdet3d_plugin
Traceback (most recent call last):
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
    return obj_cls(**args)
  File "/home/jarvis/coding/pyhome/github.com/meua/UniAD/projects/mmdet3d_plugin/datasets/nuscenes_e2e_dataset.py", line 93, in __init__
    self.nusc = NuScenes(version=self.version,
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/nuscenes/nuscenes.py", line 62, in __init__
    assert osp.exists(self.table_root), 'Database version not found: {}'.format(self.table_root)
AssertionError: Database version not found: data/nuscenes/v1.0-trainval

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./tools/test.py", line 261, in <module>
    main()
  File "./tools/test.py", line 190, in main
    dataset = build_dataset(cfg.data.test)
  File "/home/jarvis/coding/pyhome/github.com/meua/mmdetection3d/mmdet3d/datasets/builder.py", line 41, in build_dataset
    dataset = build_from_cfg(cfg, DATASETS, default_args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
AssertionError: NuScenesE2EDataset: Database version not found: data/nuscenes/v1.0-trainval
/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 90698) of binary: /home/jarvis/anaconda3/envs/uniad38/bin/python
Traceback (most recent call last):
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
    elastic_launch(
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
***************************************
         ./tools/test.py FAILED        
=======================================
Root Cause:
[0]:
  time: 2023-07-06_18:11:29
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 90698)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"
=======================================
Other Failures:
  <NO_OTHER_FAILURES>
***************************************

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ 

planning head metric

Hello, we downloaded your recent code and weight files and reproduced your results. We found significant discrepancies in the planning metrics compared with the numbers reported in the paper. What could be the reason for this?
[attached screenshot: reproduced planning metrics compared with the paper]

Could you share the training log of stage one?

I am reproducing stage one of UniAD, but my result is quite different from the metrics reported in this repo (AMOTA: 0.3491 vs. 0.390).

Could you share your training log of stage one? Thanks!

2023-04-29 17:27:50,507 - mmdet - INFO - Epoch(val) [6][753] pts_bbox_NuScenes/car_AP_dist_0.5: 0.2318, pts_bbox_NuScenes/car_AP_dist_1.0: 0.5167, pts_bbox_NuScenes/car_AP_dist_2.0: 0.7178, pts_bbox_NuScenes/car_AP_dist_4.0: 0.8132, pts_bbox_NuScenes/car_trans_err: 0.4704, pts_bbox_NuScenes/car_scale_err: 0.1501, pts_bbox_NuScenes/car_orient_err: 0.0767, pts_bbox_NuScenes/car_vel_err: 0.3336, pts_bbox_NuScenes/car_attr_err: 0.2142, pts_bbox_NuScenes/mATE: 0.6905, pts_bbox_NuScenes/mASE: 0.2765, pts_bbox_NuScenes/mAOE: 0.3952, pts_bbox_NuScenes/mAVE: 0.3878, pts_bbox_NuScenes/mAAE: 0.2061, pts_bbox_NuScenes/truck_AP_dist_0.5: 0.0345, pts_bbox_NuScenes/truck_AP_dist_1.0: 0.1856, pts_bbox_NuScenes/truck_AP_dist_2.0: 0.4547, pts_bbox_NuScenes/truck_AP_dist_4.0: 0.5935, pts_bbox_NuScenes/truck_trans_err: 0.7482, pts_bbox_NuScenes/truck_scale_err: 0.2197, pts_bbox_NuScenes/truck_orient_err: 0.0908, pts_bbox_NuScenes/truck_vel_err: 0.3117, pts_bbox_NuScenes/truck_attr_err: 0.1936, pts_bbox_NuScenes/construction_vehicle_AP_dist_0.5: 0.0000, pts_bbox_NuScenes/construction_vehicle_AP_dist_1.0: 0.0197, pts_bbox_NuScenes/construction_vehicle_AP_dist_2.0: 0.1024, pts_bbox_NuScenes/construction_vehicle_AP_dist_4.0: 0.2860, pts_bbox_NuScenes/construction_vehicle_trans_err: 0.9201, pts_bbox_NuScenes/construction_vehicle_scale_err: 0.4812, pts_bbox_NuScenes/construction_vehicle_orient_err: 1.1397, pts_bbox_NuScenes/construction_vehicle_vel_err: 0.1425, pts_bbox_NuScenes/construction_vehicle_attr_err: 0.3970, pts_bbox_NuScenes/bus_AP_dist_0.5: 0.0418, pts_bbox_NuScenes/bus_AP_dist_1.0: 0.2882, pts_bbox_NuScenes/bus_AP_dist_2.0: 0.5570, pts_bbox_NuScenes/bus_AP_dist_4.0: 0.7124, pts_bbox_NuScenes/bus_trans_err: 0.7131, pts_bbox_NuScenes/bus_scale_err: 0.2101, pts_bbox_NuScenes/bus_orient_err: 0.0862, pts_bbox_NuScenes/bus_vel_err: 0.7570, pts_bbox_NuScenes/bus_attr_err: 0.2888, pts_bbox_NuScenes/trailer_AP_dist_0.5: 0.0000, pts_bbox_NuScenes/trailer_AP_dist_1.0: 0.0151, pts_bbox_NuScenes/trailer_AP_dist_2.0: 0.1401, pts_bbox_NuScenes/trailer_AP_dist_4.0: 0.4041, pts_bbox_NuScenes/trailer_trans_err: 1.0503, pts_bbox_NuScenes/trailer_scale_err: 0.2635, pts_bbox_NuScenes/trailer_orient_err: 0.4992, pts_bbox_NuScenes/trailer_vel_err: 0.3285, pts_bbox_NuScenes/trailer_attr_err: 0.0919, pts_bbox_NuScenes/barrier_AP_dist_0.5: 0.1771, pts_bbox_NuScenes/barrier_AP_dist_1.0: 0.4155, pts_bbox_NuScenes/barrier_AP_dist_2.0: 0.6018, pts_bbox_NuScenes/barrier_AP_dist_4.0: 0.7037, pts_bbox_NuScenes/barrier_trans_err: 0.5555, pts_bbox_NuScenes/barrier_scale_err: 0.2882, pts_bbox_NuScenes/barrier_orient_err: 0.1781, pts_bbox_NuScenes/barrier_vel_err: nan, pts_bbox_NuScenes/barrier_attr_err: nan, pts_bbox_NuScenes/motorcycle_AP_dist_0.5: 0.0846, pts_bbox_NuScenes/motorcycle_AP_dist_1.0: 0.2814, pts_bbox_NuScenes/motorcycle_AP_dist_2.0: 0.5137, pts_bbox_NuScenes/motorcycle_AP_dist_4.0: 0.6110, pts_bbox_NuScenes/motorcycle_trans_err: 0.6728, pts_bbox_NuScenes/motorcycle_scale_err: 0.2607, pts_bbox_NuScenes/motorcycle_orient_err: 0.4699, pts_bbox_NuScenes/motorcycle_vel_err: 0.5474, pts_bbox_NuScenes/motorcycle_attr_err: 0.2717, pts_bbox_NuScenes/bicycle_AP_dist_0.5: 0.1006, pts_bbox_NuScenes/bicycle_AP_dist_1.0: 0.3026, pts_bbox_NuScenes/bicycle_AP_dist_2.0: 0.4610, pts_bbox_NuScenes/bicycle_AP_dist_4.0: 0.5316, pts_bbox_NuScenes/bicycle_trans_err: 0.5727, pts_bbox_NuScenes/bicycle_scale_err: 0.2676, pts_bbox_NuScenes/bicycle_orient_err: 0.5571, pts_bbox_NuScenes/bicycle_vel_err: 0.2733, pts_bbox_NuScenes/bicycle_attr_err: 
0.0177, pts_bbox_NuScenes/pedestrian_AP_dist_0.5: 0.0804, pts_bbox_NuScenes/pedestrian_AP_dist_1.0: 0.3098, pts_bbox_NuScenes/pedestrian_AP_dist_2.0: 0.5757, pts_bbox_NuScenes/pedestrian_AP_dist_4.0: 0.7174, pts_bbox_NuScenes/pedestrian_trans_err: 0.7118, pts_bbox_NuScenes/pedestrian_scale_err: 0.2931, pts_bbox_NuScenes/pedestrian_orient_err: 0.4587, pts_bbox_NuScenes/pedestrian_vel_err: 0.4087, pts_bbox_NuScenes/pedestrian_attr_err: 0.1734, pts_bbox_NuScenes/traffic_cone_AP_dist_0.5: 0.2482, pts_bbox_NuScenes/traffic_cone_AP_dist_1.0: 0.4972, pts_bbox_NuScenes/traffic_cone_AP_dist_2.0: 0.6581, pts_bbox_NuScenes/traffic_cone_AP_dist_4.0: 0.7315, pts_bbox_NuScenes/traffic_cone_trans_err: 0.4896, pts_bbox_NuScenes/traffic_cone_scale_err: 0.3307, pts_bbox_NuScenes/traffic_cone_orient_err: nan, pts_bbox_NuScenes/traffic_cone_vel_err: nan, pts_bbox_NuScenes/traffic_cone_attr_err: nan, pts_bbox_NuScenes/NDS: 0.4884, pts_bbox_NuScenes/mAP: 0.3679, pts_bbox_NuScenes/amota: 0.3491, pts_bbox_NuScenes/amotp: 1.3378, pts_bbox_NuScenes/recall: 0.4500, pts_bbox_NuScenes/motar: 0.6470, pts_bbox_NuScenes/gt: 14556.7143, pts_bbox_NuScenes/mota: 0.3015, pts_bbox_NuScenes/motp: 0.7156, pts_bbox_NuScenes/mt: 2043.0000, pts_bbox_NuScenes/ml: 2778.0000, pts_bbox_NuScenes/faf: 50.1770, pts_bbox_NuScenes/tp: 54523.0000, pts_bbox_NuScenes/fp: 14212.0000, pts_bbox_NuScenes/fn: 46560.0000, pts_bbox_NuScenes/ids: 814.0000, pts_bbox_NuScenes/frag: 774.0000, pts_bbox_NuScenes/tid: 1.6311, pts_bbox_NuScenes/lgd: 2.6311, drivable_iou: 0.6575, lanes_iou: 0.2736, divider_iou: 0.2135, crossing_iou: 0.0882, contour_iou: 0.2236, drivable_iou_mean: 0.6542, lanes_iou_mean: 0.2768, divider_iou_mean: 0.2124, crossing_iou_mean: 0.0472, contour_iou_mean: 0.2361

Dataloader worker killed with runtime error.

Hello,

While training the stage-two network, I'm seeing the following error.

Is anyone seeing the same error?

Traceback (most recent call last):
File "./tools/train.py", line 256, in
main()
File "./tools/train.py", line 245, in main
custom_train_model(
File "/home/ubuntu/torc/git/personal/UniAD/projects/mmdet3d_plugin/uniad/apis/train.py", line 21, in custom_train_model
custom_train_detector(
File "/home/ubuntu/torc/git/personal/UniAD/projects/mmdet3d_plugin/uniad/apis/mmdet_train.py", line 194, in custom_train_detector
runner.run(data_loaders, cfg.workflow)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
self.run_iter(data_batch, train_mode=True, **kwargs)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
outputs = self.model.train_step(data_batch, self.optimizer,
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/parallel/distributed.py", line 52, in train_step
output = self.module.train_step(*inputs[0], **kwargs[0])
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 237, in train_step
losses = self(**data)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/torc/git/personal/UniAD/projects/mmdet3d_plugin/uniad/detectors/uniad_e2e.py", line 81, in forward
return self.forward_train(**kwargs)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "/home/ubuntu/torc/git/personal/UniAD/projects/mmdet3d_plugin/uniad/detectors/uniad_e2e.py", line 163, in forward_train
losses_track, outs_track = self.forward_track_train(img, gt_bboxes_3d, gt_labels_3d, gt_past_traj, gt_past_traj_mask, gt_inds, gt_sdc_bbox, gt_sdc_label,
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "/home/ubuntu/torc/git/personal/UniAD/projects/mmdet3d_plugin/uniad/detectors/uniad_track.py", line 555, in forward_track_train
frame_res = self._forward_single_frame_train(
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "/home/ubuntu/torc/git/personal/UniAD/projects/mmdet3d_plugin/uniad/detectors/uniad_track.py", line 385, in _forward_single_frame_train
bev_embed, bev_pos = self.get_bevs(
File "/home/ubuntu/torc/git/personal/UniAD/projects/mmdet3d_plugin/uniad/detectors/uniad_track.py", line 342, in get_bevs
img_feats = self.extract_img_feat(img=imgs)
File "/home/ubuntu/torc/git/personal/UniAD/projects/mmdet3d_plugin/uniad/detectors/uniad_track.py", line 162, in extract_img_feat
img_feats = self.img_backbone(img)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/mmdet/models/backbones/resnet.py", line 638, in forward
x = self.maxpool(x)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/nn/modules/pooling.py", line 162, in forward
return F.max_pool2d(input, self.kernel_size, self.stride,
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/_jit_internal.py", line 405, in fn
return if_false(*args, **kwargs)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/nn/functional.py", line 718, in _max_pool2d
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 1103017) is killed by signal: Killed.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 1099909) of binary: /home/ubuntu/.conda/envs/uniad/bin/python
/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py:367: UserWarning:


           CHILD PROCESS FAILED WITH NO ERROR_FILE                

CHILD PROCESS FAILED WITH NO ERROR_FILE
Child process 1099909 (local_rank 1) FAILED (exitcode 1)
Error msg: Process failed with exitcode 1
Without writing an error file to <N/A>.
While this DOES NOT affect the correctness of your application,
no trace information about the error will be available for inspection.
Consider decorating your top level entrypoint function with
torch.distributed.elastic.multiprocessing.errors.record. Example:

from torch.distributed.elastic.multiprocessing.errors import record

@record
def trainer_main(args):
# do train


warnings.warn(_no_error_file_warning_msg(rank, failure))
Traceback (most recent call last):
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/distributed/run.py", line 702, in
main()
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 361, in wrapper
return f(*args, **kwargs)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/distributed/run.py", line 698, in main
run(args)
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
elastic_launch(
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/ubuntu/.conda/envs/uniad/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:


    ./tools/train.py FAILED        

=======================================
Root Cause:
[0]:
time: 2023-07-07_12:12:31
rank: 1 (local_rank: 1)
exitcode: 1 (pid: 1099909)
error_file: <N/A>
msg: "Process failed with exitcode 1"

Other Failures:
<NO_OTHER_FAILURES>


Thanks for your attention. I'm training this on an AWS EC2 instance (g5-12x) with 4 A10 GPUs!

Regards,
Venkat

Visualization Error

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ ./tools/uniad_vis_result.sh
======
Loading NuScenes tables for version v1.0-mini...
25 category,
12 attribute,
4 visibility,
911 instance,
6 sensor,
50 calibrated_sensor,
650 ego_pose,
44 log,
10 scene,
50 sample,
650 sample_data,
18538 sample_annotation,
4 map,
Done loading in 0.197 seconds.
======
Reverse indexing ...
Traceback (most recent call last):
  File "./tools/analysis_tools/visualize/run.py", line 340, in <module>
    main(args)
  File "./tools/analysis_tools/visualize/run.py", line 302, in main
    viser = Visualizer(version='v1.0-mini', predroot=args.predroot, dataroot='data/nuscenes', **render_cfg)
  File "./tools/analysis_tools/visualize/run.py", line 46, in __init__
    self.nusc = NuScenes(version=version, dataroot=dataroot, verbose=True)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/nuscenes/nuscenes.py", line 124, in __init__
    self.__make_reverse_index__(verbose)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/nuscenes/nuscenes.py", line 171, in __make_reverse_index__
    record['category_name'] = self.get('category', inst['category_token'])['name']
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/nuscenes/nuscenes.py", line 216, in get
    return getattr(self, table_name)[self.getind(table_name, token)]
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/nuscenes/nuscenes.py", line 225, in getind
    return self._token2ind[table_name][token]
KeyError: 'bb867e2064014279863c71a29b1eb381'
(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ 

The content of the file uniad_vis_result.sh is as follows:

#!/bin/bash

python ./tools/analysis_tools/visualize/run.py \
    --predroot ./data/infos/nuscenes_infos_temporal_val.pkl \
    --out_folder ./data/visualize/output \
    --demo_video ./data/video/test.mp4 \
    --project_to_cam True

how to eval on 1 gpu ?

Hi, what parameters should I modify when I only have one GPU? I don't want to train, just evaluate the model.

I modified ./tools/uniad_dist_eval.sh as follows, but it fails with KeyError: 'RANK' at File "./tools/test.py", line 184, in main:
init_dist(args.launcher, **cfg.dist_params)

        #!/usr/bin/env bash
        
        T=`date +%m%d%H%M`
        
        # -------------------------------------------------- #
        # Usually you only need to customize these variables #
        CFG=$1                                               #
        CKPT=$2                                              #
        # -------------------------------------------------- #
        GPUS_PER_NODE=1
        
        MASTER_PORT=${MASTER_PORT:-28596}
        WORK_DIR=$(echo ${CFG%.*} | sed -e "s/configs/work_dirs/g")/
        # Intermediate files and logs will be saved to UniAD/projects/work_dirs/
        
        if [ ! -d ${WORK_DIR}logs ]; then
            mkdir -p ${WORK_DIR}logs
        fi
        
        PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
        python $(dirname "$0")/test.py \
            $CFG \
            $CKPT \
            --launcher pytorch ${@:3} \
            --eval bbox \
            --show-dir ${WORK_DIR} \
            2>&1 | tee ${WORK_DIR}logs/eval.$T

What should I do to evaluate on a single GPU? (A possible workaround is sketched below.)
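One possible workaround, stated as an assumption rather than an official recipe: when keeping --launcher pytorch but launching plain python on a single GPU, the distributed environment variables that init_dist reads can be pre-populated, for example near the top of tools/test.py:

    import os

    # Hypothetical single-process defaults so that init_dist(launcher='pytorch')
    # does not fail with KeyError: 'RANK' when no distributed launcher is used.
    os.environ.setdefault("RANK", "0")
    os.environ.setdefault("LOCAL_RANK", "0")
    os.environ.setdefault("WORLD_SIZE", "1")
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "28596")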

E2E checkpoint

Thanks for your great work! I wonder to know when you will release the e2e checkpoint for the projects/configs/stage2_e2e/base_e2e.py?

python version? waymo-open-dataset-tf-2-1-0==1.2.0?

While setting up mmdetection3d:

cd ~
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
git checkout v0.17.1 # Other versions may not be compatible.
python setup.py install
pip install -r requirements.txt  # Install packages for mmdet3d

When running the python setup.py install command, the following error occurred:

RuntimeError: Python version >= 3.9 required.

I would like to ask the authors: should the Python version used in the initial conda create be set to 3.9?

Frequently Asked & Good Questions

We will continually gather both frequently asked- and good questions in this thread.

Frequently Asked Questions

  1. Unexpected key when loading stage2 model? #97
    Answer: the warning `unexpected key in source state_dict: bbox_size_fc.weight, bbox_size_fc.bias, pts_bbox_head.query_embedding.weight, pts_bbox_head.transformer.reference_points.weight, pts_bbox_head.transformer.reference_points.bias` comes from a code refactor. The unmatched keys will NOT hurt the functionality of the model at all; you can simply ignore this warning message (a brief illustration follows below).
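    A tiny, self-contained illustration (not UniAD code) of why such extra keys are harmless: load_state_dict(strict=False) keeps every matching weight and merely reports the leftovers.

        import torch

        model = torch.nn.Linear(4, 2)
        # Simulate a checkpoint that still carries a key from a refactored-away branch.
        ckpt = dict(model.state_dict(), stale_branch_weight=torch.zeros(1))
        result = model.load_state_dict(ckpt, strict=False)
        print(result.unexpected_keys)  # ['stale_branch_weight'] -- safe to ignore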

Good Questions

  1. What do we think of the open-loop evaluation? Are the open-loop results practically meaningful?
    We've answered this question from multiple perspectives to clarify it: #29 (comment)

  2. Why introduce occupancy in this framework? What's the difference between the BEV occupancy and 3D occupancy? #36

  3. About the generation and usage of high-level command. #89

ModuleNotFoundError: No module named 'pytorch_lightning.metrics'

Hello authors: due to version constraints of the software installed on my system, I cannot use pytorch-lightning properly. After checking, I found that newer versions of pytorch-lightning no longer ship the pytorch_lightning.metrics module (the metrics API has moved into the separate torchmetrics package), but I am not sure how to update the imports in C:\PycharmProjects\UniAD\projects\mmdet3d_plugin\uniad\dense_heads\occ_head_plugin/metrics.py accordingly;
Please help me out, thank you very much!

cuda:10.2
torch:1.13.0
torchtext:0.14.1
pytorch-lightning:1.9.4
ubuntu:20.0.4
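If it helps, a version-tolerant import pattern would look like the sketch below. This is my assumption, not the repository's code, and the name imported here (Metric) is only an example; the symbols actually needed by occ_head_plugin/metrics.py may differ.

    # Sketch only: the metrics API that used to live in pytorch_lightning.metrics
    # now ships in the standalone `torchmetrics` package.
    try:
        from pytorch_lightning.metrics import Metric   # older pytorch-lightning layouts
    except ImportError:
        from torchmetrics import Metric                # newer pytorch-lightning / torchmetrics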

ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 95119) of binary: /home/jarvis/anaconda3/envs/uniad38/bin/python

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ ./tools/uniad_dist_eval.sh ./projects/configs/stage1_track_map/base_track_map.py ./path/to/ckpts.pth 1
projects.mmdet3d_plugin
======
Loading NuScenes tables for version v1.0-trainval...
23 category,
8 attribute,
4 visibility,
64386 instance,
12 sensor,
10200 calibrated_sensor,
2631083 ego_pose,
68 log,
850 scene,
34149 sample,
2631083 sample_data,
1166187 sample_annotation,
4 map,
Done loading in 42.824 seconds.
======
Reverse indexing ...
Done reverse indexing in 3.6 seconds.
======
load checkpoint from local path: ./path/to/ckpts.pth
Traceback (most recent call last):
  File "./tools/test.py", line 261, in <module>
    main()
  File "./tools/test.py", line 206, in main
    checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/runner/checkpoint.py", line 531, in load_checkpoint
    checkpoint = _load_checkpoint(filename, map_location, logger)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/runner/checkpoint.py", line 470, in _load_checkpoint
    return CheckpointLoader.load_checkpoint(filename, map_location, logger)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/runner/checkpoint.py", line 249, in load_checkpoint
    return checkpoint_loader(filename, map_location)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/mmcv/runner/checkpoint.py", line 265, in load_from_local
    raise IOError(f'{filename} is not a checkpoint file')
OSError: ./path/to/ckpts.pth is not a checkpoint file
/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 95119) of binary: /home/jarvis/anaconda3/envs/uniad38/bin/python
Traceback (most recent call last):
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
    elastic_launch(
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/jarvis/anaconda3/envs/uniad38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
***************************************
         ./tools/test.py FAILED        
=======================================
Root Cause:
[0]:
  time: 2023-07-06_21:02:42
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 95119)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"
=======================================
Other Failures:
  <NO_OTHER_FAILURES>
***************************************

(uniad38) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ 

ImportError: undefined symbol: _ZNK2at10TensorBase8data_ptrIdEEPT_v

Hello, thanks for your great work!

When I run the evaluation example using the provided command, it raises the following error:

Traceback (most recent call last):
File "./tools/test.py", line 14, in
from mmdet3d.apis import single_gpu_test
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmdet3d-0.17.1-py3.8-linux-x86_64.egg/mmdet3d/apis/init.py", line 2, in
from .inference import (convert_SyncBN, inference_detector,
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmdet3d-0.17.1-py3.8-linux-x86_64.egg/mmdet3d/apis/inference.py", line 11, in
from mmdet3d.core import (Box3DMode, CameraInstance3DBoxes,
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmdet3d-0.17.1-py3.8-linux-x86_64.egg/mmdet3d/core/init.py", line 2, in
from .anchor import * # noqa: F401, F403
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmdet3d-0.17.1-py3.8-linux-x86_64.egg/mmdet3d/core/anchor/init.py", line 2, in
from mmdet.core.anchor import build_prior_generator
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmdet/core/init.py", line 2, in
from .bbox import * # noqa: F401, F403
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmdet/core/bbox/init.py", line 7, in
from .samplers import (BaseSampler, CombinedSampler,
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmdet/core/bbox/samplers/init.py", line 9, in
from .score_hlr_sampler import ScoreHLRSampler
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmdet/core/bbox/samplers/score_hlr_sampler.py", line 2, in
from mmcv.ops import nms_match
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/ops/init.py", line 2, in
from .assign_score_withk import assign_score_withk
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/ops/assign_score_withk.py", line 5, in
ext_module = ext_loader.load_ext(
File "/home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/utils/ext_loader.py", line 13, in load_ext
ext = importlib.import_module('mmcv.' + name)
File "/home/xxx/.conda/envs/uniad/lib/python3.8/importlib/init.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: /home/xxx/.conda/envs/uniad/lib/python3.8/site-packages/mmcv/_ext.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNK2at10TensorBase8data_ptrIdEEPT_v

Below are my installed packages and versions:
absl-py 1.4.0
addict 2.4.0
aiohttp 3.8.4
aiosignal 1.3.1
anyio 3.7.0
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arrow 1.2.3
asttokens 2.2.1
async-timeout 4.0.2
attrs 23.1.0
backcall 0.2.0
beautifulsoup4 4.12.2
black 23.3.0
bleach 6.0.0
cachetools 5.3.1
casadi 3.5.5
certifi 2023.5.7
cffi 1.15.1
charset-normalizer 3.1.0
click 8.1.3
comm 0.1.3
contourpy 1.1.0
cycler 0.11.0
debugpy 1.6.7
decorator 5.1.1
defusedxml 0.7.1
descartes 1.1.0
einops 0.4.1
exceptiongroup 1.1.1
executing 1.2.0
fastjsonschema 2.17.1
fire 0.5.0
flake8 6.0.0
fonttools 4.40.0
fqdn 1.5.1
frozenlist 1.3.3
fsspec 2023.6.0
future 0.18.3
google-api-core 2.11.1
google-auth 2.20.0
google-auth-oauthlib 1.0.0
google-cloud-bigquery 3.11.1
google-cloud-core 2.3.2
google-crc32c 1.5.0
google-resumable-media 2.5.0
googleapis-common-protos 1.59.1
grpcio 1.54.2
grpcio-status 1.54.2
idna 3.4
importlib-metadata 6.7.0
importlib-resources 5.12.0
iniconfig 2.0.0
ipykernel 6.23.2
ipython 8.12.2
ipython-genutils 0.2.0
ipywidgets 8.0.6
isoduration 20.11.0
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsonpointer 2.4
jsonschema 4.17.3
jupyter 1.0.0
jupyter_client 8.2.0
jupyter-console 6.6.3
jupyter_core 5.3.1
jupyter-events 0.6.3
jupyter_server 2.6.0
jupyter_server_terminals 0.4.4
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.7
kiwisolver 1.4.4
lyft-dataset-sdk 0.0.8
Markdown 3.4.3
MarkupSafe 2.1.3
matplotlib 3.5.2
matplotlib-inline 0.1.6
mccabe 0.7.0
mistune 3.0.1
mmcv-full 1.4.0
mmdet 2.14.0
mmdet3d 0.17.1
mmsegmentation 0.14.1
motmetrics 1.1.3
multidict 6.0.4
mypy-extensions 1.0.0
nbclassic 1.0.0
nbclient 0.8.0
nbconvert 7.6.0
nbformat 5.9.0
nest-asyncio 1.5.6
networkx 2.2
notebook 6.5.4
notebook_shim 0.2.3
numba 0.48.0
numpy 1.20.0
nuscenes-devkit 1.1.10
oauthlib 3.2.2
opencv-python 4.7.0.72
overrides 7.3.1
packaging 23.1
pandas 1.4.4
pandocfilters 1.5.0
parso 0.8.3
pathspec 0.11.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 23.1.2
pkgutil_resolve_name 1.3.10
platformdirs 3.6.0
plotly 5.15.0
pluggy 1.0.0
plyfile 0.9
prettytable 3.8.0
prometheus-client 0.17.0
prompt-toolkit 3.0.38
proto-plus 1.22.2
protobuf 4.23.3
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
pyasn1 0.5.0
pyasn1-modules 0.3.0
pycocotools 2.0.6
pycodestyle 2.10.0
pycparser 2.21
pyflakes 3.0.1
Pygments 2.15.1
pyparsing 3.1.0
pyquaternion 0.9.9
pyrsistent 0.19.3
pytest 7.3.2
python-dateutil 2.8.2
python-json-logger 2.0.7
pytorch-lightning 1.2.5
pytz 2023.3
PyYAML 6.0
pyzmq 25.1.0
qtconsole 5.4.3
QtPy 2.3.1
requests 2.31.0
requests-oauthlib 1.3.1
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rsa 4.9
scikit-image 0.21.0
scikit-learn 1.2.2
scipy 1.10.1
Send2Trash 1.8.2
setuptools 67.8.0
Shapely 1.8.5
six 1.16.0
sniffio 1.3.0
soupsieve 2.4.1
stack-data 0.6.2
tenacity 8.2.2
tensorboard 2.13.0
tensorboard-data-server 0.7.1
termcolor 2.3.0
terminado 0.17.1
terminaltables 3.1.10
threadpoolctl 3.1.0
tinycss2 1.2.1
tomli 2.0.1
torch 1.9.1+cu111
torchaudio 0.9.1
torchmetrics 0.11.4
torchvision 0.10.1+cu111
tornado 6.3.2
tqdm 4.65.0
traitlets 5.9.0
trimesh 2.35.39
typing_extensions 4.6.3
uri-template 1.2.0
urllib3 1.26.16
wcwidth 0.2.6
webcolors 1.13
webencodings 0.5.1
websocket-client 1.6.0
Werkzeug 2.3.6
wheel 0.38.4
widgetsnbextension 4.0.7
yapf 0.40.1
yarl 1.9.2
zipp 3.15.0

Do you have a solution for this problem?
Thanks!

scikit-image 0.20.0 requires individual dependencies, which is incompatible

Install dependencies, report the following error:

(uniad) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ pip install -r requirements.txt 
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: google-cloud-bigquery in /home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages (from -r requirements.txt (line 1)) (3.11.3)
Requirement already satisfied: motmetrics==1.1.3 in /home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (1.1.3)
Requirement already satisfied: einops==0.4.1 in /home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages (from -r requirements.txt (line 3)) (0.4.1)
Collecting numpy==1.20.0 (from -r requirements.txt (line 4))
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/ca/e5/8abad0d947199a7c66995c710fa8c9fb1de0af6239575f9129d75fa4e9ed/numpy-1.20.0-cp38-cp38-manylinux2010_x86_64.whl (15.4 MB)
...
Requirement already satisfied: oauthlib>=3.0.0 in /home/jarvis/anaconda3/envs/uniad/lib/python3.8/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<1.1,>=0.5->tensorboard>=2.2.0->pytorch-lightning==1.2.5->-r requirements.txt (line 6)) (3.2.2)
Installing collected packages: numpy
  Attempting uninstall: numpy
    Found existing installation: numpy 1.24.4
    Uninstalling numpy-1.24.4:
      Successfully uninstalled numpy-1.24.4
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
scikit-image 0.20.0 requires numpy>=1.21.1, but you have numpy 1.20.0 which is incompatible.
scikit-image 0.20.0 requires scipy<1.9.2,>=1.8; python_version <= "3.9", but you have scipy 1.7.3 which is incompatible.
Successfully installed numpy-1.20.0
(uniad) jarvis@jia:~/coding/pyhome/github.com/meua/UniAD$ 

steps to reproduce:

conda activate uniad
git clone https://github.com/OpenDriveLab/UniAD.git
cd UniAD
pip install -r requirements.txt

Training on RTX 4090

Hi, thanks for the great work. However, I find it difficult to train the e2e model correctly on 8 RTX 4090 GPUs; the runs are plagued by NCCL errors and `cusolver error: CUSOLVER_STATUS_INTERNAL_ERROR`. Can you provide some support or examples? Thank you.

Question: Reference point update strategy

Great work, congratulations on receiving the best paper award!

I have a question about the reference point update strategy.

velo = output_coords[-1, 0, :, -2:]  # [num_query, 3]
if l2g_r2 is not None:
    # Update ref_pts for next frame considering each agent's velocity
    ref_pts = self.velo_update(
        last_ref_pts[0],
        velo,
        l2g_r1,
        l2g_t1,
        l2g_r2,
        l2g_t2,
        time_delta=time_delta,
    )
else:
    ref_pts = last_ref_pts[0]

dim = track_instances.query.shape[-1]
track_instances.ref_pts = self.reference_points(track_instances.query[..., :dim//2])
track_instances.ref_pts[...,:2] = ref_pts[...,:2]

Why do we need to update the z-coordinate of the reference point with self.reference_points after already updating the reference point with the velocity?

How should we interpret the output of the Planner?

Hi,

I'll start off by saying: amazing work on the model! I've been exploring it and have been really impressed.

I'm trying to replicate your results, mainly the visualization on the README, and have succeeded for all modules except the Planner. I'm not sure how to interpret the outputs, mainly how to transform them to the bird's eye view.

Specifically, I've gotten the following output for the planning_traj key for the first frame of the results.pkl when running inference on the v1.0-mini dataset.

array([[ 0.11929719,  2.7894588 ],
       [ 0.34095547,  5.5962896 ],
       [ 0.86797214,  8.415201  ],
       [ 1.6254576 , 11.165663  ],
       [ 2.505731  , 13.961164  ],
       [ 3.6870646 , 16.615181  ]], dtype=float32)

This appears to form a line, which I assume is the predicted trajectory, but I haven't found the transformation needed to project it onto the bird's-eye view output by the Map, Motion, and Occupancy modules. Would you be able to point me in the right direction? (A rough projection sketch is included below.)

Thanks!
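For what it's worth, here is a rough sketch of one way to rasterize such an ego-frame trajectory onto a BEV canvas. Everything in it is an assumption on my part (a +/-51.2 m range at 200x200 pixels, ego at the image center, y pointing forward), not the official visualization code:

    import numpy as np

    traj = np.array([[0.119, 2.789],
                     [0.341, 5.596],
                     [0.868, 8.415],
                     [1.625, 11.166],
                     [2.506, 13.961],
                     [3.687, 16.615]])       # (x, y) in meters, ego frame

    bev_size, bev_range = 200, 51.2          # pixels per side, meters from center to edge
    res = (2 * bev_range) / bev_size         # meters per pixel

    cols = (traj[:, 0] + bev_range) / res    # lateral x -> image column
    rows = (bev_range - traj[:, 1]) / res    # forward y -> image row (origin at top-left)
    pixels = np.stack([rows, cols], axis=1).astype(int)
    print(pixels)                            # pixel coordinates to draw on the BEV canvas

The exact BEV extent and axis orientation should be taken from the config and the visualizer rather than from these assumed values.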

Why not MapTR in MapFormer

I noticed that for the mapping task you used MapFormer, which is based on SegFormer. Your lab also has MapTR, which detects lanes, dividers, crosswalks, etc. My question is: why did you choose to use SegFormer instead of MapTR?

something wrong in visualizing

I first ran

./tools/uniad_dist_eval.sh ./projects/configs/stage1_track_map/base_track_map.py ./ckpts/uniad_base_track_map.pth 8

and nothing went wrong

Then I wanted to run the visualization:

python ./tools/analysis_tools/visualize/run.py --predroot ./output/results.pkl --out_folder ./output_visualize --demo_video test_demo.avi --project_to_cam True

and it fails with:

Loading NuScenes tables for version v1.0-mini...
23 category,
8 attribute,
4 visibility,
911 instance,
12 sensor,
120 calibrated_sensor,
31206 ego_pose,
8 log,
10 scene,
404 sample,
31206 sample_data,
18538 sample_annotation,
4 map,
Done loading in 0.426 seconds.

Reverse indexing ...
Done reverse indexing in 0.1 seconds.


Traceback (most recent call last):
  File "./tools/analysis_tools/visualize/run.py", line 342, in <module>
    main(args)
  File "./tools/analysis_tools/visualize/run.py", line 304, in main
    viser = Visualizer(version='v1.0-mini', predroot=args.predroot, dataroot='data/nuscenes', **render_cfg)
  File "./tools/analysis_tools/visualize/run.py", line 64, in __init__
    self.predictions = self._parse_predictions_multitask_pkl(predroot)
  File "./tools/analysis_tools/visualize/run.py", line 113, in _parse_predictions_multitask_pkl
    trajs = outputs[k][f'traj'].numpy()
KeyError: 'traj'

export onnx for inference

Thank you for your excellent work.
How can I export the model to ONNX for inference and test its performance with TensorRT? (A generic export sketch is included below.)
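There is no official ONNX export in this repo as far as I know. For orientation only, the generic torch.onnx.export API looks like the following; this uses a toy model, not UniAD, since exporting the full multi-task model would require tracing-friendly rewrites:

    import torch

    # Toy stand-in model; UniAD itself is not export-ready as-is.
    model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
    dummy = torch.randn(1, 3, 224, 224)

    torch.onnx.export(
        model, dummy, "toy.onnx",
        input_names=["image"], output_names=["feat"],
        opset_version=13,
        dynamic_axes={"image": {0: "batch"}, "feat": {0: "batch"}},
    )

The resulting .onnx file can then be benchmarked with TensorRT (e.g. trtexec) or ONNX Runtime, but that step is outside this repository.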

How to process nuScenes Dataset for UniAD

Hi, thanks for your great work! I want to know how to process the nuScenes data for UniAD, since when I evaluate the model following your guidance, I get the following error.
[attached screenshot: the evaluation error]

I directly used the nuScenes dataset with the following settings

ann_file_train=data_root + f"nuscenes_infos_train.pkl"
ann_file_val=data_root + f"nuscenes_infos_val.pkl"
ann_file_test=data_root + f"nuscenes_infos_val.pkl"

instead of your dataset settings

ann_file_train=info_root + f"nuscenes_infos_temporal_train.pkl"
ann_file_val=info_root + f"nuscenes_infos_temporal_val.pkl"
ann_file_test=info_root + f"nuscenes_infos_temporal_val.pkl"

Hope to receive your feedback!

Ask a question

I don't want to do any training; I just want to run UniAD as a service and process a single image. What should I do?

How to only train perception tasks (tracker and map seg)

Hi, if I just want to train the tracker and map segmentation, which config should I use to reproduce your results? As far as I can see, the perception part is trained for only a few epochs in the first stage, which seems to provide initial parameters that are then fine-tuned or optimized further during the second, end-to-end stage.
Since I do not actually want to do the e2e training, could you give me some advice on how to train only the first stage and still get a good result?

One more question: I see you have released the visualization code; could you tell me the command to run the visualization?

Look forward to your reply, thanks.

Could you provide the configs / code for each task, e.g. Motion Forecasting only?

Hello,

Congratulations on your great work, and thanks for sharing the code.
In the paper you ran ablations on the effectiveness of each task, as shown below. How do you run the forecasting task if the Detection/Tracking task is absent? Could you provide the configs or code for each specific task, e.g. Motion Forecasting only (ID-4), Occ Prediction only (ID-7), and Planning only (ID-10)?

[attached table: ablations on the effectiveness of each task]

Could you share the training log?

I am training UniAD on our own dataset, but I don't know what the final value of each loss should be.

Could you share your training log? Thanks!

About Planning Label

Hi, thank you so much for your wonderful work. I would like to know where the labels for the planning task are, and which code snippet evaluates the planning performance. Thanks!
