
jeffwang987 / asap

[CVPR 2023] Are We Ready for Vision-Centric Driving Streaming Perception? The ASAP Benchmark

License: MIT

Python 98.24% Shell 1.76%


asap's Issues

Challenge on eval.ai

I am wondering whether there is a plan to host a challenge on the eval.ai website. Thank you.

No such file or directory: './out/lidar_20Hz/results_nusc.json' when running scripts/ann_generator.sh

Hi,

I was following the procedure as described here; however, I got the following exception:

# bash scripts/ann_generator.sh 12 --ann_strategy 'interp' 
Traceback (most recent call last):
  File "/root/miniconda3/envs/magicdrive/lib/python3.8/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/magicdrive/lib/python3.8/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/disk1/ASAP/sAP3D/nusc_annotation_generator.py", line 357, in <module>
    nusc_20Hz_rst = mmcv.load(opts.lidar_inf_rst_path)
  File "/root/miniconda3/envs/magicdrive/lib/python3.8/site-packages/mmcv/fileio/io.py", line 57, in load
    with StringIO(file_client.get_text(file)) as f:
  File "/root/miniconda3/envs/magicdrive/lib/python3.8/site-packages/mmcv/fileio/file_client.py", line 1006, in get_text
    return self.client.get_text(filepath, encoding)
  File "/root/miniconda3/envs/magicdrive/lib/python3.8/site-packages/mmcv/fileio/file_client.py", line 535, in get_text
    with open(filepath, 'r', encoding=encoding) as f:
FileNotFoundError: [Errno 2] No such file or directory: './out/lidar_20Hz/results_nusc.json'
loading nuscenes dataset...
loading 20Hz LiDAR inference results...

Are there any instructions on how I can generate or download this file?

Thanks in advance.
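For what it's worth, a minimal pre-flight check (my own sketch, not part of the repo): judging from the traceback, ann_generator.sh apparently expects 20Hz LiDAR inference results in nuScenes detection-JSON format at the path below, produced by a separate detector run.

import os.path as osp

# Hypothetical pre-flight check, not part of the repo: verify that the
# 20Hz LiDAR inference results exist before launching ann_generator.sh.
LIDAR_RESULTS = './out/lidar_20Hz/results_nusc.json'

if not osp.isfile(LIDAR_RESULTS):
    raise FileNotFoundError(
        f'{LIDAR_RESULTS} not found; run 20Hz LiDAR inference '
        '(e.g. CenterPoint) and dump its results_nusc.json first.')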

Can you release the 12Hz annotation files?

Hi,
Thanks for the great work!
I see that you have released the processing code for the 12Hz dataset, but the CenterPoint results may differ slightly from person to person. Could you also release the 12Hz annotation files, so that everyone can be evaluated fairly on your benchmark?

Runs very slowly

[screenshot]
Hi, I ran this step and found it to be really slow.
Could you please share the hardware (CPU) requirements?

Some questions about nuScenes-H and streaming results

Great work towards real-world applications, but I have some questions about nuScenes-H and the streaming results that came up while reproducing your work.

  1. Many 3D detectors use history-frame features, e.g. BEVFormer. The history BEV features come from 2Hz frames during training, while the frame rate is faster in nuScenes-H, so directly applying BEVFormer to nuScenes-H causes a historical frame-rate mismatch, and ready-made model weights lose accuracy. Did you re-train BEVFormer on 12Hz data, or did you adjust the temporal interval to fit the official testing pipeline?
  2. Your code cannot generate the .pkl data files needed to test BEV-based methods; the prev and next token names do not seem to correspond.
  3. Five frames are interpolated between adjacent keyframes, so the image data can be matched well, but the original point-cloud annotation rate is 20Hz, which does not correspond (see the interpolation sketch after this list).
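Regarding point 3, a minimal sketch of the timestamp-based linear interpolation idea (interp_center is a hypothetical helper with a simplified data layout; the repo's sAP3D/nusc_annotation_generator.py is more involved and also handles rotations, sizes, and object association):

import numpy as np

# Linearly interpolate a box center between two 2Hz keyframe
# annotations of the same object (hypothetical, simplified layout).
def interp_center(t, t0, c0, t1, c1):
    w = (t - t0) / (t1 - t0)
    return (1.0 - w) * np.asarray(c0) + w * np.asarray(c1)

# 2Hz keyframes at 0.0s and 0.5s; query the first of the five
# intermediate 12Hz frames, spaced 1/12 s apart.
print(interp_center(1 / 12, 0.0, [10.0, 2.0, 1.0], 0.5, [13.0, 2.0, 1.0]))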

Some questions about sAP matching

Thank you for your work. I noticed that when the model's inference speed is faster than the input frame rate, the ASAP benchmark still compares the current frame's prediction with the next frame's ground truth. However, some argue that sAP should be consistent with offline AP in this case, which mandates that the prediction for the current frame be compared with the ground truth of the same frame, as demonstrated in Table 4 of StreamYOLO. Hence, which one should I choose?

There is a specific situation: given a frame rate of 10Hz and a model inference time of 150ms, the prediction for the frame input at 0ms completes at 150ms. In this instance, should the sAP result be computed using the frame from 100ms or the frame that arrives at 200ms?
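For concreteness, a sketch of the next-frame matching rule described in the first paragraph (matched_gt_frame is a hypothetical helper; whether this rule or current-frame matching is correct is exactly the question being asked):

import math

# Next-frame matching: a prediction finished at t_done is compared
# with the ground truth of the next frame arriving at or after t_done.
def matched_gt_frame(t_input_ms, latency_ms, period_ms):
    t_done = t_input_ms + latency_ms
    return math.ceil(t_done / period_ms) * period_ms

# 10Hz stream (100ms period), 150ms latency on the frame at 0ms: the
# prediction is ready at 150ms, so this rule matches it with the
# ground truth of the 200ms frame.
print(matched_gt_frame(0, 150, 100))  # -> 200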
