
argoverse / av2-api

289 stars · 10 watchers · 67 forks · 6.52 MB

Argoverse 2: Next generation datasets for self-driving perception and forecasting.

Home Page: https://argoverse.github.io/user-guide/

License: MIT License

Languages: Python 94.18%, Rust 4.92%, Jupyter Notebook 0.88%, Shell 0.02%
Topics: autonomous-driving, autonomous-vehicles, argoverse, 3d-object-detection, motion-forecasting, av, av2

av2-api's People

Contributors

benjaminrwilson, dependabot[bot], duzzzs, engyasin, kingwmk, kylevedder, nchodosh, neeharperi, senselessdev1, viktortnk, wqi


av2-api's Issues

Questions about training image-based detectors on AV2 dataset

Hi,

I am wondering if it is possible to train image-based methods on the AV2 sensor dataset.

I find that the timestamps of the LiDAR sweeps and the ring-camera images are not aligned. When I project the annotations onto the corresponding ring images, they do not line up well. So I am wondering whether the AV2 sensor dataset is currently only usable for detectors that take LiDAR point clouds as input, and not for image-based detectors?
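For context, the lidar and the ring cameras trigger at different times, so boxes given in the ego frame at the lidar timestamp generally need to be motion-compensated into the ego frame at the camera timestamp before projection. A minimal numpy sketch of that idea (all pose and intrinsics variables here are hypothetical inputs; the av2 loader exposes per-timestamp city_SE3_ego poses and camera models):

import numpy as np

def project_lidar_time_points(
    points_ego_lidar_t,    # (N, 3) points/box corners in ego frame at lidar timestamp
    city_SE3_ego_lidar_t,  # (4, 4) ego -> city pose at the lidar timestamp
    city_SE3_ego_cam_t,    # (4, 4) ego -> city pose at the camera timestamp
    ego_SE3_cam,           # (4, 4) camera -> ego extrinsics
    K,                     # (3, 3) camera intrinsics
):
    # ego(lidar t) -> city -> ego(camera t): accounts for ego motion between triggers
    ego_cam_t_SE3_ego_lidar_t = np.linalg.inv(city_SE3_ego_cam_t) @ city_SE3_ego_lidar_t
    n = len(points_ego_lidar_t)
    pts_h = np.hstack([points_ego_lidar_t, np.ones((n, 1))])
    pts_ego_cam_t = (ego_cam_t_SE3_ego_lidar_t @ pts_h.T).T[:, :3]
    # ego(camera t) -> camera frame, then pinhole projection
    pts_h = np.hstack([pts_ego_cam_t, np.ones((n, 1))])
    pts_cam = (np.linalg.inv(ego_SE3_cam) @ pts_h.T).T[:, :3]
    uvw = (K @ pts_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]  # only meaningful where depth uvw[:, 2] > 0

Without this compensation, fast ego-motion makes otherwise-correct boxes appear misaligned on the images.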

Labels for Wheelchairs and Strollers in Sensor Dataset

Hi, I'm currently trying to access the Sensor dataset and was looking for images that contain wheelchairs and strollers. I modified the file from the tutorial (av2-api/blob/main/tutorials/generate_sensor_dataset_visualizations.py) for this. However, I'm not able to find any images with those labels. Can you please tell me specific sequence IDs that contain wheelchairs, strollers, and the other rare classes?
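For reference, one way to hunt for rare classes is to scan the per-log annotation files directly. A sketch, assuming the standard <split>/<log_id>/annotations.feather layout with a category column (the category strings below are guesses; check the category enum in the av2 source for the exact names):

from pathlib import Path

import pandas as pd

dataset_dir = Path("/path/to/av2/sensor/train")  # adjust to your split
targets = {"WHEELCHAIR", "STROLLER"}             # assumed category strings

for log_dir in sorted(p for p in dataset_dir.iterdir() if p.is_dir()):
    anno_path = log_dir / "annotations.feather"
    if not anno_path.exists():
        continue
    df = pd.read_feather(anno_path)
    hits = df[df["category"].isin(targets)]
    if len(hits) > 0:
        print(log_dir.name, hits["category"].value_counts().to_dict())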

submission file format

The ChallengeSubmission class of av2 generates parquet files, but eval.ai only accepts json, zip, gz, txt, csv, tsv, ... (via file upload).
What file name is required? Which file name extension?
I tried zipping the submission.parquet file, and also changing the compression via submission_df.to_parquet(submission_file_path, compression='gzip') and storing it as .gz, but neither of those works.

Downloading datasets using S3

Hello there! I tried to download the av2 datasets using S3, relying on these instructions.

But it doesn't work. I tried both the s5cmd and the usual awscli clients.

  1. s5cmd:
s5cmd --no-sign-request cp s3://argoai-argoverse/av2 .


ERROR "cp s3://argoai-argoverse/av2 av2/av2": AccessDenied: Access Denied status code: 403, request id: 2HZ44EQ5VX276MJ5, host id: 8NIIVQuhziyBavM4B98jmDrhj6Qpbyw4FbTFo76M6wtc3RHhQRehm7I83ZspMdOjThNy1RUv0Io=
  2. aws cli
aws s3 ls s3://argoai-argoverse/av2

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

Is this expected to fail, or am I doing something wrong?

Questions about the height (z axis) of 3D lane

I found that the height (z) of the 3D lanes might be a little buggy: it is somewhat discontinuous and appears inaccurate. See the examples below, shown from different views.

[Screenshot: 3D views showing discontinuous lane heights]

However, if I project them to 2D by removing the z axis, they look good.

[Screenshot: the same lanes projected to 2D]

Could you please tell me how the height is obtained? From lidar?

How should I process the lanes? Should I only use XY?
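If only the XY geometry is needed, dropping the z column from each boundary polyline is straightforward. A sketch, assuming the map API's lane segments expose (N, 3) boundary arrays via Polyline.xyz (as on current main):

from pathlib import Path

from av2.map.map_api import ArgoverseStaticMap

avm = ArgoverseStaticMap.from_json(Path("/path/to/log_map_archive_<log_id>.json"))
for lane_segment in avm.vector_lane_segments.values():
    left_xy = lane_segment.left_lane_boundary.xyz[:, :2]   # drop z
    right_xy = lane_segment.right_lane_boundary.xyz[:, :2]
    # ... process the 2D polylines ...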

HD-Map Files Download

Hi Argo folks,

Thank you for your Argoverse 2.0 dataset! I have a question about the HD map part. In brief, I am wondering:

  1. Does the "Map Change" part correspond to the "HD Map" of Argoverse 1.1, and should I download it via "s3://argoai-argoverse/av2/tbv/"?
  2. Since the HD map files this time are significantly larger than in Argoverse 1.1, are they all needed for motion forecasting?

Best,

Ziqi

How to get individual lidar sweeps/scans

Hi,

First, thank you for this awesome work.
However, the documentation says that the scans from both lidars are aggregated into single sweeps.

Is it possible to isolate each lidar's scans?
I would be interested in training using data from a single lidar (the down lidar).

Please let me know how I can do this.

Best Regards
Sambit

Similarity argoverse 1 / argoverse 2

Hey,
the argoverse 2 dataset comes with new and richer scenes. Comparing the scenes of av1 to av2 in the respective cities: how similar would you consider them? In short: would you say training on argoverse 2 covers all the relevant data to perform well on argoverse 1? I would be particularly interested in the motion forecasting dataset.
Looking forward to your answer!
Thanks a lot!

[Motion Forecasting Dataset] Insufficient ego-vehicle (object_category = 3) samples in some training scenarios

Thank you for the great work.

I'm working with the motion forecasting data and found that, from my understanding, some scenarios in the training data have insufficient ego-vehicle (object_category = 3) samples. For example, 'scenario_00baa620-0342-4b0c-a7c2-77a5f583d6b1' only has 100 samples for the agent instead of 110.

I'm wondering whether my understanding of the motion forecasting dataset is wrong, or whether I should simply ignore the problematic scenarios.
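A quick way to audit this, assuming the per-scenario parquet layout with track_id, object_category, and timestep columns:

from pathlib import Path

import pandas as pd

scenario_dir = Path("/path/to/av2/motion-forecasting/train")
for parquet_path in sorted(scenario_dir.rglob("scenario_*.parquet")):
    df = pd.read_parquet(parquet_path)
    # observations per ego-vehicle (object_category == 3) track
    counts = df[df["object_category"] == 3].groupby("track_id")["timestep"].count()
    short = counts[counts < 110]
    if len(short) > 0:
        print(parquet_path.stem, short.to_dict())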

3D maps but 2D tracks

It seems that the map elements have 3D coordinates but the tracks are still in 2D. Why does the dataset not provide 3D tracks? Is it possible to infer the z coordinate of the tracks (see the sketch below)?

Another quick question: will the future release include the traffic light state?
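Back to the z question: where a log ships a raster ground-height map (the sensor and TbV logs do; the forecasting scenarios may not), one could look up an approximate height under each 2D point. A sketch, assuming the GroundHeightLayer API on current main (method name and input shape are assumptions):

import numpy as np
from pathlib import Path

from av2.map.map_api import ArgoverseStaticMap

# build_raster=True loads the ground-height raster alongside the vector map (assumed flag)
avm = ArgoverseStaticMap.from_map_dir(Path("/path/to/log/map"), build_raster=True)

track_xy = np.array([[2400.0, 1200.0], [2401.5, 1201.0]])  # hypothetical city-frame points
track_z = avm.raster_ground_height_layer.get_ground_height_at_xy(track_xy)
track_xyz = np.hstack([track_xy, track_z[:, None]])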

'UNKNOWN' is not a valid LaneMarkType in v0.2.0

Hi,
I get the following error when running ArgoverseStaticMap.from_json(static_map_path) on certain map files:
ValueError: 'UNKNOWN' is not a valid LaneMarkType
The version of av2 is 0.2.0.
When I inspect the LaneMarkType class at runtime, I notice that it does not have the member "UNKNOWN", which exists in the GitHub code.
Could you please check and fix this in the released version?
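Until the release catches up with main, one stopgap (a hack, not an official fix) is to rewrite the offending values in the map JSON before parsing, e.g. mapping "UNKNOWN" to "NONE", assuming "NONE" is a valid member in your installed version:

import json
from pathlib import Path

static_map_path = Path("/path/to/log_map_archive_<log_id>.json")
data = json.loads(static_map_path.read_text())
for ls in data["lane_segments"].values():
    for key in ("left_lane_mark_type", "right_lane_mark_type"):
        if ls.get(key) == "UNKNOWN":
            ls[key] = "NONE"  # semantically closest valid LaneMarkType (assumed)
patched = static_map_path.with_name("patched_" + static_map_path.name)
patched.write_text(json.dumps(data))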

Download the trajectory prediction part of the map dataset

Hello, sorry to bother you with a question. Our research direction is trajectory prediction. Your description of the map data is that it covers the four datasets: trajectory prediction (motion forecasting), sensor, lidar, and TbV. We only need the maps for the roughly 250,000 trajectory-prediction scenarios, but the official download link has 21 parts for the map data itself. Could you tell us which parts belong to the trajectory-prediction portion? We would appreciate it.

Baseline models

Hi guys,

Do you know if some GNN-based baseline methods are going to be implemented in the tutorial, similar to the NN and LSTM baselines in the Argoverse 1.0 API, in order to show how to deal with the vectorized map?

Error Downloading av2 datasets.

Hi, I'm trying to download the av2 motion-forecasting dataset. Following the instructions given in Download, I installed s5cmd and tried the commands below; this is what I got:

>> s5cmd --no-sign-request cp s3://argoai-argoverse/av2/motion-forecasting/* /data/Argoverse2Datasets/sensor/
zsh: no matches found: s3://argoai-argoverse/av2/motion-forecasting/*
>> s5cmd --no-sign-request cp s3://argoai-argoverse/av2/* /data/zhouxinning/Argoverse2Datasets/
zsh: no matches found: s3://argoai-argoverse/av2/*

I am just wondering: have you changed the path to the dataset?

questions for visualization

Dear all:

When I run the 'generate_sensor_dataset_visualizations.py' file, it always reports the error: No such file or directory. I checked the difference and found that the erroneous path is '/.../argv2/SensorDataset/sensor/SensorDataset_val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather' while the true path is '/.../argv2/SensorDataset/sensor/val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather'.
Is there a parameter in the program that needs to be adjusted, or is something else wrong?
Hoping for your reply, and thanks so much.

Error with generate_egoview_overlaid_vector_map.py

Hi, I've just found a problem with tutorials/generate_egoview_overlaid_vector_map.py. The short option -d is used not only by --data-root but also by --use-depth-map-for_occlusion; maybe this should be fixed~
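The tutorial uses click, but the same collision is easy to illustrate with plain argparse: registering the same short flag twice raises at definition time, and the usual fix is to drop one of the short options:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-d", "--data-root", type=str, required=True)
# A second option reusing "-d" would raise:
#   argparse.ArgumentError: conflicting option string: -d
parser.add_argument("--use-depth-map-for-occlusion", action="store_true")  # fix: no short flag
args = parser.parse_args(["-d", "/path/to/av2"])
print(args.data_root, args.use_depth_map_for_occlusion)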

Error with generate_sensor_dataset_visualizations.py

Hi, when I run python tutorials/generate_sensor_dataset_visualizations.py -d /xxx/av2, I get the error: FileNotFoundError: [Errno 2] Failed to open local file '/xxx/av2/test/0c6e62d7-bdfa-3061-8d3d-03b13aa21f68/annotations.feather'. Detail: [errno 2] No such file or directory. The test set has no labels, so why is it not filtered out in the code? What is the correct command to run this script? Thanks.
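Until that's handled upstream, a defensive workaround is to skip logs that lack annotations (the test split ships without labels). A sketch, assuming the <split>/<log_id>/annotations.feather layout:

from pathlib import Path

data_root = Path("/xxx/av2")
log_dirs = [d for d in (data_root / "test").iterdir() if d.is_dir()]
labeled = [d for d in log_dirs if (d / "annotations.feather").exists()]
print(f"{len(labeled)}/{len(log_dirs)} logs have annotations")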

ValueError: 'UNKNOWN' is not a valid LaneMarkType

Hello, Thanks for your great contributions.
I found an error when parsing map data.
For example for log_map_archive_f75d80d6-a1c8-4a0b-915a-7dc304d122b0.json:

Traceback (most recent call last):
  File "map_test.py", line 28, in <module>
    avm = ArgoverseStaticMap.from_json(Path(dataroot))
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/site-packages/av2/map/map_api.py", line 328, in from_json
    vector_lane_segments = {ls["id"]: LaneSegment.from_dict(ls) for ls in vector_data["lane_segments"].values()}
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/site-packages/av2/map/map_api.py", line 328, in <dictcomp>
    vector_lane_segments = {ls["id"]: LaneSegment.from_dict(ls) for ls in vector_data["lane_segments"].values()}
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/site-packages/av2/map/lane_segment.py", line 110, in from_dict
    left_mark_type=LaneMarkType(json_data["left_lane_mark_type"]),
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/enum.py", line 339, in __call__
    return cls.__new__(cls, value)
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/enum.py", line 663, in __new__
    raise ve_exc
ValueError: 'UNKNOWN' is not a valid LaneMarkType

Downloading the tbv dataset.

I'm trying to download the tbv dataset and it seems there are two instructions to do so. Do these two methods produce the same result?

One here:

  1. https://github.com/argoai/argoverse2-api/blob/main/DOWNLOAD.md
    s5cmd --no-sign-request cp s3://argoai-argoverse/av2/tbv/* target-directory

And another here:
2. https://github.com/argoai/argoverse2-api/blob/main/src/av2/datasets/tbv/README.md
SHARD_DIR={DESIRED PATH FOR TAR.GZ files} s5cmd cp s3://argoai-argoverse/av2/tars/tbv/*.tar.gz ${SHARD_DIR}

When I try 1, I get the error "s5cmd is hitting the max open file limit allowed by your OS. Either increase the open file limit or try to decrease the number of workers with the '-numworkers' parameter".

When I try 2, I get an error
"Error session: fetching region failed: NoCredentialProviders: no valid providers in chain. Deprecated."

Method 1 probably downloads about half of the dataset, while method 2 doesn't initiate the download at all. I will probably continue with 1, but 2 is probably faster. I'm using Linux Ubuntu 18.04.

What are the units for the position (x, y) attributes?

Thanks very much for making this great data available to researchers.
I'm working with the Motion Forecasting Dataset. I observed that, when it comes to compute ADE or FDE metric, the unit for the position attributes doesn't matter. But I'm actually working in a project where I should compute the RMSE for both position (x,y) and speed (x,y). I must then compare the results to others experiments results, but I can't do that for position attributes, as I don't know what the units for the position attributes are. It's clearly mentioned in paper and also here in GitHub that velocity it's in m/s, but for Position (x,y), I didn't find any clear information about the unit (is it in meter, centimeter, millimeter or something else?). Thanks for assist

ID of Objects in Sensor Dataset

Hi, I am trying to use the Sensor Dataset for the 3D MOT problem, but I cannot find the ID information for each bounding box.

Specifically, I tried to use the function get_labels_at_lidar_timestamp in AV2SensorDataLoader to get a list of bounding boxes, which are Cuboid objects in AV2. However, it seems that Cuboids do not contain the ID information for objects.

Therefore, I am wondering whether I have missed something, or whether there is another way to get the object IDs.
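For what it's worth, the raw per-log annotations.feather does carry a per-object track_uuid column, so one fallback is to read it directly and key the boxes by (timestamp_ns, track_uuid). A pandas sketch, assuming that schema:

from pathlib import Path

import pandas as pd

anno = pd.read_feather(Path("/path/to/log_dir/annotations.feather"))
sweep_ts = anno["timestamp_ns"].iloc[0]  # pick one lidar sweep
boxes = anno[anno["timestamp_ns"] == sweep_ts]
for _, row in boxes.iterrows():
    print(row["track_uuid"], row["category"], row["tx_m"], row["ty_m"], row["tz_m"])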

Motion forecasting: Focal agent not always observed over the full scenario length

Hey everyone,

I had a look at the motion forecasting dataset and there seems to be an issue with the trajectories of the focal agent.
According to the paper, the focal agent should always be observed over the full 11 seconds, which corresponds to 110 observations:
"Within each scenario, we mark a single track as the “focal agent”. Focal tracks are guaranteed to be fully observed throughout the duration of the scenario and have been specifically selected to maximize interesting interactions with map features and other nearby actors (see Section 3.3.2)"

However, this is not the case for some scenarios (~3% of them).
One example: scenario '0215552f-6951-47e5-8cf6-3d1351d28957' of the validation set has a focal trajectory with only 104 observations.

Can you reproduce my problem?
Is this intended or can we expect this to be fixed in the near future?

Looking forward to hearing from you!

Best regards

SchDevel

WIMP Baseline Code

Hi,
Will you release the WIMP code, since it is one of the baselines in the paper?
Thank you!

Follow up for https://github.com/argoai/av2-api/issues/77

Hi,

Sorry for the delay.
Thank you for your help!
I went through the dataset API and was able to isolate individual point clouds.

[Image: joint point cloud (left), top lidar only (right)]

[Image: top lidar (left), bottom lidar (right)]

Does this look sensible?
Here is the code snippet.
import numpy as np
from pathlib import Path

from av2.datasets.sensor.sensor_dataloader import SensorDataloader  # import path assumed

dataset = SensorDataloader(Path(settings.argoverse_dataset_root),
                           with_annotations=True, with_cache=True)
for index, data_frame in enumerate(dataset):
    sweep = data_frame.sweep              # lidar info
    annotations = data_frame.annotations  # boxes
    pose = data_frame.timestamp_city_SE3_ego_dict

    # both lidars combined into a single point cloud
    pcl_joint = sweep.xyz

    # append reflectances and laser numbers as extra columns
    pcl_joint = np.hstack([pcl_joint,
                           np.expand_dims(sweep.intensity, -1),
                           np.expand_dims(sweep.laser_number, -1)])

    # laser number [0, 31] -> top lidar, [32, 63] -> bottom lidar
    r_up = np.where(pcl_joint[:, -1] < 32)
    pcl_up = pcl_joint[r_up]      # top lidar point cloud

    r_down = np.where(pcl_joint[:, -1] >= 32)
    pcl_down = pcl_joint[r_down]  # bottom lidar point cloud

Please let me know if this is the correct way, just to be sure.

Best Regards
Sambit

submission error.

Hello, I got an error when submitting my results using eval.ai:

Error: {'detail': 'Given token not valid for any token type', 'code': 'token_not_valid', 'messages': [{'token_class': 'RefreshToken', 'token_type': 'refresh', 'message': 'Token is invalid or expired'}]}

How to evaluate 3D object detection on validation split?

Thanks for your excellent work!
I would like to know how to evaluate 3D object detection on the validation split.
I also noticed there is a PR about this. When will a stable version be released? I am looking forward to it!
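For reference, a rough sketch of the evaluation entry point as it appears on main at the time of writing (the module paths, DetectionCfg fields, and expected DataFrame columns are assumptions; check the released version before relying on this):

import pandas as pd

from av2.evaluation.detection.eval import evaluate
from av2.evaluation.detection.utils import DetectionCfg

# One row per cuboid: pose (tx_m, ty_m, tz_m, qw, qx, qy, qz), size
# (length_m, width_m, height_m), category, log_id, timestamp_ns; dts also need a score.
dts = pd.read_feather("my_val_detections.feather")
gts = pd.read_feather("val_annotations.feather")

cfg = DetectionCfg(eval_only_roi_instances=False)
dts, gts, metrics = evaluate(dts, gts, cfg)
print(metrics)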

Argoverse 2.0 vs Argoverse 1.1 API

Hi folks,

I am trying to run my model on Argoverse 2.0; it was previously trained on Argoverse 1.1 using its corresponding API. However, after installing and cloning this API in order to check the tutorials, dataloaders, etc., it looks much smaller than the Argoverse 1.1 API, and the organization also seems different (e.g., where are the CSVs with the trajectories?). Where can I find all the required documentation?
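On the trajectories specifically: in Argoverse 2 the motion-forecasting scenarios ship as per-scenario parquet files rather than CSVs. A sketch of loading one, assuming the scenario_serialization helpers on main:

from pathlib import Path

from av2.datasets.motion_forecasting import scenario_serialization

scenario_path = Path("/path/to/train/<scenario_id>/scenario_<scenario_id>.parquet")
scenario = scenario_serialization.load_argoverse_scenario_parquet(scenario_path)
print(scenario.scenario_id, len(scenario.tracks))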

Is it possible to extract the route information?

Hi, thank you for providing the outstanding dataset.

I am particularly interested in the motion dataset, and I have a question: is it possible to extract the route of the self-driving vehicle in each scenario?

map_tutorial.ipynb

In your map_tutorial.ipynb, I encountered some issues:
from av2.datasets.sensor_dataset.av2_sensor_dataloader import AV2SensorDataLoader
should be
from av2.datasets.sensor.av2_sensor_dataloader import AV2SensorDataLoader

Each instance of ArgoverseStaticMap.from_json() should be ArgoverseStaticMap.from_map_dir()

And then args should look like:

from pathlib import Path
from types import SimpleNamespace

args = SimpleNamespace(**{"dataroot": Path(dataroot), "log_id": Path(log_id)})
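Putting those fixes together, the corrected setup would look roughly like this (paths are placeholders; from_map_dir's signature is assumed from current main):

from pathlib import Path
from types import SimpleNamespace

from av2.datasets.sensor.av2_sensor_dataloader import AV2SensorDataLoader
from av2.map.map_api import ArgoverseStaticMap

dataroot = Path("/path/to/av2/sensor/val")
log_id = "<log_id>"
args = SimpleNamespace(**{"dataroot": dataroot, "log_id": Path(log_id)})

loader = AV2SensorDataLoader(data_dir=dataroot, labels_dir=dataroot)
avm = ArgoverseStaticMap.from_map_dir(dataroot / log_id / "map", build_raster=False)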

Motion forecasting eval.ai leaderboard

Hi, I think the motion forecasting eval.ai leaderboard might be ranking submissions in the wrong order: it currently treats higher as better, whereas it should be lower is better, if I am not mistaken.

Lane label annotation method inquiry

Hi, since there is no information about how the lane markings were labeled in the argoverse-v2 dataset, I wonder whether the lane-marking labels were annotated in the originally collected point cloud (labeling in 3D space), or annotated on the image by projecting the point cloud onto the corresponding image.

Hope you can help me figure this out; thanks in advance :)

Multi-process option to load sensor data

Hi,

While using the API function to load data, I observe that reading is quite slow; I think this is mainly due to the large number of annotations in each frame, which are CuboidList objects.

Is there a way to use multiprocessing to make this faster,
something along the lines of the PyTorch DataLoader?

Can I just directly pass the generator handle to the torch DataLoader? (See the wrapper sketch below.)

Best Regards
Sambit
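One generic pattern (a sketch, not an official av2 recipe; it assumes the sensor loader supports len() and integer indexing, otherwise an IterableDataset wrapper is needed): wrap the loader in a torch Dataset and use an identity collate_fn, so the worker processes do the feather/cuboid parsing in parallel:

from torch.utils.data import DataLoader, Dataset

class Av2Wrapper(Dataset):
    """Expose an indexable av2 loader to torch's multi-process DataLoader."""
    def __init__(self, loader):
        self.loader = loader
    def __len__(self):
        return len(self.loader)
    def __getitem__(self, idx):
        return self.loader[idx]

# dataset = SensorDataloader(...) as in the snippet above
dl = DataLoader(Av2Wrapper(dataset), batch_size=4, num_workers=8,
                collate_fn=lambda batch: batch)  # identity collate keeps av2 datum objects
for batch in dl:
    ...  # batch is a list of dataset items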

Type of the LiDAR scanners

Hello everyone,

thank you for this amazing dataset! Is it possible to share the specific manufacturer and model of the laser scanners used?
I could not find any information about that. Thank you very much.

Best regards,

Till Beemelmanns

ego-vehicle

Hi,
How can I obtain the ego-vehicle tracks?
Thank you
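For the motion-forecasting scenarios, the ego vehicle is the track whose track_id is "AV" (an assumption worth verifying on your version of the data). A pandas sketch:

import pandas as pd

df = pd.read_parquet("/path/to/scenario_<scenario_id>.parquet")
ego = df[df["track_id"] == "AV"].sort_values("timestep")
ego_xy = ego[["position_x", "position_y"]].to_numpy()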
