argoverse / av2-api
Argoverse 2: Next generation datasets for self-driving perception and forecasting.
Home Page: https://argoverse.github.io/user-guide/
License: MIT License
Hi,
I am wondering if it is possible to train image-based methods on the AV2 sensor dataset.
I find that the timestamps of the LiDAR data and the ring-camera data are not aligned. I tried to project the annotations onto the corresponding ring images and found that they did not line up well. So I am wondering whether the AV2 sensor dataset is currently only valid for detectors that use LiDAR point cloud inputs, and not for image-based detectors?
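Until the alignment question is settled, one common workaround is to pair each lidar sweep with the ring image whose timestamp is nearest. A minimal sketch, assuming you have already collected the sorted camera timestamps (e.g. parsed from the image filenames); `nearest_timestamp` is an illustrative helper, not part of the av2 API:

```python
import bisect

def nearest_timestamp(cam_timestamps_ns: list[int], lidar_timestamp_ns: int) -> int:
    """Return the camera timestamp closest to the given lidar timestamp.

    cam_timestamps_ns must be sorted in ascending order.
    """
    i = bisect.bisect_left(cam_timestamps_ns, lidar_timestamp_ns)
    # Only the neighbors around the insertion point can be closest.
    candidates = cam_timestamps_ns[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda t: abs(t - lidar_timestamp_ns))

# Toy example: ring camera at 20 Hz (50 ms period), one lidar timestamp in between.
cams = [0, 50_000_000, 100_000_000, 150_000_000]
print(nearest_timestamp(cams, 60_000_000))  # 50000000
```

The residual offset between the matched pair is then the quantity to account for when projecting cuboids into the image.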
Hi, I'm currently trying to access the Sensor dataset and was looking for images that contain wheelchairs and strollers. I modified the tutorial file (av2-api/blob/main/tutorials/generate_sensor_dataset_visualizations.py) for this, but I'm not able to find any images with those labels. Can you please tell me specific sequence IDs that contain wheelchairs, strollers, and the other rare classes?
Hi, if I want to do motion forecasting from lidar data, can I find the corresponding raw lidar data for each frame?
The ChallengeSubmission class of av2 generates parquet files - but eval.ai only accepts json, zip, gz, txt, csv, tsv, ... (via file upload)
What file name is required? Which file name extension?
I tried zipping the submission.parquet file, as well as changing the compression via submission_df.to_parquet(submission_file_path, compression='gzip') and storing it as .gz, but neither of those works.
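For what it's worth, the mechanics of wrapping the parquet in a .zip (so the uploader accepts the extension while the member keeps its .parquet name) look like the sketch below. Whether eval.ai then unpacks and reads the archive correctly is not confirmed here; the placeholder bytes stand in for the real ChallengeSubmission output:

```python
import zipfile
from pathlib import Path

# Stand-in for the file ChallengeSubmission actually writes.
submission = Path("submission.parquet")
submission.write_bytes(b"placeholder parquet bytes")

# Wrap the parquet in a .zip archive; the member keeps its .parquet name.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(submission, arcname=submission.name)

with zipfile.ZipFile("submission.zip") as zf:
    print(zf.namelist())  # ['submission.parquet']
```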
Hello there! I tried to download the av2 datasets from S3, relying on these instructions.
But it doesn't work. I tried both the s5cmd and the usual awscli clients.
s5cmd --no-sign-request cp s3://argoai-argoverse/av2 .
ERROR "cp s3://argoai-argoverse/av2 av2/av2": AccessDenied: Access Denied status code: 403, request id: 2HZ44EQ5VX276MJ5, host id: 8NIIVQuhziyBavM4B98jmDrhj6Qpbyw4FbTFo76M6wtc3RHhQRehm7I83ZspMdOjThNy1RUv0Io=
aws s3 ls s3://argoai-argoverse/av2
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
Is it expected not to work, or am I doing something wrong?
I think CDS should follow "the bigger, the better", right? But the ranking seems to be in the wrong order. The link is https://eval.ai/challenge/1710/leaderboard/4078
I found that the height of the 3D lanes might be a little buggy: it is somewhat discontinuous and appears inaccurate. You can see the example below, presented from different views.
However, if I project the lanes to 2D space by removing the z axis, they look good.
Could you please tell me how the height was obtained? From lidar?
How should I process the lanes? Should I only use XY?
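If only the planar geometry is needed, dropping z is a one-line slice. A minimal sketch on a toy boundary, assuming lane boundaries come out of the map API as (N, 3) arrays of x, y, z in meters (the toy array below is illustrative, not real map data):

```python
import numpy as np

# Toy 3D lane boundary polyline (N x 3): x, y, z in meters.
boundary_xyz = np.array([
    [0.0, 0.0, 12.1],
    [5.0, 0.1, 12.4],
    [10.0, 0.3, 11.9],
])

# Keep only XY for planar processing; z is discarded.
boundary_xy = boundary_xyz[:, :2]
print(boundary_xy.shape)  # (3, 2)
```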
Hi Argo folks,
Thank you for your Argoverse 2.0 dataset! I have a question related to the HD map part. In brief, I am wondering:
Best,
Ziqi
Hi,
First, thank you for this awesome work.
However, the documentation says that the scans from both lidars are aggregated into single sweeps.
Is it possible to isolate each lidar's scans?
I would be interested in training on data from a single lidar (the down lidar).
Please let me know how I can do this.
Best Regards
Sambit
Argoverse V1.1 & V2 are missing the 2D bounding boxes (length & width) of objects.
Will Argoverse V2 provide these data?
Do the tracks include the AV in the Motion Forecasting dataset? The prediction of objects should consider the AV as well.
Hello, I found that the Submission Options "Upload File" option does not allow choosing a .parquet file, which is needed for leaderboard evaluation.
Hey,
The Argoverse 2 dataset comes with new and richer scenes. Comparing the scenes of AV1 to AV2 in the respective cities: how similar would you consider them? In short: would you say training with Argoverse 2 includes all the relevant data to perform well on Argoverse 1? I would be particularly interested in the motion forecasting dataset.
Looking forward to your answer!
Thanks a lot!
Thank you for the great work.
I'm dealing with the motion forecasting data and found that, from my understanding, some scenarios in the training data have insufficient ego-vehicle (object_category = 3) samples. For example, 'scenario_00baa620-0342-4b0c-a7c2-77a5f583d6b1' only has 100 samples for the agent instead of 110.
I'm wondering whether my understanding of the motion forecasting dataset is wrong, or whether I should ignore the problematic scenarios.
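Until the discrepancy is explained, one pragmatic option is to filter out any scenario whose track of interest is shorter than the expected 110 timesteps. A hedged sketch, assuming the scenario parquet layout carries 'object_category' and 'timestep' columns; the function and the toy frame are illustrative, not part of the av2 API:

```python
import pandas as pd

def focal_track_is_complete(df: pd.DataFrame, expected_steps: int = 110) -> bool:
    """Check that the track with object_category == 3 covers all timesteps.

    Assumes a scenario table with 'object_category' and 'timestep' columns.
    """
    focal = df[df["object_category"] == 3]
    return focal["timestep"].nunique() == expected_steps

# Toy scenario whose category-3 track is missing 10 timesteps.
toy = pd.DataFrame({
    "object_category": [3] * 100 + [1] * 110,
    "timestep": list(range(100)) + list(range(110)),
})
print(focal_track_is_complete(toy))  # False
```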
It seems that the map elements have 3D coordinates, but the tracks are still in 2D. Why doesn't the dataset provide 3D tracks? Is it possible to infer the z coordinate of the tracks?
Another quick question: will a future release include the traffic light state?
Hi,
I get the following error when running ArgoverseStaticMap.from_json(static_map_path)
with certain map files:
ValueError: 'UNKNOWN' is not a valid LaneMarkType
The version of av2 is 0.2.0.
When I check the LaneMarkType class at runtime, I notice that it does not have the member "UNKNOWN", which exists in the GitHub code.
Could you please check and fix this in the released version?
Hello, sorry to bother you; we have a question. Our research direction is trajectory prediction. Your description says the map dataset contains the maps of the four datasets: motion forecasting, sensor, lidar, and TbV. We only need the roughly 250,000 vector maps for trajectory prediction, but the official download link has 21 parts in the map dataset itself. Could you tell us which parts correspond to trajectory prediction? We would appreciate it.
Hi!
I see the previous version of the Argoverse datasets has some interesting open-source converters from other dataset formats (e.g. Waymo to Argoverse). Are there any plans to release a Waymo Open Motion Dataset to Argoverse Forecasting dataset converter? I would find it really useful.
Hi guys,
Do you know whether some GNN-based baseline methods are going to be implemented in the tutorials, similar to the NN and LSTM baselines in the Argoverse 1.0 API, so we can learn how to deal with the vectorized map?
Hi, I'm trying to download the av2 motion-forecasting datasets. Following the instructions given in Download, I installed s5cmd and tried the commands below, and this is what I got:
>> s5cmd --no-sign-request cp s3://argoai-argoverse/av2/motion-forecasting/* /data/Argoverse2Datasets/sensor/
zsh: no matches found: s3://argoai-argoverse/av2/motion-forecasting/*
>> s5cmd --no-sign-request cp s3://argoai-argoverse/av2/* /data/zhouxinning/Argoverse2Datasets/
zsh: no matches found: s3://argoai-argoverse/av2/*
I'm just wondering: have you changed the path to the dataset?
Dear all:
When I run the 'generate_sensor_dataset_visualizations.py' file, it always reports the error: No such file or directory. I checked the difference and found that the erroneous path is '/.../argv2/SensorDataset/sensor/SensorDataset_val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather' while the true path is '/.../argv2/SensorDataset/sensor/val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather'.
Is there a parameter in the program that needs to be adjusted, or is it something else?
Hoping for your reply, and thanks so much.
The av2 challenge currently has the same setup as the av1 challenge. Will your team extend it with more setups, such as multi-class or multi-agent trajectory prediction, in the future? If you have such a plan, when would it be released?
Thanks a lot for your work and the new dataset!
Hi @wqi,
Is the motion forecasting leaderboard open now? I tried to submit to it, but the status is always "running".
The traffic light info (color & state & lane_id) is missing.
Hi, I've just found a problem with tutorials/generate_egoview_overlaid_vector_map.py. The option -d is used for both --data-root and --use-depth-map-for_occlusion; maybe this should be fixed.
Hi, when I run python tutorials/generate_sensor_dataset_visualizations.py -d /xxx/av2,
I get the error: FileNotFoundError: [Errno 2] Failed to open local file '/xxx/av2/test/0c6e62d7-bdfa-3061-8d3d-03b13aa21f68/annotations.feather'. Detail: [errno 2] No such file or directory.
The test set has no labels. Why is it not filtered out in the code? What is the correct command to run this script? Thanks.
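Until the script handles this itself, a simple caller-side guard is to visit only the log directories that actually contain an annotations.feather. A minimal sketch, assuming the on-disk layout <split>/<log_id>/annotations.feather; `logs_with_annotations` is an illustrative helper, not part of the av2 API:

```python
from pathlib import Path

def logs_with_annotations(split_dir: Path) -> list[Path]:
    """Return only the log directories that contain annotations.feather,
    so unlabeled (test) logs are skipped."""
    return sorted(
        log for log in split_dir.iterdir()
        if (log / "annotations.feather").is_file()
    )
```

Running the visualization loop over this filtered list avoids the FileNotFoundError on the test split.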
Hello, thanks for your great contributions.
I found an error when parsing the map data.
For example for log_map_archive_f75d80d6-a1c8-4a0b-915a-7dc304d122b0.json:
```
Traceback (most recent call last):
  File "map_test.py", line 28, in <module>
    avm = ArgoverseStaticMap.from_json(Path(dataroot))
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/site-packages/av2/map/map_api.py", line 328, in from_json
    vector_lane_segments = {ls["id"]: LaneSegment.from_dict(ls) for ls in vector_data["lane_segments"].values()}
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/site-packages/av2/map/map_api.py", line 328, in <dictcomp>
    vector_lane_segments = {ls["id"]: LaneSegment.from_dict(ls) for ls in vector_data["lane_segments"].values()}
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/site-packages/av2/map/lane_segment.py", line 110, in from_dict
    left_mark_type=LaneMarkType(json_data["left_lane_mark_type"]),
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/enum.py", line 339, in __call__
    return cls.__new__(cls, value)
  File "/Users/wangmingkun/opt/anaconda3/envs/argo2/lib/python3.8/enum.py", line 663, in __new__
    raise ve_exc
ValueError: 'UNKNOWN' is not a valid LaneMarkType
```
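The ValueError fires because calling an Enum raises for any value that is not a declared member; the 0.2.0 wheel apparently predates the UNKNOWN member, so upgrading to a release (or source install) that includes it should resolve the crash. A toy stand-in enum (abbreviated, not av2's real member list) illustrating why adding the member fixes the lookup:

```python
from enum import Enum

class LaneMarkType(str, Enum):
    # Abbreviated stand-in for av2's enum; the real class has many more members.
    SOLID_WHITE = "SOLID_WHITE"
    DASHED_WHITE = "DASHED_WHITE"
    UNKNOWN = "UNKNOWN"

# With the member declared, the value lookup succeeds instead of raising.
print(LaneMarkType("UNKNOWN").name)  # UNKNOWN
```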
I'm trying to download the tbv dataset and it seems there are two instructions to do so. Do these two methods produce the same result?
One here:
s5cmd --no-sign-request cp s3://argoai-argoverse/av2/tbv/* target-directory
And another here, from https://github.com/argoai/argoverse2-api/blob/main/src/av2/datasets/tbv/README.md:
SHARD_DIR={DESIRED PATH FOR TAR.GZ files} s5cmd cp s3://argoai-argoverse/av2/tars/tbv/*.tar.gz ${SHARD_DIR}
When I try the first, I get the error "s5cmd is hitting the max open file limit allowed by your OS. Either increase the open file limit or try to decrease the number of workers with the '-numworkers' parameter".
When I try the second, I get the error "Error session: fetching region failed: NoCredentialProviders: no valid providers in chain. Deprecated."
Thanks very much for making this great data available to researchers.
I'm working with the Motion Forecasting dataset. I observed that, when computing the ADE or FDE metric, the unit of the position attributes doesn't matter. But I'm working on a project where I must compute the RMSE for both position (x, y) and speed (x, y), and then compare the results to other experiments' results. I can't do that for the position attributes, as I don't know their units. It's clearly mentioned in the paper and here on GitHub that velocity is in m/s, but for position (x, y) I didn't find any clear information about the unit (is it in meters, centimeters, millimeters, or something else?). Thanks for the assist.
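For what it's worth, the scenario positions are map (city-frame) coordinates expressed in meters, consistent with the velocities in m/s, so an RMSE computed on them comes out in meters. A small sketch of the computation on toy arrays (not real scenario data):

```python
import numpy as np

def rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """Root-mean-square error over all coordinates, in the input's units."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

pred_xy = np.array([[1.0, 2.0], [3.0, 4.0]])  # meters
gt_xy = np.array([[1.0, 2.0], [3.0, 3.0]])    # meters
print(rmse(pred_xy, gt_xy))  # 0.5
```

The same function applies unchanged to the velocity columns, yielding an RMSE in m/s.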
Hi, I am trying to use the Sensor dataset for the 3D MOT problem, but I cannot find the ID information for each bounding box.
Specifically, I tried to use the function get_labels_at_lidar_timestamp
in AV2SensorDataLoader to get a list of bounding boxes, which are Cuboid objects in AV2. However, it seems that Cuboids do not contain the object ID information.
Am I missing something, or is there another way to get the object IDs?
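One possible route, assuming the on-disk annotations.feather schema carries a track_uuid column alongside each cuboid row (even if the loader's Cuboid objects do not surface it): read the feather directly and group rows by that column. `tracks_by_uuid` and the toy frame below are illustrative, not part of the av2 API:

```python
import pandas as pd

def tracks_by_uuid(annotations: pd.DataFrame) -> dict[str, pd.DataFrame]:
    """Group cuboid rows by their track identifier.

    Assumes an annotations table with a 'track_uuid' column next to each
    cuboid's pose, size, and timestamp.
    """
    return {uuid: group for uuid, group in annotations.groupby("track_uuid")}

# Toy frame standing in for pd.read_feather(log_dir / "annotations.feather").
toy = pd.DataFrame({
    "track_uuid": ["a", "a", "b"],
    "timestamp_ns": [0, 1, 0],
})
print(sorted(tracks_by_uuid(toy)))  # ['a', 'b']
```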
Hey everyone,
I had a look into the motion forecasting dataset and there seems to be an issue with the trajectories of the focal agent.
According to the paper, the focal agent should always be observed over the full 11 seconds, which then corresponds to 110 observations:
"Within each scenario, we mark a single track as the “focal agent". Focal tracks are guaranteed to be fully observed throughout the duration of the scenario and have been specifically selected to maximize interesting interactions with map features and other nearby actors (see Section 3.3.2)"
However, this is not the case for some scenarios (~3% of the scenarios).
One example: Scenario '0215552f-6951-47e5-8cf6-3d1351d28957' of the validation set has a trajectory with only 104 observations.
Can you reproduce my problem?
Is this intended or can we expect this to be fixed in the near future?
Looking forward to hearing from you!
Best regards
SchDevel
Hi,
Will you release the WIMP code, since it is one of the baselines in the paper?
Thank you!
Hi,
Sorry for the delay.
Thank you for your help!
I went through the dataset API and was able to isolate individual point clouds.
Does this look sensible?
Here is the code snippet.
```python
from pathlib import Path

import numpy as np

from av2.datasets.sensor.sensor_dataloader import SensorDataloader

# `settings` is my own config object; the root points at the sensor split.
dataset = SensorDataloader(
    Path(settings.argoverse_dataset_root),
    with_annotations=True,
    with_cache=True,
)
for index, data_frame in enumerate(dataset):
    sweep = data_frame.sweep              # lidar info
    annotations = data_frame.annotations  # boxes
    pose = data_frame.timestamp_city_SE3_ego_dict
    # Both lidars combined into a single point cloud.
    pcl_joint = sweep.xyz
    # Append reflectances and laser numbers.
    pcl_joint = np.hstack([
        pcl_joint,
        np.expand_dims(sweep.intensity, -1),
        np.expand_dims(sweep.laser_number, -1),
    ])
    # Laser number [0, 31] -> top lidar, [32, 63] -> bottom lidar.
    r_up = np.where(pcl_joint[:, -1] < 32)
    pcl_up = pcl_joint[r_up]      # top lidar point cloud
    r_down = np.where(pcl_joint[:, -1] >= 32)
    pcl_down = pcl_joint[r_down]  # bottom lidar point cloud
```
Please let me know if this is the correct way, just to be sure.
Best Regards
Sambit
Hello, I got an error when submitting my results using evalai:
Error: {'detail': 'Given token not valid for any token type', 'code': 'token_not_valid', 'messages': [{'token_class': 'RefreshToken', 'token_type': 'refresh', 'message': 'Token is invalid or expired'}]}
I would like to confirm if the competition ends on June 13 ?
Thanks for your excellent work!
I would like to know how to evaluate 3D object detection on the validation split.
I also notice there is a PR about this. When will the stable version be released? I am looking forward to it!
Are there any baselines provided for av2 for the 3D object detection task?
Hi!
I was going through Argoverse 1 dataset and found this file: https://github.com/argoai/argoverse-api/blob/7c4952d4d080b5b173c25e187c46d3293245a67b/demo_usage/cuboids_to_bboxes.py#L105 that can render 3D bounding boxes on images as per README. Is there a similar function call in the av2 APIs as well? Alternatively, is there a documentation around how to do this?
Thank you!
Hi folks,
I am trying to run my model on Argoverse 2.0; it was previously trained using 1.1 and its corresponding API. However, after installing and cloning the API to check the tutorials, dataloaders, etc., this API looks quite a bit smaller than Argoverse 1.1's, and the organization also seems different (e.g. where are the CSVs with the trajectories?). Where can I find all the required documentation?
Hi, thank you for providing the outstanding dataset.
I am particularly interested in the motion dataset, and I have a question: is it possible to extract the route of the self-driving vehicle in each scenario?
Hi! I am about to make a submission to the motion forecasting challenge (https://eval.ai/challenge/1719/my-submissions) but got an error "KeyError(1)" in the stderr file. Is there any way to get a detailed error message?
In your map_tutorial.ipynb, I encountered some issues:
1. from av2.datasets.sensor_dataset.av2_sensor_dataloader import AV2SensorDataLoader
should be
from av2.datasets.sensor.av2_sensor_dataloader import AV2SensorDataLoader
2. Each instance of ArgoverseStaticMap.from_json()
should be ArgoverseStaticMap.from_map_dir()
3. And then args should be like:
from pathlib import Path
args = SimpleNamespace(**{"dataroot": Path(dataroot), "log_id": Path(log_id)})
Hi, I think the motion forecasting eval.ai leaderboard might be ranking submissions in the wrong order: it is higher-is-better now, whereas it should be lower-is-better, if I am not mistaken.
I wonder whether there are roundabout scenarios in this dataset?
Many thanks for your reply :)
Hi, since there is no information about how the lane markings were labeled in the Argoverse 2 dataset, I wonder whether these lane-marking labels were annotated in the originally collected point cloud (labeling in 3D space), or annotated on the image by projecting the point cloud onto the corresponding image.
I hope you can help me figure this out; thanks in advance :)
Hi,
While using the API function to load data, I observe that reading is quite slow. I think this is mainly due to the large number of annotations in each frame, which are CuboidList objects.
Is there a way we could use multiprocessing to make it faster,
something along the lines of the PyTorch DataLoader?
Can I just directly provide the generator handle to the torch DataLoader?
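A raw generator is not a valid input for torch's DataLoader (it expects a map-style Dataset with __len__/__getitem__, or an IterableDataset), but the IO-bound per-frame reads can also be parallelized without any framework. A minimal stdlib sketch with worker threads; `load_frame` is a hypothetical stand-in for the expensive lidar-plus-annotation read:

```python
from concurrent.futures import ThreadPoolExecutor

def load_frame(index: int) -> dict:
    """Stand-in for the expensive per-frame read (lidar sweep + cuboids)."""
    return {"index": index, "num_points": 100_000 + index}

# Fan the per-frame reads out across worker threads; map preserves order.
with ThreadPoolExecutor(max_workers=4) as pool:
    frames = list(pool.map(load_frame, range(8)))
print([f["index"] for f in frames])  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Threads suit feather/file IO; for CPU-heavy decoding, a ProcessPoolExecutor (or a proper torch Dataset with num_workers) would be the analogous choice.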
Best Regards
Sambit
Hello everyone,
Thank you for this amazing dataset! Is it possible to share the specific manufacturer and model of the laser scanners used?
I could not find any information about that. Thank you very much.
Best regards,
Till Beemelmanns
Hi,
How can I obtain the ego-vehicle tracks?
Thank you
The Submission Guidelines say nothing about the submission format. Could you give more details, or provide a submission sample?
Thank you very much!