coperception / coperception

An SDK for multi-agent collaborative perception.

License: Apache License 2.0

Python 98.85% Makefile 0.75% CMake 0.08% C++ 0.32%
autonomous-driving computer-vision deep-learning graph knowledge-distillation communication-networks v2v 3d-object-detection 3d-scene-understanding collaborative-learning

coperception's Introduction

CoPerception

An SDK for collaborative perception


Getting started:

Please refer to our docs website for detailed documentation: https://coperception.readthedocs.io/en/latest/

Installation

Download dataset

How to run the supported tasks

Supported models

Download checkpoints: Google Drive (US)
See README.md in ./tools/det, ./tools/seg, and ./tools/track for model performance under different tasks.

Supported datasets

Related works

Related papers

V2X-Sim dataset:

@article{Li_2021_RAL,
  title = {V2X-Sim: A Virtual Collaborative Perception Dataset and Benchmark for Autonomous Driving},
  author = {Li, Yiming and Ma, Dekun and An, Ziyan and Wang, Zixun and Zhong, Yiqi and Chen, Siheng and Feng, Chen},
  journal = {IEEE Robotics and Automation Letters},
  year = {2022}
}

DiscoNet:

@article{li2021learning,
  title={Learning distilled collaboration graph for multi-agent perception},
  author={Li, Yiming and Ren, Shunli and Wu, Pengxiang and Chen, Siheng and Feng, Chen and Zhang, Wenjun},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={29541--29552},
  year={2021}
}

V2VNet:

@inproceedings{wang2020v2vnet,
  title={V2vnet: Vehicle-to-vehicle communication for joint perception and prediction},
  author={Wang, Tsun-Hsuan and Manivasagam, Sivabalan and Liang, Ming and Yang, Bin and Zeng, Wenyuan and Urtasun, Raquel},
  booktitle={European Conference on Computer Vision},
  pages={605--621},
  year={2020},
  organization={Springer}
}

When2com:

@inproceedings{liu2020when2com,
  title={When2com: Multi-agent perception via communication graph grouping},
  author={Liu, Yen-Cheng and Tian, Junjiao and Glaser, Nathaniel and Kira, Zsolt},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4106--4115},
  year={2020}
}

Who2com:

@inproceedings{liu2020who2com,
  title={Who2com: Collaborative perception via learnable handshake communication},
  author={Liu, Yen-Cheng and Tian, Junjiao and Ma, Chih-Yao and Glaser, Nathan and Kuo, Chia-Wen and Kira, Zsolt},
  booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={6876--6883},
  year={2020},
  organization={IEEE}
}

OPV2V:

@inproceedings{xu2022opencood,
  author = {Runsheng Xu and Hao Xiang and Xin Xia and Xu Han and Jinlong Li and Jiaqi Ma},
  title = {OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication},
  booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2022}
}

coperception's People

Contributors

dekunma, kylema000, qgjyf2001, simbaforrest


coperception's Issues

ImportError: cannot import name 'print_log' from 'mmcv.utils'

I faced this error when running the make test command. I was able to resolve it by replacing

from mmcv.utils import print_log

with:

from mmengine import print_log

in coperception/coperception/utils/mean_ap.py.
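A minimal compatibility sketch of that change, assuming mmengine is installed alongside the newer mmcv (this is not the repo's current code):

try:
    from mmcv.utils import print_log  # older mmcv releases
except ImportError:
    from mmengine import print_log    # newer setups where mmcv no longer exports it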

Full error message:

(coperception) your_user@host:~/coperception/tools/det$ make test com=upperbound
python test_codet.py \
--data /home/your_user/data/V2X-Sim-2.0-mini/det/test \
--com upperbound \
--resume logs/upperbound/with_rsu/epoch_100.pth \
--tracking \
--logpath logs \
--apply_late_fusion 0 \
--visualization 0 \
--rsu 1
Traceback (most recent call last):
  File "test_codet.py", line 13, in <module>
    from coperception.utils.mean_ap import eval_map
  File "/home/your_user/coperception/coperception/utils/mean_ap.py", line 3, in <module>
    from mmcv.utils import print_log
ImportError: cannot import name 'print_log' from 'mmcv.utils' (/home/your_user/miniconda/envs/coperception/lib/python3.7/site-packages/mmcv/utils/__init__.py)
make: *** [Makefile:106: test] Error 1

LowerBound and UpperBound Test run issue

First of all, I would like to thank you all and appreciate the work that has been put into this project.

Regarding the issue: when trying to run the test for the BEV segmentation task using either the lower-bound or the upper-bound approach, I get an invalid key error for ['trans matrices'].


It would be greatly helpful if someone could assist me in figuring out what the issue actually is.

Thanking you in advance

Train/test scenes overlap

Originally, for the V2X-Sim dataset, we used scenes 0-79 as the training set and scenes 80-99 as the validation/test set.
We just found that the data of the first 80 scenes has some overlap with the last 20 scenes.
So here is the new split for train/val/test:
train:

82
25
95
0
2
6
7
9
10
11
12
13
14
15
16
17
18
20
21
22
23
24
26
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
64
66
67
69
70
71
72
73
74
75
77
80
81
83
85
86
87
88
89
90
93
94
98
99

validation:

1
3
4
63
65
68
76
78
79
84

test:

5
8
19
27
28
29
91
92
96
97

Currently, this issue can only be solved manually by moving scenes around, for example with a script like the one below:

import os
import shutil

# text file with one training-scene index per line (the list above)
scene_file = 'train_scenes.txt'

# collect the scene indices belonging to the training split
train_idxs = set()
with open(scene_file, 'r') as train_scene_file:
    for line in train_scene_file:
        train_idxs.add(int(line.strip()))

from_loc = '/scratch/dm4524/data/V2X-Sim-seg/all'
to_loc = '/scratch/dm4524/data/V2X-Sim-seg/train'

# move every file of every agent whose scene index is in the training split
for agent_dir in os.listdir(from_loc):
    to_dir = os.path.join(to_loc, agent_dir)
    agent_dir = os.path.join(from_loc, agent_dir)
    for f in os.listdir(agent_dir):
        scene_file_path = os.path.join(agent_dir, f)
        scene_idx = int(f.split('_')[0])
        if scene_idx in train_idxs:
            shutil.move(scene_file_path, to_dir)
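The same script can be reused for the other two splits by pointing scene_file at a (hypothetical) val_scenes.txt or test_scenes.txt containing the lists above and changing to_loc to the corresponding split directory.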

We will fix this issue in an upcoming release.

Question about V2VNet

Thanks for the great work! I noticed that the V2VNet paper includes a prediction function. Does the code also support prediction?

error in: make train_disco

Thank you for your great work and kind help with my previous issues. I have encountered one more error and would like some help.
I have trained the upperbound model and it trained well. Now I am trying to train DiscoNet and I get the following error:

[screenshot of the error]

Help on PixelWeightedFusionSoftmax

Hello,

I was going through the DiscoGraph paper and the DiscoNet code. Can you please help me understand whether PixelWeightedFusionSoftmax is the same as the collaboration-graph process for the student model mentioned in the paper?

Thanking you in advance

issue to train the tracker

Thank you for your great work. I would like to train the tracker via the command make sort, but the error shows that one file cannot be found. How can I find this file?


question about training results

Hi, could you tell me the number of epochs you chose in the DiscoNet paper? I downloaded the V2X-Sim 1.0 dataset and trained the model using the parameters from the paper, but I still can't reproduce the same results. Also, when I set kd_flag to false, I found that the AP didn't change much compared to kd_flag=True. I don't know whether something is wrong with how I run the code or whether the environment versions influence the results.

Besides, some of the pretrained models (100 epochs) on Google Drive (directory checkpoints/det) give different test APs from those in the documentation, while others match. I think test_codet.py has no errors, because some results are correct, but the v2v and disco detection pretrained models just get a low test AP. Maybe some checkpoint files were uploaded incorrectly?

Look forward to your reply. Thank you very much!

DLL load failed: The specified module could not be found while running example files in "coperception"

Hello,
I have installed "coperception" on Windows 10 by following the installation steps in the documentation. However, every time I try to execute one of the example files within the coperception folder, I get the error "DLL load failed: The specified module could not be found":

[screenshot of the error]

To solve this issue, I installed the latest supported Microsoft Visual C++ Redistributable, but that didn't work either. I have also tried updating pip and running pip install msvc-runtime in conda, and the error still persists.

Question about Downloading cam_int.zip

Hi. Last time I asked a question about downloading the full dataset because some damaged .zip files could not be unzipped, and thanks for your effort in uploading the whole dataset. I have successfully downloaded and unzipped all the .zip files except cam_int.zip (39 GB). Can you look into this problem? Looking forward to your reply, and I appreciate all your effort.

Issues generating the preprocessed dataset

Hi,
I have downloaded the V2X-Sim 2.0 dataset and installed CoPerception by following the installation steps in the documentation. However, I am having a problem generating the preprocessed data. Whenever I run the make create_file command, I get the following issue:

[screenshot of the error]

Does anyone know how to solve this issue?

About V2XSim

Hi, thanks for your great work!
I have one question: what information is stored in the .pcd.bin files? The x, y, z of each point plus other fields? In other words, how many fields does one point have, and what coordinate frame does the pcd file use?

I want to load the .pcd.bin file like this:

np.fromfile(pcd_path, dtype=np.float32)

Is that right? Or could you please tell me a better and faster way?

Looking forward to your reply, Thanks!
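For reference, a minimal loading sketch, assuming the files follow the nuScenes-style .pcd.bin convention of five float32 values per point (x, y, z, intensity, ring index); this is an assumption, so the field count should be checked against the file size:

import numpy as np

pcd_path = 'path/to/sample.pcd.bin'   # hypothetical path
raw = np.fromfile(pcd_path, dtype=np.float32)

num_fields = 5                        # assumed nuScenes-style layout; adjust if the assert fails
assert raw.size % num_fields == 0, 'unexpected number of fields per point'

points = raw.reshape(-1, num_fields)
xyz = points[:, :3]                   # point coordinates in the sensor frame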

Testing model not working

I have tried to run upperbound model testing with make test_no_rsu and make test, but I get the same error. The main error message is TypeError: forward() missing 1 required positional argument: 'bevs' in CoDetModule.py. I get the same error message when I run with rsu=1. The full error message is below. Let me know if you know the cause or have faced a similar issue.

I downloaded the checkpoints for upperbound from the preprocessed folder. For the test folder, I manually split the dataset.

(coperception) jju4@amax:~/coperception/tools/det$ make test_no_rsu com=upperbound
python test_codet.py \
--data /home/jju4/data/V2X-Sim-2.0-mini/det/test \
--com upperbound \
--resume logs/upperbound/no_rsu/epoch_100.pth \
--tracking \
--logpath logs \
--apply_late_fusion 0 \
--visualization 0 \
--rsu 0
Namespace(apply_late_fusion=0, box_com=False, com='upperbound', compress_level=0, data='/home/jju4/data/V2X-Sim-2.0-mini/det/test', gnn_iter_times=3, inference=None, kd_flag=0, kd_weight=100000, layer=3, log=False, logpath='logs', lr=0.001, nepoch=100, num_agent=6, nworker=1, only_v2i=0, pose_noise=0, resume='logs/upperbound/no_rsu/epoch_100.pth', resume_teacher='', rsu=0, tracking=True, visualization=0, warp_flag=0)
device number 4
flag upperbound
The number of val sequences: 100
The number of val sequences: 100
Validation dataset size: 100
Load model from logs/upperbound/no_rsu/epoch_100.pth, at epoch 100
([('/home/jju4/data/V2X-Sim-2.0-mini/det/test/agent1/0_0/0.npy',)], [('/home/jju4/data/V2X-Sim-2.0-mini/det/test/agent2/0_0/0.npy',)], [('/home/jju4/data/V2X-Sim-2.0-mini/det/test/agent3/0_0/0.npy',)], [('/home/jju4/data/V2X-Sim-2.0-mini/det/test/agent4/0_8/0.npy',)], [('/home/jju4/data/V2X-Sim-2.0-mini/det/test/agent5/0_0/0.npy',)])
Traceback (most recent call last):
  File "test_codet.py", line 586, in <module>
    main(args)
  File "/home/jju4/miniconda3/envs/coperception/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "test_codet.py", line 283, in main
    data, 1, num_agent=num_agent
  File "/home/jju4/coperception/coperception/utils/CoDetModule.py", line 431, in predict_all
    bev_seq, trans_matrices, num_all_agents, batch_size=batch_size
  File "/home/jju4/miniconda3/envs/coperception/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jju4/miniconda3/envs/coperception/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 171, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/jju4/miniconda3/envs/coperception/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 181, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/jju4/miniconda3/envs/coperception/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 89, in parallel_apply
    output.reraise()
  File "/home/jju4/miniconda3/envs/coperception/lib/python3.7/site-packages/torch/_utils.py", line 543, in reraise
    raise exception
TypeError: Caught TypeError in replica 1 on device 1.
Original Traceback (most recent call last):
  File "/home/jju4/miniconda3/envs/coperception/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
    output = module(*input, **kwargs)
  File "/home/jju4/miniconda3/envs/coperception/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'bevs'

make: *** [Makefile:117: test_no_rsu] Error 1

question about data format

Thank you for the wonderful work. I have a question about the shape of the ground-truth labels in a mini-batch. The shape is 20 x 256 x 256 x 6 x 2. I know 20 equals the batch size (4) times the number of agents (5), 256 x 256 refers to the BEV area, and 2 is the number of classes. But what does the fourth dimension (6) mean?


Question about downloading dataset

Hi. Thanks for your excellent work. I have a question about downloading the full dataset via Baidu Netdisk.
Some .zip files downloaded from Baidu Netdisk, such as dep_front1.zip, are damaged and cannot be used.
Looking forward to your answer. Thanks a lot!

Request for the code used to generate the V2X-Sim metadata

Hi,
I am currently conducting research on data sharing optimization in collaborative perception among Autonomous Vehicles (AVs). To facilitate my work, I have utilized the coperception library in conjunction with the V2X-sim dataset.

While the coperception library has been instrumental in my research, I now need the code used to generate the metadata for the V2X-Sim dataset. This is crucial for enabling compatibility with my own dataset and ensuring consistency in my analyses.

Questions for the class prediction and label annotation

I am confused about the class prediction and label annotation:

per_entry_cross_ent = _softmax_cross_entropy_with_logits(
    labels=target_tensor, logits=prediction_tensor
)

# convert [N, num_anchors] to [N, num_anchors, num_classes]
per_entry_cross_ent = per_entry_cross_ent.unsqueeze(-1) * target_tensor
prediction_probabilities = F.softmax(prediction_tensor, dim=-1)
p_t = (target_tensor * prediction_probabilities) + (
    (1 - target_tensor) * (1 - prediction_probabilities)
)

First, as shown above, the SoftmaxFocalClassificationLoss class in loss.py contains these lines. The _softmax_cross_entropy_with_logits function takes prediction_tensor and target_tensor as input. From my point of view, the prediction_tensor should first be squashed into [0, 1] using F.softmax and then used as the input, so I think this function should take prediction_probabilities and target_tensor as input instead.
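For context on the naming convention under discussion: losses named *_with_logits usually expect raw, unnormalized scores and apply softmax/log-softmax internally for numerical stability. A minimal sketch of that convention, assuming the repo's _softmax_cross_entropy_with_logits helper follows it:

import torch
import torch.nn.functional as F

def softmax_cross_entropy_with_logits(labels, logits):
    # standard "with logits" convention: log-softmax is applied inside the loss,
    # so the caller passes raw scores rather than probabilities
    return -(labels * F.log_softmax(logits, dim=-1)).sum(dim=-1)

logits = torch.randn(4, 8, 2)                                # [N, num_anchors, num_classes], raw scores
labels = F.one_hot(torch.randint(0, 2, (4, 8)), 2).float()   # one-hot targets
loss = softmax_cross_entropy_with_logits(labels, logits)     # shape [N, num_anchors]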

data = {
            "bev_seq": padded_voxel_points.to(device),
            "labels": label_one_hot.to(device),
            "reg_targets": reg_target.to(device),
            "anchors": anchors_map.to(device),
            "vis_maps": vis_maps.to(device),
            "reg_loss_mask": reg_loss_mask.to(device).type(dtype=torch.bool),
            "target_agent_ids": target_agent_ids.to(device),
            "num_agent": num_all_agents.to(device),
            "trans_matrices": trans_matrices.to(device),
        }

Second, I find that the shape of the variable label_one_hot is [6, 256, 256, 6, 2]. dim0 = 6 is the number of agents and dim1 = dim2 = 256 is the BEV grid size, but I am confused about what the last two dimensions, 6 and 2, stand for. Maybe I'm not very familiar with vehicle perception yet, but I hope you can help answer this.

Looking forward to your reply. Thanks a lot!

How to test model with sample data

Hello! I am working on implementing a white-box adversarial attack method using this library. I'm facing an issue with using sample data to apply perturbations.

My setup

I'm using Ubuntu. In the root /coperception directory, I have an adversarial.py file which loads a model checkpoint and the dataset and then is intended to apply the adversarial attack. I copied and pasted code from Tutorial > Individual Models.

Code

# copied code
data_shapes = {
  'bev_seq': ...
  ...
  'trans_matrices': ...
}

data = ...

model = FaFNet(
        config, 
        layer=collaboration_layer, 
        kd_flag=False, 
        num_agent=num_agent
)

config.flag = 'lowerbound' # [lowerbound / upperbound]
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

faf_module = FaFModule(
        model=model,
        teacher=None,
        config=config, 
        optimizer=optimizer,
        criterion=criterion, 
        kd_flag=False
)

loss, cls_loss, loc_loss = faf_module.step(data, batch_size, num_agent=num_agent)

faf_module.model.eval()

# Testing
checkpoint = torch.load('/path/to/checkpoint/file.pth')
faf_module.model.load_state_dict(checkpoint['model_state_dict'])
faf_module.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
faf_module.scheduler.load_state_dict(checkpoint['scheduler_state_dict'])

loss, cls_loss, loc_loss, result = faf_module.predict_all(data, batch_size=1, num_agent=num_agent)

# this is where I have to continue from

The code above works, except that it does not run the test on sample data: the 'result' output consists of empty 'pred', 'score', etc. So my goal is to select a data sample and test on it.

Here is what I think it should look like:

    agent_data_path = f'{preprocessed_data_path}/agent0'
    v2x_det_dataset = V2XSimDet(
        dataset_roots=[agent_data_path], split=split, config_global=config_global,
        config=config, val=True, bound='both', kd_flag=0, rsu=1)

    formatted_data = {
        'bev_seq': ...,
        'labels': ...,
        'reg_targets': ...,
        'reg_loss_mask': ...,
        'anchors': ...,
        'vis_maps': ...,
        'target_agent_ids': ...,
        'num_agent': ...,
        'trans_matrices': ...
    }

    # then i should be able to test this data, as well as apply adversarial attack before testing
    loss, cls_loss, loc_loss, result = faf_module.predict_all(data, batch_size=1, num_agent=num_agent)

I have made several attempts, but they haven't been successful so far. Could you provide an overview or guide on this, please?

About SyncNet

Dear authors, thank you for your great work. I came across this code repository while reading the SyncNet paper, and I would like to know whether it is possible to run SyncNet smoothly with it.

Code Questions for the loss

When I run make train, I always get a traceback: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 256, 32, 32]], which is output 0 of AsStridedBackward0, is at version 24; expected version 23 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). I went through a lot of information, but I still can't solve it.
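A generic illustration (not the repo's code) of the debugging step the error message suggests: enabling anomaly detection makes the backward error include a traceback pointing at the forward in-place operation that caused it.

import torch

# report the offending forward operation when backward fails
torch.autograd.set_detect_anomaly(True)

x = torch.randn(3, requires_grad=True)
y = torch.tanh(x)
y.add_(1.0)         # in-place edit of a tensor that tanh's backward needs
y.sum().backward()  # RuntimeError now includes a traceback to the add_ call above

The same flag can be set at the top of the training script to locate the in-place operation in the model code.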

Questions for DiscoNet training on V2X-Sim-seg

Hi, when I was trying to train the DiscoNet model on V2X-Sim-seg, I noticed that the forward function in DiscoNet was commented out, which caused the code to fall back to the forward function in FusionBase. After I uncommented the forward function in DiscoNet, I encountered a new issue:

AttributeError: 'super' object has no attribute 'build_feat_map_and_feat_list'.

It seems that there is no code implementation of the build_feat_map_and_feat_list function in FusionBase.

How can I solve this issue? Looking forward to your reply. Thank you!

Question about decoder

Hi, I have a question about the decoder implementation details in the backbone.
In my understanding, the encoder and decoder are usually deployed on different vehicles in an actual V2V scenario, which means the decoder cannot obtain information from the encoder.
However, in the decoder code, each layer first concatenates the previous feature map with the corresponding encoder feature map and then uses a 1 x 1 convolution to halve the number of channels. Is this reasonable?

Disco Test result query

Thank you for your support and kind responses to my issues.
I have one small question about my test result for DiscoNet with RSU.
I think there is an error in the average local mAP, as it is not actually the average over all 5 agents.
Can you please help me understand this?


How to put all agents on the same map?

Thanks very much for sharing this great work with us.

May I ask a question? For detection, each agent detects boxes on its own ego map. How can I put the detections of all agents on the same map? I tried to find the location parameters needed to connect the detection outputs, but they are too complicated and I cannot figure them out.
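A minimal sketch of the usual approach, assuming the per-agent trans_matrices provided in the data dict give a 4x4 homogeneous transform from each agent's ego frame into a shared reference frame (the exact convention should be checked against the dataset code):

import numpy as np

def boxes_to_common_frame(corners_ego, T_ego_to_common):
    # corners_ego: (N, 8, 3) box corners in one agent's ego frame
    # T_ego_to_common: (4, 4) homogeneous transform for that agent (assumed)
    n = corners_ego.shape[0]
    homo = np.concatenate([corners_ego, np.ones((n, 8, 1))], axis=-1)  # (N, 8, 4)
    out = homo @ T_ego_to_common.T  # row-vector form of applying T to each corner
    return out[..., :3]

# hypothetical usage: merge every agent's detections in the common frame
# all_corners = [boxes_to_common_frame(c, T) for c, T in zip(per_agent_corners, per_agent_T)]
# merged = np.concatenate(all_corners, axis=0)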

about the code for mapping the boxes

Hi!
Thanks for your great work.
I have a question about the code snippet that maps the bounding boxes to the local sensor coordinate frame:

import numpy as np
from pyquaternion import Quaternion

# caused by coordinate inconsistency of nuscene-toolkit
box.center[0] = - box.center[0]
# rotate the box 180 degrees about the x-axis, around its own center
shift = [box.center[0], box.center[1], box.center[2]]
box.translate(-np.array(shift))
box.rotate(Quaternion([0, 1, 0, 0]).inverse)
box.translate(np.array(shift))

Could you please explain in detail why this code is needed? Thanks a lot!
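For illustration only (not repo code), a small numerical sketch of what the snippet does geometrically: Quaternion([0, 1, 0, 0]) is a 180-degree rotation about the x-axis, and the translate/rotate/translate sequence applies it about the box's own center, so each corner's y and z offsets from the center are mirrored while the center stays put.

import numpy as np
from pyquaternion import Quaternion

q = Quaternion([0, 1, 0, 0])         # w=0, x=1, y=0, z=0: 180 degrees about the x-axis
center = np.array([2.0, 1.0, 0.5])   # hypothetical box center
corner = np.array([2.5, 1.2, 0.9])   # hypothetical box corner

p = corner - center                  # move the box center to the origin
p = q.inverse.rotate(p)              # rotate about the x-axis
p = p + center                       # move back
print(p)                             # [2.5, 0.8, 0.1]: the y/z offsets are mirrored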

Camera Only Coperception

Hi,

Thanks for the amazing work and for creating this library.
I have two questions:
1) I have a similar dataset collected in a V2X setup; how can I use this package? (A high-level guide would be great.)
2) Does the library support running collaborative perception with camera-only input?

Thanks,

Question about dataset preprocessing

Hi, thanks for your excellent work!
When I run the make create_data command, I get the following error:

[screenshot of the error]

My system is Ubuntu 22.04, my Python version is 3.7.16, and my CUDA version is 11.2.
Could you tell me what the problem is?

error in dataset preprocessing

Hi, thanks for your excellent work!
When I run the make create_data command, I get the following error:

[screenshot of the error]

Through debugging, I traced it back to line 57 in coperception/utils/nuscenes_pc_util.py and found that the calculated value of num_sensor is 0, so the body of the for loop is never executed, which causes trans_matrix_list (line 115) to be empty.


Could you tell me what the problem is?

Issue with Downloading the complete Dataset

@dekunma Thank you for your kind support with my previous issue.
I have downloaded the V2X-Sim 1.0 dataset, but while preprocessing it I get the same error as #12.
I then checked the size of the downloaded LiDAR data and found it to be 15.3 GB, but it should be 16.5 GB. I do not understand why the complete folder is not being downloaded.


So now, as suggested by @dekunma in #12, I am thinking of downloading the 'V2X-Sim 2.0 - Parsed datasets & model checkpoints for coperception SDK'. Can you please suggest how I can download it directly to my terminal? I cannot gdown it, because it is only a shared folder (not public).

about the support for the position of vehicles and RSU

Hello! My lab is doing research on the Internet of Vehicles, and V2X-Sim could be very useful for us.
However, my research needs the specific position of every vehicle and RSU, and I could not find this kind of information in the files or code in the repo.
If specific positions could be included in later versions, it would be very helpful for my research. Thanks a lot!

DiscoNet test result: error in IoU 0.5 & 0.7 results

I would like to thank you for your great work and consistent support.
I raised some concerns before in #33 and got a quick fix from @dekunma, for which I am very thankful.
I have noticed that the mAP values for each agent at IoU thresholds of 0.5 and 0.7 are exactly the same, which is unusual.

Also, please correct me if I am wrong, but the average local mAP over all 5 agents is somewhat different from what I calculated by hand:

average mAP@0.5 = (0.6579220294952393 + 0.7744371891021729 + 0.7319439053535461 + 0.7088317275047302 + 0.7414842844009399) / 5 = 0.7229238271903257

This is the test result I am referring to:

[screenshot of the test result]

Please have a look.
Thank you

Question about Tracking

Hi, thanks for your excellent work!
When I try to use the tracking tools with the V2X-SIM-v2.0-mini data, I don't know where the det_data_path (the path to the preprocessed detection dataset) is. Could you give me some guidance?

Not able to download checkpoints

I am trying to download the checkpoints, but the download gets stuck after the files are zipped. Sometimes it completes, but the folder contains broken files. Please help.

Issues while testing When2com

Hi,
I have downloaded the checkpoints and then used them to train my model using the following command:
make train com=when2com rsu=0

While trying to test the model using:
make test_no-rsu com=when2com rsu=0 wrap=0
I get the following error:

[screenshot of the error]

Something seems to be wrong with the inference mode.

error in V2X-Sim Tutorial

Hey, thank you for the great work. I am just following the tutorial to try the V2X-Sim 2.0 dataset, but I am facing this issue:

AssertionError: Error: there are 47638 label files but 47300 lidarseg records.

when running: nusc = NuScenes(version='v1.0-mini', dataroot=in_path, verbose=True).

Issue output:

Loading NuScenes tables for version v1.0-mini...
Loading nuScenes-lidarseg...

AssertionError Traceback (most recent call last)
/tmp/ipykernel_5481/2787126174.py in
----> 2 nusc = NuScenes(version='v1.0-mini', dataroot=in_path, verbose=True)

~/anaconda3/envs/nuscenes/lib/python3.7/site-packages/nuscenes/nuscenes.py in init(self, version, dataroot, verbose, map_resolution)
102 num_lidarseg_recs = len(getattr(self, lidar_task))
103 assert num_lidarseg_recs == num_label_files,
--> 104 f'Error: there are {num_label_files} label files but {num_lidarseg_recs} {lidar_task} records.'
105 self.table_names.append(lidar_task)
106 # Sort the colormap to ensure that it is ordered according to the indices in self.category.

AssertionError: Error: there are 47638 label files but 47300 lidarseg records.

I think the issue is caused by the number of files in the lidarseg folder not matching the number of lidarseg records in lidarseg.json. Could you help me figure this out? Thank you!

Could you provide the teacher checkpoint?

To train the DiscoNet model, there is an argument --resume_teacher. How do I get this teacher model? Could you provide the checkpoint or the way to train it?

Change the Perception Range

Thanks for your solid work on collaborative perception!
I have a question about the perception range: is there a way to modify it (default [-32, 32])?
If it is impossible to modify, that's fine too.
I am looking forward to your answer. Thanks a lot!

Question about dataset preprocessing

Hi, thanks for your excellent work!
When I run the make create_data command, I get the following error:

[screenshot of the error]

Through debugging, I traced it back to coperception\utils\mapping\mapping.py; when I run python mapping.py directly, I get the following error:

[screenshot of the error]

My system is Windows 11 x64 and my Python version is 3.7.12.
Is this because the .pyd file doesn't match my system?
Could you tell me what the problem is?

Dataset download error

Thanks for the great contribution.

I'm facing an error while downloading the V2X-Sim 2.0 dataset (cam_front.zip). The download terminates and fails with "Network error". I have tried several times and got the same issue.

I'm wondering whether other approaches are provided for downloading the dataset.

Thanks!
