
once_benchmark's People

Contributors

pointscoder


once_benchmark's Issues

What is the mounting height of the LiDAR?

Hi, what a great dataset!

Could you please tell me the height of the LiDAR coordinate system's origin above the ground? I searched the official website and the paper but didn't find this information.
I look forward to your reply.

#27

some question about semi-supervised learning

Hi @PointsCoder, thanks for sharing the source code. I have one question after reading it: for 3DIoUMatch, why do you use cls_score as the NMS score? Doesn't the IoU score give better results? Many detectors now use the IoU score as the NMS score.

About DistributedDataParallel model encapsulation

Hello! I'm wondering why you encapsulate the models for DistributedDataParallel as follows:

import torch.nn as nn

class DistStudent(nn.Module):
    def __init__(self, student):
        super().__init__()
        self.onepass = student

    def forward(self, ld_batch, ud_batch):
        # one forward pass covers both the labeled and unlabeled batch
        return self.onepass(ld_batch), self.onepass(ud_batch)

class DistTeacher(nn.Module):
    def __init__(self, teacher):
        super().__init__()
        self.onepass = teacher

    def forward(self, ld_batch, ud_batch):
        if ld_batch is not None:
            return self.onepass(ld_batch), self.onepass(ud_batch)
        else:
            return None, self.onepass(ud_batch)
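
For context, a plausible motivation (my assumption, not confirmed by the authors): DistributedDataParallel tracks gradient reduction per forward/backward iteration, so bundling both passes into a single wrapped forward keeps the bookkeeping to one iteration and one gradient synchronization. A minimal usage sketch, where student_net, rank, ld_batch, and ud_batch are hypothetical placeholders, not names from the repo:

import torch.nn as nn

# Hypothetical usage sketch: student_net, rank, ld_batch, ud_batch are
# placeholder names, not from the repo.
model = nn.parallel.DistributedDataParallel(
    DistStudent(student_net), device_ids=[rank])
ld_out, ud_out = model(ld_batch, ud_batch)  # one DDP forward for both batches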

Where is the code that transforms the original ONCE lidar point clouds to the unified normative coordinate frame?

Hi

The coordinate system of the original ONCE lidar point clouds has x pointing left and y pointing backwards. However, I can't find the code that explicitly transforms the ONCE lidar coordinates to the unified normative coordinates that the OpenPCDet documentation describes.

The __getitem__ method of the ONCE dataset:

def __getitem__(self, index):
    if self._merge_all_iters_to_one_epoch:
        index = index % len(self.once_infos)

    info = copy.deepcopy(self.once_infos[index])
    frame_id = info['frame_id']
    seq_id = info['sequence_id']
    points = self.get_lidar(seq_id, frame_id)

    if self.dataset_cfg.get('POINT_PAINTING', False):
        points = self.point_painting(points, info)

    input_dict = {
        'points': points,
        'frame_id': frame_id,
    }

    if 'annos' in info:
        annos = info['annos']
        input_dict.update({
            'gt_names': annos['name'],
            'gt_boxes': annos['boxes_3d'],
            'num_points_in_gt': annos.get('num_points_in_gt', None)
        })

    data_dict = self.prepare_data(data_dict=input_dict)
    data_dict.pop('num_points_in_gt', None)
    return data_dict

The get_lidar method of the ONCE dataset:

def get_lidar(self, sequence_id, frame_id):
    return self.toolkits.load_point_cloud(sequence_id, frame_id)

The load_point_cloud method of the ONCE toolkit:

def load_point_cloud(self, seq_id, frame_id):
    bin_path = osp.join(self.data_root, seq_id, 'lidar_roof', '{}.bin'.format(frame_id))
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    return points
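
None of the methods above rotate the points, which supports the question. For reference, a transform from the frame described here (x left, y backward, z up) to OpenPCDet's unified frame (x forward, y left, z up) would be a 90° rotation about z; the sketch below is my assumption about the intended mapping, not code from the repo.

import numpy as np

def once_to_unified(points):
    # Assumed mapping: unified x (forward) = -y_once, unified y (left) = x_once.
    out = points.copy()
    out[:, 0] = -points[:, 1]
    out[:, 1] = points[:, 0]
    return out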

Update the Documentation for using WSL 2

Hello. I had a lot of trouble installing this in my WSL workspace and spent several hours researching CUDA in WSL to get it working in my Ubuntu 18.04 setup. I would like to contribute to the docs markdown to share my findings and document them in a WSL section.

question about pointpainting

In the point_painting function, used_classes is [0, 1, 2, 3, 4, 5], which differs from the segmentation classes of an HRNet trained on Cityscapes. How should it be modified? Besides, the definition of "cyclist" differs from that of "rider". How should that be handled? (See the sketch below.)
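
One possible approach, purely as an illustration: collapse the Cityscapes trainIds into the six painting channels, approximating "cyclist" by "rider" plus the two-wheeler classes. The trainId values follow the standard Cityscapes convention; the channel assignments below are my assumption, not the repo's mapping.

import numpy as np

# Illustrative only: assumed Cityscapes trainId -> painting-channel mapping.
CITYSCAPES_TO_PAINT = {
    11: 1,                # person -> pedestrian channel
    12: 2, 17: 2, 18: 2,  # rider, motorcycle, bicycle -> cyclist (approximate)
    13: 3, 14: 3, 15: 3,  # car, truck, bus -> vehicle
}

def remap_segmentation(seg, background=0):
    out = np.full_like(seg, background)
    for train_id, paint_id in CITYSCAPES_TO_PAINT.items():
        out[seg == train_id] = paint_id
    return out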

PVRCNN implemented differently?

Hi,

Thanks for the great work!
I'm comparing this with 3DIoUMatch, one of the baselines used, and I have a question about sup_models/pvrcnn.yaml.
The FEATURES_SOURCE used by 3DIoUMatch's pv_rcnn_ssl.yaml is as follows:
FEATURES_SOURCE: ['bev', 'x_conv1', 'x_conv2', 'x_conv3', 'x_conv4', 'raw_points']

Whereas the FEATURES_SOURCE in the config files here is:
FEATURES_SOURCE: ['bev', 'x_conv3', 'x_conv4', 'raw_points']

What's the reason behind x_conv1 and x_conv2 not being used?

Have you tried Pyramid-RCNN and VOTR on ONCE dataset?

Your excellent Pyramid R-CNN and VOTR have good performance on the KITTI and Waymo datasets. Have you tried these two models on the ONCE dataset? Can you report their performance on ONCE? I tried to run both models on ONCE, but we do not have enough GPUs to train them; two 2080 Ti cards cannot handle it.

Lidar to road extrinsics information

Hello,

Thank you so much for your work and for making this data freely accessible. It is very impressive and awesome. I have a question though: do you have the calibration information of the lidar, e.g., the lidar-to-road calibration or the lidar-to-earth calibration?

best regards,
L

Implementing PV-RCNN

I am looking forward to using this framework for PV-RCNN. For that, I have made the following cfg file:

ioumatch3d_pvrcnn_small.txt

Although the pretraining stage ran successfully, I am stuck in the SSL training stage in the VSA module with some dimension errors.

I'd appreciate your help.

Organising the dataset & MM benchmark

Hi All,
Thank you for your excellent work!
I have 2 questions:

  1. You request to organize the dataset as follows:
ONCE_Benchmark
├── data
│   ├── once
│   │   │── ImageSets
|   |   |   ├──train.txt
|   |   |   ├──val.txt
|   |   |   ├──test.txt
|   |   |   ├──raw_small.txt (100k unlabeled)
|   |   |   ├──raw_medium.txt (500k unlabeled)
|   |   |   ├──raw_large.txt (1M unlabeled)
│   │   │── data
│   │   │   ├──000000
|   |   |   |   |──000000.json (infos)
|   |   |   |   |──lidar_roof (point clouds)
|   |   |   |   |   |──frame_timestamp_1.bin
|   |   |   |   |  ...
|   |   |   |   |──cam0[1-9] (images)
|   |   |   |   |   |──frame_timestamp_1.jpg
|   |   |   |   |  ...
|   |   |   |  ...
├── pcdet
├── tools

But on the dataset website it does not look like that. In addition, I extracted the train annotation tar file and did not find any .txt file. Could you please check it or update the "getting started" file and the Dataset object?

  2. Did you perform a benchmark on a multimodal model? If so, which model did you use?

Cheers,
A

About SHIFT_COOR parameter in waymo-to-once setting

In tools/cfgs/once_models/uda_models/waymo_to_once/secondiou_waymo_origin.yaml, the SHIFT_COOR parameter is set to [0, 0, 0], with [0, 0, 1.6] noted alongside it. However, the origin of coordinates in Waymo point clouds is on the ground, while in ONCE point clouds it is at the LiDAR sensor. There is therefore an origin shift between the two datasets, so why do you use [0, 0, 0]? Thanks!
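
For reference, a SHIFT_COOR offset of this kind is typically just a constant translation added to every point before detection; a minimal sketch of the mechanism (my assumption, not the repo's exact code):

import numpy as np

SHIFT_COOR = [0.0, 0.0, 1.6]  # illustrative: lift a ground-level origin up to the sensor frame

def shift_points(points, shift=SHIFT_COOR):
    points = points.copy()
    points[:, :3] += np.array(shift, dtype=points.dtype)
    return points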

About the ONCE dataset

I have downloaded and unpacked the ONCE dataset, but I can't find train.txt/val.txt. Could you please share the ImageSets directory? I want to reproduce the benchmark. Thanks a lot!

│   ├── once
│   │   │── ImageSets
|   |   |   ├──train.txt
|   |   |   ├──val.txt
|   |   |   ├──test.txt
|   |   |   ├──raw_small.txt (100k unlabeled)
|   |   |   ├──raw_medium.txt (500k unlabeled)
|   |   |   ├──raw_large.txt (1M unlabeled)

Merge request with OpenPCDet

Hi,

I've been working on merging your codebase with the official OpenPCDet from OpenMMLab for the supervised part, so that OpenPCDet would support supervised training on ONCE directly.

I would like to open a PR of my merge on OpenPCDet when it's ready, but I will wait for your agreement. Also, if you have specific requirements on license, citation, or parts of the code I should not include, don't hesitate to tell me, and we can see with OpenMMLab what can be done.

About the results of ONCE Benchmark

How can I obtain results on the ONCE test split now that the "ICCV 2021 Workshop SSLAD Track 2 - 3D Object Detection" competition is over?
Thanks!

About projecting 3D boxes on image planes

Thanks for sharing your code.
I obtained 2D boxes by projecting the 3D boxes onto the image planes, but the result is different from the official xxxxxx.json file.
The key code is:

def project_lidar_to_image(self, seq_id, frame_id):
    points = self.load_point_cloud(seq_id, frame_id)
    split_name = self._find_split_name(seq_id)
    frame_info = getattr(self, '{}_info'.format(split_name))[seq_id][frame_id]
    points_img_dict = dict()
    for cam_name in self.__class__.camera_names:
        calib_info = frame_info['calib'][cam_name]
        cam_2_velo = calib_info['cam_to_velo']
        # pad K to 3x4 so it can act on the homogeneous (N, 4) camera-frame points
        cam_intri = np.hstack([calib_info['cam_intrinsic'],
                               np.zeros((3, 1), dtype=np.float32)])
        point_xyz = points[:, :3]
        points_homo = np.hstack(
            [point_xyz, np.ones(point_xyz.shape[0], dtype=np.float32).reshape((-1, 1))])
        points_lidar = np.dot(points_homo, np.linalg.inv(cam_2_velo).T)
        mask = points_lidar[:, 2] > 0  # keep points in front of the camera
        points_lidar = points_lidar[mask]
        points_img = np.dot(points_lidar, cam_intri.T)  # (u*z, v*z, z); divide by z for pixels
        points_img_dict[cam_name] = points_img
    return points_img_dict
Do you do additional processing after obtaining u, v values, such as undistortion?
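
For what it's worth, if the official 2D boxes were generated with the lens distortion model applied, that alone could explain the mismatch. A hedged sketch using OpenCV, assuming the calibration provides per-camera distortion coefficients (the parameter names here are my own, not the devkit's confirmed schema):

import cv2
import numpy as np

def project_with_distortion(points_cam, cam_intrinsic, dist_coeffs):
    # points_cam: (N, 3) points already in the camera frame with z > 0.
    # cv2.projectPoints applies the distortion model before mapping to pixels.
    uv, _ = cv2.projectPoints(points_cam.astype(np.float64),
                              np.zeros(3), np.zeros(3),  # identity pose
                              np.asarray(cam_intrinsic, dtype=np.float64),
                              np.asarray(dist_coeffs, dtype=np.float64))
    return uv.reshape(-1, 2)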

semi-supervised learning with gtaug?

Hi, after reading the semi-supervised config files, I noticed that for all semi-supervised methods you didn't use the gt-aug augmentation for the labeled data. Did you already run experiments and find that gt-aug didn't give an improvement in semi-supervised learning, or was it just for convenience?

EMA for 3D IoU Match

Hi, I was reading this project recently and found something different from the original implementation of 3DIoUMatch. In the 3DIoUMatch paper, EMA is applied to update the teacher model, which is not used in your project. I also noticed that in the ONCE paper, the description of 3DIoUMatch misses the EMA part in Figure 3. I was wondering whether this is the reason for the less competitive performance of 3DIoUMatch mentioned in the ONCE paper. Anyway, thanks for this awesome work.
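
For reference, the EMA teacher update the issue refers to is usually a one-liner applied after each student optimizer step; a minimal illustration of the mechanism (not code from this repo):

import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # teacher <- decay * teacher + (1 - decay) * student, parameter-wise
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)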

Download data with 10hz lidar sampling rate

Hi, thanks for your work with this dataset.

I'm currently working on lidar data in the ONCE training split dataset downloaded from here. There are 4961 labelled out of 9977 total frames in the training split.

From my understanding of the training split, the lidar data runs at 2 FPS (which matches the 9977 frames, since consecutive timestamps differ by ~500 ms), and the annotations at 1 FPS.

I'm currently working on a 3d object tracking approach with ONCE training dataset which would benefit from having higher frequency (10Hz) point cloud sampling rate. Is there anywhere I can download the full 10Hz frames for the training split?

Non-contiguous tensor error in backward when using DCN with CenterPoint?

File "/home/xxx/workspace/ONCE_Benchmark/pcdet/ops/dcn/deform_conv.py", line 87, in backward
deform_conv_cuda.deform_conv_backward_parameters_cuda(
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

Do I need to replace all of the view calls in there with reshape?
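
Typically only the .view calls that receive non-contiguous tensors need changing, not every one; a small reproduction of the error and the two standard fixes:

import torch

x = torch.randn(2, 3, 4).transpose(1, 2)  # non-contiguous strides
# x.view(2, 12)  # would raise: "view size is not compatible with input tensor's size and stride"
y1 = x.reshape(2, 12)             # works; copies only when needed
y2 = x.contiguous().view(2, 12)   # equivalent explicit fix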

About ImageSets files

I have downloaded and unpacked the whole ONCE dataset and cloned this repo, but I still can't find the ImageSets folder or any description or information about it. How can I get the ImageSets folder? Thanks a lot!

Reproduce SECOND evaluation results

Hi, I trained SECOND, and my best evaluation (screenshot omitted) is about 1.5% higher than the original benchmark. I am not sure whether this fluctuation is normal.
I used 3 GPUs with a batch size of 4 per GPU; the random seed was fixed to 666.

Doubts on Noisy Student

Hi,

First of all, thank you for the great work of putting different semi-supervised methods on lidar point clouds into a single repo.

I have a question on the Noisy Student training. From the Noisy Student config https://github.com/PointsCoder/ONCE_Benchmark/blob/master/tools/cfgs/once_models/semi_learning_models/noisy_student_second_large.yaml, it does not seem to add dropout (DP_RATIO) to the model, but the Noisy Student paper suggests adding it. Am I missing something?

Also, the Noisy Student training seems to run for only one cycle, instead of three cycles as originally done in the paper. Could you please let me know whether the multi-cycle experiment lowered performance compared to a single cycle?

Comparing the Noisy Student config to the Pseudo Labels config, the only difference appears to be that the random augmentations random_world_flip and random_world_rotation are not applied to the student model in Pseudo Labels. Could you please confirm that this is the only difference?

Looking forward to your reply.

Thank You !!
Anuj

When will the permanent phase start?

@PointsCoder Thank you for the excellent ONCE dataset and baseline. Now the 'ICCV 2021 Workshop SSLAD Track 2 - 3D Object Detection' competition is over. When will the permanent phase start, and when will the results of the test set be available? Thanks a lot.

3D IOU match, batch_dict['roi_ious'] or batch_dict['roi_scores']?

Hello! First of all, thank you for your excellent work.
In 3DIoUMatch-PVRCNN, lines 245-246, 265-276, and 291:

if self.training:   # lines 245-246
    sem_scores = batch_dict['roi_scores'][index]

if self.training:   # lines 275-276
    final_sem_scores = torch.sigmoid(sem_scores[selected])

record_dict['pred_sem_scores'] = final_sem_scores   # line 291

It uses batch_dict['roi_scores'], but here you use batch_dict['roi_ious']. I wonder what the difference between them is?

Besides, there seems to be no filtering step for each class score; 3DIoUMatch-PVRCNN has code like

valid_inds = pseudo_score > conf_thresh.squeeze()

May I add it myself?
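
A small self-contained illustration of that kind of per-class filtering, mirroring the line quoted above (the thresholds and 1-indexed labels are assumptions for the example):

import torch

pseudo_score = torch.tensor([0.9, 0.3, 0.6])
pred_labels = torch.tensor([1, 2, 3])         # assumed 1-indexed class ids
class_thresh = torch.tensor([0.5, 0.4, 0.4])  # illustrative per-class thresholds
valid_inds = pseudo_score > class_thresh[pred_labels - 1]
print(valid_inds)  # tensor([ True, False,  True])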

Thanks.

Annotation File

Hi All,
Thank you for this great repo!
To make this discussion easier, let's take for example the file 000080.json, which lies under <PATH TO DATA>/data/once/data/000080.
This file contains 3 keys, and under frames we can find the annotations. As far as I can see, every 2nd entry does not contain an annos field. Is it supposed to be like that?
If not, how should I handle it? (See the snippet below for how I'm checking.)
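
A small checking snippet based on the structure described above (the 'frames' and 'annos' key names follow this issue's description, not a verified schema):

import json

with open('data/once/data/000080/000080.json') as f:
    info = json.load(f)

frames = info['frames']
with_annos = [fr for fr in frames if 'annos' in fr]
print(len(frames), len(with_annos))  # roughly 2:1 if every 2nd frame lacks annos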

Cheers,
A

Mean teacher's AP is lower than the paper's

Mean teacher results, tested on the validation set and trained with the small unlabeled set:

| AP@50      | overall | 0-30m | 30-50m | 50m-inf |
|------------|---------|-------|--------|---------|
| Vehicle    | 73.31   | 84.82 | 68.15  | 52.92   |
| Pedestrian | 31.88   | 36.56 | 27.27  | 18.93   |
| Cyclist    | 61.41   | 73.01 | 54.92  | 37.50   |
| mAP        | 55.53   | 64.80 | 50.12  | 36.45   |

The AP is lower than in the paper, especially in the 0-30m range.
