This project was motivated by the absence of a readily available repository for converting the LiDAR files of the KITTI 3D object detection benchmark into sparse depth maps. Existing repositories that provide dense depth maps typically release only the outputs of specific depth completion networks, which limits their usefulness for other networks. The key features of this repository are minimal dependencies and straightforward usage, making it easily accessible.
conda create -n kitti_sparse_depth python=3.10
conda activate kitti_sparse_depth
pip install numpy
pip install pillow
pip install opencv-python
pip install tqdm
Alternatively, you can build the environment from the provided file with the following command:
conda env create -f kitti_sparse.yaml
You can download the KITTI 3D Object Detection benchmark from here. After unzipping, the data folder will look like this:
└── kitti
    ├── testing
    │   ├── calib
    │   ├── image_2
    │   └── velodyne
    └── training
        ├── calib
        ├── image_2
        ├── label_2
        └── velodyne
Then link the dataset into the data folder:
cd data
ln -s $YOUR_KITTI_DATASET kitti
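Each frame's calib file maps the LiDAR points into the camera image. As a rough sketch (not the repository's actual code), a KITTI object-detection calib file can be parsed like this; the function name read_kitti_calib is our own, and we assume the standard "key: value value ..." layout of the devkit calib files:

```python
import numpy as np

def read_kitti_calib(path):
    """Parse a KITTI object-detection calib file into named matrices.

    Each line looks like 'P2: v0 v1 ... v11'. R0_rect carries 9 values
    (a 3x3 rectification matrix); the projection matrices P0..P3 and the
    Tr_* transforms carry 12 values (3x4 matrices).
    """
    mats = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue  # skip blank lines
            key, vals = line.split(":", 1)
            arr = np.array([float(x) for x in vals.split()])
            mats[key.strip()] = arr.reshape(3, 3) if arr.size == 9 else arr.reshape(3, 4)
    return mats
```

The matrices of interest for depth-map generation are P2 (left color camera projection), R0_rect, and Tr_velo_to_cam.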
Now you can generate the sparse depth maps for the training split:
python converter/lidar_to_depth.py --split training
For the testing split, use this command:
python converter/lidar_to_depth.py --split testing
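Conceptually, the conversion projects each LiDAR point through the calibration matrices into the image plane and writes its camera-frame depth at the resulting pixel; pixels with no return stay zero. The sketch below illustrates that projection under the usual KITTI conventions (it is our own illustration, not the repository's code; the function name and argument order are assumptions):

```python
import numpy as np

def lidar_to_sparse_depth(points, P2, R0_rect, Tr_velo_to_cam, h, w):
    """Project (n, 3) velodyne xyz points into an (h, w) sparse depth map."""
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])        # homogeneous (n, 4)
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)          # rectified cam frame (3, n)
    cam = cam[:, cam[2] > 0]                            # keep points in front of camera
    proj = P2 @ np.vstack([cam, np.ones((1, cam.shape[1]))])
    u = np.round(proj[0] / proj[2]).astype(int)         # pixel column
    v = np.round(proj[1] / proj[2]).astype(int)         # pixel row
    depth = cam[2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)        # keep pixels inside the image
    depth_map = np.zeros((h, w), dtype=np.float32)
    # write far points first so the nearest point wins on pixel collisions
    order = np.argsort(-depth[ok])
    depth_map[v[ok][order], u[ok][order]] = depth[ok][order]
    return depth_map
```

Note that the raw velodyne .bin files store float32 (x, y, z, reflectance) tuples; only the xyz columns are needed here.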
The results will be saved in the data/kitti/training/depth_sparse and data/kitti/testing/depth_sparse folders, respectively.
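For reference, sparse depth maps in the KITTI depth-completion ecosystem are conventionally stored as 16-bit PNGs holding depth in metres times 256, with zero marking pixels that received no LiDAR return. A minimal sketch of that convention, assuming this repository follows it (the helper name is our own):

```python
import numpy as np
from PIL import Image

def save_sparse_depth_png(depth_map, path):
    """Save a float depth map (metres) as a KITTI-style 16-bit PNG."""
    d = np.clip(depth_map * 256.0, 0, 65535).astype(np.uint16)
    Image.fromarray(d).save(path)
```

Reading it back is the inverse: load the PNG as uint16 and divide by 256.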
The code is modified from SFD; thanks for their great work!