
PSMNet-FusionX3 : LiDAR-Guided Deep Learning Stereo Dense Matching On Aerial Images

Introduction

This repository accompanies the paper presented at the CVPR 2023 Photogrammetric Computer Vision Workshop.

The 8-minute presentation can be found here.

I also prepared a poster; because I could not attend in person due to a visa issue, the poster can be found here.

Dataset

In the paper, we use two datasets with high-density LiDAR.

| Dataset | Image GSD (cm) | LiDAR density ($pt/m^2$) |
| --- | --- | --- |
| DublinCity | 3.4 | 250-348 |
| Toulouse2020 | 5 | 50 |

DublinCity Dataset

DublinCity is an open dataset; the original aerial images and LiDAR point cloud can be downloaded, but the original dataset is very large.

*Original DublinCity coverage*

Because the original dataset uses Terrasolid for the orientation, the original orientation is not accurate, so a registration step is mandatory. The experiment area is shown below:

*DublinCity experiment coverage*

Toulouse2020 Dataset

Toulouse2020 is a dataset collected by IGN (the French national mapping agency) for the AI4GEO project; the original dataset is very large.

*Original Toulouse2020 coverage*

Because the whole area is too large to register the images and LiDAR, we select the city centre of Toulouse. The experiment area is shown below:

*Toulouse2020 experiment coverage*

Dataset generation

The data is generated using our previous work; a detailed introduction can also be found on GitHub. The training and testing split is shown below:

*DublinCity and Toulouse2020 training/testing split*

We will also publish the dataset for public use. Because the original dataset is too large, at present we only publish the training and testing data used in the paper.

All the datasets are hosted on Zenodo; the download site is here.

Method

In the paper, we propose a method based on PSMNet1. Following guided stereo work in computer vision such as the work2, we use a TIN expansion of the sparse guidance for remote sensing data. The network is shown below:

*PSMNet-FusionX3 network architecture*

In the experiments, we compare our method with GCNet3, PSMNet1, Guided Stereo4, and GCNet-CCVNorm5. There is no official code for GCNet, so the GCNet results are obtained with the GCNet-CCVNorm implementation.
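The TIN expansion mentioned above densifies the sparse LiDAR guidance by interpolating it over a triangulated irregular network. Below is a minimal sketch of such a TIN interpolation with NumPy/SciPy, assuming the sparse guidance is stored as a disparity map with zeros at pixels without a LiDAR hint; the function name and the zero convention are illustrative and not the exact implementation used in the paper.

```python
# Minimal sketch of TIN-based expansion of sparse LiDAR guidance into a dense
# disparity map. Assumption: zeros mark pixels without a LiDAR hint.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def tin_expand(sparse_disp: np.ndarray) -> np.ndarray:
    """Interpolate sparse disparity hints over a Delaunay triangulation (TIN)."""
    rows, cols = np.nonzero(sparse_disp)          # pixels with a LiDAR hint
    values = sparse_disp[rows, cols]

    # Linear interpolation over the triangulation of the hint pixels.
    interp = LinearNDInterpolator(np.stack([rows, cols], axis=1), values)

    h, w = sparse_disp.shape
    grid_r, grid_c = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    dense = interp(grid_r, grid_c)                # NaN outside the convex hull

    return np.nan_to_num(dense, nan=0.0)          # leave uncovered pixels at 0
```

In practice a maximum triangle edge length is often enforced so that large gaps in the LiDAR coverage are not filled with unreliable values; that refinement is omitted from the sketch.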

Code

PSMNet-FusionX3 and the methods compared in the paper are all provided here.

The pre-trained models will also be made available.

For the other methods, because our dataset differs from the usual computer vision datasets, we also put the revised code in this repository.

Guided Stereo Matching

We revise the official code to adapt it to the remote sensing dataset; the details can be found in the folder.
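For reference, the core of the Guided Stereo Matching idea4 is a Gaussian modulation of the cost volume around the LiDAR disparity hints. The sketch below illustrates that modulation for a generic PyTorch cost volume; the tensor layout and the k and c values are assumptions for illustration, not the revised code in this repository.

```python
# Sketch of the Gaussian cost-volume modulation from Guided Stereo Matching,
# for a generic cost volume of shape (B, D, H, W). k and c are illustrative.
import torch

def guided_modulation(cost, hints, valid, k: float = 10.0, c: float = 1.0):
    """Multiply the cost volume by a Gaussian centred on the LiDAR disparity hint.

    cost : (B, D, H, W) matching cost volume
    hints: (B, H, W) sparse disparity hints
    valid: (B, H, W) 1 where a hint exists, 0 elsewhere
    """
    b, d, h, w = cost.shape
    disp = torch.arange(d, device=cost.device, dtype=cost.dtype).view(1, d, 1, 1)

    hints = hints.unsqueeze(1)                     # (B, 1, H, W)
    valid = valid.unsqueeze(1)                     # (B, 1, H, W)

    gauss = k * torch.exp(-(disp - hints) ** 2 / (2 * c ** 2))
    modulation = 1 + valid * gauss                 # unguided pixels stay unchanged
    return cost * modulation
```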

GCNet-CCVNorm

We revise the official code to adapt it to the remote sensing dataset; the details can be found in the folder.
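For context, CCVNorm5 conditions the normalization of the cost volume on the LiDAR input. The snippet below is a heavily simplified sketch of that idea: the affine parameters of the normalization are looked up per pixel from the discretized LiDAR disparity, with a fallback entry for pixels without LiDAR. It is a conceptual illustration only, not the formulation of the original paper or of the revised code here.

```python
# Heavily simplified sketch of conditional cost volume normalization:
# normalization affine parameters selected by the discretized LiDAR disparity.
import torch
import torch.nn as nn

class SimpleCategoricalCCVNorm(nn.Module):
    def __init__(self, channels: int, max_disp: int):
        super().__init__()
        self.norm = nn.BatchNorm3d(channels, affine=False)
        # One (gamma, beta) pair per discretized LiDAR disparity,
        # plus one extra entry (index max_disp) for "no LiDAR here".
        self.gamma = nn.Embedding(max_disp + 1, channels)
        self.beta = nn.Embedding(max_disp + 1, channels)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)
        self.max_disp = max_disp

    def forward(self, cost, lidar_disp, valid):
        # cost: (B, C, D, H, W); lidar_disp, valid: (B, H, W)
        x = self.norm(cost)
        idx = lidar_disp.long().clamp(0, self.max_disp - 1)
        idx = torch.where(valid.bool(), idx, torch.full_like(idx, self.max_disp))
        gamma = self.gamma(idx).permute(0, 3, 1, 2).unsqueeze(2)  # (B, C, 1, H, W)
        beta = self.beta(idx).permute(0, 3, 1, 2).unsqueeze(2)
        return gamma * x + beta
```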

PSMNet-FusionX3

We also release the code of our method; the details can be found in the folder.

Dataset preprocessing

Because the input guidance is sampled from the original dense disparity, and the TIN-based interpolation is applied to these samples, this step is processed before training; the details can be found in the folder.
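As an illustration of the sampling step, the sketch below keeps a random fraction of the valid ground-truth disparity pixels as sparse guidance. The sampling density, the zero convention for invalid pixels, and the function name are assumptions for illustration only; the actual preprocessing (including the TIN interpolation of the samples) is in the linked folder.

```python
# Sketch: sample sparse guidance from the dense ground-truth disparity.
# Assumptions: 0 marks invalid pixels; a uniform random 5% sampling density.
import numpy as np

def sample_guidance(dense_disp: np.ndarray, density: float = 0.05,
                    seed: int = 0) -> np.ndarray:
    """Keep a random fraction of valid disparity pixels as sparse guidance."""
    rng = np.random.default_rng(seed)
    valid = dense_disp > 0                            # assume 0 marks invalid pixels
    keep = rng.random(dense_disp.shape) < density     # ~5% of pixels kept
    return np.where(valid & keep, dense_disp, 0.0)
```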

BibTeX Citation

@inproceedings{wu2023psmnet,
  title={PSMNet-FusionX3: LiDAR-Guided Deep Learning Stereo Dense Matching on Aerial Images},
  author={Wu, Teng and Vallet, Bruno and Pierrot-Deseilligny, Marc},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6526--6535},
  year={2023}
}

Feedback

If you have any problems, contact Teng Wu ([email protected]).

Footnotes

  1. Chang, Jia-Ren, and Yong-Sheng Chen. "Pyramid stereo matching network." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.

  2. Huang, Yu-Kai, et al. "S3: Learnable sparse signal superdensity for guided depth estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.

  3. Kendall, Alex, et al. "End-to-end learning of geometry and context for deep stereo regression." Proceedings of the IEEE International Conference on Computer Vision. 2017.

  4. Poggi, Matteo, et al. "Guided stereo matching." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

  5. Wang, Tsun-Hsuan, et al. "3D LiDAR and stereo fusion using stereo matching network with conditional cost volume normalization." 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019.
