[CVPR2024] The official implementation of "MoCha-Stereo: Motif Channel Attention Network for Stereo Matching".
MoCha-Stereo: Motif Channel Attention Network for Stereo Matching
Ziyang Chen†, Wei Long†, He Yao†, Yongjun Zhang✱, Bingshu Wang, Yongbin Qin, Jia Wu
CVPR 2024
Correspondence: [email protected]; [email protected]✱
Grateful to Prof. Wenting Li, Prof. Huamin Qu, and anonymous reviewers for their comments on this work.
Demo.mp4
@inproceedings{chen2024mocha,
  title={MoCha-Stereo: Motif Channel Attention Network for Stereo Matching},
  author={Chen, Ziyang and Long, Wei and Yao, He and Zhang, Yongjun and Wang, Bingshu and Qin, Yongbin and Wu, Jia},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={27768--27777},
  year={2024}
}
- [CVPR2024] V1 version
- Paper
- Code of MoCha-Stereo
- V2 version
- Preprint manuscript
- Code of MoCha-V2
MoCha-Stereo will be released in this repository in July 2024.
For researchers at Guizhou University: the code is already available in our internal repository, so there is no need to contact me for it; simply request access to that repository.
- This project borrows code from IGEV, DLNR, RAFT-Stereo, and GwcNet. We thank the original authors for their excellent work!
- This project is supported by the Science and Technology Planning Project of Guizhou Province, Department of Science and Technology of Guizhou Province, China (Project No. [2023]159).
- This project is supported by the Natural Science Research Project of the Guizhou Provincial Department of Education, China (QianJiaoJi[2022]029, QianJiaoHeKY[2021]022).