
tencentyouturesearch / adamatcher

Code for the CVPR 2023 paper: Adaptive Assignment for Geometry Aware Local Feature Matching

License: Other

Python 97.69% Shell 2.31%

adamatcher's Introduction

AdaMatcher: Adaptive Assignment for Geometry Aware Local Feature Matching


Adaptive Assignment for Geometry Aware Local Feature Matching Dihe Huang*, Ying Chen*, Yong Liu, Jianlin Liu, Shang Xu, Wenlong Wu, Yikang Ding, Fan Tang, Chengjie Wang CVPR 2023

(Figure: network architecture)

Installation

For environment and data setup, please refer to LoFTR.

Run AdaMatcher

Download Pretrained model

We provide a model pretrained on the MegaDepth dataset; you can download it from weights.

Download Datasets

You first need to set up the testing subsets of ScanNet, MegaDepth, and YFCC from driven.

For training, we use the same data as LoFTR.

Megadepth validation

To evaluate at different scales, edit megadepth_test_scale_1000.

# with shell script
bash ./scripts/reproduce_test/outdoor_ada_scale.sh

Reproduce the testing results on the YFCC dataset

# with shell script
bash ./scripts/reproduce_test/yfcc100m.sh

Training

We train AdaMatcher on the MegaDepth dataset following LoFTR. The results can be reproduced when training with 32 GPUs. Please run the following command:

sh scripts/reproduce_train/outdoor_ada.sh

Acknowledgement

This repository is developed from LoFTR, and we are grateful to its authors for their implementation.

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{Huang2023adamatcher,
  title={Adaptive Assignment for Geometry Aware Local Feature Matching},
  author={Huang, Dihe and Chen, Ying and Liu, Yong and Liu, Jianlin and Xu, Shang and Wu, Wenlong and Ding, Yikang and Tang, Fan and Wang, Chengjie},
  booktitle={{CVPR}},
  year={2023}
}

adamatcher's People

Contributors

swordlicoder


adamatcher's Issues

Demo

Hi.

Thank you for sharing the great work.

Could you please share a demo of how to use the method on image pairs? I would highly appreciate user-friendly code to directly match two images given the pretrained weights.

Thank you!

Training script for HPatches dataset

Hi,

Thanks for releasing the code to the public. I have a question about how you train your model for HPatches: was it trained on HPatches, or trained on MegaDepth and then evaluated on HPatches?

Thanks,
Yongqing

Please add a demo script

Thanks for sharing the work. I believe it would be beneficial if you could add an image-pair demo to the repo. If one is already included, please point to it.

Cannot load weights to AdaMatcher

When I try to load the weights you provided to AdaMatcher by doing:
model.load_state_dict(torch.load(checkpoint_path)['state_dict'])

It gives me the following error. Do you know what is causing it? It looks like the weight file does not match the model definition:

raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AdaMatcher:
Missing key(s) in state_dict: "backbone.bn1.weight", "backbone.bn1.bias", "backbone.bn1.running_mean", "backbone.bn1.running_var", "backbone.layer1.0.conv1.weight", "backbone.layer1.0.conv2.weight", "backbone.layer1.0.bn1.weight", "backbone.layer1.0.bn1.bias", "backbone.layer1.0.bn1.running_mean", "backbone.layer1.0.bn1.running_var", "backbone.layer1.0.bn2.weight", "backbone.layer1.0.bn2.bias", "backbone.layer1.0.bn2.running_mean", "backbone.layer1.0.bn2.running_var", "backbone.layer1.1.conv1.weight", "backbone.layer1.1.conv2.weight", "backbone.layer1.1.bn1.weight", "backbone.layer1.1.bn1.bias", "backbone.layer1.1.bn1.running_mean", "backbone.layer1.1.bn1.running_var", "backbone.layer1.1.bn2.weight", "backbone.layer1.1.bn2.bias", "backbone.layer1.1.bn2.running_mean", "backbone.layer1.1.bn2.running_var", "backbone.layer2.0.conv1.weight", "backbone.layer2.0.conv2.weight", "backbone.layer2.0.bn1.weight", "backbone.layer2.0.bn1.bias", "backbone.layer2.0.bn1.running_mean", "backbone.layer2.0.bn1.running_var", "backbone.layer2.0.bn2.weight", "backbone.layer2.0.bn2.bias", "backbone.layer2.0.bn2.running_mean", "backbone.layer2.0.bn2.running_var", "backbone.layer2.0.downsample.0.weight", "backbone.layer2.0.downsample.1.weight", "backbone.layer2.0.downsample.1.bias", "backbone.layer2.0.downsample.1.running_mean", "backbone.layer2.0.downsample.1.running_var", "backbone.layer2.1.conv1.weight", "backbone.layer2.1.conv2.weight", "backbone.layer2.1.bn1.weight", "backbone.layer2.1.bn1.bias", "backbone.layer2.1.bn1.running_mean", "backbone.layer2.1.bn1.running_var", "backbone.layer2.1.bn2.weight", "backbone.layer2.1.bn2.bias", "backbone.layer2.1.bn2.running_mean", "backbone.layer2.1.bn2.running_var", "backbone.layer3.0.conv1.weight", "backbone.layer3.0.conv2.weight", "backbone.layer3.0.bn1.weight", "backbone.layer3.0.bn1.bias", "backbone.layer3.0.bn1.running_mean", "backbone.layer3.0.bn1.running_var", "backbone.layer3.0.bn2.weight", "backbone.layer3.0.bn2.bias", 
"backbone.layer3.0.bn2.running_mean", "backbone.layer3.0.bn2.running_var", "backbone.layer3.0.downsample.0.weight", "backbone.layer3.0.downsample.1.weight", "backbone.layer3.0.downsample.1.bias", "backbone.layer3.0.downsample.1.running_mean", "backbone.layer3.0.downsample.1.running_var", "backbone.layer3.1.conv1.weight", "backbone.layer3.1.conv2.weight", "backbone.layer3.1.bn1.weight", "backbone.layer3.1.bn1.bias", "backbone.layer3.1.bn1.running_mean", "backbone.layer3.1.bn1.running_var", "backbone.layer3.1.bn2.weight", "backbone.layer3.1.bn2.bias", "backbone.layer3.1.bn2.running_mean", "backbone.layer3.1.bn2.running_var", "backbone.conv1.weight", "backbone.layer3_outconv.weight", "backbone.layer2_outconv.weight", "backbone.layer2_outconv2.0.weight", "backbone.layer2_outconv2.1.weight", "backbone.layer2_outconv2.1.bias", "backbone.layer2_outconv2.1.running_mean", "backbone.layer2_outconv2.1.running_var", "backbone.layer2_outconv2.3.weight", "backbone.layer1_outconv.weight", "backbone.layer1_outconv2.0.weight", "backbone.layer1_outconv2.1.weight", "backbone.layer1_outconv2.1.bias", "backbone.layer1_outconv2.1.running_mean", "backbone.layer1_outconv2.1.running_var", "backbone.layer1_outconv2.3.weight", "feature_interaction.cas_module.block.0.weight", "feature_interaction.cas_module.block.0.bias", "feature_interaction.cas_module.block.2.weight", "feature_interaction.cas_module.block.2.bias", "feature_interaction.layers1.0.q_proj.weight", "feature_interaction.layers1.0.k_proj.weight", "feature_interaction.layers1.0.v_proj.weight", "feature_interaction.layers1.0.merge.weight", "feature_interaction.layers1.0.mlp.0.weight", "feature_interaction.layers1.0.mlp.2.weight", "feature_interaction.layers1.0.pre_norm_q.weight", "feature_interaction.layers1.0.pre_norm_q.bias", "feature_interaction.layers1.0.pre_norm_kv.weight", "feature_interaction.layers1.0.pre_norm_kv.bias", "feature_interaction.layers1.0.norm2.weight", "feature_interaction.layers1.0.norm2.bias", 
"feature_interaction.layers1.1.q_proj.weight", "feature_interaction.layers1.1.k_proj.weight", "feature_interaction.layers1.1.v_proj.weight", "feature_interaction.layers1.1.merge.weight", "feature_interaction.layers1.1.mlp.0.weight", "feature_interaction.layers1.1.mlp.2.weight", "feature_interaction.layers1.1.pre_norm_q.weight", "feature_interaction.layers1.1.pre_norm_q.bias", "feature_interaction.layers1.1.pre_norm_kv.weight", "feature_interaction.layers1.1.pre_norm_kv.bias", "feature_interaction.layers1.1.norm2.weight", "feature_interaction.layers1.1.norm2.bias", "feature_interaction.feature_embed.weight", "feature_interaction.decoder.layers.0.self_attn.q_proj.weight", "feature_interaction.decoder.layers.0.self_attn.q_proj.bias", "feature_interaction.decoder.layers.0.self_attn.k_proj.weight", "feature_interaction.decoder.layers.0.self_attn.k_proj.bias", "feature_interaction.decoder.layers.0.self_attn.v_proj.weight", "feature_interaction.decoder.layers.0.self_attn.v_proj.bias", "feature_interaction.decoder.layers.0.self_attn.merge.weight", "feature_interaction.decoder.layers.0.multihead_attn.q_proj.weight", "feature_interaction.decoder.layers.0.multihead_attn.q_proj.bias", "feature_interaction.decoder.layers.0.multihead_attn.k_proj.wet", "fine_module.attention.layers.0.pre_norm_kv.bias", "fine_module.attention.layers.0.norm2.weight", "fine_module.attention.layers.0.norm2.bias", "fine_module.attention.layers.1.q_proj.weight", "fine_module.attention.layers.1.k_proj.weight", "fine_module.attention.layers.1.v_proj.weight", "fine_module.attention.layers.1.merge.weight", "fine_module.attention.layers.1.mlp.0.weight", "fine_module.attention.layers.1.mlp.2.weight", "fine_module.attention.layers.1.pre_norm_q.weight", "fine_module.attention.layers.1.pre_norm_q.bias", "fine_module.attention.layers.1.pre_norm_kv.weight", "fine_module.attention.layers.1.pre_norm_kv.bias", "fine_module.attention.layers.1.norm2.weight", "fine_module.attention.layers.1.norm2.bias", 
"fine_module.down_proj.weight", "fine_module.down_proj.bias", "fine_module.merge_feat.weight", "fine_module.merge_feat.bias", "fine_module.heatmap_conv.0.weight", "fine_module.heatmap_conv.0.bias", "fine_module.heatmap_conv.1.weight", "fine_module.heatmap_conv.1.bias", "fine_module.heatmap_conv.3.weight", "fine_module.heatmap_conv.3.bias".

Unexpected key(s) in state_dict: "matcher.backbone.conv1.weight", "matcher.backbone.bn1.weight", "matcher.backbone.bn1.bias", "matcher.backbone.bn1.running_mean", "matcher.backbone.bn1.running_var", "matcher.backbone.bn1.num_batches_tracked", "matcher.backbone.layer1.0.conv1.weight", "matcher.backbone.layer1.0.conv2.weight", "matcher.backbone.layer1.0.bn1.weight", "matcher.backbone.layer1.0.bn1.bias", "matcher.backbone.layer1.0.bn1.running_mean", "matcher.backbone.layer1.0.bn1.running_var", "matcher.backbone.layer1.0.bn1.num_batches_tracked", "matcher.backbone.layer1.0.bn2.weight", "matcher.backbone.layer1.0.bn2.bias", "matcher.backbone.layer1.0.bn2.running_mean", "matcher.backbone.layer1.0.bn2.running_var", "matcher.backbone.layer1.0.bn2.num_batches_tracked", "matcher.backbone.layer1.1.conv1.weight", "matcher.backbone.layer1.1.conv2.weight", "matcher.backbone.layer1.1.bn1.weight", "matcher.backbone.layer1.1.bn1.bias", "matcher.backbone.layer1.1.bn1.running_mean", "matcher.backbone.layer1.1.bn1.running_var", "matcher.backbone.layer1.1.bn1.num_batches_tracked", "matcher.backbone.layer1.1.bn2.weight", "matcher.backbone.layer1.1.bn2.bias", "matcher.backbone.layer1.1.bn2.running_mean", "matcher.backbone.layer1.1.bn2.running_var", "matcher.backbone.layer1.1.bn2.num_batches_tracked", "matcher.backbone.layer2.0.conv1.weight", "matcher.backbone.layer2.0
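Judging from the key names in the traceback, every unexpected key carries a `matcher.` prefix that the model's own parameter names lack (e.g. `matcher.backbone.conv1.weight` vs. `backbone.conv1.weight`), which suggests the checkpoint was saved from a wrapper module. A plausible workaround, sketched below as an assumption rather than the repo's documented fix, is to strip that prefix before loading:

```python
def strip_prefix(state_dict, prefix="matcher."):
    """Return a copy of state_dict with `prefix` removed from matching keys."""
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

# Hypothetical usage with the checkpoint from the issue:
#   ckpt = torch.load(checkpoint_path)['state_dict']
#   model.load_state_dict(strip_prefix(ckpt))

# Quick check on plain dicts: the prefix is removed, other keys are untouched.
print(strip_prefix({"matcher.backbone.conv1.weight": 0, "bias": 1}))
# → {'backbone.conv1.weight': 0, 'bias': 1}
```

Passing `strict=False` to `load_state_dict` is another option, but it silently skips every mismatched key, so renaming the keys is the safer diagnosis here.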
