Home Page: http://imagine.enpc.fr/~shenx/RANSAC-Flow/

License: MIT License


RANSAC-Flow

PyTorch implementation of the paper "RANSAC-Flow: generic two-stage image alignment"

[PDF] [Project webpage] [Demo]

(Teaser figure.)

If our project is helpful for your research, please consider citing:

@inproceedings{shen2020ransac,
          title={RANSAC-Flow: generic two-stage image alignment},
          author={Shen, Xi and Darmon, Francois and Efros, Alexei A and Aubry, Mathieu},
          booktitle={Arxiv},
          year={2020}
        }

Table of Contents

1. Visual Results

1.1. Aligning Artworks (More results can be found on our project webpage)

(Animated GIFs: for each example, the input animation and its average image alongside our fine alignment's animation and average image.)

1.2. 3D reconstruction from two-view geometry estimation (More results can be found on our project webpage)

(Animated GIFs: source image, target image, and the resulting 3D reconstruction.)

1.3. Texture transfer

(Animated GIFs: source image, target image, and the resulting texture transfer.)

Other results (such as aligning duplicated artworks, optical flow, localization, etc.) can be found in our paper.

2. Installation

2.1. Dependencies

Install PyTorch adapted to your CUDA version, then install the other dependencies (tqdm, visdom, pandas, kornia, opencv-python):

bash requirement.sh

2.2. Pre-trained models

Quick download:

cd model/pretrained
bash download_model.sh

For more details on the pre-trained models, see here.

2.3. Datasets

Download the results of ArtMiner:

cd data/
bash Brueghel_detail.sh # Brueghel detail dataset (208M) : visual results, aligning groups of details

Download our training data here (~9G). It includes the validation and test data as well.

3. Quick Start

A quick-start guide showing how to use our code is available in demo.ipynb


4. Train

4.1. Generating training pairs

Training requires image pairs that are coarsely aligned. We provide a notebook showing how to generate such training pairs. Note that we also provide our own training pairs here.
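As a toy illustration of the RANSAC idea behind the coarse alignment stage (this is not the project's code; the actual pipeline matches deep features and fits homographies), here is a minimal RANSAC fit of a 2D affine transform over synthetic point correspondences:

```python
import numpy as np

def ransac_affine(src, dst, n_iter=200, thresh=1.0, seed=0):
    """Fit a 2D affine transform dst ~ src @ A.T + t with RANSAC.

    src, dst: (N, 2) arrays of corresponding points.
    Returns (A, t, inlier_mask) for the best consensus set.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    best_M = None
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)  # 3 points define an affine map
        # Solve [x y 1] @ M = [x' y'] for the 3 sampled correspondences.
        X = np.hstack([src[idx], np.ones((3, 1))])
        try:
            M = np.linalg.solve(X, dst[idx])        # (3, 2)
        except np.linalg.LinAlgError:
            continue                                 # degenerate (collinear) sample
        pred = np.hstack([src, np.ones((n, 1))]) @ M
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_M = inliers, M
    if best_M is None:
        raise ValueError("no non-degenerate sample found")
    return best_M[:2].T, best_M[2], best_inliers
```

Given 50 correspondences of which 10 are corrupted, the estimator recovers the transform from the clean consensus set while the corrupted matches are rejected as outliers.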

4.2. Reproducing the training on MegaDepth

The training data needs to be downloaded from here and saved into ./data. The file structure is:

./RANSAC-Flow/data/MegaDepth
├── MegaDepth_Train/
├── MegaDepth_Train_Org/
├── Val/
└── Test/
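A quick sanity check that the data landed in the expected layout can be scripted (this helper is illustrative, not part of the repository; the directory names come from the structure above):

```python
from pathlib import Path

EXPECTED = ["MegaDepth_Train", "MegaDepth_Train_Org", "Val", "Test"]

def check_megadepth_layout(root):
    """Return the list of expected MegaDepth subdirectories missing under root."""
    root = Path(root)
    return [d for d in EXPECTED if not (root / d).is_dir()]
```

An empty return value means the layout matches; otherwise the returned names tell you which downloads are missing.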

As mentioned in the paper, training on MegaDepth proceeds in three stages:

  • Stage 1: we train with the reconstruction loss only. The hyper-parameters are in train/stage1.sh. Run this stage with:
cd train/ 
bash stage1.sh
  • Stage 2: we train the reconstruction loss and the cycle consistency of the flow jointly, starting from the model trained in stage 1. The hyper-parameters are in train/stage2.sh; change the argument --resumePth to your stage-1 model path, then run:
cd train/ 
bash stage2.sh
  • Stage 3: finally, we train all three losses together: reconstruction, cycle consistency of the flow, and matchability, starting from the model trained in stage 2. The hyper-parameters are in train/stage3.sh; change the argument --resumePth to your stage-2 model path, then run:
cd train/ 
bash stage3.sh
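The cycle-consistency term added in stage 2 penalizes pixels that do not return to their starting position after composing the forward and backward flows. A toy NumPy version of that penalty (the actual implementation lives in the training code and differs in details, e.g. bilinear sampling):

```python
import numpy as np

def cycle_consistency_loss(flow_fwd, flow_bwd):
    """Mean distance by which forward-then-backward flow fails to return
    each pixel to its starting position.

    flow_fwd, flow_bwd: (H, W, 2) displacement fields in pixel units, (x, y) order.
    The backward flow is sampled (nearest neighbour) at the forward-warped location.
    """
    H, W, _ = flow_fwd.shape
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs, ys], axis=-1).astype(np.float64)   # (H, W, 2) as (x, y)
    warped = grid + flow_fwd
    # Nearest-neighbour lookup of the backward flow at the warped positions.
    xi = np.clip(np.round(warped[..., 0]).astype(int), 0, W - 1)
    yi = np.clip(np.round(warped[..., 1]).astype(int), 0, H - 1)
    back = warped + flow_bwd[yi, xi]
    return np.linalg.norm(back - grid, axis=-1).mean()
```

When the backward flow exactly inverts the forward flow the loss is zero; any inconsistency between the two fields increases it.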

4.3. Fine-tuning on your own dataset

If you want to fine-tune on your own dataset, it is recommended to start from our MegaDepth-trained model. You can list all training arguments with:

cd train/ 
python train.py --help

If you don't need to predict matchability, set the weight of the matchability loss to 0 (--eta 0 in train.py) and set the path to your images (--trainImgDir). Please refer to train/stage2.sh for the other arguments.

In case you do predict matchability, you need to tune the weight of the matchability loss (the --eta argument in train.py) for your dataset.
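Schematically (the variable names here are illustrative, not the actual ones in train.py), the full objective is a weighted sum in which --eta scales the matchability term, so --eta 0 removes it entirely:

```python
def total_loss(rec_loss, cycle_loss, match_loss, eta):
    """Schematic combined objective: reconstruction + flow cycle consistency
    + eta-weighted matchability. Setting eta = 0 disables the matchability term."""
    return rec_loss + cycle_loss + eta * match_loss
```

Because eta multiplies the matchability term directly, tuning it trades off dense alignment quality against how aggressively unmatchable regions are suppressed.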

5. Evaluation

The evaluation of different tasks can be seen in the following files:
