conda create -n fmnet python=3.8 # create a new virtual environment
conda activate fmnet
conda install pytorch cudatoolkit -c pytorch # install pytorch
pip install -r requirements.txt # install other necessary libraries via pip
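After installation, a quick sanity check can confirm the environment is usable. The package list below is an assumption: torch comes from the conda step, and numpy/scipy are typical entries in a requirements.txt like this one.

```python
import importlib.util

# Packages the setup steps above are expected to provide. "torch" comes from
# the conda install; numpy and scipy are assumed entries of requirements.txt.
required = ["torch", "numpy", "scipy"]

# find_spec returns None for packages that are not importable.
missing = [pkg for pkg in required if importlib.util.find_spec(pkg) is None]
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages found.")
```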
To train and test on the datasets used in this paper, please download them from this link and put all datasets under ../data/
├── data
│   ├── FAUST_r
│   ├── FAUST_a
│   ├── SCAPE_r
│   ├── SCAPE_a
│   ├── SHREC19_r
│   ├── TOPKIDS
│   ├── SMAL_r
│   ├── DT4D_r
│   ├── SHREC20
│   ├── SHREC16
│   └── SHREC16_test
We thank the original dataset providers for their contributions to the shape analysis community; all credit goes to the original authors.
For data preprocessing, we provide preprocess.py to compute everything we need. Here is an example for FAUST_r.
python preprocess.py --data_root ../data/FAUST_r/ --no_normalize --n_eig 200
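The --n_eig flag suggests the script precomputes a truncated Laplacian eigenbasis per shape. The sketch below illustrates that spectral step only, using a plain graph Laplacian on a toy cycle graph; the actual script presumably uses the cotangent Laplacian of each mesh, and all sizes here are placeholders.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

# Toy stand-in for the per-shape spectral quantities preprocess.py computes.
n_vertices, n_eig = 12, 4

# Adjacency matrix of a simple cycle graph over the vertices.
adj = np.zeros((n_vertices, n_vertices))
for i in range(n_vertices):
    adj[i, (i + 1) % n_vertices] = adj[(i + 1) % n_vertices, i] = 1.0

L = laplacian(adj)                              # graph Laplacian
evals, evecs = np.linalg.eigh(L)                # eigendecomposition
evals, evecs = evals[:n_eig], evecs[:, :n_eig]  # keep the first n_eig modes

print(evals.round(4))  # first eigenvalue of a connected graph is ~0
```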
To train the model on a specified dataset, run:
python train.py --opt options/train/faust.yaml
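At the core of functional-map pipelines like this one is a differentiable least-squares solve for the map C between the spectral coefficients of learned descriptors on two shapes. The numpy sketch below shows that step in isolation, with random placeholder data; it is not the repo's actual loss, which is configured by the YAML file above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_eig, n_feat = 20, 40

# Spectral coefficients of descriptors on source (A) and target (B) shapes,
# i.e. features projected onto each shape's eigenbasis (random placeholders).
A = rng.standard_normal((n_eig, n_feat))
C_true = rng.standard_normal((n_eig, n_eig))
B = C_true @ A  # target coefficients consistent with a ground-truth map

# Solve min_C ||C A - B||_F: lstsq solves A^T X = B^T, so C = X^T.
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T

print(np.allclose(C @ A, B))
```

With more descriptors than eigenfunctions (n_feat > n_eig), the system is overdetermined and the recovered C is unique.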
You can visualize the training process in tensorboard.
tensorboard --logdir experiments/
To test the model on a specified dataset, run:
python test.py --opt options/test/faust.yaml
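Quantitative results in shape matching are typically reported as the mean geodesic error of the predicted point-to-point map (the Princeton benchmark protocol). A minimal numpy sketch, assuming test.py reports a metric of this kind; all variable names here are illustrative.

```python
import numpy as np

def mean_geodesic_error(pred, gt, geod_dist, sqrt_area=1.0):
    """Mean geodesic error of a point-to-point map.

    pred, gt  : (n,) predicted / ground-truth vertex indices on the target
    geod_dist : (m, m) pairwise geodesic distances on the target shape
    sqrt_area : normalization by the square root of the target's surface area
    """
    errors = geod_dist[pred, gt] / sqrt_area
    return errors.mean()

# Tiny example: 3 target vertices with a symmetric distance matrix.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
pred = np.array([0, 1, 2])   # predicted match for each source vertex
gt   = np.array([0, 2, 2])   # ground-truth match
print(mean_geodesic_error(pred, gt, D))  # (0 + 1 + 0) / 3
```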
The qualitative and quantitative results will be saved in the results folder.
An example of texture transfer is provided in texture_transfer.py.
python texture_transfer.py
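Once a point-to-point map is recovered (e.g. by nearest neighbors in the spectral embedding), texture transfer reduces to pulling per-vertex UV coordinates through that map. A minimal numpy sketch with hypothetical inputs, not the script's actual interface:

```python
import numpy as np

# Hypothetical inputs: UV coordinates on the source shape, and a point-to-point
# map sending each target vertex to its matched source vertex.
uv_source = np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [0.5, 1.0]])   # (n_src, 2) UV per source vertex
p2p = np.array([2, 0, 1, 2])        # target vertex -> source vertex index

uv_target = uv_source[p2p]          # transfer UVs by fancy indexing
print(uv_target.shape)              # (4, 2)
```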
You can find the models pre-trained on the SURREAL-5k dataset in the checkpoints folder for reproducibility.
The implementation of DiffusionNet is based on the official implementation.
The framework implementation is adapted from Unsupervised Deep Multi-Shape Matching.