This project is forked from mindslab-ai/faceshifter.


FaceShifter - Unofficial PyTorch Implementation


Unofficial implementation of FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping, built with PyTorch Lightning. The paper describes a two-network pipeline consisting of AEI-Net and HEAR-Net. We implement only AEI-Net, the main network for face swapping.
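For intuition, the core of AEI-Net's generator is Adaptive Attentional Denormalization (AAD): at each level, identity-conditioned and attribute-conditioned activations are blended per element by a learned attention mask. A minimal NumPy sketch of that blending step (the function name and the stand-in mask are ours for illustration, not this repo's API):

```python
# Sketch of the AAD blending step in AEI-Net's generator: a learned mask M
# in [0, 1] mixes identity- and attribute-conditioned activations.
# Names here are illustrative, not the repo's API.
import numpy as np

def aad_blend(h_id: np.ndarray, h_attr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend identity (h_id) and attribute (h_attr) activations with mask in [0, 1]."""
    return mask * h_id + (1.0 - mask) * h_attr

# Where mask == 1 the output follows the identity branch;
# where mask == 0, the attribute branch.
```

In the trained network both activation maps and the mask are produced by learned layers; the sketch only shows how the two information streams are combined.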

Datasets

Preparing Data

You need to download and unzip:

Preprocess Data

Our preprocessing code is mainly based on Nvidia's FFHQ preprocessing code. You may add multiprocessing to our preprocessing script to finish this step much faster.

# build docker image from Dockerfile
docker build -t dlib:0.0 ./preprocess
# run docker container from image
docker run -itd --ipc host -v /PATH_TO_THIS_FOLDER/preprocess:/workspace -v /PATH_TO_THE_DATA:/DATA -v /PATH_TO_SAVE_DATASET:/RESULT --name dlib dlib:0.0
# attach
docker attach dlib
# preprocess with dlib
python preprocess.py --root /DATA --output_dir /RESULT
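The multiprocessing suggestion above can be sketched as follows. Here `align_and_crop` is a placeholder for the real dlib-based per-image work in preprocess.py; only the Pool plumbing is the point:

```python
# Hedged sketch: running the per-image preprocessing step in parallel with
# multiprocessing.Pool. align_and_crop is a stand-in for the real dlib
# landmark detection + FFHQ-style crop in preprocess.py.
from multiprocessing import Pool
from pathlib import Path

def align_and_crop(path: str) -> str:
    # placeholder for the real per-image work (detect landmarks, crop, save)
    return f"done: {path}"

def preprocess_parallel(root: str, workers: int = 8) -> list:
    paths = [str(p) for p in sorted(Path(root).rglob("*.jpg"))]
    with Pool(workers) as pool:
        return pool.map(align_and_crop, paths)
```

Because dlib's detector is CPU-bound, a process pool sized to the machine's core count is usually the simplest speedup.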

Training

Configuration

There is a yaml file in the config folder. It must be edited to match your training requirements (dataset, metadata, etc.).

  • config/train.yaml: Configs for training AEI-Net.
    • Fill in the blanks of: dataset_dir, valset_dir
    • You may want to change: batch_size for GPUs other than a 32GB V100, or chkpt_dir to save checkpoints on a different disk.
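As a rough illustration, the relevant part of config/train.yaml might look like the fragment below. The keys listed above (dataset_dir, valset_dir, batch_size, chkpt_dir, log.log_dir) come from this README; the layout and values are our guesses, so check the real file for the exact schema:

```yaml
# hypothetical fragment of config/train.yaml -- check the real file for exact keys
dataset_dir: /DATA/train   # fill in: preprocessed training images
valset_dir: /DATA/val      # fill in: preprocessed validation images
batch_size: 16             # lower this on GPUs with less than 32 GB of memory
chkpt_dir: chkpt           # point at a different disk if space is tight
log:
  log_dir: log             # Tensorboard log directory
```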

Using Docker

We provide a Dockerfile for easier training environment setup.

docker build -t faceshifter:0.0 .
docker run -itd --ipc host --gpus all -v /PATH_TO_THIS_FOLDER:/workspace -v /PATH_TO_DATASET:/DATA --name FS faceshifter:0.0
docker attach FS

Pre-trained Arcface

During training, a pre-trained ArcFace model is required. We provide our pre-trained ArcFace model; you can download it at this link

Command

To train the AEI-Net, run this command:

python aei_trainer.py -c <path_to_config_yaml> -g <gpus> -n <run_name>
# example command that might help you understand the arguments:
# train from scratch with name "my_runname"
python aei_trainer.py -c config/train.yaml -g 0 -n my_runname

Optionally, you can resume training from a previously saved checkpoint by adding the -p <checkpoint_path> argument.

Monitoring via Tensorboard

The progress of training, including loss values and validation output, can be monitored with Tensorboard. By default, the logs are stored in log, which can be changed by editing the log.log_dir parameter in the config yaml file.

tensorboard --logdir log --bind_all # Scalars, Images, Hparams, and Projector tabs will be shown.

Inference

To run inference with the AEI-Net, run this command:

python aei_inference.py --checkpoint_path <path_to_pre_trained_file> --target_image <path_to_target_image_file> --source_image <path_to_source_image_file> --output_path <path_to_output_image_file> --gpu_num <number of gpu>
# example command that might help you understand the arguments:
# run inference with a checkpoint trained under the name "my_runname"
python aei_inference.py --checkpoint_path chkpt/my_runname/epoch=0.ckpt --target_image target.png --source_image source.png --output_path output.png --gpu_num 0

We provide a Colab example. You can use it with your own trained weights.

Results

Comparison with results from original paper

Figure in the original paper

Our Results

Note that we implement only AEI-Net; the results in the original paper were generated by both AEI-Net and HEAR-Net.

We will soon release FaceShifter in our cloud API service, maum.ai.

License

BSD 3-Clause License.

Implementation Author

Changho Choi @ MINDs Lab, Inc. ([email protected])

Paper Information

@article{li2019faceshifter,
  title={Faceshifter: Towards high fidelity and occlusion aware face swapping},
  author={Li, Lingzhi and Bao, Jianmin and Yang, Hao and Chen, Dong and Wen, Fang},
  journal={arXiv preprint arXiv:1912.13457},
  year={2019}
}

