This project is forked from csxmli2016/w-plus-adapter.

When StyleGAN Meets Stable Diffusion:
a ${\mathcal{W}_+}$ Adapter for Personalized Image Generation

Xiaoming Li, Xinyu Hou, Chen Change Loy

S-Lab, Nanyang Technological University

Paper | Project Page

We propose a $\mathcal{W}_+$ adapter, a method that aligns the face latent space $\mathcal{W}_+$ of StyleGAN with text-to-image diffusion models, achieving high fidelity in identity preservation and semantic editing.

Given a single reference image (thumbnail in the top left), our $\mathcal{W}_+$ adapter not only integrates the identity into the text-to-image generation accurately but also enables modifications of facial attributes along the $\Delta w$ trajectory derived from StyleGAN. The text prompt is "a woman wearing a spacesuit in a forest".
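Editing along a $\Delta w$ trajectory is, at its core, a simple latent-space operation. A minimal sketch, assuming the (18, 512) W+ latent shape of StyleGAN2 at 1024x1024; the random stand-ins below are illustrative, not the released latents or directions:

```python
import numpy as np

def edit_latent(w_plus, delta_w, strength):
    """Move a W+ latent along an attribute direction.

    w_plus:   (18, 512) face latent, e.g. from an e4e encoder
    delta_w:  (18, 512) attribute direction (smile, lipstick, ...)
    strength: scalar; 0 keeps the identity unchanged
    """
    return w_plus + strength * delta_w

# Toy example with random stand-ins for the real latents.
rng = np.random.default_rng(0)
w = rng.standard_normal((18, 512))
direction = rng.standard_normal((18, 512))

edited = edit_latent(w, direction, strength=1.5)
assert edited.shape == (18, 512)
# strength=0 is the identity edit
assert np.allclose(edit_latent(w, direction, 0.0), w)
```

Varying `strength` trades edit intensity against identity drift, which is why the attribute examples below sweep along the trajectory rather than applying a single fixed offset.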

TODO

  • Release the source code and model.
  • Extend to more diffusion models.

Installation

conda create -n wplus python=3.8
conda activate wplus
pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
BASICSR_EXT=True pip install basicsr

Inference for in-the-wild Images (Stage 2)

Step 0: download the weights

If you encounter StyleGAN-related errors that are hard to resolve, you can create a new environment with a lower torch version, e.g., 1.12.1+cu113. You can refer to the installation instructions of our MARCONet.

python script/download_weights.py

Step 1: get e4e vector from real-world face images

For in-the-wild face images:

CUDA_VISIBLE_DEVICES=0 python ./script/ProcessWildImage.py -i ./test_data/in_the_wild -o ./test_data/in_the_wild_Result -n

For aligned face images:

CUDA_VISIBLE_DEVICES=0 python ./script/ProcessWildImage.py -i ./test_data/aligned_face -o ./test_data/aligned_face_Result
# Parameters:
-i: input path
-o: save path
-n: align faces like FFHQ (for in-the-wild images)
-s: blind super-resolution using PSFRGAN (for low-quality face images)
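The command-line interface above can be sketched with argparse. The long option names and defaults here are assumptions for illustration; only the short flags `-i`, `-o`, `-n`, `-s` come from the script's documented usage:

```python
import argparse

# Minimal sketch of the ProcessWildImage.py interface, reconstructed
# from the flag list above; long names and help text are assumptions.
def build_parser():
    p = argparse.ArgumentParser(
        description="Extract e4e vectors from face images")
    p.add_argument("-i", "--input_path", required=True,
                   help="input image folder")
    p.add_argument("-o", "--save_path", required=True,
                   help="output folder")
    p.add_argument("-n", "--need_align", action="store_true",
                   help="align faces like FFHQ (for in-the-wild images)")
    p.add_argument("-s", "--sr", action="store_true",
                   help="blind super-resolution with PSFRGAN "
                        "(for low-quality faces)")
    return p

# Mirrors the in-the-wild command shown above.
args = build_parser().parse_args(
    ["-i", "./test_data/in_the_wild",
     "-o", "./test_data/in_the_wild_Result", "-n"]
)
assert args.need_align and not args.sr
```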

Step 2: Stable Diffusion Generation Using Our $\mathcal{W}_+$ Adapter.

  • The base model supports many pre-trained Stable Diffusion models, like runwayml/stable-diffusion-v1-5, dreamlike-art/dreamlike-anime-1.0, and ControlNet, without any additional training. See the details in test_demo.ipynb
  • You can adjust the residual_att_scale parameter to balance identity preservation and text alignment.
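The residual_att_scale knob reflects how the adapter injects identity: its cross-attention output is added as a scaled residual on top of the text cross-attention. A minimal single-head numpy sketch with toy dimensions; the exact projections and layer placement in the released code may differ:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # Scaled dot-product attention, single head.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def residual_attn(latent_q, text_kv, id_kv, residual_att_scale=1.0):
    """Text cross-attention plus a scaled identity residual.

    latent_q: (N, d) image-latent queries
    text_kv:  (T, d) projected text tokens (keys == values for brevity)
    id_kv:    (M, d) projected W+ identity tokens
    Larger residual_att_scale -> stronger identity, weaker prompt control.
    """
    out_text = cross_attention(latent_q, text_kv, text_kv)
    out_id = cross_attention(latent_q, id_kv, id_kv)
    return out_text + residual_att_scale * out_id

rng = np.random.default_rng(1)
q = rng.standard_normal((4, 8))
text = rng.standard_normal((5, 8))
ident = rng.standard_normal((3, 8))

# Scale 0 recovers plain text-conditioned attention.
assert np.allclose(residual_attn(q, text, ident, 0.0),
                   cross_attention(q, text, text))
```

Because the identity signal is purely additive, setting the scale to 0 falls back to the base model's behavior, which is also why the failure-case note below ties final quality to the original Stable Diffusion result.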

Attributes Editing Examples (Stage 2):

- Prompt: 'a woman wearing a red shirt in a garden'
- Seed: 23
- e4e_path: ./test_data/e4e/1.pth

Emotion Editing

Lipstick Editing

Roundness Editing

Eye Editing using the Anime Model dreamlike-anime-1.0

ControlNet using control_v11p_sd15_openpose

Inference for Face Images (Stage 1)

See test_demo_stage1.ipynb

Attributes Editing Examples (Stage 1):

Face Image Inversion and Editing

Training

Training Data for Stage 1:

  • face image
  • e4e vector
./train_face.sh

Training Data for Stage 2:

  • face image
  • e4e vector
  • background mask
  • in-the-wild image
  • in-the-wild face mask
  • in-the-wild caption
./train_wild.sh
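A stage-2 training sample assembles all of the items listed above. The dict below is purely illustrative; the keys, file layout, and shapes are assumptions for exposition, not the repository's actual dataloader format:

```python
# Illustrative stage-2 training sample (hypothetical paths and keys).
sample = {
    "face_image": "faces/0001.png",           # aligned face crop
    "e4e_vector": "e4e/0001.pth",             # (18, 512) W+ latent
    "background_mask": "masks/0001.png",      # face/background segmentation
    "wild_image": "wild/0001.jpg",            # full in-the-wild photo
    "wild_face_mask": "wild_masks/0001.png",  # face region in the wild image
    "caption": "a woman wearing a red shirt in a garden",
}
assert len(sample) == 6  # one entry per required data item
```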

For more details, please refer to ./train_face.py and ./train_wild.py

Others

You can convert the pytorch_model.bin to wplus_adapter.bin by running:

python script/transfer_pytorchmodel_to_wplus.py
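Such a conversion typically loads the trained state dict, keeps the adapter weights, and saves them under a new name. A sketch of the key-filtering step, using a plain dict as a stand-in for `torch.load("pytorch_model.bin")`; the `"wplus_adapter."` prefix is an assumption for illustration, and the real mapping is defined in script/transfer_pytorchmodel_to_wplus.py:

```python
# Keep only the adapter entries of a state dict, stripping the prefix.
# The prefix name is a hypothetical example, not the repo's actual key.
def extract_adapter(state_dict, prefix="wplus_adapter."):
    return {k[len(prefix):]: v for k, v in state_dict.items()
            if k.startswith(prefix)}

# Plain-dict stand-in for torch.load("pytorch_model.bin").
full = {
    "unet.down.weight": [0.1],
    "wplus_adapter.proj.weight": [0.2],
    "wplus_adapter.proj.bias": [0.3],
}
adapter = extract_adapter(full)
assert sorted(adapter) == ["proj.bias", "proj.weight"]
# torch.save(adapter, "wplus_adapter.bin") would then write the result.
```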

Failure Case Analyses

Since our $\mathcal{W}_+$ adapter influences the output through residual cross-attention, the final performance depends on the base result of Stable Diffusion. If the original result is poor, you can adjust the prompt or seed to obtain a better one.

License

This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.

Acknowledgement

This project is built upon the excellent IP-Adapter. We also refer to StyleRes, FreeU, and PSFRGAN.

Citation

@article{li2023w-plus-adapter,
author = {Li, Xiaoming and Hou, Xinyu and Loy, Chen Change},
title = {When StyleGAN Meets Stable Diffusion: a $\mathcal{W}_+$ Adapter for Personalized Image Generation},
journal = {arXiv preprint arXiv:2311.17461},
year = {2023}
}
