omriav / blended-latent-diffusion

Official implementation for "Blended Latent Diffusion" [SIGGRAPH 2023]

Home Page: https://omriavrahami.com/blended-latent-diffusion-page/

License: MIT License

Topics: deep-learning, multimodal, multimodal-deep-learning, text-guided-manipulation, text-to-image, text-to-image-synthesis, computer-vision, diffusion, diffusion-models, generative-model

blended-latent-diffusion's Introduction

Blended Latent Diffusion [SIGGRAPH 2023]


Omri Avrahami, Ohad Fried, Dani Lischinski

Abstract: The tremendous progress in neural image generation, coupled with the emergence of seemingly omnipotent vision-language models, has finally enabled text-based interfaces for creating and editing images. Handling generic images requires a diverse underlying generative model, hence the latest works utilize diffusion models, which were shown to surpass GANs in terms of diversity. One major drawback of diffusion models, however, is their relatively slow inference time. In this paper, we present an accelerated solution to the task of local text-driven editing of generic images, where the desired edits are confined to a user-provided mask. Our solution leverages a recent text-to-image Latent Diffusion Model (LDM), which speeds up diffusion by operating in a lower-dimensional latent space. We first convert the LDM into a local image editor by incorporating Blended Diffusion into it. Next, we propose an optimization-based solution for the inherent inability of this LDM to accurately reconstruct images. Finally, we address the scenario of performing local edits using thin masks. We evaluate our method against the available baselines both qualitatively and quantitatively and demonstrate that, in addition to being faster, our method achieves better precision than the baselines while mitigating some of their artifacts.
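
To make the core idea concrete, here is a minimal sketch of the per-step latent blending (an illustrative reconstruction of the paper's description, not the repository's exact code; all names are placeholders, the scheduler is assumed to be a diffusers-style scheduler with an add_noise method, and the mask is assumed to be binary and already downsampled to the latent resolution):

def blend_step(edited_latents, source_latents, latent_mask, noise, scheduler, t):
    # Noise the clean source-image latents to the current timestep t
    # (t is a tensor of timesteps, as expected by diffusers schedulers).
    noised_source = scheduler.add_noise(source_latents, noise, t)
    # Keep the denoised content inside the mask and the (noised)
    # original content outside it, so only the masked region changes.
    return latent_mask * edited_latents + (1 - latent_mask) * noised_source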

Applications

Background Editing

Text Generation

Multiple Predictions

Alter an Existing Object

Add a New Object

Scribble Editing

Installation

Create and activate the conda virtual environment:

$ conda env create -f environment.yaml
$ conda activate ldm

Usage

New 🔥 - Stable Diffusion Implementation

You can use the newer Stable Diffusion implementation, based on the Diffusers library. To do so, install PyTorch 2.1 and Diffusers via the following commands:

$ conda install pytorch==2.1.0 torchvision==0.16.0 pytorch-cuda=11.8 -c pytorch -c nvidia
$ pip install -U diffusers==0.19.3

  • To use Stable Diffusion XL (requires a stronger GPU), run the following script:

$ python scripts/text_editing_SDXL.py --prompt "a stone" --init_image "inputs/img.png" --mask "inputs/mask.png"

You can use a smaller --batch_size to save GPU memory.
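
For example (the flag is the one described above; the value is illustrative):

$ python scripts/text_editing_SDXL.py --prompt "a stone" --init_image "inputs/img.png" --mask "inputs/mask.png" --batch_size 2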

  • To use Stable Diffusion v2.1, run the following script:
$ python scripts/text_editing_SD2.py --prompt "a stone" --init_image "inputs/img.png" --mask "inputs/mask.png"

Old - Latent Diffusion Model Implementation

To use the old implementation, based on the Latent Diffusion Model (LDM), you first need to download the pre-trained weights (5.7 GB):

$ mkdir -p models/ldm/text2img-large/
$ wget -O models/ldm/text2img-large/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt

If the above link is broken, you can use this Google Drive mirror.

Editing an image then involves up to two steps:

Step 1 - Generate initial predictions

$ python scripts/text_editing_LDM.py --prompt "a pink yarn ball" --init_image "inputs/img.png" --mask "inputs/mask.png"

The predictions will be saved in outputs/edit_results/samples.

You can increase the batch size by setting --n_samples to the largest value that saturates your GPU.
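
For example (the value is illustrative; pick the largest one that fits your GPU memory):

$ python scripts/text_editing_LDM.py --prompt "a pink yarn ball" --init_image "inputs/img.png" --mask "inputs/mask.png" --n_samples 8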

Step 2 (optional) - Reconstruct the original background

If you want to reconstruct the original image background, you can run the following:

$ python scripts/reconstruct.py --init_image "inputs/img.png" --mask "inputs/mask.png" --selected_indices 0 1

You can choose the specific image indices that you want to reconstruct. The results will be saved in outputs/edit_results/samples/reconstructed_optimization.
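
For intuition, the idea behind this optional step can be sketched as follows (a hypothetical reconstruction following the paper's description of per-image decoder fine-tuning, not the script's exact code; vae is assumed to be a diffusers AutoencoderKL, mask and original_image are image-resolution tensors, and latent scaling is omitted for brevity):

import torch
import torch.nn.functional as F

def reconstruct_background(vae, edited_latents, original_image, mask, steps=100, lr=1e-4):
    # Freeze a target for the edited region before the decoder weights change.
    with torch.no_grad():
        edit_target = vae.decode(edited_latents).sample
    opt = torch.optim.Adam(vae.decoder.parameters(), lr=lr)
    for _ in range(steps):
        decoded = vae.decode(edited_latents).sample
        # Match the original image outside the mask and the edit inside it.
        loss = F.mse_loss(decoded * (1 - mask), original_image * (1 - mask)) + \
               F.mse_loss(decoded * mask, edit_target * mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vae.decode(edited_latents).sample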

Citation

If you find this project useful for your research, please cite the following:

@article{avrahami2023blendedlatent,
        author = {Avrahami, Omri and Fried, Ohad and Lischinski, Dani},
        title = {Blended Latent Diffusion},
        year = {2023},
        issue_date = {August 2023},
        publisher = {Association for Computing Machinery},
        address = {New York, NY, USA},
        volume = {42},
        number = {4},
        issn = {0730-0301},
        url = {https://doi.org/10.1145/3592450},
        doi = {10.1145/3592450},
        journal = {ACM Trans. Graph.},
        month = {jul},
        articleno = {149},
        numpages = {11},
        keywords = {zero-shot text-driven local image editing}
}

@InProceedings{Avrahami_2022_CVPR,
        author    = {Avrahami, Omri and Lischinski, Dani and Fried, Ohad},
        title     = {Blended Diffusion for Text-Driven Editing of Natural Images},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {18208-18218}
}

Acknowledgements

This code is based on Latent Diffusion Models.

blended-latent-diffusion's People

Contributors

omriav


blended-latent-diffusion's Issues

How to achieve the scribble-guided editing

I am very interested in the scribble-guided editing presented in the paper, but I am confused about how to use the code to evaluate it. Could you give me some instructions, please?

Workflow for scribble editing

Hi,

I would like to use your code to make small adjustments to existing images based on a rough sketch.
The scribble functionality that is described seems to be exactly what I am looking for, but how do I use it?
If I change the original image and provide a mask, how do I make sure the inpainted colors are used and not random noise?

Kind regards,
Ties
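
One plausible workflow for both questions above, inferred from the paper's description rather than an official answer: paste the scribble directly onto the input image, mask the scribbled region, and run the editing script on the composited image, so the diffusion process refines the scribble's colors instead of starting from unrelated content. A minimal compositing sketch (the scribble file name and placement are hypothetical):

from PIL import Image

img = Image.open("inputs/img.png").convert("RGB")
scribble = Image.open("inputs/scribble.png").convert("RGBA")  # hypothetical RGBA scribble layer
img.paste(scribble, (0, 0), scribble)  # the alpha channel selects the scribbled pixels
img.save("inputs/img_scribbled.png")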

Experiments with DeepFloyd IF?

I was wondering if you found that this approach also works with DeepFloyd IF, or is it specific to Stable Diffusion?

Mismatched tensors when using stable diffusion implementation

When running the code with text_editing_stable_diffusion.py, I am getting the following error when using the sample image and mask:
FutureWarning: Accessing config attribute in_channels directly via 'UNet2DConditionModel' object attribute is deprecated. Please access 'in_channels' over 'UNet2DConditionModel's config object instead, e.g. 'unet.config.in_channels'.
(batch_size, self.unet.in_channels, height // 8, width // 8),
Traceback (most recent call last):
  File "/nfshomes/jianing/project_files/blended-latent-diffusion/scripts/text_editing_stable_diffusion.py", line 167, in <module>
    results = bld.edit_image(
  File "/nfshomes/jianing/CAAR/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/nfshomes/jianing/project_files/blended-latent-diffusion/scripts/text_editing_stable_diffusion.py", line 127, in edit_image
    noise_source_latents = self.scheduler.add_noise(
  File "/nfshomes/jianing/CAAR/lib/python3.11/site-packages/diffusers/schedulers/scheduling_ddim.py", line 468, in add_noise
    noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
RuntimeError: The size of tensor a (84) must match the size of tensor b (64) at non-singleton dimension 3
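
A plausible cause, not a confirmed diagnosis: the mismatch is in the latent width dimension (84 vs. 64, i.e. 672 vs. 512 pixels after the x8 upscaling), which suggests the init image and mask have different sizes, or a size the script does not expect. Resizing both inputs to the same multiple-of-8 resolution, e.g. 512x512, is a reasonable workaround (output file names are illustrative):

from PIL import Image

Image.open("inputs/img.png").resize((512, 512)).save("inputs/img_512.png")
Image.open("inputs/mask.png").resize((512, 512)).save("inputs/mask_512.png")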

What is the training data?

Hi. I wonder which dataset you are using. Also, during training, how do you get the location of the mask that is aligned with the text description? Thanks!

Wrong result with the demo

$ python scripts/text_editing.py --prompt 'text "ECCV"' --init_image "inputs/img.png" --mask "inputs/mask.png"

When I use your demo to generate the text "ECCV", the word "text" is also generated in the image, which differs from the results in your paper. Could you please tell me the reason? Thanks a lot.

The link for the pretrained weight is not working

Hello.

Thank you for sharing this great work with the community.

I tried to download the pre-trained weights but got a time-out error.

So I tried to visit your lab website, but it is also not working.

Would you please check this issue?

Thank you.

Best,
CK
