tencentarc / brushnet

The official implementation of the paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion"

Home Page: https://tencentarc.github.io/BrushNet/

License: Other

Languages: Python 99.90%, Dockerfile 0.08%, Makefile 0.02%
Topics: diffusion, diffusion-models, image-inpainting, text-to-image

brushnet's Issues

Comparison with the 9-channel ControlNet inpainting model

Dear developer,

Thank you for your wonderful work.


The paper seems to compare only against the 4-channel ControlNet inpainting model.
Have you compared against the 9-channel ControlNet inpainting model, i.e. where the UNet input itself has 9 channels?

Best wishes.
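For context, the "9 channels" here refers to the stock Stable Diffusion inpainting UNet, whose conv_in takes the noisy latents concatenated with the downsampled mask and the masked-image latents. A minimal illustration of those shapes (a sketch for reference, not code from this repo):

import torch

# 9-channel inpainting UNet input: 4ch noisy latents + 1ch mask + 4ch masked-image latents
# (shapes assume 512x512 images, i.e. 64x64 latents)
noisy_latents = torch.randn(1, 4, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
masked_image_latents = torch.randn(1, 4, 64, 64)
unet_input = torch.cat([noisy_latents, mask, masked_image_latents], dim=1)
print(unet_input.shape)  # torch.Size([1, 9, 64, 64])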

strength option

Is there an option like strength? I want the masked region to keep the colors of the original image.
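For reference, the stock diffusers inpaint pipeline exposes a strength parameter: values below 1.0 start denoising from a partially-noised copy of the original, so the masked region keeps more of the source colors. A minimal sketch (file names are hypothetical, and whether the BrushNet pipeline accepts the same parameter is an assumption):

from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
init_image = load_image("photo.png")  # hypothetical input image
mask_image = load_image("mask.png")   # hypothetical mask (white = region to inpaint)
# strength < 1.0 preserves more of the original content and color in the masked area
result = pipe("a red sofa", image=init_image, mask_image=mask_image, strength=0.6).images[0]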

Why doesn't the mask work?

I used the mask to regenerate the background, but the model changed my foreground!
[input image and mask]

The result is:
[output image]
You can see that the text in the foreground has changed!

Uncensored

Hmm, I got "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed."

How can I remove the filter, or can someone make a fork without the filter?
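For what it's worth, with standard diffusers pipelines the filter can be dropped at load time. A minimal sketch (whether the BrushNet pipeline accepts the same arguments is an assumption; use responsibly):

from diffusers import StableDiffusionPipeline

# Passing safety_checker=None disables the NSFW filter, so flagged images are
# returned as-is instead of being replaced with black frames.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,
    requires_safety_checker=False,
)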

Flexible control params not supported?

Model/Pipeline/Scheduler description

In Section 4.3 of the paper you mention a blending operation, but I don't see it in the code? (A sketch of the operation follows this template.)

Open source status

  • The model implementation is available.
  • The model weights are available (Only relevant if addition is not a scheduler).

Provide useful links for the implementation

No response
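For readers with the same question: blending in inpainting pipelines usually means pasting the unmasked region of the original image back over the generated result, often feathering the seam with a blurred mask. A minimal pixel-space sketch of that idea (illustrative only, not the repo's actual implementation; function name and kernel size are hypothetical):

import cv2
import numpy as np

def blurred_blend(original, generated, mask, ksize=21):
    # mask: float array in [0, 1], with 1 inside the inpainted region
    m = cv2.GaussianBlur(mask.astype(np.float32), (ksize, ksize), 0)[..., None]
    out = m * generated.astype(np.float32) + (1.0 - m) * original.astype(np.float32)
    return out.astype(np.uint8)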

BrushNet with other ControlNet models.

Thanks for your great work!
I just want to confirm one thing: is it possible to use the BrushNet inpainting model together with other ControlNet models, such as Canny or segmentation, to get better outputs or for other purposes?

cannot import name 'StableDiffusionBrushNetPipeline' from 'diffusers'

Describe the bug

Running examples/brushnet/app_brushnet.py fails with this error message:
File "/media/iwoolf/tenT/BrushNet/./examples/brushnet/app_brushnet.py", line 10, in
from diffusers import StableDiffusionBrushNetPipeline, BrushNetModel, UniPCMultistepScheduler
ImportError: cannot import name 'StableDiffusionBrushNetPipeline' from 'diffusers' (/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/diffusers/__init__.py)

I have tried uninstalling and reinstalling diffusers, but it made no difference.
I'm running Ubuntu 22.04.4 LTS. (A diagnostic note follows this report.)

Reproduction

python examples/brushnet/app_brushnet.py
/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/transformers/utils/generic.py:485: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/transformers/utils/generic.py:342: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/transformers/utils/generic.py:342: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
Traceback (most recent call last):
File "/media/iwoolf/tenT/BrushNet/examples/brushnet/app_brushnet.py", line 10, in
from diffusers import StableDiffusionBrushNetPipeline, BrushNetModel, UniPCMultistepScheduler
ImportError: cannot import name 'StableDiffusionBrushNetPipeline' from 'diffusers' (/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/diffusers/__init__.py)

Logs

Instructions, please?

System Info

  • diffusers version: 0.27.2
  • Platform: Linux-5.15.0-102-generic-x86_64-with-glibc2.35
  • Python version: 3.9.19
  • PyTorch version (GPU?): 2.2.2+cu121 (True)
  • Huggingface_hub version: 0.22.2
  • Transformers version: 4.39.3
  • Accelerate version: 0.20.3
  • xFormers version: 0.0.25.post1

Who can help?

@yiyixuxu @DN6 @sayakpaul
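A note that may help here: the BrushNet repo ships its own modified copy of diffusers, and the traceback shows the import resolving to the stock PyPI release (0.27.2) in site-packages, which does not contain the BrushNet classes. A quick diagnostic sketch:

import diffusers

# If this prints a site-packages path rather than a path inside the BrushNet
# checkout, the stock diffusers is shadowing the repo's modified copy; installing
# the repo's copy into the environment (e.g. pip install -e . from its root) is
# the usual remedy.
print(diffusers.__file__)
print(hasattr(diffusers, "StableDiffusionBrushNetPipeline"))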

mask value problem

Code

I've noticed a detail: during training the mask is resized with INTER_CUBIC, so the latent-resolution mask has continuous values between 0 and 1; during inference, however, the mask always uses the discrete values 0 and 1. Is this a problem?
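The difference the question describes is easy to reproduce; a minimal sketch:

import cv2
import numpy as np

mask = np.zeros((512, 512), dtype=np.float32)
mask[100:300, 100:300] = 1.0

# INTER_CUBIC produces fractional values at the mask boundary (and can overshoot
# slightly outside [0, 1]); INTER_NEAREST keeps the mask strictly binary.
cubic = cv2.resize(mask, (64, 64), interpolation=cv2.INTER_CUBIC)
nearest = cv2.resize(mask, (64, 64), interpolation=cv2.INTER_NEAREST)
print(np.unique(nearest))                 # [0. 1.]
print(cubic.min(), cubic.max(), np.unique(cubic).size)  # non-binary values appear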

Often get all zero image when inpainting mask is small

Thank you for open-sourcing this work; the results are wonderful!
But I met a problem when I tried to edit cartoon images:

  • Checkpoint: AnythingV5_Ink.
  • BrushNet model: random mask.
  • Prompt: 1girl, 1boy with mouth open (or closed)
  • Image and mask:
[attached image]

I often get a result that is all zeros:
[attached image]

Adjusting the prompt doesn't help.
When I switch to another checkpoint, like dreamshaper_v8, the result is reasonable (but not the cartoon style I want).

So why are the results so often all zeros with AnythingV5_Ink? It seems the model has collapsed.
Do you have any idea how to avoid this?
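Not an official diagnosis, but all-black outputs with anime-style community checkpoints are often a half-precision VAE overflow (or the safety checker, as in the Uncensored issue above) rather than a collapsed model. A common mitigation is swapping in the sd-vae-ft-mse VAE in fp32; whether this fixes AnythingV5_Ink here is an assumption:

import torch
from diffusers import AutoencoderKL

# Swap in a more numerically stable VAE, run in fp32, to rule out
# half-precision overflow producing black frames.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float32)
pipe.vae = vae  # `pipe` is the already-loaded BrushNet pipeline (hypothetical)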

HuggingFace Demo Fails to Load

Describe the bug

When I visit the HuggingFace demo page here: https://huggingface.co/spaces/TencentARC/BrushNet

It fails to load with the message: Runtime error: Memory limit exceeded (46Gi).

Reproduction

Simply visit: https://huggingface.co/spaces/TencentARC/BrushNet

Logs

===== Application Startup at 2024-04-02 14:01:10 =====

Installing correct gradio version...
Found existing installation: gradio 3.50.2
Uninstalling gradio-3.50.2:
  Successfully uninstalled gradio-3.50.2
Collecting gradio==3.50.0
  Downloading gradio-3.50.0-py3-none-any.whl (20.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.3/20.3 MB 100.7 MB/s eta 0:00:00
Requirement already satisfied: websockets<12.0,>=10.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (11.0.3)
Requirement already satisfied: packaging in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (24.0)
Requirement already satisfied: pyyaml<7.0,>=5.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (6.0.1)
Requirement already satisfied: aiofiles<24.0,>=22.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (23.2.1)
Requirement already satisfied: jinja2<4.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (3.1.3)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,<3.0.0,>=1.7.4 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (1.10.14)
Requirement already satisfied: ffmpy in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (0.3.2)
Requirement already satisfied: orjson~=3.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (3.10.0)
Requirement already satisfied: matplotlib~=3.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (3.8.3)
Requirement already satisfied: python-multipart in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (0.0.9)
Requirement already satisfied: gradio-client==0.6.1 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (0.6.1)
Requirement already satisfied: semantic-version~=2.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (2.10.0)
Requirement already satisfied: pandas<3.0,>=1.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (2.2.1)
Requirement already satisfied: huggingface-hub>=0.14.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (0.22.1)
Requirement already satisfied: httpx in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (0.27.0)
Requirement already satisfied: altair<6.0,>=4.2.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (5.2.0)
Requirement already satisfied: pillow<11.0,>=8.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (9.5.0)
Requirement already satisfied: typing-extensions~=4.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (4.10.0)
Requirement already satisfied: uvicorn>=0.14.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (0.29.0)
Requirement already satisfied: pydub in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (0.25.1)
Requirement already satisfied: numpy~=1.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (1.26.4)
Requirement already satisfied: importlib-resources<7.0,>=1.3 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (6.4.0)
Requirement already satisfied: fastapi in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (0.110.0)
Requirement already satisfied: markupsafe~=2.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (2.1.5)
Requirement already satisfied: requests~=2.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio==3.50.0) (2.31.0)
Requirement already satisfied: fsspec in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from gradio-client==0.6.1->gradio==3.50.0) (2024.2.0)
Requirement already satisfied: toolz in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from altair<6.0,>=4.2.0->gradio==3.50.0) (0.12.1)
Requirement already satisfied: jsonschema>=3.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from altair<6.0,>=4.2.0->gradio==3.50.0) (4.21.1)
Requirement already satisfied: filelock in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from huggingface-hub>=0.14.0->gradio==3.50.0) (3.13.3)
Requirement already satisfied: tqdm>=4.42.1 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from huggingface-hub>=0.14.0->gradio==3.50.0) (4.66.2)
Requirement already satisfied: zipp>=3.1.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from importlib-resources<7.0,>=1.3->gradio==3.50.0) (3.18.1)
Requirement already satisfied: kiwisolver>=1.3.1 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from matplotlib~=3.0->gradio==3.50.0) (1.4.5)
Requirement already satisfied: contourpy>=1.0.1 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from matplotlib~=3.0->gradio==3.50.0) (1.2.0)
Requirement already satisfied: python-dateutil>=2.7 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from matplotlib~=3.0->gradio==3.50.0) (2.9.0.post0)
Requirement already satisfied: fonttools>=4.22.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from matplotlib~=3.0->gradio==3.50.0) (4.50.0)
Requirement already satisfied: cycler>=0.10 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from matplotlib~=3.0->gradio==3.50.0) (0.12.1)
Requirement already satisfied: pyparsing>=2.3.1 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from matplotlib~=3.0->gradio==3.50.0) (3.1.2)
Requirement already satisfied: tzdata>=2022.7 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from pandas<3.0,>=1.0->gradio==3.50.0) (2024.1)
Requirement already satisfied: pytz>=2020.1 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from pandas<3.0,>=1.0->gradio==3.50.0) (2024.1)
Requirement already satisfied: idna<4,>=2.5 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from requests~=2.0->gradio==3.50.0) (3.6)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from requests~=2.0->gradio==3.50.0) (3.3.2)
Requirement already satisfied: certifi>=2017.4.17 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from requests~=2.0->gradio==3.50.0) (2024.2.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from requests~=2.0->gradio==3.50.0) (2.2.1)
Requirement already satisfied: click>=7.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from uvicorn>=0.14.0->gradio==3.50.0) (8.0.4)
Requirement already satisfied: h11>=0.8 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from uvicorn>=0.14.0->gradio==3.50.0) (0.14.0)
Requirement already satisfied: starlette<0.37.0,>=0.36.3 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from fastapi->gradio==3.50.0) (0.36.3)
Requirement already satisfied: sniffio in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from httpx->gradio==3.50.0) (1.3.1)
Requirement already satisfied: httpcore==1.* in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from httpx->gradio==3.50.0) (1.0.5)
Requirement already satisfied: anyio in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from httpx->gradio==3.50.0) (4.3.0)
Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from jsonschema>=3.0->altair<6.0,>=4.2.0->gradio==3.50.0) (2023.12.1)
Requirement already satisfied: referencing>=0.28.4 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from jsonschema>=3.0->altair<6.0,>=4.2.0->gradio==3.50.0) (0.34.0)
Requirement already satisfied: rpds-py>=0.7.1 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from jsonschema>=3.0->altair<6.0,>=4.2.0->gradio==3.50.0) (0.18.0)
Requirement already satisfied: attrs>=22.2.0 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from jsonschema>=3.0->altair<6.0,>=4.2.0->gradio==3.50.0) (23.2.0)
Requirement already satisfied: six>=1.5 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from python-dateutil>=2.7->matplotlib~=3.0->gradio==3.50.0) (1.16.0)
Requirement already satisfied: exceptiongroup>=1.0.2 in /home/user/.pyenv/versions/3.9.19/lib/python3.9/site-packages (from anyio->httpx->gradio==3.50.0) (1.2.0)
Installing collected packages: gradio
Successfully installed gradio-3.50.0

[notice] A new release of pip available: 22.3.1 -> 24.0
[notice] To update, run: pip install --upgrade pip
Installing Finished!

System Info

I ran this on the web

Who can help?

No response

Prompt details about evaluating on EditBench

Hi, what amazing work!
I found that you evaluated BrushNet on EditBench in your paper. In the EditBench annotation file there are several kinds of prompts: prompt_full, prompt_mask-simple, prompt_mask-rich, and the like. Which prompt did you use when evaluating?

Is it possible to guide inpainting using ControlNet?

Model/Pipeline/Scheduler description

Hi, I am trying to inpaint a person in a specific pose. Is it possible to use ControlNet with this repo? (A reference sketch follows this template.)

Open source status

  • The model implementation is available.
  • The model weights are available (Only relevant if addition is not a scheduler).

Provide useful links for the implementation

No response
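For reference, the snippet below is the stock diffusers pattern for pose guidance with a standalone ControlNet; whether BrushNet's pipeline can take such a ControlNet alongside its own branch is exactly the open question in this issue:

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Standard diffusers ControlNet loading (openpose variant shown).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)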

Adapt to AUTOMATIC1111

Thanks for your great work. Do you have a plan to adapt BrushNet to the AUTOMATIC1111 sd-webui?

Random brush model release?

Model/Pipeline/Scheduler description

Hi,

Thank you for releasing the 1.5 model for segmentation-based inpainting. It works pretty well, but not as well for masks that don't follow segmentation boundaries. Do you plan on releasing the random-brush model as well?

Thanks

Open source status

  • The model implementation is available.
  • The model weights are available (Only relevant if addition is not a scheduler).

Provide useful links for the implementation

No response

weight question

Describe the bug

I downloaded the weights you provided, but there seem to be some issues with them (a loading note follows this report):

ValueError: Cannot load <class 'diffusers.models.brushnet.BrushNetModel'> from /model/BrushNet/unet because the following keys are missing:
brushnet_up_blocks.11.weight, brushnet_up_blocks.4.bias, brushnet_up_blocks.10.bias, brushnet_down_blocks.9.bias, brushnet_up_blocks.6.weight, brushnet_down_blocks.3.bias, brushnet_down_blocks.5.bias, brushnet_up_blocks.7.bias, brushnet_up_blocks.3.bias, brushnet_up_blocks.12.weight, brushnet_down_blocks.8.weight, brushnet_up_blocks.13.weight, brushnet_down_blocks.6.bias, brushnet_up_blocks.9.bias, brushnet_down_blocks.11.bias, brushnet_down_blocks.5.weight, brushnet_down_blocks.4.bias, brushnet_mid_block.bias, brushnet_down_blocks.2.weight, brushnet_down_blocks.3.weight, brushnet_down_blocks.1.bias, brushnet_down_blocks.7.bias, brushnet_down_blocks.0.bias, brushnet_up_blocks.12.bias, brushnet_up_blocks.7.weight, brushnet_up_blocks.8.bias, brushnet_up_blocks.2.bias, brushnet_up_blocks.14.weight, brushnet_up_blocks.0.weight, brushnet_up_blocks.11.bias, brushnet_up_blocks.3.weight, brushnet_up_blocks.1.bias, brushnet_up_blocks.6.bias, brushnet_up_blocks.4.weight, brushnet_down_blocks.4.weight, brushnet_up_blocks.0.bias, brushnet_down_blocks.0.weight, brushnet_up_blocks.2.weight, brushnet_up_blocks.10.weight, brushnet_mid_block.weight, brushnet_up_blocks.13.bias, brushnet_up_blocks.1.weight, brushnet_up_blocks.14.bias, brushnet_down_blocks.7.weight, brushnet_up_blocks.8.weight, brushnet_up_blocks.9.weight, brushnet_down_blocks.1.weight, brushnet_down_blocks.10.weight, conv_in_condition.bias, brushnet_down_blocks.8.bias, conv_in_condition.weight, brushnet_up_blocks.5.bias, brushnet_down_blocks.2.bias, brushnet_down_blocks.6.weight, brushnet_down_blocks.11.weight, brushnet_down_blocks.10.bias, brushnet_down_blocks.9.weight, brushnet_up_blocks.5.weight.
Please make sure to pass low_cpu_mem_usage=False and device_map=None if you want to randomly initialize those weights or else make sure your checkpoint file is correct.

Reproduction

realisticVisionV60B1_v51VAE

Logs

No response

System Info

  • diffusers version: 0.27.0
  • Platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.17
  • Python version: 3.8.18
  • PyTorch version (GPU?): 1.12.1+cu116 (True)
  • Huggingface_hub version: 0.21.4
  • Transformers version: 4.38.2
  • Accelerate version: 0.20.3
  • xFormers version: not installed
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

No response
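One observation on this traceback: the path being loaded ends in /unet, i.e. the base model's UNet folder, whose keys cannot match BrushNetModel. A hedged sketch of the expected split (directory names are illustrative; the imports assume the repo's bundled diffusers fork):

import torch
from diffusers import BrushNetModel, StableDiffusionBrushNetPipeline

# Load BrushNet from its own checkpoint directory, not from the base
# checkpoint's unet/ folder.
brushnet = BrushNetModel.from_pretrained(
    "ckpts/segmentation_mask_brushnet_ckpt", torch_dtype=torch.float16
)
pipe = StableDiffusionBrushNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", brushnet=brushnet, torch_dtype=torch.float16
)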

Trying to install BrushNet on my local PC, but hit a clip installation problem

I just followed the instructions to install it step by step. During pip install -r requirements.txt, it stopped while installing clip, and the following issue appeared:

(brushnet) PS D:\brushnet\examples\brushnet> pip install -r requirements.txt
Requirement already satisfied: torchvision in c:\users\dennis.conda\envs\brushnet\lib\site-packages (from -r requirements.txt (line 1)) (0.18.0+cu121)
Collecting transformers>=4.25.1 (from -r requirements.txt (line 2))
Using cached transformers-4.40.2-py3-none-any.whl.metadata (137 kB)
Collecting ftfy (from -r requirements.txt (line 3))
Using cached ftfy-6.2.0-py3-none-any.whl.metadata (7.3 kB)
Collecting tensorboard (from -r requirements.txt (line 4))
Using cached tensorboard-2.16.2-py3-none-any.whl.metadata (1.6 kB)
Collecting datasets (from -r requirements.txt (line 5))
Using cached datasets-2.19.1-py3-none-any.whl.metadata (19 kB)
Collecting Pillow==9.5.0 (from -r requirements.txt (line 6))
Using cached Pillow-9.5.0-cp310-cp310-win_amd64.whl.metadata (9.7 kB)
Collecting opencv-python (from -r requirements.txt (line 7))
Using cached opencv_python-4.9.0.80-cp37-abi3-win_amd64.whl.metadata (20 kB)
Collecting imgaug (from -r requirements.txt (line 8))
Using cached imgaug-0.4.0-py2.py3-none-any.whl.metadata (1.8 kB)
Collecting accelerate==0.20.3 (from -r requirements.txt (line 9))
Using cached accelerate-0.20.3-py3-none-any.whl.metadata (17 kB)
Collecting image-reward (from -r requirements.txt (line 10))
Using cached image_reward-1.5-py3-none-any.whl.metadata (12 kB)
Collecting hpsv2 (from -r requirements.txt (line 11))
Using cached hpsv2-1.2.0-py3-none-any.whl.metadata (17 kB)
Collecting torchmetrics (from -r requirements.txt (line 12))
Using cached torchmetrics-1.4.0.post0-py3-none-any.whl.metadata (19 kB)
Collecting open-clip-torch (from -r requirements.txt (line 13))
Using cached open_clip_torch-2.24.0-py3-none-any.whl.metadata (30 kB)
Collecting gradio==3.50.0 (from -r requirements.txt (line 14))
Using cached gradio-3.50.0-py3-none-any.whl.metadata (17 kB)
Collecting segment_anything (from -r requirements.txt (line 15))
Using cached segment_anything-1.0-py3-none-any.whl.metadata (487 bytes)
Collecting clip (from -r requirements.txt (line 17))
Using cached clip-0.2.0.tar.gz (5.5 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "", line 2, in
File "", line 34, in
File "C:\Users\Dennis\AppData\Local\Temp\pip-install-d5yr6j82\clip_379ab9db544e420aa4978ca9d021dd6e\setup.py", line 29, in
long_description=open('README.rst').read() + '\n\n' +
UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 1428: illegal multibyte sequence
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

What can I do next? Thank you.
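The UnicodeDecodeError comes from the clip package's setup.py reading its README with Windows' default cp950 codec. Not an official fix, but forcing Python's UTF-8 mode for the build subprocess is a common workaround; a sketch:

import os
import subprocess
import sys

# PYTHONUTF8=1 makes the pip build subprocess open files as UTF-8,
# sidestepping the cp950 decode failure in the package's setup.py.
env = dict(os.environ, PYTHONUTF8="1")
subprocess.run([sys.executable, "-m", "pip", "install", "clip"], env=env, check=True)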

How much GPU memory is required?

This is a great framework that can be applied in many areas!
Regarding the implementation details in the paper, I see that training used 8 V100 GPUs. How much GPU memory did that require? Is it possible to train with only 24 GB of VRAM?

Finetune SDXL

Thanks for your good work. I used the provided SDXL weights to fine-tune on my own data, but the loss doesn't seem to converge, and I wonder whether the provided weights were trained at 1024 resolution. When I test the fine-tuned model, it cannot learn the style of the training data. Do you have any advice? Thanks.

remove object

Hi, thanks for your code. I have a question: if I want to remove an object (for example, this blueberry), how should I write the prompt? If I write "remove the blueberry" it might not work well.
[attached image]

Create Custom BrushData set.

Thanks for your novel work!
I would like to fine-tune your model on my custom dataset, but this repo has no detailed information about how to create my own BrushData set.
I downloaded a few of the files, and when I extracted them I saw something like this:
[screenshot of extracted BrushData contents]

So I am quite confused: how can I convert my (image, masks, caption) data to this format?
I would really appreciate some information about it. Thanks!!

Can you provide the code for data preprocessing?


I noticed you preprocessed the images and generated several features for each image. Could you provide the preprocessing code? Thanks a lot!


Why does BrushNet need the noise latent as part of its condition?


Question about training

Your work is great! I have some questions about training. I want to train BrushNet on my own data, and I see the default number of training epochs is 10000. I also see the config.json in your provided model weights:
random_mask_brushnet_ckpt: "runs/logs/brushnet_randommask/checkpoint-100000"
segmentation_mask_brushnet_ckpt: "runs/logs/brushnet_segmask/checkpoint-550000"
It seems that the other models also correspond to different numbers of training steps.

So can I generally use 10000 training epochs, or do I need to choose based on the loss values? The paper says: "BrushNet and all ablation models are trained for 430 thousand steps on 8 NVIDIA Tesla V100 GPUs, which takes around 3 days". In my case, training seems to take much longer than that.

Thx for your reply.

When will the checkpoint (SD v1.5) be released?

Model/Pipeline/Scheduler description

Hello! Thanks for your excellent work. I want to ask when the checkpoint will be released? Thanks a lot :-)

Open source status

  • The model implementation is available.
  • The model weights are available (Only relevant if addition is not a scheduler).

Provide useful links for the implementation

No response

The added object has a strange shape

[rabbit example]
[bird example]
When I tried the HuggingFace demo, I found that the shape of the generated object always closely follows the shape of the mask, which looks very strange. Why?
