
uminosachi / sd-webui-inpaint-anything

923 stars · 14 watchers · 84 forks · 3.5 MB

Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.

License: Apache License 2.0

Python 97.99% JavaScript 2.01%
ai-art anything diffusers diffusion generative-art gradio huggingface huggingface-diffusers image-generation image2image

sd-webui-inpaint-anything's People

Contributors

khatamnejad, sj-si, uminosachi, vwuerz


sd-webui-inpaint-anything's Issues

Impossible to get it to show up in the tabs, I already tried everything.

Already tried:

#31

#2

And no errors on the console:

venv "C:\Users\ZeroCool22\Desktop\Auto\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.3.2
Commit hash: baf6946e06249c5af9851c60171692c44ef633e0
Installing requirements
[Auto-Photoshop-SD] Attempting auto-update...
[Auto-Photoshop-SD] switch branch to extension branch.
checkout_result: Your branch is up to date with 'origin/master'.

[Auto-Photoshop-SD] Current Branch.
branch_result: * master

[Auto-Photoshop-SD] Fetch upstream.
fetch_result:
[Auto-Photoshop-SD] Pull upstream.
pull_result: Already up to date.



Fetching updates for midas...
Checking out commit for midas with hash: 1645b7e...



Launching Web UI with arguments: --xformers --api --vae-path C:\Users\ZeroCool22\Desktop\Auto\models\Stable-diffusion\vae-ft-mse-840000-ema-pruned.ckpt
C:\Users\ZeroCool22\Desktop\Auto\venv\lib\site-packages\pkg_resources\__init__.py:123: PkgResourcesDeprecationWarning: llow is an invalid version and will not be supported in a future release
  warnings.warn(
python_server_full_path:  C:\Users\ZeroCool22\Desktop\Auto\extensions\Auto-Photoshop-StableDiffusion-Plugin\server/python_server
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\Users\ZeroCool22\Desktop\Auto\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
2023-06-08 18:39:00,086 - ControlNet - INFO - ControlNet v1.1.220
ControlNet preprocessor location: C:\Users\ZeroCool22\Desktop\Auto\extensions\sd-webui-controlnet\annotator\downloads
2023-06-08 18:39:00,170 - ControlNet - INFO - ControlNet v1.1.220
Loading weights [c26f4c4227] from C:\Users\ZeroCool22\Desktop\Auto\models\Stable-diffusion\icbinpICantBelieveIts_v7.safetensors
Creating model from config: C:\Users\ZeroCool22\Desktop\Auto\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights from commandline argument: C:\Users\ZeroCool22\Desktop\Auto\models\Stable-diffusion\vae-ft-mse-840000-ema-pruned.ckpt
Applying optimization: xformers... done.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 11.2s (import torch: 3.1s, import gradio: 0.8s, import ldm: 0.3s, other imports: 0.9s, load scripts: 1.9s, create ui: 3.7s, gradio launch: 0.1s).
Textual inversion embeddings loaded(4): bad-hands-5, bad-image-v2-39000, bad_prompt_version2, EasyNegative
Textual inversion embeddings skipped(3): nartfixer, nfixer, nrealfixer
Model loaded in 4.5s (load weights from disk: 0.2s, create model: 0.3s, apply weights to model: 0.8s, apply half(): 1.0s, load VAE: 0.3s, move model to device: 0.6s, load textual inversion embeddings: 1.2s).

AUTO's version:

commit baf6946e06249c5af9851c60171692c44ef633e0 (HEAD -> master, tag: v1.3.2, origin/release_candidate, origin/master, origin/HEAD)
Merge: b6af0a38 6f754ab9

(Screenshots attached: Screenshot_2, Screenshot_3, Screenshot_5)

Segmented image disappears in a split second

Whenever I run Segment Anything, it works exactly like it should, but the image on the right side only comes up for a few milliseconds and then it's gone. I reinstalled the extension, disabled all other extensions, and restarted the webui, but the problem remains. Any solution?

Error without any info.

(screenshot attached)

input_image: (512, 512, 3) uint8
SamAutomaticMaskGenerator sam_vit_l_0b3195.pth

512x512 image, 24 GB VRAM, no errors in cmd.

Can't run it offline; can we run it offline in automatic1111?

Clicking Run Inpainting without an internet connection does not work, because the extension tries to connect to huggingface even when the necessary files have already been downloaded.

Can this be fixed? I noticed this because I have been using a mobile connection tethered to my desktop and using my phone to connect to automatic1111.

Logs:

Traceback (most recent call last):
File "C:\Users\ADMIN\sstable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\connection.py", line 72, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x0000020C95EBCF40>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 439, in send
resp = conn.urlopen(
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "C:\Users\ADMIN\table-diffusion-webui\venv\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/stabilityai/stable-diffusion-2-inpainting (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000020C95EBCF40>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 414, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "C:\Users\ADMINstable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\ADMIN\seait.v0.1.2\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\ADMIN\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\main.py", line 357, in run_inpaint
pipe = StableDiffusionInpaintPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 604, in from_pretrained
info = model_info(
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\hf_api.py", line 1603, in model_info
r = get_session().get(path, headers=headers, timeout=timeout, params=params)
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\Users\ADMIN\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/stabilityai/stable-diffusion-2-inpainting (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000020C95EBCF40>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
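A note on the cause: the traceback shows the failure starting in StableDiffusionInpaintPipeline.from_pretrained, which queries the Hugging Face API before loading even a cached model. A minimal sketch of a possible workaround, assuming the model files are already in the local cache (local_files_only and HF_HUB_OFFLINE are standard diffusers/huggingface_hub options, not settings of this extension):

import os

# Tell huggingface_hub not to attempt any network lookups; the files must
# already exist in the local cache (~/.cache/huggingface/hub by default).
os.environ["HF_HUB_OFFLINE"] = "1"

import torch
from diffusers import StableDiffusionInpaintPipeline

# local_files_only=True skips the https://huggingface.co/api/models/... call
# that raises ConnectionError when there is no network connection.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
    local_files_only=True,
)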

lama cleaner

It can't find lama cleaner; it tells me to read the readme and install it manually. Any ideas on what could be occurring?

Can't find Inpaint Anything in my local webui

Dear developer,

Firstly, thank you for open-sourcing and sharing this! I installed this extension on my local webui according to the instructions in the document, but after restarting the webui, I couldn't find it in the interface. I would like to ask what the cause of this may be.

Best wishes

Can't run segment anything

2023-07-09 05:44:30,912 - Inpaint Anything - INFO - input_image: (838, 672, 3) uint8
2023-07-09 05:44:31,675 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_b_01ec64.pth
2023-07-09 05:44:32,432 - Inpaint Anything - ERROR - The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1

Cleaner tab not working

  • I installed lama in the venv
  • I select the image and the mask
  • when I click any of the options in the Cleaner tab, nothing happens (tried Lama and all the other options in the dropdown);
    no errors in the command window, nor in the gradio interface

The extension works in terms of creating Segment Anything masks, but the inpainting and cleaner components don't seem to be working, so maybe something in the setup is off.

Cuda out of Memory Issue

I used the smallest model (sam_vit_b_01ec64.pth) and disabled all extensions, but I still get the same error. How do I fix that? Any idea?
(screenshot attached: Screenshot_31)

ControlNet inpaint model is not available. Requires the ControlNet-v1-1 inpaint model in the extensions\sd-webui-controlnet\models directory.

Hi,

I have control_v11p_sd15_inpaint.pth in stable-diffusion-webui\extensions\sd-webui-controlnet\models but still get this error. I suspect I need control_v11p_sd15_inpaint_fp16.safetensors, like shown in the new Nerdy Rodent video, but all the provided links point to the control_v11p_sd15_inpaint.pth file here:

https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

Best, Matthias

Cuda out of memory and not freeing memory after error

I tried to run the large model and then the base model, but both only return a "CUDA out of memory" error. On top of that, they do not free the allocated memory or reset the segmentation process, so I have to restart both the server and the interface to recover from the error.
How can I make it work without errors?
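For context on why the memory stays taken: PyTorch only returns cached CUDA memory once the Python references to the model are dropped. A minimal sketch of the usual recovery pattern, assuming the SAM model is held in a variable (the name sam_model is hypothetical; whether this helps depends on where the extension actually keeps its references):

import gc

import torch

def release_cuda_memory(sam_model):
    # Move the weights off the GPU, drop the reference, then hand the
    # cached blocks back to the CUDA driver.
    sam_model.to("cpu")
    del sam_model
    gc.collect()
    torch.cuda.empty_cache()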

"Replace anything" functionality

Hi, thanks for this great extension! One of my main use cases is the "Replace Anything" feature, which changes the background: https://github.com/geekyutao/Inpaint-Anything#-replace-anything. Any chance this will be added / is easy to add? (I saw a related issue for "Remove Anything", but that's the inverse and is implemented differently.)

I think this would be useful for a lot of people who are just trying to use the vanilla inpaint function for this right now!

Error loading

Getting errors for this one. First time I've ever had errors with an extension. Disabling it makes the error go away of course.

Error loading script: main.py
Traceback (most recent call last):
File "/home/ubuntu/stable-diffusion-webui/modules/scripts.py", line 257, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/ubuntu/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/home/ubuntu/stable-diffusion-webui/extensions/sd-webui-inpaint-anything/scripts/main.py", line 6, in
from diffusers import StableDiffusionInpaintPipeline, DDIMScheduler
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/diffusers/init.py", line 55, in
from .pipelines import (
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/init.py", line 46, in
from .paint_by_example import PaintByExamplePipeline
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/paint_by_example/init.py", line 13, in
from .pipeline_paint_by_example import PaintByExamplePipeline
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py", line 29, in
from ..stable_diffusion import StableDiffusionPipelineOutput
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/init.py", line 81, in
from .pipeline_stable_diffusion_pix2pix_zero import StableDiffusionPix2PixZeroPipeline
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py", line 23, in
from transformers import (
ImportError: cannot import name 'BlipForConditionalGeneration' from 'transformers' (/home/ubuntu/miniconda3/lib/python3.10/site-packages/transformers/init.py)

Error on latest Development version of A1111 - (Reporting in case it may prevent an upcoming problem)

Thanks again for creating such an amazing extension!

I wanted to report an issue running the extension on the latest development branch of A1111. I have both the current commit and the latest development version installed; the latter fixed an unrelated issue, so I've switched to it. "Inpaint Anything" works fine on the current version, but on the dev version (available here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commits/dev) I receive the following error when trying to "Run Segment Anything":

(2 different attempts to run w/ different images that both work fine in the current main branch of A1111)

input_image: (540, 960, 3) uint8
SamAutomaticMaskGenerator sam_vit_l_0b3195.pth
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\main.py", line 236, in run_sam
sam_masks = sam_mask_generator.generate(input_image)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\segment_anything\automatic_mask_generator.py", line 163, in generate
mask_data = self._generate_masks(image)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\segment_anything\automatic_mask_generator.py", line 206, in _generate_masks
crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\segment_anything\automatic_mask_generator.py", line 251, in _process_crop
keep_by_nms = batched_nms(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 75, in batched_nms
return _batched_nms_coordinate_trick(boxes, scores, idxs, iou_threshold)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\jit_trace.py", line 1220, in wrapper
return fn(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 94, in _batched_nms_coordinate_trick
keep = nms(boxes_for_nms, scores, iou_threshold)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch_ops.py", line 502, in call
return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]

input_image: (1024, 1024, 3) uint8
SamAutomaticMaskGenerator sam_vit_l_0b3195.pth
(The traceback for this second image is identical to the one above.)


I realize this might not be an issue you would address before the development commits are officially merged into the main branch, but I wanted to give a heads-up in case it's an error you may have to face in the near future. It'd be great if there were an easy solution (as I could use a fix for the dev build I'm running), but regardless, I wanted to mention it. Please close if this is out of scope at this point.

Thanks!!

(BTW, I did try toggling the new "Enable offline network Inpainting" option in settings and that didn't resolve the error.)
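For anyone else hitting this on the dev branch: this NotImplementedError usually means the installed torchvision was built against a different torch/CUDA combination than the one running, for example after the dev branch pulled a newer torch while the venv kept the old torchvision. A small diagnostic sketch to confirm the mismatch:

import torch
import torchvision

# If torch reports a CUDA version but torchvision was built for a different
# one (or for CPU only), CUDA kernels such as torchvision::nms are missing.
print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("torchvision:", torchvision.__version__)

# Smoke test: NMS on the GPU raises the same NotImplementedError on mismatch.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")
print(torchvision.ops.nms(boxes, scores, 0.5))

Reinstalling torchvision with the wheel that matches the new torch build is the usual fix.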

[Bug] - Extension does not work without XFormers installed - (Instructions suggest it should be optional)

The instructions on the main page seem to indicate that XFormers is optional; however, the extension errors on generating if XFormers is not installed. Many users who have upgraded to PyTorch 2.0+ no longer use XFormers and instead use SDP (which is part of PyTorch 2.0+).

If the extension does require XFormers, please just clarify that. At the moment, the error below appears when trying to run on the current commit of A1111 w/o XFormers installed, and nothing is generated. I have used the extension w/o issues on a different venv/install on a different drive that does have XFormers installed.

Error w/o XFormers:


Python 3.10.9 | packaged by conda-forge | (main, Jan 11 2023, 15:15:40) [MSC v.1916 64 bit (AMD64)]
Version: v1.3.0
Commit hash: 20ae71faa8ef035c31aa3a410b707d792c8203a3
Installing requirements

Installing frame2frame requirement: opencv-python

Launching Web UI with arguments: --opt-sdp-attention --no-half-vae --opt-channelslast --skip-torch-cuda-test --skip-version-check --ckpt-dir e:\Stable Diffusion Checkpoints
No module 'xformers'. Proceeding without it.
ControlNet v1.1.202
ControlNet v1.1.202
Loading weights [6ce0161689] from E:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Create LRU cache (max_size=16) for preprocessor results.
Create LRU cache (max_size=16) for preprocessor results.
Creating model from config: E:\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Create LRU cache (max_size=16) for preprocessor results.
Startup time: 6.6s (import torch: 1.2s, import gradio: 0.8s, import ldm: 0.4s, other imports: 0.6s, list SD models: 0.1s, load scripts: 2.4s, create ui: 0.5s, gradio launch: 0.6s).
Loading VAE weights specified in settings: E:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying optimization: sdp... done.
Textual inversion embeddings loaded(0):
Model loaded in 3.1s (load weights from disk: 0.3s, create model: 1.1s, apply weights to model: 0.5s, apply channels_last: 0.2s, apply half(): 0.3s, load VAE: 0.4s, move model to device: 0.4s).
input_image: (1039, 1920, 3) uint8
SamAutomaticMaskGenerator sam_vit_l_0b3195.pth
runwayml/stable-diffusion-inpainting
vae\diffusion_pytorch_model.safetensors not found
Fetching 16 files: 100%|███████████████████████████████████████████████████████████████████████| 16/16 [00:00<?, ?it/s]
Traceback (most recent call last):
File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 414, in run_predict
output = await app.get_blocks().process_api(
File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "E:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "E:\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "E:\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\main.py", line 365, in run_inpaint
pipe.enable_xformers_memory_efficient_attention()
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1080, in enable_xformers_memory_efficient_attention
self.set_use_memory_efficient_attention_xformers(True, attention_op)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1105, in set_use_memory_efficient_attention_xformers
fn_recursive_set_mem_eff(module)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1096, in fn_recursive_set_mem_eff
module.set_use_memory_efficient_attention_xformers(valid, attention_op)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 219, in set_use_memory_efficient_attention_xformers
fn_recursive_set_mem_eff(module)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 215, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 215, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 215, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 212, in fn_recursive_set_mem_eff
module.set_use_memory_efficient_attention_xformers(valid, attention_op)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 219, in set_use_memory_efficient_attention_xformers
fn_recursive_set_mem_eff(module)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 215, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 215, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 212, in fn_recursive_set_mem_eff
module.set_use_memory_efficient_attention_xformers(valid, attention_op)
File "E:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\cross_attention.py", line 125, in set_use_memory_efficient_attention_xformers
raise ModuleNotFoundError(
ModuleNotFoundError: Refer to https://github.com/facebookresearch/xformers for more information on how to install xformers
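The traceback shows run_inpaint calling pipe.enable_xformers_memory_efficient_attention() unconditionally. A hedged sketch of how such a call could degrade gracefully when xformers is missing (a suggestion, not the extension's actual code):

def try_enable_xformers(pipe):
    # On PyTorch 2.0+ the built-in scaled-dot-product attention is already
    # memory-efficient, so continuing without xformers is a safe fallback.
    try:
        pipe.enable_xformers_memory_efficient_attention()
    except ModuleNotFoundError:
        print("xformers not installed; using the pipeline's default attention")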

webui reloads everything every time I use inpainting

Thank you for your work, this one works terrific!!! Much better than the old workflow using SAM.

The problem I found is that every time after I use inpainting, the whole webui reloads everything again. Even though it doesn't take too long, it's still a bit annoying. Is this my problem, or does this plugin have some other settings I don't know about?

(WeChat screenshot attached: 微信截图_20230611012204)

sd-webui-inpaint-anything RuntimeError

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
(QQ screenshot attached: QQ截图20230706181402)
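The error message itself names the fix: on a machine where torch.cuda.is_available() is False, the checkpoint has to be deserialized onto the CPU. A minimal sketch (the checkpoint filename is illustrative):

import torch

# map_location remaps CUDA-saved tensors onto the CPU so deserialization
# works on a CPU-only machine.
state_dict = torch.load("sam_vit_l_0b3195.pth", map_location=torch.device("cpu"))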

suggested improvements

First of all, congratulations on the extension.
I had been waiting for something like this for a long time, but since I don't know how to program, I couldn't build it myself.
Since I see that you are capable of working with the tool, I would like to ask you to add a feature I have been waiting on for a long time:
being able to add a photo of what you want to insert.

https://huggingface.co/spaces/adirik/ChangeIt

(image attached)

Now change the dress for... these sneakers

(image attached)

and also see if you can add something like this:
https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example
https://github.com/Fantasy-Studio/Paint-by-Example

(image attached)

Thanks!!

ERROR - FIND was unable to find an engine to execute this computation

I got the following errors when I click 'Run Segment Anything':

2023-07-02 09:24:33,263 - Inpaint Anything - INFO - input_image: (1380, 1000, 3) uint8
2023-07-02 09:24:36,724 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_l_0b3195.pth
2023-07-02 09:24:38,271 - Inpaint Anything - ERROR - FIND was unable to find an engine to execute this computation
Loading weights [c6fe5d4a3e] from /home/james/stable-diffusion-webui/models/Stable-diffusion/Realistic_Vision_V2.0/realisticVisionV20_v20NoVAE.ckpt
Creating model from config: /home/james/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: /home/james/stable-diffusion-webui/models/VAE/Anything-V3.0.vae.pt
Applying attention optimization: xformers... done.
No Image data blocks found.
No Image data blocks found.
No Image data blocks found.
Model loaded in 4.1s (create model: 0.7s, apply weights to model: 1.5s, apply half(): 0.5s, load VAE: 0.5s, move model to device: 0.6s, load textual inversion embeddings: 0.3s).

Can someone help me understand this problem?

Error on loading

I'm using a webui repackage app. After installing the sd-webui-inpaint-anything extension and restarting the webui, I get the error message below:

Error loading script: main.py
Traceback (most recent call last):
File "C:\novelai-webui-aki-v2\modules\scripts.py", line 263, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\novelai-webui-aki-v2\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "C:\novelai-webui-aki-v2\extensions\sd-webui-inpaint-anything\scripts\main.py", line 6, in
from diffusers import StableDiffusionInpaintPipeline, DDIMScheduler, UniPCMultistepScheduler
ImportError: cannot import name 'UniPCMultistepScheduler' from 'diffusers' (C:\novelai-webui-aki-v2\py310\lib\site-packages\diffusers_init
.py)

Seems it's a name conflict with 'diffusers'. I can't solve it by myself; please check and help, thanks!

Error loading script: inpaint_anything

The extension was working yesterday without any problem,
but today when I start the WebUI, I get an error from Inpaint Anything.
The WebUI starts, but the Inpaint Anything tab is missing.

Here is the error :

Error loading script: inpaint_anything.py
Traceback (most recent call last):
  File "C:\Users\XXXXXXX\Downloads\stable-diffusion-webui\modules\scripts.py", line 263, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "C:\Users\XXXXXXX\Downloads\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\XXXXXXX\Downloads\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 20, in <module>
    from lama_cleaner.model_manager import ModelManager
  File "C:\Users\XXXXXXX\Downloads\stable-diffusion-webui\venv\lib\site-packages\lama_cleaner\model_manager.py", line 8, in <module>
    from lama_cleaner.model.controlnet import ControlNet
  File "C:\Users\XXXXXXX\Downloads\stable-diffusion-webui\venv\lib\site-packages\lama_cleaner\model\controlnet.py", line 7, in <module>
    from diffusers import ControlNetModel
ImportError: cannot import name 'ControlNetModel' from 'diffusers' (C:\Users\XXXXXXX\Downloads\stable-diffusion-webui\venv\lib\site-packages\diffusers\__init__.py)

I remember I updated extensions last night.
It seems that an update of ControlNet or Inpaint Anything broke the installation.

Thanks all for help

PS: I have tried deleting the Inpaint Anything extension folder and reinstalling it, but that does not fix the problem.

Models download locations unclear

I'm looking for the location where the models for this plugin are stored. The reason is that I have an unstable internet connection (mobile data) that requires manual intervention every 2 GB, so the connection is lost temporarily. I'm not able to download models >2 GB using the plugin itself, because when it disconnects it has to restart the download. Using an external download manager, I can download the models manually, because it can resume downloads after a connection failure.

I found the SAM models are stored here (the readme says models are stored in the models dir, which is not helpful; I assumed the main models directory, because the 'models' folder inside the plugin folder is only created after the download is started, so please make this clearer in the readme):
sd.webui\webui\extensions\sd-webui-inpaint-anything\models

By default, sam_vit_l_0b3195.pth gets selected; I would like an option to change the default to sam_vit_h_4b8939.pth.

Inpainting Models location???
I cannot find out how to download the inpainting models manually and where to place them. Can someone help me with this?

It looks like these are downloaded to:
C:\Users\admin\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-inpainting\snapshots\6ba40839c3c171123b2b863d16caf023e297abb9\text_encoder

This folder contains a symbolic link to:
C:\Users\admin\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-inpainting\blobs

I really don't like that the inpainting models are stored outside the sd.webui folder; this makes the webui no longer portable.
If possible, please change this.
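On the portability point: the inpainting models go through the standard Hugging Face cache, which defaults to %USERPROFILE%\.cache\huggingface. That location can be redirected, either by setting the HF_HOME environment variable before launch or by passing cache_dir when loading. A sketch using cache_dir (the target folder is hypothetical, not something the extension exposes today):

import torch
from diffusers import StableDiffusionInpaintPipeline

# Redirect downloads into a folder inside the webui tree instead of the
# per-user cache, keeping the install portable.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
    cache_dir=r"sd.webui\webui\models\huggingface",  # hypothetical folder
)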

ImportError: cannot import name 'unload_model_weights'

inpaint_anything.py fails to load because of this:

Error loading script: inpaint_anything.py
Traceback (most recent call last):
File "C:\Users\urani\stable-diffusion-webui\modules\scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\urani\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\urani\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 39, in <module>
from modules.sd_models import unload_model_weights, reload_model_weights, get_closet_checkpoint_match
ImportError: cannot import name 'unload_model_weights' from 'modules.sd_models' (C:\Users\urani\stable-diffusion-webui\modules\sd_models.py)
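unload_model_weights exists only in newer webui releases, so on an older install the import at line 39 fails and the whole script is skipped; updating the webui is one fix. A hedged sketch of how the import could also be guarded (not the extension's actual code):

# Guard against webui versions whose modules.sd_models predates
# unload_model_weights; the feature is simply skipped when absent.
try:
    from modules.sd_models import unload_model_weights
except ImportError:
    unload_model_weights = None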

Every inpainting model 'works' but gives a *.safetensors not found error

I have this extension working now; really cool stuff.
But every single inpainting model shows an error loading the vae, unet, or text_encoder safetensors file. Some are missing and only have the bin file, yes, but even if I find the safetensors and put it into the vae folder for the inpainting model, it still says the file can't be found.
These are the messages that appear when I run the extension on different models; just two lines each, almost immediately after Run Inpainting is clicked:

runwayml/stable-diffusion-inpainting
text_encoder\model.safetensors not found

stabilityai/stable-diffusion-2-inpainting
vae\diffusion_pytorch_model.safetensors not found

Uminosachi/dreamshaper_5-inpainting
vae\diffusion_pytorch_model.safetensors not found

saik0s/realistic_vision_inpainting
vae\diffusion_pytorch_model.safetensors not found

The saik0s one for example has a version with all the safetensors available here:
https://huggingface.co/saik0s/realistic_vision_inpainting/discussions/2

I downloaded all 3 safetensors (vae, unet, text_encoder) and placed them in the respective directories.
I still get the errors.

Any ideas?

How Can I Pass Environment Variables to Avoid Memory Errors?

When I load a picture and press "Run Segment Anything," I get this error

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 3.29 GiB already allocated; 0 bytes free; 3.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I assumed that this meant that I would need to change environment variables. So I tried adding this to webui-user.bat

set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

That didn't work, so after more googling I tried...

set COMMANDLINE_ARGS=--xformers --no-half --autolaunch --lowvram --always-batch-cond-uncond --opt-split-attention
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

...but with both I get the same error. The " Tried to allocate 1024.00 MiB" part makes me think that the variable isn't actually getting set and that I need to do it somewhere else.

I noticed that setting it to 16 caused an error on startup claiming that 20 is the minimum allowed. That tells me the variable is being set properly at startup, but it isn't being passed to SAM for some reason. So I tried setting it as a system-wide environment variable and removing that line from webui-user.bat. I get the same behaviour, where A1111 recognizes the change on startup but SAM ignores it.

I also tried adding...

import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "garbage_collection_threshold:0.6,max_split_size_mb:128"

at the beginning of "main.py" in Inpaint Anything's extension folder and still got the same error. Same results when editing "__init__.py" and "automatic_mask_generator.py" in the segment_anything package in the venv.

I also tried restarting multiple times and deleting the folder and re-installing everything without a change.

So I'm wondering if there is some other way to change the limit for this extension specifically.
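One detail that may explain part of this: PYTORCH_CUDA_ALLOC_CONF is read when torch's CUDA caching allocator initializes, i.e. at the first CUDA allocation, so it must already be in the environment at that point. A sketch of the required ordering, assuming it runs before anything touches CUDA. Note also that max_split_size_mb only reduces fragmentation; it cannot shrink an allocation SAM genuinely needs, which may be why "Tried to allocate 1024.00 MiB" persists even when the variable is applied.

import os

# Must be set before torch performs its first CUDA allocation, or the
# caching allocator will never see it.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "garbage_collection_threshold:0.6,max_split_size_mb:128"

import torch  # deliberately imported after the variable is set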

doesn't really apply or do anything

So when I run Inpaint Anything, it's not that "nothing" happens; something is happening, but it's really not having an effect.
I selected white jeans and put in the prompt "red jeans". It made the mask (I could even send the mask to img2img/inpaint), and that works,
but within Inpaint Anything, with the prompt "red jeans", it just makes a weird blob.

I tried this image: jars on shelves on a red wall.
I selected the wall, the prompt was "white wall", and I get this:

Screenshot 2023-06-20 at 1 59 51 AM

Any help would be appreciated.

About connection error (SOS)

raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host.', None, 10054, None))

All functions run successfully except when I click Run Inpainting.

can't download any inpainting model

when i clicked "run inpainting" button, it started to download some files, which were some shortcuts instead of the real files, and then an "OSError" showed, what can i do? can i just get the links of the files instead of downloading for the control panel?

Need a stop button

It's very easy to run out of VRAM when using too large an image, even with a smaller SAM model.
Since it takes forever to complete when running out of VRAM, it would be nice if there were a stop button, so we could stop the action instead of completely shutting down Stable Diffusion.

add a resize image option

I'm getting the error below, and I'm guessing it's caused by the size of the image; a resize image option should solve this.

Model loaded in 4.5s (create model: 0.3s, apply weights to model: 2.7s, apply half(): 0.6s, move model to device: 0.9s).Unloaded weights 0.0s.

saik0s/realistic_vision_inpainting
Using sampler DPM2 a Karras
resize: (3088, 2316) -> (3088, 2316)
center_crop: (3088, 2316) -> (3088, 2312)
25%|████████████████████▊ | 5/20 [01:07<03:16, 13.07s/it]Loading weights [61dcb253a3] from C:\stable-diffusion-webui\models\Stable-diffusion\urpmv12Inpainting-inpainting.safetensors
Creating model from config: C:\stable-diffusion-webui\configs\v1-inpainting-inference.yaml
LatentInpaintDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.54 M params.
95%|█████████████████████████████████████████████████████████████████████████████▉ | 19/20 [04:03<00:12, 12.27s/it]Applying optimization: xformers... done.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [04:09<00:00, 12.47s/it]
Model loaded in 183.7s (create model: 0.6s, apply weights to model: 5.7s, apply half(): 1.1s, load VAE: 2.3s, move model to device: 161.4s, load textual inversion embeddings: 12.4s).
Traceback (most recent call last):
File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 549, in run_inpaint
output_image = pipe(**pipe_args_dict).images[0]
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_inpaint.py", line 898, in call
image = self.decode_latents(latents)
File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_inpaint.py", line 528, in decode_latents
image = self.vae.decode(latents).sample
File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\autoencoder_kl.py", line 191, in decode
decoded = self._decode(z).sample
File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\autoencoder_kl.py", line 178, in _decode
dec = self.decoder(z)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\vae.py", line 238, in forward
sample = up_block(sample)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1994, in forward
hidden_states = upsampler(hidden_states)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 157, in forward
hidden_states = self.conv(hidden_states)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 415, in lora_Conv2d_forward
return torch.nn.Conv2d_forward_before_lora(self, input)
File "C:\stable-diffusion-webui\extensions\a1111-sd-webui-lycoris\lycoris.py", line 753, in lyco_Conv2d_forward
return torch.nn.Conv2d_forward_before_lyco(self, input)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.41 GiB (GPU 0; 11.99 GiB total capacity; 10.00 GiB already allocated; 0 bytes free; 11.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
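A sketch of the kind of pre-scaling the request describes, using Pillow to cap the longer side before segmentation/inpainting (the 1024 px cap and the filename are arbitrary choices, not extension settings):

from PIL import Image

def resize_to_fit(image, max_side=1024):
    # Downscale so the longer side is at most max_side, keeping aspect ratio.
    scale = max_side / max(image.size)
    if scale >= 1.0:
        return image  # already small enough
    new_size = (round(image.width * scale), round(image.height * scale))
    return image.resize(new_size, Image.LANCZOS)

img = resize_to_fit(Image.open("input.png"))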

Inpainting Model Question

Where are the inpainting models being saved to?

I would like to add a custom inpainting model to the list of model_ids. Would this be possible?
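For reference, diffusers pipelines accept any Hub repo id or local folder in diffusers format, so supporting a custom entry in model_ids would amount to passing such an id through. A hedged sketch (the local path is hypothetical):

import torch
from diffusers import StableDiffusionInpaintPipeline

# Works with a Hub repo id ("user/repo") or a local diffusers-format folder,
# e.g. one produced by pipe.save_pretrained().
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    r"C:\models\my-custom-inpainting",  # hypothetical local path
    torch_dtype=torch.float16,
)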

No "Inpaint Anything" Tab after installation

I've followed the instructions for installation, through both the "Install from URL" method and installing from the available list. I get no errors during installation, but even after applying, restarting the UI, and even restarting the entire A1111 program, there is no "Inpaint Anything" tab that shows up.

I've turned off most other extensions, and am using the updated versions of A1111 and controlnet: https://imgur.com/ljkw3hu

[2 Feature Requests] - Save to default image directory option / Send mask to IMG2IMG "inpaint upload" (for custom model use)

Hello! First off, thanks for your time/effort creating such a useful app & extension. I've used SAM before w/ a different extension (that's great in its own right), but I find what you've done here to be quicker for my personal workflow.

I found out about your extension from a post on the SD reddit forum (in case you weren't aware, it had been the subject of a popular thread there yesterday): https://old.reddit.com/r/StableDiffusion/comments/13szvlj/inpaint_anything_uses_segment_anything_cool_a1111/

As for my feature requests:

  1. Option to save images & masks to the default image folder in settings. With the images currently being saved to a folder in the webui extension path, it's cumbersome to navigate over there just to move those images back w/ the rest of my generations. It'd be great if perhaps you added a settings page in the main app settings where it'd be possible to change the folder for saving images & masks.

  2. A button in the "Inpaint Anything" tab to send the generated mask to the normal A1111 inpainting page (specifically inpaint upload where you can add an image as a mask). This would allow for custom models. Many of us train models w/ Dreambooth or have downloaded custom models which aren't selectable in the "Inpaint Anything" tab.

Sending the mask to Controlnet, or incorporating the Controlnet model for inpainting would be a 3rd request - but I would assume that'd be a difficult undertaking. The other two are hopefully possible, and not too time consuming to implement if you find them acceptable for your project.

Thanks for considering!
