
sd-1click-colab's People

Contributors

nolanaatama


sd-1click-colab's Issues

No 'Create Embedding' tab in training


There should be a 'Create Embedding' tab alongside 'Create Hypernetwork'. I don't know whether it's a settings issue or something missing from the sd-webui build you're using.

Not an issue, but could you add these models?

RuntimeError

RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.8. Please reinstall the torchvision that matches your PyTorch install.
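
A typical workaround is to reinstall a torch/torchvision pair built against the same CUDA version before launching the webui. A sketch for a Colab cell (the exact pins and index URL below are assumptions, not this repo's official fix; any matching +cu117 or +cu118 pair should work):

# reinstall a matching torch/torchvision pair (versions are examples only)
!pip uninstall -y torch torchvision torchaudio
!pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118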

Suggestion: More useful commit messages

I know this isn't my repository and you're of course free to run it how you choose, but at the moment the commit messages are extremely unhelpful. Most new commits are just labelled "Add files via upload", which makes the changes to the repo incredibly opaque. Telling the difference between a new model being added vs. the UI being updated vs. a new plugin being added requires delving into each commit, rather than the commit message telling you at a glance what has been changed. Again, this is your repo and you're free to ignore this suggestion, but it would be helpful to see what each commit is doing.

Merge pull request

Hi, can you please merge my pull request?
I can bring it up to date if you need.

It fixes links and adds a direct Colab link to each file.

MeinaMix

Is there any way MeinaMix could be updated to versions 9 and 10?

Another strange error suddenly appeared for LoRA

Running on local URL: http://127.0.0.1:7860/
Running on public URL: https://632e3dd3-a8bb-402b.gradio.live/

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Traceback (most recent call last):
File "launch.py", line 360, in
start()
File "launch.py", line 355, in start
webui.webui()
File "/content/stable-diffusion-webui/webui.py", line 223, in webui
app.add_middleware(GZipMiddleware, minimum_size=1000)
File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 135, in add_middleware
raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started
Killing tunnel 127.0.0.1:7860 <> https://632e3dd3-a8bb-402b.gradio.live/

Merge Models using our own google drive models

Hi all:

I have been trying to change the mergemodels.ipynb file so I can load my own Google Drive models for mixing/merging with other models. However, I am running into a problem in the 2nd cell: the Google Drive model never loads (the weight-loading code does not execute), while models from Hugging Face load with no problems. I checked that my Google Drive model is shared so that anyone with the link can access it, and the same model loads fine for generating images through loadgdrivemodel.ipynb, so the model itself is not the problem.

I made the changes by copying the first 2 cells' code from loadgdrivemodel.ipynb, pasting them into mergemodels.ipynb, and saving the result as mergemodelsgdrive.ipynb. Please take a look and see whether any of the code below is wrong, or whether something is missing that prevents the Google Drive model from loading:

{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU",
"gpuClass": "standard"
},
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3xdPbMujWYJ8"
},
"outputs": [],
"source": [
"from google.colab import drive\n",
"drive.mount('/content/drive')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sBbcB4vwj_jm"
},
"outputs": [],
"source": [
"!pip install --upgrade fastapi==0.90.0\n",
"!git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui\n",
"\n",
"########## paste our models code below .. \n",
"\n",
"# model 1 (change to our desired model code from google drive)\n",
"!curl -Lo examp.ckpt /content/drive/MyDrive/stable diffusion/sdmodels/examplemodel.ckpt\n",
"!mv "/content/examp.ckpt" "/content/stable-diffusion-webui/models/Stable-diffusion"\n",
"\n",
"# model 2 (change to our desired model code (below is code for standard SD2.1))\n",
"!curl -Lo SD21768.ckpt https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt\n",
"!mv "/content/SD21768.ckpt" "/content/stable-diffusion-webui/models/Stable-diffusion"\n",
"\n",
"# OPTIONAL model 3 (remove '#' from the beginning of both lines below if we want to merge 3 models and use the 'Add difference' option. Change both line to our desired model code (below is code for HassanBlend 1.4))\n",
"# !curl -Lo hb.ckpt https://huggingface.co/hassanblend/hassanblend1.4/resolve/main/HassanBlend1.4-Pruned.ckpt\n",
"# !mv "/content/hb.ckpt" "/content/stable-diffusion-webui/models/Stable-diffusion"\n",
"\n",
"########## paste our models code above ..\n",
"\n",
"%cd /content/stable-diffusion-webui\n",
"!git checkout 11d432d # temporary fix\n",
"!sed -i 's/else "cpu"/else devices.get_optimal_device()/g' modules/shared.py\n",
"!sed -i "s/'cpu'/devices.get_optimal_device()/g" modules/extras.py\n",
"!COMMANDLINE_ARGS="--share --lowvram --disable-safe-unpickle" REQS_FILE="requirements.txt" python launch.py"
]
},

The rest of the code is not pasted here, as I have not changed anything else in the original mergemodels.ipynb. Maybe the problem lies in those later lines, but I would be surprised, since that code belongs to cells 3 and 4, which test the merged models, rename them, and save the new model back to Google Drive.

During the execution of the 1st cell, connecting to Google Drive works fine, just as it does with loadgdrivemodel.ipynb: I can see my Drive model directory and models, and I can copy the path from the file panel on the left of the Colab interface.

During the execution of the 2nd cell, I see the errors very early on:

curl: (3) URL using bad/illegal format or missing URL
mv: cannot stat '/content/examp.ckpt': No such file or directory

This is why the Google Drive model examp.ckpt never gets loaded later on: the path is somehow treated as a bad or missing URL. But I copied the path from the Colab interface on the left side (expanding drive/MyDrive/etc. and right-clicking to copy the path), exactly the same way we do when loading a Google Drive model with loadgdrivemodel.ipynb, so the path itself should not be the problem. Again, I also made sure that the file is not restricted on Google Drive itself.

Therefore, I suspect I dropped some code when copying from loadgdrivemodel.ipynb, so the altered mergemodelsgdrive.ipynb cannot load the model properly.

Can anyone figure out how to correct my code above?
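
One likely cause (a sketch of a fix, not the repo's official code): curl expects a URL, but the model-1 line passes a local filesystem path on the mounted Drive, which is why curl reports "URL using bad/illegal format or missing URL" and the later mv finds nothing to move. The unquoted space in "stable diffusion" would also split the argument. Copying the already-mounted file should work instead:

# Colab cell sketch - the Drive path is the example path from this issue; adjust to yours
!cp "/content/drive/MyDrive/stable diffusion/sdmodels/examplemodel.ckpt" "/content/stable-diffusion-webui/models/Stable-diffusion/examp.ckpt"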

gdrive_LoRA.ipynb is throwing many errors

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1145 100 1145 0 0 8419 0 --:--:-- --:--:-- --:--:-- 8481
Warning: Failed to create the file
Warning: /content/sd-webui/models/Lora/rebuildOfEvangelion_v4b-26.safetensors:
Warning: No such file or directory
0 28.9M 0 15844 0 0 92116 0 0:05:29 --:--:-- 0:05:29 92116
curl: (23) Failed writing body (0 != 15844)
python3: can't open file '/content/launch.py': [Errno 2] No such file or directory
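
These messages suggest that /content/sd-webui does not exist when this cell runs (the setup cell that clones the webui either was not run or failed), so curl cannot create the LoRA file and there is no launch.py to execute. Re-running the first cell is the real fix; as a minimal guard, the download target can be created first (a sketch, assuming the notebook's /content/sd-webui paths):

import os
os.makedirs('/content/sd-webui/models/Lora', exist_ok=True)  # ensure the target folder exists before curl writes to it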

WebUI generate button is unresponsive after some generations

Whenever I try to generate something that takes longer, the Generate button doesn't respond. This issue was gone in the previous version, but it has returned. Is there any way to downgrade the webui?
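
One way to downgrade is to check out an older commit of the webui repository before launching, which is the same trick the notebooks here already use. A sketch for a Colab cell (the path assumes the default clone location, and the hash is only an example taken from this repo's own notebooks; substitute the last commit that worked for you):

%cd /content/stable-diffusion-webui
!git checkout 11d432d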

Also, a suggestion: add civitai browser

ControlNet not working

Google Colab

Error loading script: api.py
Traceback (most recent call last):
File "/content/microsoftexcel/modules/scripts.py", line 263, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/microsoftexcel/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/api.py", line 11, in
from scripts import external_code, global_state
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/external_code.py", line 5, in
from scripts import global_state
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/global_state.py", line 160, in
os.makedirs(cn_models_dir, exist_ok=True)
File "/usr/lib/python3.10/os.py", line 225, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/content/microsoftexcel/models/ControlNet'

Error loading script: batch_hijack.py
Traceback (most recent call last):
File "/content/microsoftexcel/modules/scripts.py", line 263, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/microsoftexcel/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/batch_hijack.py", line 7, in
from scripts import external_code
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/external_code.py", line 5, in
from scripts import global_state
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/global_state.py", line 160, in
os.makedirs(cn_models_dir, exist_ok=True)
File "/usr/lib/python3.10/os.py", line 225, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/content/microsoftexcel/models/ControlNet'

Error loading script: controlnet.py
Traceback (most recent call last):
File "/content/microsoftexcel/modules/scripts.py", line 263, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/microsoftexcel/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/controlnet.py", line 12, in
from scripts import global_state, hook, external_code, processor, batch_hijack, controlnet_version, utils
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/global_state.py", line 160, in
os.makedirs(cn_models_dir, exist_ok=True)
File "/usr/lib/python3.10/os.py", line 225, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/content/microsoftexcel/models/ControlNet'

2023-06-17 23:34:34,649 - ControlNet - INFO - ControlNet v1.1.224
Error loading script: external_code.py
Traceback (most recent call last):
File "/content/microsoftexcel/modules/scripts.py", line 263, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/microsoftexcel/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/external_code.py", line 5, in
from scripts import global_state
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/global_state.py", line 160, in
os.makedirs(cn_models_dir, exist_ok=True)
File "/usr/lib/python3.10/os.py", line 225, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/content/microsoftexcel/models/ControlNet'

Error loading script: global_state.py
Traceback (most recent call last):
File "/content/microsoftexcel/modules/scripts.py", line 263, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/microsoftexcel/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/global_state.py", line 160, in
os.makedirs(cn_models_dir, exist_ok=True)
File "/usr/lib/python3.10/os.py", line 225, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/content/microsoftexcel/models/ControlNet'

Error loading script: xyz_grid_support.py
Traceback (most recent call last):
File "/content/microsoftexcel/modules/scripts.py", line 263, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/microsoftexcel/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/xyz_grid_support.py", line 7, in
from scripts.global_state import update_cn_models, cn_models_names, cn_preprocessor_modules
File "/content/microsoftexcel/extensions/microsoftexcel-controlnet/scripts/global_state.py", line 160, in
os.makedirs(cn_models_dir, exist_ok=True)
File "/usr/lib/python3.10/os.py", line 225, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/content/microsoftexcel/models/ControlNet'
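
os.makedirs raises FileExistsError despite exist_ok=True when the target path exists but is not a directory, for example a plain file or a dangling symlink left over from an earlier cell. A hedged workaround sketch for a Colab cell, run before launching the webui (the path is taken from the traceback above; check what is actually at that location first):

import os
p = '/content/microsoftexcel/models/ControlNet'
if os.path.lexists(p) and not os.path.isdir(p):
    os.remove(p)              # clear the file or broken symlink blocking makedirs
os.makedirs(p, exist_ok=True)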

New issue, not generating public links

Using abyssorangemix3A2, it only generates a local URL, which doesn't open a UI anyway.
I attempted changing remotemoe to cloudflared, with no difference.
The output doesn't look any different from usual...

Pickle data was truncated error

What does that mean? First time seeing this.

File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 795, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1002, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: pickle data was truncated
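
"Pickle data was truncated" usually means the .ckpt file on disk is incomplete, typically because the download was interrupted, so the unpickler runs out of data mid-stream; re-downloading the checkpoint normally fixes it. A quick sanity check (sketch; the path below is an assumption, point it at the model you downloaded - a full SD 1.x checkpoint is usually several GB):

import os
path = '/content/stable-diffusion-webui/models/Stable-diffusion/model.ckpt'
print(os.path.getsize(path) / 1024**3, 'GiB')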

RuntimeError: Cannot add middleware after an application has started

Hi, first of all, thanks for these amazing tools. I've been using them for around a week with great results.

After testing a few, dreamlike is the one I enjoy the most for my goals. I don't know if I'm the only one, but using img2img (I use my own designs to get more precise results) the most I can go is 1024x1024 and 80 sampling steps. Even with that config it usually freezes: the image never appears and I have to close the window and open the Gradio link again. Maybe someone knows how to solve this, or at least how to access the produced files?

The main problem is I can't start it anymore. I tried to use another account and to download the file again and start from scratch, but after gradio link shows, this message appears below:

"This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Traceback (most recent call last):
File "launch.py", line 360, in
start()
File "launch.py", line 355, in start
webui.webui()
File "/content/stable-diffusion-webui/webui.py", line 223, in webui
app.add_middleware(GZipMiddleware, minimum_size=1000)
File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 135, in add_middleware
raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started
Killing tunnel 127.0.0.1:7860 <> https://6fca7fd8-7a19-4606.gradio.live"

If I try to use the link, it says "No interface is running right now". Does anyone know how to solve this? As it is a very simple process, I don't know where the problem is... I followed this tutorial from Nolan Aatama and it worked until this morning: https://www.youtube.com/watch?v=AX9Cl7LR4Sg
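
For what it's worth, the common workaround for this "Cannot add middleware" error at the time was pinning fastapi in a cell before launching, which is exactly what the notebook cell quoted in the merge-models issue above does:

!pip install --upgrade fastapi==0.90.0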

Repository license?

Currently this repository is unlicensed. From GitHub's own licensing docs:

You're under no obligation to choose a license. However, without a license, the default copyright laws apply, meaning that you retain all rights to your source code and no one may reproduce, distribute, or create derivative works from your work. If you're creating an open source project, we strongly encourage you to include an open source license.

which, as far as I understand, forbids any kind of open source contributions from the community (such as pull requests), as copyright laws forbid others from creating derivative works.

I just wanted to know whether this is an intentional decision, to keep the contents of this repo strictly under copyright, or whether you'd be open to including an open source license?

Gradio link does not appear anymore

I suppose Colab has proceeded to end its support for the SD web-ui.
But first, does anyone else actually have the same problem as I do?
The Gradio link to launch the SD web-ui does not appear anymore because I am not on Pro.

Perhaps it is time to downgrade to interactive notebook compute, or to rent or buy a GPU.

Traceback (most recent call last)

I keep getting this error. I didn't get it yesterday, but now, every time, it gives me an error in place of the URLs. I get something like this:
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /content/microsoftexcel/launch.py:38 in │
│ │
│ 35 │
│ 36 │
│ 37 if name == "main": │
│ ❱ 38 │ main() │
│ 39 │
│ │
│ /content/microsoftexcel/launch.py:34 in main │
│ │
│ 31 │ if args.test_server: │
│ 32 │ │ configure_for_tests() │
│ 33 │ │
│ ❱ 34 │ start() │
│ 35 │
│ 36 │
│ 37 if name == "main": │
│ │
│ /content/microsoftexcel/modules/launch_utils.py:330 in start │
│ │
│ 327 │
│ 328 def start(): │
│ 329 │ print(f"Launching {'API server' if '--nowebui' in sys.argv else 'W │
│ ❱ 330 │ import webui │
│ 331 │ if '--nowebui' in sys.argv: │
│ 332 │ │ webui.api_only() │
│ 333 │ else: │
│ │
│ /content/microsoftexcel/webui.py:28 in │
│ │
│ 25 startup_timer = timer.Timer() │
│ 26 │
│ 27 import torch │
│ ❱ 28 import pytorch_lightning # noqa: F401 # pytorch_lightning should be │
│ 29 warnings.filterwarnings(action="ignore", category=DeprecationWarning, │
│ 30 warnings.filterwarnings(action="ignore", category=UserWarning, module= │
│ 31 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/init.py:34 in │
│ │
│ │
│ 31 │ _logger.addHandler(logging.StreamHandler()) │
│ 32 │ _logger.propagate = False │
│ 33 │
│ ❱ 34 from pytorch_lightning.callbacks import Callback # noqa: E402 │
│ 35 from pytorch_lightning.core import LightningDataModule, LightningModule │
│ 36 from pytorch_lightning.trainer import Trainer # noqa: E402 │
│ 37 from pytorch_lightning.utilities.seed import seed_everything # noqa: E │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/init
│ .py:25 in │
│ │
│ 22 from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoin │
│ 23 from pytorch_lightning.callbacks.model_summary import ModelSummary │
│ 24 from pytorch_lightning.callbacks.prediction_writer import BasePredictio │
│ ❱ 25 from pytorch_lightning.callbacks.progress import ProgressBarBase, RichP │
│ 26 from pytorch_lightning.callbacks.pruning import ModelPruning │
│ 27 from pytorch_lightning.callbacks.quantization import QuantizationAwareT │
│ 28 from pytorch_lightning.callbacks.rich_model_summary import RichModelSum │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/progress │
│ /init.py:22 in │
│ │
│ 19 │
│ 20 """ │
│ 21 from pytorch_lightning.callbacks.progress.base import ProgressBarBase │
│ ❱ 22 from pytorch_lightning.callbacks.progress.rich_progress import RichProg │
│ 23 from pytorch_lightning.callbacks.progress.tqdm_progress import TQDMProg │
│ 24 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/progress │
│ /rich_progress.py:20 in │
│ │
│ 17 from datetime import timedelta │
│ 18 from typing import Any, Dict, Optional, Union │
│ 19 │
│ ❱ 20 from torchmetrics.utilities.imports import _compare_version │
│ 21 │
│ 22 import pytorch_lightning as pl │
│ 23 from pytorch_lightning.callbacks.progress.base import ProgressBarBase │
╰──────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name '_compare_version' from
'torchmetrics.utilities.imports'
(/usr/local/lib/python3.10/dist-packages/torchmetrics/utilities/imports.py)
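
_compare_version was removed from newer torchmetrics releases, so the pytorch_lightning build imported by the webui can no longer find it. The usual workaround was pinning an older torchmetrics in a Colab cell before launching (the exact version below is an assumption; any release that still exports _compare_version should do):

!pip install torchmetrics==0.11.4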

Model does not start

HEAD is now at 0cc0ee1b Merge pull request #7945 from w-e-w/fix-image-downscale
Cloning into 'embeddings'...
remote: Enumerating objects: 32, done.
remote: Counting objects: 100% (13/13), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 32 (delta 4), reused 0 (delta 0), pack-reused 19
Unpacking objects: 100% (32/32), 4.50 KiB | 329.00 KiB/s, done.
python3: can't open file '/content/stable-diffusion-webui/launch.py': [Errno 2] No such file or directory

the model does not start

SyntaxError: not a TIFF file (header b"b'Exif\\x" not valid)

Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/img2img.py", line 115, in img2img
image = ImageOps.exif_transpose(image)
File "/usr/local/lib/python3.9/dist-packages/PIL/ImageOps.py", line 590, in exif_transpose
exif = image.getexif()
File "/usr/local/lib/python3.9/dist-packages/PIL/Image.py", line 1455, in getexif
self._exif.load(exif_info)
File "/usr/local/lib/python3.9/dist-packages/PIL/Image.py", line 3719, in load
self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
File "/usr/local/lib/python3.9/dist-packages/PIL/TiffImagePlugin.py", line 507, in init
raise SyntaxError(msg)
SyntaxError: not a TIFF file (header b"b'Exif\x" not valid)
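
PIL is choking on a malformed EXIF block in the uploaded image. A workaround sketch is to strip the EXIF data by re-saving only the pixel data before using the image in img2img (file names below are examples):

from PIL import Image

img = Image.open('input.jpg')
clean = Image.new(img.mode, img.size)   # same mode and size, but no metadata
clean.putdata(list(img.getdata()))
clean.save('input_noexif.png')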

still having errors; does not load model

Tried using abyssorangemix3a3, and in the UI the only option for the model is v1-5-pruned-emaonly.safetensors.

Tried manually installing the correct model and others, but when trying to switch in the UI it returns "HeaderTooLarge" and stays on the emaonly model.

Edit: This is only a problem with the standalone abyssorangemix3a3 notebook; I just realised the newer ones added work fine. Possibly delete the standalone one to remove the faulty clutter.

New error on launching tunnel

I have updated the various torch packages to the latest versions and checked the log to confirm there are no errors prior to launching the web-ui tunnel.
Here is the error after attempting to launch the webui tunnel:
src/tcmalloc.cc:283] Attempt to free invalid pointer

I will try again later and tomorrow.

New error in launch.py

╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /content/microsoftexcel/launch.py:38 in │
│ │
│ 35 │
│ 36 │
│ 37 if name == "main": │
│ ❱ 38 │ main() │
│ 39 │
│ │
│ /content/microsoftexcel/launch.py:34 in main │
│ │
│ 31 │ if args.test_server: │
│ 32 │ │ configure_for_tests() │
│ 33 │ │
│ ❱ 34 │ start() │
│ 35 │
│ 36 │
│ 37 if name == "main": │
│ │
│ /content/microsoftexcel/modules/launch_utils.py:330 in start │
│ │
│ 327 │
│ 328 def start(): │
│ 329 │ print(f"Launching {'API server' if '--nowebui' in sys.argv else 'W │
│ ❱ 330 │ import webui │
│ 331 │ if '--nowebui' in sys.argv: │
│ 332 │ │ webui.api_only() │
│ 333 │ else: │
│ │
│ /content/microsoftexcel/webui.py:28 in │
│ │
│ 25 startup_timer = timer.Timer() │
│ 26 │
│ 27 import torch │
│ ❱ 28 import pytorch_lightning # noqa: F401 # pytorch_lightning should be │
│ 29 warnings.filterwarnings(action="ignore", category=DeprecationWarning, │
│ 30 warnings.filterwarnings(action="ignore", category=UserWarning, module= │
│ 31 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/init.py:34 in │
│ │
│ │
│ 31 │ _logger.addHandler(logging.StreamHandler()) │
│ 32 │ _logger.propagate = False │
│ 33 │
│ ❱ 34 from pytorch_lightning.callbacks import Callback # noqa: E402 │
│ 35 from pytorch_lightning.core import LightningDataModule, LightningModule │
│ 36 from pytorch_lightning.trainer import Trainer # noqa: E402 │
│ 37 from pytorch_lightning.utilities.seed import seed_everything # noqa: E │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/init
│ .py:25 in │
│ │
│ 22 from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoin │
│ 23 from pytorch_lightning.callbacks.model_summary import ModelSummary │
│ 24 from pytorch_lightning.callbacks.prediction_writer import BasePredictio │
│ ❱ 25 from pytorch_lightning.callbacks.progress import ProgressBarBase, RichP │
│ 26 from pytorch_lightning.callbacks.pruning import ModelPruning │
│ 27 from pytorch_lightning.callbacks.quantization import QuantizationAwareT │
│ 28 from pytorch_lightning.callbacks.rich_model_summary import RichModelSum │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/progress │
│ /init.py:22 in │
│ │
│ 19 │
│ 20 """ │
│ 21 from pytorch_lightning.callbacks.progress.base import ProgressBarBase │
│ ❱ 22 from pytorch_lightning.callbacks.progress.rich_progress import RichProg │
│ 23 from pytorch_lightning.callbacks.progress.tqdm_progress import TQDMProg │
│ 24 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/progress │
│ /rich_progress.py:20 in │
│ │
│ 17 from datetime import timedelta │
│ 18 from typing import Any, Dict, Optional, Union │
│ 19 │
│ ❱ 20 from torchmetrics.utilities.imports import _compare_version │
│ 21 │
│ 22 import pytorch_lightning as pl │
│ 23 from pytorch_lightning.callbacks.progress.base import ProgressBarBase │
╰──────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name '_compare_version' from
'torchmetrics.utilities.imports'
(/usr/local/lib/python3.10/dist-packages/torchmetrics/utilities/imports.py)

ControlNet models disappeared

I've downloaded the new version of the "Chilloutmix" model, but in the ControlNet panel the models don't appear, only the preprocessors.

Selecting controlnet models to download

Hey Nolan,

I want to download only some of the controlnet models, not all of them. Is there a way to do that? The reason is that they take up too much space in Google Drive and Colab Temp Storage.
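
One option, until the notebooks support it, is to skip the bulk-download cell and curl only the models you want straight into the ControlNet extension's models folder. A sketch for a Colab cell (the folder name and the example URL are assumptions; adjust them to the notebook you are running and the models you actually need):

!curl -Lo /content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v11p_sd15_canny.pth https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.pth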

Problem Running

Greetings, I'm having problems running Acertainthing.ipynb; this is what appears:

Cloning into 'stable-diffusion-webui'...
remote: Enumerating objects: 13437, done.
remote: Total 13437 (delta 0), reused 0 (delta 0), pack-reused 13437
Receiving objects: 100% (13437/13437), 23.50 MiB | 12.91 MiB/s, done.
Resolving deltas: 100% (9625/9625), done.
Cloning into '/content/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser'...
remote: Enumerating objects: 143, done.
remote: Counting objects: 100% (27/27), done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 143 (delta 10), reused 17 (delta 9), pack-reused 116
Receiving objects: 100% (143/143), 41.07 KiB | 3.16 MiB/s, done.
Resolving deltas: 100% (48/48), done.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 47.4/47.4 MB 16.4 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸ 890.2/890.2 MB 131.3 MB/s eta 0:00:01tcmalloc: large alloc 1112711168 bytes == 0x38f0c000 @ 0x7fbc8bb42615 0x5d6f4c 0x51edd1 0x51ef5b 0x4f750a 0x4997a2 0x55cd91 0x5d8941 0x4997a2 0x55cd91 0x5d8941 0x4997a2 0x55cd91 0x5d8941 0x4997a2 0x55cd91 0x5d8941 0x4997a2 0x55cd91 0x5d8941 0x4997a2 0x5d8868 0x4997a2 0x55cd91 0x5d8941 0x49abe4 0x55cd91 0x5d8941 0x4997a2 0x55cd91 0x5d8941
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 890.2/890.2 MB 1.9 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 21.0/21.0 MB 72.9 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 317.1/317.1 MB 5.1 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 849.3/849.3 KB 44.7 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 557.1/557.1 MB 3.2 MB/s eta 0:00:00
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.14.1+cu116 requires torch==1.13.1, but you have torch 1.13.0 which is incompatible.
torchtext 0.14.1 requires torch==1.13.1, but you have torch 1.13.0 which is incompatible.
torchaudio 0.13.1+cu116 requires torch==1.13.1, but you have torch 1.13.0 which is incompatible.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1139 100 1139 0 0 5781 0 --:--:-- --:--:-- --:--:-- 5811
100 4067M 100 4067M 0 0 65.1M 0 0:01:02 0:01:02 --:--:-- 55.8M
/content/stable-diffusion-webui
Note: checking out '11d432d'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b <new-branch-name>

HEAD is now at 11d432d add refresh buttons to checkpoint merger
Python 3.8.16 (default, Dec 7 2022, 01:12:13)
[GCC 7.5.0]
Commit hash: 11d432d92d63660c516540dcb48faac87669b4f0
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into repositories/stable-diffusion-stability-ai...
Cloning Taming Transformers into repositories/taming-transformers...
Cloning K-diffusion into repositories/k-diffusion...
Cloning CodeFormer into repositories/CodeFormer...
Cloning BLIP into repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments: --share --gradio-debug --medvram --disable-safe-unpickle --xformers
Traceback (most recent call last):
File "launch.py", line 295, in
start()
File "launch.py", line 286, in start
import webui
File "/content/stable-diffusion-webui/webui.py", line 12, in
from modules.call_queue import wrap_queued_call, queue_lock, wrap_gradio_gpu_call
File "/content/stable-diffusion-webui/modules/call_queue.py", line 7, in
from modules import shared
File "/content/stable-diffusion-webui/modules/shared.py", line 13, in
import modules.interrogate
File "/content/stable-diffusion-webui/modules/interrogate.py", line 9, in
from torchvision import transforms
File "/usr/local/lib/python3.8/dist-packages/torchvision/init.py", line 5, in
from torchvision import datasets, io, models, ops, transforms, utils
File "/usr/local/lib/python3.8/dist-packages/torchvision/datasets/init.py", line 1, in
from ._optical_flow import FlyingChairs, FlyingThings3D, HD1K, KittiFlow, Sintel
File "/usr/local/lib/python3.8/dist-packages/torchvision/datasets/_optical_flow.py", line 11, in
from ..io.image import _read_png_16
File "/usr/local/lib/python3.8/dist-packages/torchvision/io/init.py", line 8, in
from ._load_gpu_decoder import _HAS_GPU_VIDEO_DECODER
File "/usr/local/lib/python3.8/dist-packages/torchvision/io/_load_gpu_decoder.py", line 1, in
from ..extension import _load_library
File "/usr/local/lib/python3.8/dist-packages/torchvision/extension.py", line 107, in
_check_cuda_version()
File "/usr/local/lib/python3.8/dist-packages/torchvision/extension.py", line 80, in _check_cuda_version
raise RuntimeError(
RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.6. Please reinstall the torchvision that matches your PyTorch install.

Something is not working again, something with the header; yesterday everything was fine

Installing pycloudflared

Launching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension- --gradio-queue --remotemoe
2023-03-10 17:48:08.942515: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-10 17:48:11.816804: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib/python3.9/dist-packages/cv2/../../lib64:/usr/lib64-nvidia
2023-03-10 17:48:11.817220: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib/python3.9/dist-packages/cv2/../../lib64:/usr/lib64-nvidia
2023-03-10 17:48:11.817252: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
remote.moe detected, trying to connect...
Warning: Permanently added 'remote.moe,159.69.126.209' (ECDSA) to the list of known hosts.
Calculating sha256 for /content/stable-diffusion-webui/models/Stable-diffusion/abyssorangemix3A1.safetensors: f36668ddf22403a332f978057d527cf285b01468bc3431b04094a7bafa6aba59
Loading weights [f36668ddf2] from /content/stable-diffusion-webui/models/Stable-diffusion/abyssorangemix3A1.safetensors
loading stable diffusion model: SafetensorError
Traceback (most recent call last):
File "/content/stable-diffusion-webui/webui.py", line 111, in initialize
modules.sd_models.load_model()
File "/content/stable-diffusion-webui/modules/sd_models.py", line 383, in load_model
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 238, in get_checkpoint_state_dict
res = read_state_dict(checkpoint_info.filename)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 217, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "/usr/local/lib/python3.9/dist-packages/safetensors/torch.py", line 99, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

Stable diffusion model failed to load, exiting
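
"HeaderTooLarge" from safetensors usually means the file on disk is not a real .safetensors model at all - most often an HTML error page or a truncated download saved under the model's name. A quick check (sketch; the path is the one from the log above) is to look at the first bytes: a valid file starts with a small little-endian header length followed by JSON, whereas an error page starts with '<' or a JSON error message:

path = '/content/stable-diffusion-webui/models/Stable-diffusion/abyssorangemix3A1.safetensors'
with open(path, 'rb') as f:
    print(f.read(64))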

How does this command work?

Hi, I have a question: how does this command download the embeddings?
import shutil
shutil.rmtree('/content/stable-diffusion-webui/embeddings')
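
That line does not download anything by itself: shutil.rmtree only deletes the default embeddings folder so that the next line of the notebook can put a fresh set of embeddings in its place, typically via a git clone (the "Cloning into 'embeddings'..." lines in other logs on this page are that step). A sketch of the full pattern (the clone URL is a placeholder, not the repo's actual source; check the notebook cell for the real one):

import shutil
shutil.rmtree('/content/stable-diffusion-webui/embeddings')  # remove the default folder
!git clone https://github.com/<user>/<embeddings-repo> /content/stable-diffusion-webui/embeddings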

1 click colab vinteprotogenmix.ipynb error

I am running into these errors during the launch of this model: vinteprotogenmix.ipynb

"Exception: Error while deserializing header: HeaderTooLarge
Stable diffusion model failed to load, exiting".

The process stops at those lines above and no links are generated. Is anyone else running into this problem?

I am using other models and they are fine; I have not tested every model, but the five or so I use all work.

Load from gdrive

Can I load a model/LoRA directly from my Google Drive if I want to change the model mid-process or load my "homebrewed" LoRA?
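
Following the same pattern as loadgdrivemodel.ipynb, you can mount Drive and copy files into the webui's model folders. A sketch for a Colab cell (the Drive paths are examples; the Lora folder name assumes the stock webui layout):

from google.colab import drive
drive.mount('/content/drive')

!cp "/content/drive/MyDrive/models/mymodel.safetensors" "/content/stable-diffusion-webui/models/Stable-diffusion/"
!cp "/content/drive/MyDrive/loras/mylora.safetensors" "/content/stable-diffusion-webui/models/Lora/"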

High resolution Output issue

When I change the width and height to the max, the GUI stops showing the preview and the entire screen looks as if execution has stopped: instead of the Interrupt button, the Generate button comes back. I have noticed this only when I set the height and width to the max. Also, a question: is the image saved to the log only if I click the Save button, or also when the diffusion finishes execution?


Pycairo Wheel issue.... I haven't ever seen this before

It doesn't matter what notebook I'm running, I keep getting this error. I've tested random notebooks that were updated in the repository in the last few days and still get the same error. For example, just try the notebook for Deliberate or anything else and you'll get this error when running the first cell or the additional LoRA cells. Everything still seems to function okay, so I'm not sure whether this actually has an impact or not.
My initial searching around and troubleshooting did not solve the issue. Please advise.

Here is a snippet of the cell output starting before the error and extending a bit beyond:


Building wheel for svglib (setup.py): finished with status 'done'
Created wheel for svglib: filename=svglib-1.5.1-py3-none-any.whl size=30919 sha256=c635b05d7513e6f9698677489e49c8179e8985f6044307871a4a4ad8d9105630
Stored in directory: /root/.cache/pip/wheels/56/9f/90/f37f4b9dbf82987a24ae14f15586e96715cb669a4710b3b85d
Building wheel for pycairo (pyproject.toml): started
Building wheel for pycairo (pyproject.toml): finished with status 'error'
Successfully built svglib
Failed to build pycairo

stderr: error: subprocess-exited-with-error

× Building wheel for pycairo (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycairo
ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects

Warning: Failed to install svglib, some preprocessors may not work.
Installing sd-webui-controlnet requirement: fvcore

Installing diffusers


If you could offer some assistance or help explain what's up, I would appreciate it!

Thanks Nolan!

  • Daryl
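
The pycairo wheel usually fails to build on Colab because the cairo C headers are missing. A hedged sketch of the usual fix, run in a cell before the notebook installs its requirements (package names assume the Debian/Ubuntu base image Colab uses):

!apt-get install -y libcairo2-dev pkg-config python3-dev
!pip install pycairo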

Colab keeps "Completing" without my input

I don't see any errors, but when I attempt to open the public URL, the cell soon "completes" on its own, though it does not disconnect from the GPU. I've had this issue for the past few days.

issue : PyTorch & torchvision

I got this bug just this morning:

RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions.

Insights from mobile user

  1. I hate the cropped image view in the full-page image viewer in this update; I prefer the sd-webui one.
  2. I always have problems with accidentally changing a slider's value while scrolling; maybe add a lock button, or just wrap all input parameters.
  3. I don't know if it is still present in this update, but the PNG Info tab was generating irrelevant values; I don't know whether that is because the parameters are too long with newlines, or because of JPEG, or something else.

That's it, I think. Thank you, by the way, for making it responsive for mobile users.

so this is what is wrong

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 0cc0ee1b Merge pull request #7945 from w-e-w/fix-image-downscale
Cloning into 'embeddings'...
remote: Enumerating objects: 44, done.
remote: Counting objects: 100% (25/25), done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 44 (delta 8), reused 0 (delta 0), pack-reused 19
Unpacking objects: 100% (44/44), 5.96 KiB | 1017.00 KiB/s, done.
Filtering content: 100% (19/19), 929.03 KiB | 769.00 KiB/s, done.
Python 3.9.16 (main, Dec 7 2022, 01:11:51)
[GCC 9.4.0]
Commit hash: 0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
Installing gfpgan
Installing clip
Installing open_clip
Installing xformers
Cloning Stable Diffusion into repositories/stable-diffusion-stability-ai...
Cloning Taming Transformers into repositories/taming-transformers...
Cloning K-diffusion into repositories/k-diffusion...
Cloning CodeFormer into repositories/CodeFormer...
Cloning BLIP into repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements for Web UI
Installing pycloudflared

Installing sd-webui-controlnet requirement: svglib

Launching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension- --gradio-queue --remotemoe
Traceback (most recent call last):
File "/content/stable-diffusion-webui/launch.py", line 361, in
start()
File "/content/stable-diffusion-webui/launch.py", line 352, in start
import webui
File "/content/stable-diffusion-webui/webui.py", line 15, in
from modules import import_hook, errors, extra_networks, ui_extra_networks_checkpoints
File "/content/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py", line 6, in
from modules import shared, ui_extra_networks, sd_models
File "/content/stable-diffusion-webui/modules/shared.py", line 12, in
import modules.interrogate
File "/content/stable-diffusion-webui/modules/interrogate.py", line 11, in
from torchvision import transforms
File "/usr/local/lib/python3.9/dist-packages/torchvision/init.py", line 6, in
from torchvision import datasets, io, models, ops, transforms, utils
File "/usr/local/lib/python3.9/dist-packages/torchvision/datasets/init.py", line 1, in
from ._optical_flow import FlyingChairs, FlyingThings3D, HD1K, KittiFlow, Sintel
File "/usr/local/lib/python3.9/dist-packages/torchvision/datasets/_optical_flow.py", line 12, in
from ..io.image import _read_png_16
File "/usr/local/lib/python3.9/dist-packages/torchvision/io/init.py", line 8, in
from ._load_gpu_decoder import _HAS_GPU_VIDEO_DECODER
File "/usr/local/lib/python3.9/dist-packages/torchvision/io/_load_gpu_decoder.py", line 1, in
from ..extension import _load_library
File "/usr/local/lib/python3.9/dist-packages/torchvision/extension.py", line 107, in
_check_cuda_version()
File "/usr/local/lib/python3.9/dist-packages/torchvision/extension.py", line 80, in _check_cuda_version
raise RuntimeError(
RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.8. Please reinstall the torchvision that matches your PyTorch install.

Possible to add an image with each model?

Not sure how difficult that would be, but it would be very helpful in figuring out which model people want to run if there were a single sample image for each one, using the same prompt and seed. Sorry if that's a lot to ask or not possible. Cheers

Issue: Gradio related error

Running launch.py with the command-line arguments results in errors from Gradio, which cause the error popup to show on almost every setting.

Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/gradio/routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1018, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 947, in postprocess_data
prediction_value = postprocess_update_dict(
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 371, in postprocess_update_dict
update_dict = block.get_specific_update(update_dict)
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 257, in get_specific_update
specific_update = cls.update(**generic_update)
TypeError: update() got an unexpected keyword argument 'multiline'

A person on the r/StableDiffusion subreddit is also having a similar issue: https://www.reddit.com/r/StableDiffusion/comments/12bk7sd/do_anyone_know_to_solve_this_issue_while_using_sd/
