
wasasquatch / easydiffusion

104 stars · 9 watchers · 17 forks · 2.59 MB

Easy Diffusion is an advanced Stable Diffusion notebook with a feature-rich image processing suite.

License: MIT License

Jupyter Notebook 85.84% Python 14.16%
google-colab-notebook stable-diffusion latent-diffusion text-to-image

easydiffusion's Introduction

Stability.AI Easy Diffusion

Stability.AI Easy Diffusion is a Google Colab notebook designed to be a relatively easy-to-use, all-in-one suite for Stable Diffusion. New features are added frequently.

Features

  • Text to Image Stable Diffusion
  • Image to Image Stable Diffusion
  • Inpainting Stable Diffusion
  • Enable or Disable NSFW Filtering (Gaussian Blurred Images)
  • Cached Pipes - Cache pipes to disk for faster loading between pipe types.
  • Optional Attention Slicing
  • Define an HF model not listed by selecting the MODEL_ID text and typing in your own model ID, or choose from the drop-down list of predefined models.
  • Stable Diffusion Concept support
  • Use local or remote init images or mask images.
  • Use a text file of init images, masks, or prompts to do batches.
    • PROMPT_FILE - Supports line repetition (useful with NSP or random prompts). Use ^#| at the beginning of a prompt line to define how many times the line should repeat. For example, ^5|A cat on a park bench would repeat A cat on a park bench for 5 batches and X iterations.
  • Noodle Soup Prompts support.
  • Random word support in [word1|word2|word3] syntax, or multiple words in [^#|word1|word2|word3] format, where # is a number determining how many random words from the list to use.
  • Negative prompt support built into normal prompts; denote the negative prompt at the end of your prompt with --. For example: Positive prompt here--Negative prompt here (a parsing sketch for this prompt syntax follows this list).
  • Recursive Evolution - Feed your diffusion result into img2img for evolution. Can be used to create animations.
  • Image Upscaling (Easy Diffusion can be used as an image processor by enabling SKIP_DIFFUSION_RUN)
    • GOBIG - Slice an upscaled image into small tiles, diffuse each tile with img2img, then composite the final upscaled result. Useful for adding diffusion detail at resolutions where VRAM otherwise wouldn't allow.
    • IMG2IMG - Basic image-to-image upscaling; very VRAM-intensive
    • GFPGAN - GFPGAN Face Enhancement
    • CodeFormer - CodeFormer Face Enhancement + Real-ESRGAN
    • Real-ESRGAN - Real-ESRGAN Super Resolution
  • Image Processing
    • Sharpening
    • Chromatic Aberration
    • Median Filter
    • Depth Output
    • Fake Depth of Field
    • Tileable Texture Output
  • CLIP Interrogate
    • Interrogate diffusion results with various CLIP models
    • Interrogate batch images without diffusion (with SKIP_DIFFUSION_RUN)
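
As a rough illustration of the prompt syntax above (line repetition, random words, and inline negative prompts), here is a minimal parsing sketch. It is not the notebook's actual implementation; the function name and structure are hypothetical.

```python
import random
import re

def parse_prompt_line(line):
    """Hypothetical illustration of the documented prompt syntax, not Easy Diffusion's real parser."""
    # ^5|A cat on a park bench  ->  repeat "A cat on a park bench" for 5 batches
    repeats = 1
    m = re.match(r'\^(\d+)\|(.*)', line)
    if m:
        repeats, line = int(m.group(1)), m.group(2)

    # [word1|word2|word3] picks one word at random; [^2|word1|word2|word3] picks two
    def expand(match):
        parts = match.group(1).split('|')
        count = 1
        if parts[0].startswith('^'):
            count, parts = int(parts[0][1:]), parts[1:]
        return ' '.join(random.sample(parts, k=min(count, len(parts))))
    line = re.sub(r'\[([^\]]+)\]', expand, line)

    # "Positive prompt here--Negative prompt here" splits on the -- marker
    positive, _, negative = line.partition('--')
    return repeats, positive.strip(), negative.strip()

print(parse_prompt_line('^5|A cat on a [park|garden] bench--blurry, low quality'))
# e.g. (5, 'A cat on a garden bench', 'blurry, low quality')
```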

Easy Diffusion is maintained by WASasquatch (WAS#0263)

Stability.AI Model Terms of Use

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.

The CreativeML OpenRAIL License specifies:

  1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content

  2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license

  3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)

Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license

easydiffusion's People

Contributors

wasasquatch


easydiffusion's Issues

NSFW filter returns black screen even when disabled.

Current development branch: it fails to function with either an NSFW init image or text generations with potential for NSFW output.
It outputs a solid black image regardless of whether or not the filter is selected.

TypeError: img2img() missing 1 required positional argument: 'image'

in
1462 raise SystemExit('\33[33mExecution interrupted by user.\33[0m')
1463 except Exception as e:
-> 1464 raise e
1465 finally:
1466 clean_env()

2 frames

in diffuse_run()
558 image = pipeout.images[0]
559 else:
--> 560 pipeout = pipe.img2img(prompt=PROMPT, negative_prompt=NEG_PROMPT, num_inference_steps=STEPS, init_image=init, strength=INIT_SCALE, guidance_scale=SCALE, generator=gen_seed)
561 image = pipeout.images[0]
562 else:

TypeError: img2img() missing 1 required positional argument: 'image'
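
This error typically appears after a diffusers upgrade: newer releases of StableDiffusionImg2ImgPipeline renamed the init_image argument to image, so the old keyword no longer satisfies the required parameter. A likely fix, assuming pipe.img2img wraps that pipeline, is to pass the init image under its new name:

```python
# init_image was renamed to image in newer diffusers releases
pipeout = pipe.img2img(prompt=PROMPT, negative_prompt=NEG_PROMPT,
                       num_inference_steps=STEPS, image=init,
                       strength=INIT_SCALE, guidance_scale=SCALE,
                       generator=gen_seed)
image = pipeout.images[0]
```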

cannot import name 'PROTOCOL_TLS' from 'urllib3.util.ssl_'

20 from urllib3.exceptions import SSLError as URLLib3SSLError
21 from urllib3.util.retry import Retry
---> 22 from urllib3.util.ssl_ import (
23 DEFAULT_CIPHERS,
24 OP_NO_COMPRESSION,

ImportError: cannot import name 'PROTOCOL_TLS' from 'urllib3.util.ssl_' (/usr/local/lib/python3.8/dist-packages/urllib3/util/ssl_.py)


NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

error running the code

Main branch, standard settings. When running, the following error appears.
I tried to resolve it by clearing the cache, re-downloading the models, etc., but it didn't work.
(The dev branch is working.)

(screenshot of the error attached to the original issue)

Does not Recognize External Thunderbolt Radeon GPU, Neural Compute Stick, or internal Intel OneAPI GPU

Not sure where to start, and this may land nowhere, but in the process of discovery, here I am. This post gets rather long, so GPT-4 summarizes it as: Summary: The issue aims to enhance Easy Diffusion's functionality by enabling it to recognize and utilize all available devices and cores in a system, including CPUs, NVIDIA and AMD GPUs, Compute Sticks, and Intel OneAPI GPUs. The current behavior only detects CPU cores or NVIDIA GPUs. The author seeks a solution that can manage multiple backends and virtual environments, such as Anaconda or Docker, to improve performance and avoid "out of memory" situations. Although their Python skills are limited, the author is willing to contribute by testing potential solutions.

Hello! I'm here to explore the possibility of enhancing Easy Diffusion's functionality to better utilize all available devices and cores in a system.

Expected behavior: Easy Diffusion should recognize all devices and use them collectively to avoid "out of memory" situations, and to fully utilize all available cores during a session, rather than just one or the other.

Current behavior: Easy Diffusion only recognizes CPU cores or the NVIDIA GPU. AMD GPU, Compute Stick, and Intel OneAPI GPU are not detected.
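
For context, a stock PyTorch build only exposes the accelerator backend it was compiled for, which matches the behavior described. A quick check in the notebook (assuming the standard Colab PyTorch install) shows why the other devices stay invisible:

```python
import torch

# A CUDA build reports NVIDIA GPUs; a ROCm build reports AMD GPUs through the same API.
# An OpenVINO device (Neural Compute Stick) or oneAPI/XPU GPU never appears here,
# so the notebook falls back to the CPU or the single NVIDIA card.
print(torch.cuda.is_available(), torch.cuda.device_count())
```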

Hardware Setup:

Core i7-9750H 6 Core / 12 Thread CPU (Comet Lake 14nm AVX2 CPU)
32GB DDR4-2666MHz
Integrated Intel UHD 630 CFL GT2 GPU with 24EU (sharing 32GB system RAM w/Core i7) (GFX9/GFX9.5)
Intel Neural Compute Stick 2 with 12 "shave cores" and 4GB VPURAM (Movidius Myriad cores, soon to be integrated into 14th Gen CPUs)
NVIDIA RTX 2080 Max-Q w/8GB GDDR5 VRAM (internal)
AMD Vega Frontier Edition 16GB GDDR5 VRAM via Razer Thunderbolt enclosure

Operating Environment:

I am using EndeavourOS rolling release (downstream of Arch Linux; Arch itself is being installed as we speak for testing). The system has all necessary toolkits and SDKs installed for each GPU (see the attached packages-list-foreign.txt and packages-list-native.txt), including:

  • CUDA
  • TensorRT
  • OptiX
  • HIP-Runtime-AMD and all associated HIP packages (for ROCm)
  • All Intel Level Zero and OneAPI packages, compilers, runtimes, and headers
  • OpenVINO and drivers for the neural compute stick
  • OpenCV
  • OpenVDK
  • OSPRay
  • OpenVKL
  • OpenIMPI
  • OpenMPI with HIP backend

Upcoming or Existing Practical Examples of Diffusion for On-Device Distributed Workloads / Personal HSA Systems:

  • OpenVINO + OneAPI: Upcoming 14th Generation Intel CPUs with integrated Movidius and Xe (Arc-based) graphics cores, along with additional add-on Intel Arc GPUs
  • ROCm/HIP/OpenSYCL + CUDA: Ryzen 7000-series CPU paired with NVIDIA GPU and/or Radeon GPUs
  • OneAPI+OpenVINO+HIP/ROCm: 14th Generation Intel CPU (or 13th generation + compute stick) paired with AMD graphics solutions

Summary:

I previously posted this issue on the Arch4Edu group, which focuses on creating custom packages. The response suggested using Anaconda or Docker for each virtual environment (CPU, neural compute, CUDA, ROCm, OneAPI, etc.) and an interface to link them together. Unfortunately, my Python skills are limited, and I cannot develop a unified, multi-platform, multi-SDK, Easy Diffusion backend. However, I can contribute by testing!

Link to Arch4Edu Github

Also of Note:

If this hypothetical "neural fabric" is ever created to make Easy Diffusion work across multiple platforms on one PC via virtual environments connected by such an interlink (which does not yet exist), it could also apply to multi-device, on-network, or WAN-distributed Easy Diffusion sessions.

Please install the `accelerate` library to use Diffusers with PyTorch

This began today.

ImportError                               Traceback (most recent call last)
<ipython-input-2-c3af660a79a1> in <module>
   1797     raise e
   1798 except BaseException as e:
-> 1799     raise e
   1800 finally:
   1801     if CLEAR_SETUP_LOG: clear()

1 frames
/usr/local/lib/python3.7/dist-packages/diffusers/__init__.py in <module>
     22 if is_torch_available() and not is_accelerate_available():
     23     error_msg = "Please install the `accelerate` library to use Diffusers with PyTorch. You can do so by running `pip install diffusers[torch]`. Or if torch is already installed, you can run `pip install accelerate`."  # noqa: E501
---> 24     raise ImportError(error_msg)
     25 
     26 

ImportError: Please install the `accelerate` library to use Diffusers with PyTorch. You can do so by running `pip install diffusers[torch]`. Or if torch is already installed, you can run `pip install accelerate`.

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
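
As the error message itself suggests, installing accelerate into the runtime before the setup cell imports diffusers should resolve this:

```python
!pip install accelerate
# or, equivalently, reinstall diffusers with its torch extras:
!pip install "diffusers[torch]"
```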

Numpy

(screenshot of the NumPy error attached to the original issue)

Been having issues all day. The previously reported JSON issue was resolved, and now it's doing this.

Ongoing Waifu Diffusion Difficulties

When attempting to use this today to continue a project from yesterday, I got a new error:

---------------------------------------------------------------------------

JSONDecodeError                           Traceback (most recent call last)

/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    276             # Load config dict
--> 277             config_dict = cls._dict_from_json_file(config_file)
    278         except (json.JSONDecodeError, UnicodeDecodeError):

11 frames

JSONDecodeError: Expecting property name enclosed in double quotes: line 3 column 1 (char 43)


During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)

/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    277             config_dict = cls._dict_from_json_file(config_file)
    278         except (json.JSONDecodeError, UnicodeDecodeError):
--> 279             raise EnvironmentError(f"It looks like the config file at '{config_file}' is not a valid JSON file.")
    280 
    281         return config_dict

OSError: It looks like the config file at '/content/Stable_Diffusion/model_cache/models--hakurei--waifu-diffusion/snapshots/a96d3a1ce046c3d2b3e4f0059dcd735aeb9a2672/unet/config.json' is not a valid JSON file.

I had created a new session, initialized the environment, and simply attempted to run a diffusion using the exact same settings I tried during my last run yesterday (including a random seed, hence me wanting to do so). I've tried telling the system to download the models to GDrive, re-caching the pipes, not caching the pipes, doing a soft reset of the environment, doing a hard reset of the environment (by deleting the session and trying everything again from scratch), and getting a new copy of the sheet from Github and copying my settings... and none of it has worked.

Changing to generic Stable Diffusion (v1.4) seems to work, but runs into the obvious problem of running my generations on the wrong model.

Neither my GDrive copy of the worksheet nor the Git repository has changed in this time. Maybe it's a change in one of the external repositories? If so, it should (theoretically) be a simple fix, but one I lack the knowledge or background to make myself.
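
The traceback points at a cached config.json that is no longer valid JSON, which suggests a corrupted or truncated download in the model cache rather than a change to the worksheet. A plausible fix, using the cache path from the OSError above, is to delete the cached snapshot so diffusers re-downloads it on the next run:

```python
# remove the corrupted waifu-diffusion cache entry, then re-run the setup cell
!rm -rf /content/Stable_Diffusion/model_cache/models--hakurei--waifu-diffusion
```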

"Out of Memory" error with CUDA.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 7.79 GiB total capacity; 4.95 GiB already allocated; 1.60 GiB free; 5.15 GiB reserved in total by PyTorch) If reserved memory is >>                       
allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 

RTX 2070 8GB MAX-Q


Which documentation am I supposed to read to set up appropriate memory management? I can't seem to find it. I'm unsure which settings to change and where.
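
The documentation the message refers to is PyTorch's CUDA memory-management notes (https://pytorch.org/docs/stable/notes/cuda.html#memory-management). The max_split_size_mb setting is passed through the PYTORCH_CUDA_ALLOC_CONF environment variable, which has to be in place before the first CUDA allocation, for example:

```python
import os

# must run before the pipeline is loaded (i.e., before CUDA is first used)
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```

On an 8 GB card, enabling the notebook's Optional Attention Slicing and lowering the output resolution are also likely to help.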

longer prompts?

How can I use longer prompts?

Sometimes I get the message "The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens".
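
The 77-token limit comes from CLIP, Stable Diffusion's text encoder, not from Easy Diffusion itself; anything past the limit is silently dropped. One way to see how many tokens a prompt actually uses is to run it through the same tokenizer (a sketch, assuming the SD v1.x text encoder):

```python
from transformers import CLIPTokenizer

# the tokenizer used by Stable Diffusion v1.x text encoders
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
tokens = tokenizer("A cat on a park bench, highly detailed, sharp focus").input_ids
print(len(tokens), "of 77 tokens used (including start/end tokens)")
```

Trimming filler words, or splitting the idea across multiple batched prompts, keeps everything inside the window.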

I can't use files on my storage disk/newbie has problems.

First of all, thanks WASasquatch for making this absolutely fantastic tool! I have an AMD card and was worried I wouldn't be able to use it; thanks to you, I can!
I just found a few issues, probably due to me being a newbie:

Problem 1
I tried to give the program the path to the .txt file on my storage disk, yet it doesn't read it. Am I doing something wrong? I tried copy-pasting all the contents, and that doesn't work either.
This also happens when I try to save the images to my local hard disk. What am I doing wrong?

Problem 2
I want to use this model: https://github.com/harubaru/waifu-diffusion . The model is currently stored on my local disk. Is there a way for me to use it? What is its model ID, and how can I find it?

Thanks again for making this great tool. You really saved AMD users! I will continue studying this tool in the coming days (and try to understand what all those options actually mean).

Dxdiag just in case:
Dxdiag.txt

Questions:
Do I use Img2Img?

If you need more images or explanation, feel free to ask me.

New `KeyError: 'sample'` error

Not sure what is going on. I get this issue with the main branch, the dev branch, and the last branch I copied on 9-30. They all give me this error.
Everything loads fine; it even starts the first iteration and finishes the progress of steps, but throws this at the end, where it would usually display the image and move to the next in the batch.
Line 577 below seems to be the culprit, but I cannot work out what it should be (or what change elsewhere is needed to fix it).

100%
51/51 [00:09<00:00, 9.79it/s]
Deleting pipeline...
✅ Diffusion completed in 14s
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-4-95b729196338> in <module>
   1489                     raise SystemExit('\33[33mExecution interrupted by user.\33[0m')
   1490                 except Exception as e:
-> 1491                     raise e
   1492                 finally:
   1493                     clean_env()

3 frames
<ipython-input-4-95b729196338> in <module>
   1402                     mask_path = None
   1403                     if IMAGE_UPSCALER is not 'GOBIG':
-> 1404                         result, path = diffuse_run()
   1405                     else:
   1406                         if init is not None:

<ipython-input-4-95b729196338> in diffuse_run()
    577                     image = pipeout["sample"][0]
    578         except BaseException as e:
--> 579             raise e
    580         finally:
    581             if pipeout and pipeout['nsfw_content_detected'][0] and ENABLE_NSFW_FILTER:

<ipython-input-4-95b729196338> in diffuse_run()
    575                 else:
    576                     pipeout = pipe(prompt=PROMPT, negative_prompt=NEG_PROMPT, num_inference_steps=STEPS, width=int(WIDTH), height=int(HEIGHT), guidance_scale=SCALE, generator=gen_seed)
--> 577                     image = pipeout["sample"][0]
    578         except BaseException as e:
    579             raise e

/usr/local/lib/python3.7/dist-packages/diffusers/utils/outputs.py in __getitem__(self, k)
     86         if isinstance(k, str):
     87             inner_dict = {k: v for (k, v) in self.items()}
---> 88             return inner_dict[k]
     89         else:
     90             return self.to_tuple()[k]

KeyError: 'sample'
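
This looks like another diffusers API change: newer releases return a StableDiffusionPipelineOutput whose generated images live under .images rather than a 'sample' key. A likely fix for line 577, mirroring the pipeout.images[0] usage that appears elsewhere in the notebook, is:

```python
pipeout = pipe(prompt=PROMPT, negative_prompt=NEG_PROMPT, num_inference_steps=STEPS,
               width=int(WIDTH), height=int(HEIGHT), guidance_scale=SCALE,
               generator=gen_seed)
image = pipeout.images[0]  # the 'sample' key was removed from newer pipeline outputs
```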
