
misobarisic avatar misobarisic commented on August 23, 2024

The issue seems to be a temporary regression within gradio or the webui itself. Could you try changing the last line of run_webui to this: !COMMANDLINE_ARGS="{other_args} {vae_args} {vram} --gradio-queue --gradio-auth {gradio_username}:{gradio_password}" REQS_FILE="requirements.txt" python launch.py

from diffusion-ui.

tranthai2k2 avatar tranthai2k2 commented on August 23, 2024



misobarisic avatar misobarisic commented on August 23, 2024

This works for me

def run_webui():
  #@markdown Choose the VAE you want
  vae = "Anime (Anything 4)" #@param ["Anime (Anything 3)", "Anime (Anything 4)", "Anime (Waifu Diffusion 1.4)", "Stable Diffusion", "None"]

  vae_args = ""  # default, so the launch line still works when vae == "None"
  if vae == "Anime (Anything 3)":
    !wget -c https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0.vae.pt
    vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0.vae.pt"
  elif vae == "Anime (Anything 4)":
    !wget -c https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.0.vae.pt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0.vae.pt
    vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0.vae.pt"
  elif vae == "Anime (Waifu Diffusion 1.4)":
    !wget -c https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime.ckpt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/kl-f8-anime.vae.pt
    vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/kl-f8-anime.vae.pt"
  elif vae == "Stable Diffusion":
    !wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5.vae.pt
    vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5.vae.pt"

  %cd {root_dir}/stable-diffusion-webui/
  !COMMANDLINE_ARGS="{other_args} {vae_args} {vram} --gradio-queue --gradio-auth {gradio_username}:{gradio_password}" REQS_FILE="requirements.txt" python launch.py


tranthai2k2 avatar tranthai2k2 commented on August 23, 2024

I think the problem is with !git clone https://github.com/acheong08/stable-diffusion-webui
acheong08's fork is full-featured (it has both Lora and autotag) but unstable, while camenduru's is stable but can't add autotag or Lora.
Sorry, the code you edited and added is great and stable; if it holds up, I'll recommend your notebook to my friends.


misobarisic avatar misobarisic commented on August 23, 2024

Mine clones the latest commit from the A1111 webui repo 🤔

From what I can see, acheong08 has not pushed any commits of his own recently; he's been merging upstream changes.


misobarisic avatar misobarisic commented on August 23, 2024

I have removed --gradio-queue since somebody else also reported an issue right after I pushed the fix.


tranthai2k2 avatar tranthai2k2 commented on August 23, 2024

https://colab.research.google.com/drive/1iwLtfEeoUTTVFZ08iVkvJ5jBhwKcspty?usp=sharing#scrollTo=fAsaOpxoT-PC
This is a version I tweaked to my liking, and it's quite stable. Still, if possible I hope your notebook gets updated to stabilize image export, so it won't fail without producing an image.
Thank you for the notebook; I hope there will be more.


tranthai2k2 avatar tranthai2k2 commented on August 23, 2024

Even after an image is generated, it still can't be retrieved, though older images were created fine.


misobarisic avatar misobarisic commented on August 23, 2024

Hmm, I am not experiencing such an issue.

Can you see the images in the gallery tab?


tranthai2k2 avatar tranthai2k2 commented on August 23, 2024

The times I get errors:

  • After generating over and over, it fails and can't create an image, forcing me to reload the web UI or restart the session — especially when height or width > 600.
  • When creating too many images at once: 4 or more usually errors. Sometimes 6 works, but more than that definitely has problems.
  • Many prompts report success but produce no picture, like the one I sent.

If you can, please check my Colab link to see whether it can be fixed:
https://colab.research.google.com/drive/1iwLtfEeoUTTVFZ08iVkvJ5jBhwKcspty?usp=sharing#scrollTo=fAsaOpxoT-PC


misobarisic avatar misobarisic commented on August 23, 2024

A111#6898


misobarisic avatar misobarisic commented on August 23, 2024

I could reproduce the issue using your notebook, and it was solved by adding --gradio-queue to the launch args.


misobarisic avatar misobarisic commented on August 23, 2024

I've pushed a new commit that uses the gradio queue, along with a tag-complete extension checkbox and Lora support. Test it out.


tranthai2k2 avatar tranthai2k2 commented on August 23, 2024

Error completing request
Arguments: ('task(mwkjn3p8ul6fi5w)', 'masterpiece, best quality, twintails, wide sleeves, hands on hips, hand on hip, breasts, 1girl, dress, solo, clothing cutout, thighhighs, cleavage, chinese clothes, rating:safe, pelvic curtain, mole on breast, large breasts, mole on thigh , black hair, china dress, blush, smile, cleavage cutout, short hair, bare shoulders, blue sky, looking at viewer, covered navel, no panties, focused, upright, thigh-high, opposite, volumetric light, good light,, masterpiece, best quality, very detailed, wallpaper 8k cg unity extremely detailed, illustrations,((beautifully detailed) face) ), best quality, (((super detailed ))) , high quality, high resolution illustrations, high resolution , side light, ((best illustration)), high resolution, illustration, absurd, super detailed, intricate detail, perfect , highly detailed eyes ,yellow eyes, perfect light, (CG:1.2 color is extremely detailed),((bangs covering one eye))', 'nsfw, loli, small breasts, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, worst quality, low quality, (worst quality, low quality, extra digits, loli, loli face:1.3)', [], 32, 16, False, False, 2, 2, 7, 324660525.0, -1.0, 0, 0, 0, False, 648, 584, False, 0.7, 2, 'Latent', 0, 0, 0, 0, False, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/txt2img.py", line 52, in txt2img
processed = process_images(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 476, in process_images
res = process_images_inner(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 614, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/stable-diffusion-webui/modules/processing.py", line 809, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 544, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 447, in launch_sampling
return func()
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 544, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/content/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 553, in sample_dpmpp_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 350, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [tensor[a:b]], "c_concat": [image_cond_in[a:b]]})
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
x = layer(x, context)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 324, in forward
x = block(x, context=context[i])
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 259, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 262, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 309, in xformers_attention_forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/init.py", line 203, in memory_efficient_attention
return _memory_efficient_attention(
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/init.py", line 299, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/init.py", line 315, in _memory_efficient_attention_forward
op = _dispatch_fw(inp)
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/dispatch.py", line 95, in _dispatch_fw
return _run_priority_list(
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/dispatch.py", line 70, in _run_priority_list
raise NotImplementedError(msg)
Hey, it's not any better.


misobarisic avatar misobarisic commented on August 23, 2024

The UI at least starts and doesn't throw an error, because --gradio-queue is present. This is an issue with xformers then. I noticed the initial version of the notebook you sent used xformers 0.0.15, whereas mine was recently updated to 0.0.16. It might be worth trying the previous version:

!pip install https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.16/xformers-0.0.16+814314d.d20230118-cp38-cp38-linux_x86_64.whl -->
!pip install https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15+e163309.d20230103-cp38-cp38-linux_x86_64.whl

Or you could just disable xformers since I cannot guarantee it will work.
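Since the fix is just swapping one pinned wheel URL for another, the choice can be sketched as a small lookup. The helper name is my own; the wheel URLs are the two from the comment above:

```python
# Prebuilt xformers wheels (Python 3.8, Linux) hosted in the
# camenduru/stable-diffusion-webui-colab releases, as quoted above.
XFORMERS_WHEELS = {
    "0.0.15": "https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15+e163309.d20230103-cp38-cp38-linux_x86_64.whl",
    "0.0.16": "https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.16/xformers-0.0.16+814314d.d20230118-cp38-cp38-linux_x86_64.whl",
}

def xformers_install_cmd(version: str) -> str:
    """Return the pip command pinning xformers to a known wheel,
    or raise if no prebuilt wheel exists for that version."""
    if version not in XFORMERS_WHEELS:
        raise ValueError(f"no prebuilt wheel for xformers {version}")
    return f"pip install {XFORMERS_WHEELS[version]}"
```

In the notebook you would run the result with `!{xformers_install_cmd("0.0.15")}` before launching the webui.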


tranthai2k2 avatar tranthai2k2 commented on August 23, 2024

I don't know why, but after I press "run in drive" it works very smoothly and the images look great; before pressing it, no image shows.
If possible, could you adjust it to save only the images? "Run in drive" saves all the files to Drive, which is a bit heavy.
The problem stays the same unless I run in Drive.


misobarisic avatar misobarisic commented on August 23, 2024

The issue is still out of my control. I will nonetheless add an option to save just the images to Google Drive.

