haoming02 / sd-webui-resharpen

An Extension for Automatic1111 Webui that increases/decreases the details of images

License: MIT License

Python 100.00%
stable-diffusion-webui stable-diffusion-webui-plugin

sd-webui-resharpen's Introduction

Profile
  • 👋 Hi, I'm @Haoming02
  • 💼 Professional Unity Developer
  • 🤖 AI Art Enthusiast
  • 📫 Contact: [email protected]
Socials
Accumulated Stars ✨

sd-webui-resharpen's People

Contributors

haoming02, w-e-w


sd-webui-resharpen's Issues

Error: AttributeError: 'KDiffusionSampler' object has no attribute 'trajectory_enable'

Getting error after clicking Generate in SDWebUI:
AttributeError: 'KDiffusionSampler' object has no attribute 'trajectory_enable'

Steps to reproduce:
1. Load SDWebUI.
2. Disable ReSharpen.
3. Click Generate; the error appears.

The error does not occur if ReSharpen is enabled, and after the first successful generation it can be disabled again.
Deleting the extension fixes the issue.
I am not the only one with this issue; see the last replies here:
AUTOMATIC1111/stable-diffusion-webui#15588
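
For context, this kind of AttributeError usually means a hijacked callback reads a flag that is only set on the sampler when the extension's toggle is on. A hypothetical sketch of a defensive check (the flag name comes from the error message; the function name and signature below are illustrative, not ReSharpen's actual code):

    def hijacked_callback(sampler, original_callback, d):
        """Skip the extra processing unless the flag was explicitly set on the sampler."""
        # getattr with a default avoids the AttributeError when the extension is disabled
        if not getattr(sampler, "trajectory_enable", False):
            return original_callback(d)
        # ... extension-specific processing of d["x"] would go here ...
        return original_callback(d)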

SDWebUI Version: v1.9.4
Windows
Installed extensions:
a1111-sd-webui-tagcomplete | https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git
multidiffusion-upscaler-for-automatic1111 | https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111.git
sd-webui-cd-tuner | https://github.com/hako-mikan/sd-webui-cd-tuner.git
sd-webui-controlnet | https://github.com/Mikubill/sd-webui-controlnet.git
sd-webui-freeu | https://github.com/ljleb/sd-webui-freeu
sd-webui-incantations | https://github.com/v0xie/sd-webui-incantations.git
sd-webui-infinite-image-browsing | https://github.com/zanllp/sd-webui-infinite-image-browsing.git
sd-webui-inpaint-anything | https://github.com/Uminosachi/sd-webui-inpaint-anything.git
sd-webui-kohya-hiresfix | https://github.com/wcde/sd-webui-kohya-hiresfix.git
sd-webui-lama-cleaner-masked-content | https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content.git
sd-webui-mosaic-outpaint | https://github.com/Haoming02/sd-webui-mosaic-outpaint.git
sd-webui-resharpen | https://github.com/Haoming02/sd-webui-resharpen.git
sd-webui-segment-anything | https://github.com/continue-revolution/sd-webui-segment-anything.git
sd-webui-tabs-extension | https://github.com/Haoming02/sd-webui-tabs-extension.git
sd-webui-yandere-inpaint-masked-content | https://github.com/light-and-ray/sd-webui-yandere-inpaint-masked-content.git
sd_civitai_extension | https://github.com/civitai/sd_civitai_extension
stable-diffusion-webui-dataset-tag-editor | https://github.com/toshiaki1729/stable-diffusion-webui-dataset-tag-editor.git
webui-fooocus-prompt-expansion | https://github.com/power88/webui-fooocus-prompt-expansion.git

ZeroDivisionError: division by zero error when using HiRes fix

Getting an error during HiRes fix when any option other than Flat is selected for HiRes.

ReSharpen version:
f79e14d

Error:
Traceback (most recent call last):
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\processing.py", line 845, in process_images
res = process_images_inner(p)
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\processing.py", line 981, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\processing.py", line 1344, in sample
return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\processing.py", line 1429, in sample_hr_pass
samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 172, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 172, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\sampling.py", line 555, in sample_dpmpp_sde
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\extensions\sd-webui-resharpen\scripts\resharpen.py", line 26, in hijack_callback
d["x"] += delta * apply_scaling(
File "S:\StabilityMatrix\Packages\Stable Diffusion WebUI\extensions\sd-webui-resharpen\scripts\res_scaling.py", line 8, in apply_scaling
ratio = float(current_step / total_steps)
ZeroDivisionError: division by zero
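
The last frame is in res_scaling.py, where the ratio is computed as current_step / total_steps. Below is a minimal sketch of a guard for the case where the HiRes pass reports zero total steps; the signature and the fallback value are assumptions, not the extension's actual fix:

    def apply_scaling(mode: str, current_step: int, total_steps: int) -> float:
        """Return the per-step scaling factor, falling back to a neutral 1.0 when the step count is unknown."""
        if total_steps <= 0:
            return 1.0  # avoid the ZeroDivisionError seen in the traceback above
        ratio = float(current_step / total_steps)
        # "Flat" keeps the strength constant; other modes would shape it by the ratio
        return 1.0 if mode == "Flat" else ratio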

SDWebUI Version: v1.9.4
Windows
Installed extensions:
a1111-sd-webui-tagcomplete | https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git
multidiffusion-upscaler-for-automatic1111 | https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111.git
sd-webui-cd-tuner | https://github.com/hako-mikan/sd-webui-cd-tuner.git
sd-webui-controlnet | https://github.com/Mikubill/sd-webui-controlnet.git
sd-webui-freeu | https://github.com/ljleb/sd-webui-freeu
sd-webui-incantations | https://github.com/v0xie/sd-webui-incantations.git
sd-webui-infinite-image-browsing | https://github.com/zanllp/sd-webui-infinite-image-browsing.git
sd-webui-inpaint-anything | https://github.com/Uminosachi/sd-webui-inpaint-anything.git
sd-webui-kohya-hiresfix | https://github.com/wcde/sd-webui-kohya-hiresfix.git
sd-webui-lama-cleaner-masked-content | https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content.git
sd-webui-mosaic-outpaint | https://github.com/Haoming02/sd-webui-mosaic-outpaint.git
sd-webui-resharpen | https://github.com/Haoming02/sd-webui-resharpen.git
sd-webui-segment-anything | https://github.com/continue-revolution/sd-webui-segment-anything.git
sd-webui-tabs-extension | https://github.com/Haoming02/sd-webui-tabs-extension.git
sd-webui-yandere-inpaint-masked-content | https://github.com/light-and-ray/sd-webui-yandere-inpaint-masked-content.git
sd_civitai_extension | https://github.com/civitai/sd_civitai_extension
stable-diffusion-webui-dataset-tag-editor | https://github.com/toshiaki1729/stable-diffusion-webui-dataset-tag-editor.git
webui-fooocus-prompt-expansion | https://github.com/power88/webui-fooocus-prompt-expansion.git

Path Error in resharpen.py with StableDiffusionWebUI Forge – Suggestion for __init__.py Placement

Dear Haoming02,

Thank you for creating the "sharpness" control extension for StableDiffusionWebUI. I tried using this extension with StableDiffusionWebUI Forge and encountered a path error in resharpen.py.

I was wondering if placing an __init__.py file in the specified locations might resolve the error and reduce the likelihood of others encountering the same issue. The specified locations are the sd-webui-resharpen folder and its scripts folder.

Your consideration would be greatly appreciated. Thank you in advance.
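
Adding empty __init__.py files as suggested is one option; another common workaround for extension scripts that import sibling modules is to put the script's own folder on sys.path. A hypothetical sketch (the module name res_scaling comes from the traceback in the issue above; whether ReSharpen imports it this way is an assumption):

    import os
    import sys

    # Make sibling modules in the extension's scripts/ folder importable regardless of
    # how the WebUI (or Forge) sets the working directory when loading extensions.
    SCRIPTS_DIR = os.path.dirname(os.path.abspath(__file__))
    if SCRIPTS_DIR not in sys.path:
        sys.path.insert(0, SCRIPTS_DIR)

    from res_scaling import apply_scaling  # plain module import instead of a package-relative one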

'NoneType' object is not iterable when changing resolution

The extension works as expected until I change the resolution to anything other than 1024x1024. It happens even with all other extensions disabled. Here is the error:

Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) =  16281.34912109375
[Memory Management] Model Memory (MB) =  0.0
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  15257.34912109375
Moving model(s) has taken 0.03 seconds
  0%|          | 0/28 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "N:\AI\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "N:\AI\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "N:\AI\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "N:\AI\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "N:\AI\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "N:\AI\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "N:\AI\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "N:\AI\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "N:\AI\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "N:\AI\stable-diffusion-webui-forge\modules_forge\forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 55, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 620, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 447, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\util.py", line 194, in checkpoint
    return func(*inputs)
  File "N:\AI\stable-diffusion-webui-forge\ldm_patched\ldm\modules\attention.py", line 552, in _forward
    n = p(n, extra_options)
  File "N:\AI\stable-diffusion-webui-forge\extensions\sd-forge-couple\scripts\attention_couple.py", line 67, in attn2_output_patch
    mask_downsample = get_mask(
  File "N:\AI\stable-diffusion-webui-forge\extensions\sd-forge-couple\scripts\attention_masks.py", line 27, in get_mask
    mask_downsample = mask_downsample.view(num_conds, num_tokens, 1).repeat_interleave(
RuntimeError: shape '[3, 4160, 1]' is invalid for input of size 768
shape '[3, 4160, 1]' is invalid for input of size 768
*** Error completing request
*** Arguments: ('task(vqkdx3uwi27upfo)', <gradio.routes.Request object at 0x0000020E10B9F940>, '1girl\n1boy', '', [], 28, 'Euler a', 1, 1, 7, 1024, 1032, False, 0.45, 1.3, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, 2797109398, False, -1, 0, 0, 0, 0.0, 4, 512, 512, True, 'None', 'None', 0, True, 'Horizontal', 'None', '', 'Basic', [['0:0.5', '0.0:1.0', '1.0'], ['0.5:1.0', '0.0:1.0', '1.0']], False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "N:\AI\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

Resharpen seems to stop batching

When I increase the batch count above 1 in txt2img with ReSharpen enabled, I get "HINT: We don't support broadcasting, please use expand yourself before calling memory_efficient_attention". I only seem to get this with this extension enabled.
Actually, scratch that: something else may be causing this; it might just be a coincidence that it started when I installed ReSharpen.
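
That hint is emitted by xformers: memory_efficient_attention will not broadcast a batch dimension, so a tensor whose batch size is 1 must be expanded to match before the call. A generic illustration of what the hint asks for (this is not ReSharpen's code; it assumes xformers and a CUDA device are available):

    import torch
    import xformers.ops as xops

    # (batch, seq_len, heads, head_dim) layout used by memory_efficient_attention
    q = torch.randn(4, 128, 8, 64, device="cuda", dtype=torch.float16)
    kv = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)  # batch of 1

    # xformers refuses to broadcast, so expand the singleton batch dimension explicitly
    k = kv.expand(q.shape[0], -1, -1, -1)
    v = kv.expand(q.shape[0], -1, -1, -1)

    out = xops.memory_efficient_attention(q, k, v)  # no broadcasting needed now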

Use via API?

Hi,
First, this is one of the absolute best extensions I've discovered so far!
Congratulations on achieving this.

Now, is it possible to use this extension via the API, or is it only usable through the WebUI?
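
For extensions that register as AlwaysOn scripts, the WebUI API generally accepts their settings through the alwayson_scripts field of the /sdapi/v1/txt2img payload. A hypothetical sketch follows; the script key "ReSharpen" and its argument order are assumptions that should be verified against /sdapi/v1/script-info on your install:

    import requests

    payload = {
        "prompt": "a lighthouse at sunset",
        "steps": 20,
        "alwayson_scripts": {
            # script key and args order are assumptions; check /sdapi/v1/script-info
            "ReSharpen": {"args": [True, 0.5]},  # e.g. enable flag, sharpness strength
        },
    }

    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
    resp.raise_for_status()
    images = resp.json()["images"]  # list of base64-encoded PNGs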

[Feature request] Additional parameters for adding noise

The first steps of the initial rendering and of the latent hires fix are really sensitive to noise injection.
The last steps could be left "sane" to do some "housecleaning".
Could you please add parameters such as "start step" and "stop step", separately for the initial pass and for hires.fix?

Their behavior could be as follows (see the sketch below):
if the value is a float (e.g. 0.5), it is interpreted as a fraction of the total steps, i.e. 0.5 * NumberOfSteps;
if the value is an integer (e.g. 2), it is taken literally, i.e. 2 steps.
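
A minimal sketch of the proposed interpretation rule (the function and parameter names are illustrative, not part of the extension):

    def resolve_step(value, total_steps: int) -> int:
        """Interpret an int literally and a float as a fraction of the total step count."""
        if isinstance(value, int):
            return value                   # e.g. 2 -> step 2
        return round(value * total_steps)  # e.g. 0.5 with 30 steps -> step 15

With 30 total steps, resolve_step(2, 30) gives 2, while resolve_step(0.5, 30) gives 15.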
