
v0xie / sd-webui-incantations


Enhance Stable Diffusion image quality, prompt following, and more through multiple implementations of novel algorithms for Automatic1111 WebUI.

License: GNU General Public License v3.0

Languages: Python 97.03%, Jupyter Notebook 2.97%
Topics: extension, stable-diffusion-webui, stable-diffusion-webui-plugin

sd-webui-incantations's People

Contributors

drhead, furkangozukara, huchenlei, myusufy, v0xie


sd-webui-incantations's Issues

Question on generation speed

Hi,

Not a bug, just a question: is the generation speed supposed to be halved when using this? I went from 2:15 for four 1024x1024 SDXL images at 40 steps (in one batch) to 4:29. Is this normal?

'CFGDenoiserParams' object has no attribute 'denoiser'

I get these errors at each step when running "Perturbed Attention Guidance". I just installed this extension, so I can't say whether it worked before...
Breaks for both SD15 and SDXL models.
I disabled all optional extensions in text2img, but if this can't be reproduced, I can attempt to disable other installed extensions (though that's a bit of a pain).

0%| | 0/24 [00:00<?, ?it/s]*** Error executing callback cfg_denoiser_callback for M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\pag.py
    Traceback (most recent call last):
      File "M:\ML\1111\stable-diffusion-webui\modules\script_callbacks.py", line 216, in cfg_denoiser_callback
        c.callback(params)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\pag.py", line 141, in <lambda>
        cfg_denoise_lambda = lambda callback_params: self.on_cfg_denoiser_callback(callback_params, pag_params)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\pag.py", line 278, in on_cfg_denoiser_callback
        pag_params.denoiser = params.denoiser
    AttributeError: 'CFGDenoiserParams' object has no attribute 'denoiser'

*** Error executing callback cfg_denoised_callback for M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\pag.py
    Traceback (most recent call last):
      File "M:\ML\1111\stable-diffusion-webui\modules\script_callbacks.py", line 224, in cfg_denoised_callback
        c.callback(params)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\pag.py", line 142, in <lambda>
        cfg_denoised_lambda = lambda callback_params: self.on_cfg_denoised_callback(callback_params, pag_params)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\pag.py", line 331, in on_cfg_denoised_callback
        cond_in = catenate_conds([tensor, uncond])
      File "M:\ML\1111\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 13, in catenate_conds
        return torch.cat(conds)
    TypeError: expected Tensor as element 0 in argument 0, but got NoneType

4%|███▍ | 1/24 [00:00<00:19, 1.20it/s]*** Error executing.....

Let me know if there is anything I can help with testing.
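The traceback suggests the installed WebUI build is older than the one the extension targets: `CFGDenoiserParams` apparently has no `denoiser` field there, so updating stable-diffusion-webui is the usual first step. For illustration only (names taken from the traceback, not the extension's actual fix), a callback could also guard the attribute access:

```python
# Hedged sketch (not the extension's actual code): tolerate WebUI builds whose
# CFGDenoiserParams has no `denoiser` field instead of raising AttributeError.
def on_cfg_denoiser_callback(self, params, pag_params):
    pag_params.denoiser = getattr(params, "denoiser", None)
    if pag_params.denoiser is None:
        # Feature unavailable on this WebUI version; skip the PAG pass.
        return
```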

Forge: AttributeError: 'DiffusionEngine' object has no attribute 'network_layer_mapping'

Got this error as soon as I installed the extension, and also every time I generate something, even if none of the features are enabled. Of course, they do not work if enabled either.

*** Error running postprocess_batch: G:\webui_forge_cu121_torch21\webui\extensions\sd-webui-incantations\scripts\incantation_base.py
    Traceback (most recent call last):
      File "G:\webui_forge_cu121_torch21\webui\modules\scripts.py", line 851, in postprocess_batch
        script.postprocess_batch(p, *script_args, images=images, **kwargs)
      File "G:\webui_forge_cu121_torch21\webui\extensions\sd-webui-incantations\scripts\incantation_base.py", line 91, in postprocess_batch
        m.module.postprocess_batch(p, *self.m_args(m, *args), **kwargs)
      File "G:\webui_forge_cu121_torch21\webui\extensions\sd-webui-incantations\scripts\t2i_zero.py", line 213, in postprocess_batch
        self.t2i0_postprocess_batch(p, *args, **kwargs)
      File "G:\webui_forge_cu121_torch21\webui\extensions\sd-webui-incantations\scripts\t2i_zero.py", line 216, in t2i0_postprocess_batch
        self.unhook_callbacks()
      File "G:\webui_forge_cu121_torch21\webui\extensions\sd-webui-incantations\scripts\t2i_zero.py", line 224, in unhook_callbacks
        cross_attn_modules = self.get_cross_attn_modules()
      File "G:\webui_forge_cu121_torch21\webui\extensions\sd-webui-incantations\scripts\t2i_zero.py", line 408, in get_cross_attn_modules
        nlm = m.network_layer_mapping
      File "G:\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1695, in __getattr__
        raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
    AttributeError: 'DiffusionEngine' object has no attribute 'network_layer_mapping'
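Forge replaces the A1111 model object with its own `DiffusionEngine`, which apparently does not expose `network_layer_mapping`, so the extension's cross-attention lookup fails on that backend. A minimal defensive sketch (the function and the class-name filter are assumptions, not the extension's real code):

```python
def get_cross_attn_modules(model):
    """Hedged sketch: collect cross-attention modules, returning an empty list
    on backends (e.g. Forge's DiffusionEngine) that have no
    network_layer_mapping instead of raising AttributeError."""
    mapping = getattr(model, "network_layer_mapping", None)
    if mapping is None:
        return []
    # Assumption: cross-attention layers can be picked out by class name.
    return [m for m in mapping.values() if type(m).__name__ == "CrossAttention"]
```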

warning: interrogate_return_ranks

I receive this warning when using a Pony SDXL model:


incantations - warning: interrogate_return_ranks should be enabled for Deepbooru Interrogate to work

It seems like interrogation works but it sets 1.0 strength to every tag.

Seek for Incantations - multi-word bracket () error

When I run "Seek for Incantations", I get an "IndexError: list index out of range" exception.
I was able to narrow it down to brackets () around multiple words, in modes without "Deepbooru Interrogate" checked.
"Append Generated Caption" didn't affect whether the error occurs.

Example prompts that work correctly:
green forest
(green) forest
green (forest)
(green) (forest)

Prompts that throw the exception:
(green forest)
(green forest:1.2)

Brackets around a single multi-token word are fine.

100%|██████████████████████████████████████████████████████████████████████████████████| 24/24 [00:04<00:00, 5.45it/s]
*** Error running postprocess_batch: M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incantation_base.py
    Traceback (most recent call last):
      File "M:\ML\1111\stable-diffusion-webui\modules\scripts.py", line 758, in postprocess_batch
        script.postprocess_batch(p, *script_args, images=images, **kwargs)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incantation_base.py", line 91, in postprocess_batch
        m.module.postprocess_batch(p, *self.m_args(m, *args), **kwargs)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incant.py", line 426, in postprocess_batch
        return self.incant_postprocess_batch(p, *args, **kwargs)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incant.py", line 454, in incant_postprocess_batch
        batch_mask_prompts.append(self.mask_prompt(incant_params.gamma, matches, caption, incant_params.word))
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incant.py", line 540, in mask_prompt
        masked_prompt = re.sub(repl_regex, word_repl, masked_prompt)
      File "C:\Programy_MR\Python_3_10_6\lib\re.py", line 209, in sub
        return _compile(pattern, flags).sub(repl, string, count)
      File "C:\Programy_MR\Python_3_10_6\lib\re.py", line 303, in _compile
        p = sre_compile.compile(pattern, flags)
      File "C:\Programy_MR\Python_3_10_6\lib\sre_compile.py", line 788, in compile
        p = sre_parse.parse(p, flags)
      File "C:\Programy_MR\Python_3_10_6\lib\sre_parse.py", line 955, in parse
        p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
      File "C:\Programy_MR\Python_3_10_6\lib\sre_parse.py", line 444, in _parse_sub
        itemsappend(_parse(source, state, verbose, nested + 1,
      File "C:\Programy_MR\Python_3_10_6\lib\sre_parse.py", line 843, in _parse
        raise source.error("missing ), unterminated subpattern",
    re.error: missing ), unterminated subpattern at position 2

*** Error running before_process_batch: M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incantation_base.py
    Traceback (most recent call last):
      File "M:\ML\1111\stable-diffusion-webui\modules\scripts.py", line 726, in before_process_batch
        script.before_process_batch(p, *script_args, **kwargs)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incantation_base.py", line 83, in before_process_batch
        m.module.before_process_batch(p, *self.m_args(m, *args), **kwargs)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incant.py", line 276, in before_process_batch
        self.incant_before_process_batch(p, *args, **kwargs)
      File "M:\ML\1111\stable-diffusion-webui\extensions_sd-webui-incantations\scripts\incant.py", line 308, in incant_before_process_batch
        masked_prompts = self.stage_1.masked_prompt[mask_idx]
    IndexError: list index out of range
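The `re.error` suggests the bracketed prompt fragment is interpolated into a regular expression without escaping, so `(green forest)` becomes an unterminated group. A minimal sketch of the usual remedy, using hypothetical names rather than the extension's actual `mask_prompt` logic:

```python
import re

def mask_word(masked_prompt: str, word: str, replacement: str = "-") -> str:
    """Hedged sketch: escape the prompt fragment before using it as a pattern,
    so "(", ")" and ":" are matched literally instead of breaking the regex."""
    repl_regex = re.escape(word)
    return re.sub(repl_regex, replacement, masked_prompt)

# "(green forest)" previously raised "missing ), unterminated subpattern";
# with re.escape it is treated as literal text.
print(mask_word("a photo of (green forest), detailed", "(green forest)"))
```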

Great promise; new toy! Hard Prompts Made Easy

v0xie, it's only now that I'm noticing that a lot of my favourite A1111 extensions have been implemented, contributed to, or even outright designed by you, so seeing them all come together with other good ideas is very exciting. Thank you and good luck to you and your co-conspirators.

I submit a neglected but (I think?) relevant method for your attention:

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery
https://doi.org/10.48550/arXiv.2302.03668

Exists as an unlisted (heartbreaking!) extension for A1111 as the mighty Pez Dispenser.
You will also find a link to the reference code in both the paper above and the extension here:
https://github.com/r0mar0ma/sd-webui-pez-dispenser

I'm submitting this just assuming you'd be interested to see it. As I don't use A1111 but SD.Next instead (where the fearsome Pez Dispenser yet lives), I'm not really invested in this repo beyond seeing good work come together.

I do miss CADS a great deal though... that algorithm is genuine magic. Are diffusers implementations of your own methods plausible (i.e. a matter of semantics rather than design philosophy)? A1111 just does so many silly things that I couldn't cope with its inconsistency any more, but work like yours is a serious loss for those making the switch. Don't you deserve torch.compile(), v0xie? Don't we all?

better interface for T2I-Zero?

From the small amount of experimentation I've done with T2I-Zero, it seems like it has a fair amount of potential, but right now it's really hard to use since I have to manually count tokens in the prompt and fix them every time I change the prompt.

I would think that the ideal method of control would be something that parses part of the prompt as a marker, like sd-dynamic-prompts does (and this wouldn't be nearly as complex as that, since the effect is either applied to a token or not; a rough sketch of this idea follows below). There are possibly alternative ways to do it involving custom UI elements, which may or may not be easier to implement (stable-diffusion-webui-tokenizer would be where I'd start for that).

Edit: also somewhat related, I don't think there's any distinction between the positive and negative prompt, so T2I-Zero seems to always be applied to the negative prompt as well. This will also cause a device-side assert if the positive prompt is longer than the negative prompt and padding for the negative prompt is not enabled (this really needs to be replaced with cross-attention masking on A1111's side so it isn't an issue; a lot of extensions have problems with this). Regardless of whether it crashes, this does need some way of filtering out the negative prompt, which may itself require forced unbatching of cond/uncond.
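As an illustration of the marker idea above (the `{{word}}` syntax is purely hypothetical, not something the extension supports today), a parser could strip inline markers and hand the marked words to the extension, which would then map them to token indices itself:

```python
import re

# Hedged sketch of the suggested UX: let users mark target words inline
# instead of counting token indices by hand.
MARKER = re.compile(r"\{\{(.+?)\}\}")

def parse_markers(prompt: str):
    marked = MARKER.findall(prompt)               # words the user wants targeted
    cleaned = MARKER.sub(lambda m: m.group(1), prompt)
    return cleaned, marked

prompt = "a {{cat}} and a {{dog}} sitting on a bench"
print(parse_markers(prompt))
# ('a cat and a dog sitting on a bench', ['cat', 'dog'])
```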

The extension throws an error on startup.

Hello

I just installed the extension and it throws an error on startup:


*** Error loading script: incantation_base.py
    Traceback (most recent call last):
      File "F:\Projects\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 319, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "F:\Projects\Stable Diffusion\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "F:\Projects\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-incantations\scripts\incantation_base.py", line 11, in <module>
        from scripts.incant import IncantExtensionScript
      File "F:\Projects\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-incantations\scripts\incant.py", line 14, in <module>
        from modules.sd_samplers_cfg_denoiser import pad_cond
    ModuleNotFoundError: No module named 'modules.sd_samplers_cfg_denoiser'

*** Error loading script: pag.py
    Traceback (most recent call last):
      File "F:\Projects\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 319, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "F:\Projects\Stable Diffusion\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "F:\Projects\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-incantations\scripts\pag.py", line 8, in <module>
        from modules import script_callbacks, patches
    ImportError: cannot import name 'patches' from 'modules' (unknown location)

Sounds like a dependency is missing. Does anyone know how to fix this?

Thanks!
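Both failures point at missing WebUI modules (`modules.sd_samplers_cfg_denoiser`, `modules.patches`), which usually means the WebUI itself is older than the version the extension targets; updating stable-diffusion-webui is the usual fix. For illustration only, a script could also degrade gracefully with guarded imports (a sketch under that assumption, not the extension's actual code):

```python
# Hedged sketch: fail gracefully on WebUI versions that predate the modules
# this extension imports, instead of crashing during script loading.
try:
    from modules.sd_samplers_cfg_denoiser import pad_cond  # newer WebUI only
    from modules import patches                            # newer WebUI only
    INCANT_AVAILABLE = True
except ImportError:
    pad_cond = None
    patches = None
    INCANT_AVAILABLE = False
    print("sd-webui-incantations: this WebUI version is too old; "
          "update stable-diffusion-webui to use the extension.")
```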

PAG doesn't work with AND

Prompt: a AND b

    Traceback (most recent call last):
      File "E:\stable-diffusion-webui\modules\script_callbacks.py", line 341, in cfg_denoised_callback
        c.callback(params)
      File "E:\stable-diffusion-webui\extensions\sd-webui-incantations\scripts\pag.py", line 289, in <lambda>
        cfg_denoised_lambda = lambda callback_params: self.on_cfg_denoised_callback(callback_params, pag_params)
      File "E:\stable-diffusion-webui\extensions\sd-webui-incantations\scripts\pag.py", line 503, in on_cfg_denoised_callback
        pag_x_out = params.inner_model(x_in, sigma_in, cond=conds)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\stable-diffusion-webui\modules\sd_models_xl.py", line 44, in apply_model
        return self.model(x, t, cond)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "E:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "E:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 987, in forward
        assert y.shape[0] == x.shape[0]
    AssertionError

---
ERROR:scripts.cfg_combiner:Exception in combine_denoised_pass_conds_list - 'NoneType' object is not subscriptable
Traceback (most recent call last):
  File "E:\stable-diffusion-webui\extensions\sd-webui-incantations\scripts\cfg_combiner.py", line 244, in new_combine_denoised
    pag_delta = x_out[cond_index] - pag_x_out[i]
TypeError: 'NoneType' object is not subscriptable
ERROR:scripts.cfg_combiner:Exception in combine_denoised_pass_conds_list - 'NoneType' object is not subscriptable
Traceback (most recent call last):
  File "E:\stable-diffusion-webui\extensions\sd-webui-incantations\scripts\cfg_combiner.py", line 244, in new_combine_denoised
    pag_delta = x_out[cond_index] - pag_x_out[i]
TypeError: 'NoneType' object is not subscriptable
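The assertion failure appears once the prompt uses composable diffusion (`AND`), which changes the cond batching that the PAG pass seems to assume. A hedged sketch of one possible guard; the `conds_list` structure follows A1111's per-image list of `(cond_index, weight)` pairs, and the helper name is hypothetical:

```python
def pag_supported(conds_list) -> bool:
    """Hedged sketch: report False when composable diffusion (AND) gives an
    image more than one cond, so the extra PAG pass can be skipped instead of
    hitting the shape assertion above."""
    return all(len(pairs) == 1 for pairs in conds_list)

# Example: "a AND b" yields two weighted conds for a single image.
print(pag_supported([[(0, 1.0), (1, 1.0)]]))  # False -> skip PAG
print(pag_supported([[(0, 1.0)]]))            # True  -> PAG can run
```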

Feature Request: Support for HiDiffusion? (Generating images well above their base resolution)

Hey! Love what you're doing here. I noticed a developer of HiDiffusion was seeking help incorporating their code, which generates high-res images above what models generally support (similar to Deep Shrink/ResAdapter but apparently better), into the various SD UIs.

If you have time and would consider either adding HiDiffusion to Incantations or even working on a standalone extension, please reach out to them on Reddit: https://old.reddit.com/r/StableDiffusion/comments/1cbaxsu/introducing_hidiffusion_increase_the_resolution/l12zql5/?context=3

SCFG stuck in 'on' mode after AttributeError

S-CFG cannot be disabled once it encounters an AttributeError; a full restart of the WebUI is required.

    Traceback (most recent call last):
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\processing.py", line 847, in process_images
        res = process_images_inner(p)
      File "C:\Stable_Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\processing.py", line 985, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\processing.py", line 1343, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 223, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 223, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Stable_Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 128, in sample_euler
        denoised = model(x, sigma_hat * s_in, **extra_args)
      File "C:\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable_Diffusion\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 281, in forward
        denoised = self.combine_denoised(x_out, conds_list, uncond, cond_scale)
      File "C:\Stable_Diffusion\stable-diffusion-webui\extensions\4-sd-webui-incantations\scripts\cfg_combiner.py", line 110, in <lambda>
        pass_conds_func = lambda *args, **kwargs: combine_denoised_pass_conds_list(
      File "C:\Stable_Diffusion\stable-diffusion-webui\extensions\4-sd-webui-incantations\scripts\cfg_combiner.py", line 276, in combine_denoised_pass_conds_list
        return new_combine_denoised(*args)
      File "C:\Stable_Diffusion\stable-diffusion-webui\extensions\4-sd-webui-incantations\scripts\cfg_combiner.py", line 215, in new_combine_denoised
        rate = scfg_combine_denoised(
      File "C:\Stable_Diffusion\stable-diffusion-webui\extensions\4-sd-webui-incantations\scripts\scfg.py", line 421, in scfg_combine_denoised
        if mask_t.shape[2:] != model_delta_norm.shape[2:]:
    AttributeError: 'NoneType' object has no attribute 'shape'
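Since the exception escapes mid-sampling, whatever S-CFG patched into `combine_denoised` is presumably never undone, which is why it stays active until a restart. A generic pattern for that kind of cleanup (the helper names are hypothetical, not the extension's API):

```python
# Hedged sketch: always remove hooks/patches, even when sampling raises,
# so a single AttributeError cannot leave S-CFG permanently enabled.
def process_with_hooks(self, p, *args, **kwargs):
    self.hook_callbacks(p)           # hypothetical setup helper
    try:
        return self.run_sampling(p, *args, **kwargs)
    finally:
        self.unhook_callbacks()      # runs on success and on exceptions alike
```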

Issue at install (Transformers?)

I was really excited to dive into yet another of your awesome offerings... however, I'm running into an issue after install. System and SD WebUI details:

System
OS: Windows 10
CPU: Intel i7
GPU: Nvidia RTX 3070

SD WebUI
Version: 1.7.0
Python: 3.11.8
Torch: 2.2.0+cu121
xformers: N/A
Gradio: 3.41.2

Traceback

incant.py: done in 0.021s
*** Error loading script: incantation_base.py
    Traceback (most recent call last):
      File "D:\AIA\Tools\SDUI\modules\scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\AIA\Tools\SDUI\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "D:\AIA\Tools\SDUI\extensions\sd-webui-incantations\scripts\incantation_base.py", line 13, in <module>
        from scripts.p2hp import P2HP
      File "D:\AIA\Tools\SDUI\extensions\sd-webui-incantations\scripts\p2hp.py", line 8, in <module>
        from scripts.prompt_optim_utils import optimize_prompt
      File "D:\AIA\Tools\SDUI\extensions\sd-webui-incantations\scripts\prompt_optim_utils.py", line 32, in <module>
        from sentence_transformers.util import (semantic_search,
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\__init__.py", line 3, in <module>
        from .datasets import SentencesDataset, ParallelSentencesDataset
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\datasets\__init__.py", line 1, in <module>
        from .DenoisingAutoEncoderDataset import DenoisingAutoEncoderDataset
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\datasets\DenoisingAutoEncoderDataset.py", line 5, in <module>
        from transformers.utils.import_utils import is_nltk_available, NLTK_IMPORT_ERROR
    ImportError: cannot import name 'is_nltk_available' from 'transformers.utils.import_utils' (D:\AIA\Tools\SDUI\venv\Lib\site-packages\transformers\utils\import_utils.py)

---
  incantation_base.py: done in 0.057s
*** Error loading script: p2hp.py
    Traceback (most recent call last):
      File "D:\AIA\Tools\SDUI\modules\scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\AIA\Tools\SDUI\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "D:\AIA\Tools\SDUI\extensions\sd-webui-incantations\scripts\p2hp.py", line 8, in <module>
        from scripts.prompt_optim_utils import optimize_prompt
      File "D:\AIA\Tools\SDUI\extensions\sd-webui-incantations\scripts\prompt_optim_utils.py", line 32, in <module>
        from sentence_transformers.util import (semantic_search,
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\__init__.py", line 3, in <module>
        from .datasets import SentencesDataset, ParallelSentencesDataset
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\datasets\__init__.py", line 1, in <module>
        from .DenoisingAutoEncoderDataset import DenoisingAutoEncoderDataset
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\datasets\DenoisingAutoEncoderDataset.py", line 5, in <module>
        from transformers.utils.import_utils import is_nltk_available, NLTK_IMPORT_ERROR
    ImportError: cannot import name 'is_nltk_available' from 'transformers.utils.import_utils' (D:\AIA\Tools\SDUI\venv\Lib\site-packages\transformers\utils\import_utils.py)

---
  p2hp.py: done in 0.012s
*** Error loading script: prompt_optim_utils.py
    Traceback (most recent call last):
      File "D:\AIA\Tools\SDUI\modules\scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\AIA\Tools\SDUI\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "D:\AIA\Tools\SDUI\extensions\sd-webui-incantations\scripts\prompt_optim_utils.py", line 32, in <module>
        from sentence_transformers.util import (semantic_search,
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\__init__.py", line 3, in <module>
        from .datasets import SentencesDataset, ParallelSentencesDataset
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\datasets\__init__.py", line 1, in <module>
        from .DenoisingAutoEncoderDataset import DenoisingAutoEncoderDataset
      File "D:\AIA\Tools\SDUI\venv\Lib\site-packages\sentence_transformers\datasets\DenoisingAutoEncoderDataset.py", line 5, in <module>
        from transformers.utils.import_utils import is_nltk_available, NLTK_IMPORT_ERROR
    ImportError: cannot import name 'is_nltk_available' from 'transformers.utils.import_utils' (D:\AIA\Tools\SDUI\venv\Lib\site-packages\transformers\utils\import_utils.py)
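The ImportError looks like a version mismatch between `sentence-transformers` and `transformers` inside the WebUI venv (the installed `transformers` lacks `is_nltk_available`); aligning the two packages' versions is the usual kind of fix. For illustration, the optional dependency could also be guarded so the rest of the extension still loads (a sketch, not the extension's actual code):

```python
# Hedged sketch: treat sentence-transformers as an optional dependency so a
# transformers/sentence-transformers version mismatch disables the prompt
# optimizer instead of breaking every incantations script at load time.
try:
    from sentence_transformers.util import semantic_search
    SEMANTIC_SEARCH_AVAILABLE = True
except ImportError as e:
    semantic_search = None
    SEMANTIC_SEARCH_AVAILABLE = False
    print(f"sd-webui-incantations: prompt optimizer disabled ({e})")
```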

"Res adapter" - Would you consider a WebUI implementation/Extension?

Hello,

I was wondering if you'd consider doing a WebUI implementation of "ResAdapter", which allows for greater resolutions on v1.5 models (similar to Kohya's Deep Shrink), as well as higher and lower resolutions for SDXL using their own custom-trained models.

Their official ComfyUI node was released within the past few days, but it doesn't seem like there are any plans for an A1111/Forge extension.

Is ResAdapter something you'd potentially work out an extension for? I know I'd find it very useful.

Thanks for taking a look!

https://github.com/bytedance/res-adapter - Code
https://arxiv.org/abs/2403.02084 - Paper
https://github.com/blepping/ComfyUI-ApplyResAdapterUnet - An amateur developer released this ComfyUI node prior to the official release; the res-adapter authors mentioned it on their official page when they finally released their own node this weekend.
https://github.com/jiaxiangc/ComfyUI-ResAdapter - Official Comfy Node
https://huggingface.co/jiaxiangc/res-adapter/tree/main - Models - The last one (SDXL up to 1536x1536) was released on March 30, 2024.

Errors in console using Multi-Concept T2I-Zero / Disable for Hi-res fix

Error: Width: 640, height: 360, Downscale width: 106, height: 60, Factor: 6, Max dims: 230400
(the line above is repeated several times in the console)


I'm using the latest dev build of A1111 and a grogmix SDXL Turbo model.

Latest update caused an error on startup

WARNING:incantation_base.py:Module CFG Combiner does not implement get_xyz_axis_options
WARNING:scripts.incantation_base:Module CFG Combiner does not implement get_xyz_axis_options

Any idea why? Thanks!

Fully incompatible (?) with ADetailer

I encountered an issue where PAG would apply normally to the first image generated in A1111, but errored out and didn't apply its effect to any subsequent generations. This was initially solved by adding ",incantations" to Settings > ADetailer > "Script names to apply to ADetailer (separated by comma)".

It should maybe be added to the README or pinned somewhere in case another user experiences the same errors.

Disregard the above: the extension is fully incompatible with ADetailer enabled. It causes PAG to error out after a few generations; ADetailer needs to be disabled for PAG to function properly.

I'd also like to note that, at least in my setup, PAG is fully compatible with Hypertile, NegPiP (https://github.com/hako-mikan/sd-webui-negpip/), and Diffusion CG (https://github.com/Haoming02/sd-webui-diffusion-cg), all of which (if I understand correctly) affect the denoising process.

Currently on A1111 1.9.0, with all extensions updated to the latest version as of posting, running a 1.5 checkpoint, all on a 2060 Super.

Seeing no changes despite selecting different CFG schedulers

Hello!

First off, thanks for the awesome plugin. I've been having a lot of fun retrying my earliest awful prompts with PAG and seeing if they improve. I just updated the plugin for the CFG schedulers feature and then applied the following A1111 PRs manually (but have them disabled):
AUTOMATIC1111/stable-diffusion-webui#15608 (skip cfg)
AUTOMATIC1111/stable-diffusion-webui#15607 (kl sampler)

When I tried testing with the XYZ prompt below, though, it looks like the CFG schedules have no impact at all, as the X plot shows. Shouldn't there be a larger variety? The results seem to be pixel-identical.

an apple resting on a letter written in ink cursive, next to a quil pen
Steps: 30, Sampler: Euler a, Schedule type: Automatic, CFG scale: 7, Seed: 3056496578, Size: 512x512, Model hash: cc6cb27103, Model: v1-5-pruned-emaonly, VAE hash: 63aeecb90f, VAE: vae-ft-mse-840000-ema-pruned.vae.pt, PAG Active: True, PAG Scale: 0, PAG Start Step: 0, PAG End Step: 150, CFG Interval Enable: True, CFG Interval Schedule: Constant, CFG Interval Low: 0, CFG Interval High: 100, Downcast alphas_cumprod: True, Script: X/Y/Z plot, X Type: [PAG] CFG Schedule, X Values: "Constant,Clamp-Linear (c=4.0),Linear,Cosine,Clamp-Cosine (c=4.0)", Version: v1.9.0

Any guidance on this would help (no pun intended). Could the A1111 PRs above be having an impact? They worked fine before I updated this plugin.

[attached X/Y/Z plot: cfg-scheduler-test]

README needs reorganization

Needs better-looking formatting
Anchor links to different sections
Sections for "What is this?", "How do I install this?", "Quickstart", etc.
Section for updates

API Payload

Hey,

Very nice work.
I need to test this further.

Is there any way to use this extension and its three modules with the API?

Thanks for your time !

Cheers 🥂
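In A1111-style WebUIs, AlwaysOn extension scripts are usually driven through the `alwayson_scripts` field of the `/sdapi/v1/txt2img` payload. A minimal sketch follows; the script title "Incantations" and the argument list are assumptions, so check the extension's source or the `/sdapi/v1/script-info` listing for the real title and per-module arguments:

```python
import requests

# Hedged sketch: drive an AlwaysOn extension via the txt2img API.
# The script title and args below are hypothetical placeholders.
payload = {
    "prompt": "a lighthouse at dawn",
    "steps": 30,
    "alwayson_scripts": {
        "Incantations": {                # hypothetical script title
            "args": [True, 3.0, 0, 150]  # hypothetical args (e.g. PAG enable/scale/start/end)
        }
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
print(len(resp.json()["images"]), "image(s) returned")
```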
