
hako-mikan / sd-webui-negpip

183 stars, 14 forks, 2.96 MB

Extension for Stable Diffusion web-ui that enables negative prompts inside the prompt

License: GNU Affero General Public License v3.0

Python 100.00%

sd-webui-negpip's Introduction

  • 🔭 I'm currently working on Generative AI
  • 🌱 I'm currently learning Python, Neural Networks, Generative AI

sd-webui-negpip's People

Contributors

hako-mikan, tekitou898009890, w-e-w, yaosiqian


sd-webui-negpip's Issues

ControlNet's IP-Adapter seems to stop working while NegPiP is active

Thank you as always. Thanks to NegPiP the range of expression has widened, and it's great fun.
Recently a feature called IP-Adapter, which turns an image into a prompt, was added to ControlNet. Since both are extensions that strongly affect the prompt, the prompt that IP-Adapter outputs seems to get erased somewhere along the way.
No error is printed, which makes it puzzling... Is there any way to resolve this?

I'd like a way to keep it always on

Thank you as always.

As I understand it, in the latest version generation is unaffected unless the prompt contains a minus-weighted part. If so, I'd like to be able to keep NegPiP always on (so that quitting WebUI with it enabled doesn't leave it off at the next launch).
(I make heavy use of WebUI's prompt-styles feature, and when I recall a style containing a minus-weighted prompt I often forget to turn NegPiP on and panic when the output changes.)

The LoRA Block Weight extension appears to support being always on, so matching that behavior would be nice.
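
(A minimal sketch of how an always-on toggle could be wired up through the standard A1111 settings API; the option name negpip_always_on is invented for illustration. Options registered this way are saved to config.json, which is what would make the choice survive a restart.)

    import gradio as gr
    from modules import script_callbacks, shared

    def on_ui_settings():
        # Options added here appear on the Settings page and are persisted
        # to config.json, so the value survives WebUI restarts.
        shared.opts.add_option(
            "negpip_always_on",  # hypothetical option name
            shared.OptionInfo(False, "Keep NegPiP always on", gr.Checkbox,
                              section=("negpip", "NegPiP")),
        )

    script_callbacks.on_ui_settings(on_ui_settings)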

API call

Can this plug-in be called via the API? Thanks.

When I input: (girl:-1.5)

Error running process_batch: D:**\extensions\sd-webui-negpip\scripts\negpip.py
Traceback (most recent call last):
  File "D:**\modules\scripts.py", line 469, in process_batch
    script.process_batch(p, *script_args, **kwargs)
  File "D:**\extensions\sd-webui-negpip\scripts\negpip.py", line 98, in process_batch
    self.conds, self.contokens = conddealer(np)
  File "D:**\extensions\sd-webui-negpip\scripts\negpip.py", line 80, in conddealer
    if start is None: start = cond[0][0].cond[0:1,:]
IndexError: list index out of range

Breaks with SD.Next

The code relies on something that is not present in modules.prompt_parser from SD.Next.

Backtrace:

12:49:51-811743 ERROR    Running script process batch:                          
                         extensions/sd-webui-negpip/scripts/negpip.py:          
                         AttributeError                                         
โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
โ”‚ /notebooks/automatic/modules/scripts.py:467 in process_batch                 โ”‚
โ”‚                                                                              โ”‚
โ”‚   466 โ”‚   โ”‚   โ”‚   โ”‚   args = p.per_script_args.get(script.title(), p.script_ โ”‚
โ”‚ โฑ 467 โ”‚   โ”‚   โ”‚   โ”‚   script.process_batch(p, *args, **kwargs)               โ”‚
โ”‚   468 โ”‚   โ”‚   โ”‚   except Exception as e:                                     โ”‚
โ”‚                                                                              โ”‚
โ”‚ /notebooks/automatic/extensions/sd-webui-negpip/scripts/negpip.py:149 in     โ”‚
โ”‚ process_batch                                                                โ”‚
โ”‚                                                                              โ”‚
โ”‚   148 โ”‚   โ”‚                                                                  โ”‚
โ”‚ โฑ 149 โ”‚   โ”‚   self.conds_all = calcconds(nip)                                โ”‚
โ”‚   150 โ”‚   โ”‚   self.unconds_all = calcconds(pin)                              โ”‚
โ”‚                                                                              โ”‚
โ”‚ /notebooks/automatic/extensions/sd-webui-negpip/scripts/negpip.py:141 in     โ”‚
โ”‚ calcconds                                                                    โ”‚
โ”‚                                                                              โ”‚
โ”‚   140 โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   if targets:                                    โ”‚
โ”‚ โฑ 141 โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   conds, contokens = conddealer(targets)     โ”‚
โ”‚   142 โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   regionconds.append([region, conds, contoke โ”‚
โ”‚                                                                              โ”‚
โ”‚ /notebooks/automatic/extensions/sd-webui-negpip/scripts/negpip.py:114 in     โ”‚
โ”‚ conddealer                                                                   โ”‚
โ”‚                                                                              โ”‚
โ”‚   113 โ”‚   โ”‚   โ”‚   for target in targets:                                     โ”‚
โ”‚ โฑ 114 โ”‚   โ”‚   โ”‚   โ”‚   input = prompt_parser.SdConditioning([f"({target[0]}:{ โ”‚
โ”‚   115 โ”‚   โ”‚   โ”‚   โ”‚   cond = prompt_parser.get_learned_conditioning(shared.s โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
AttributeError: module 'modules.prompt_parser' has no attribute 'SdConditioning'
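
(SdConditioning is a fairly recent addition to A1111's modules.prompt_parser and is a thin list subclass, so a shim along these lines might let the extension degrade gracefully on forks that lack it. An untested sketch, not a confirmed fix for SD.Next:)

    from modules import prompt_parser

    def make_conditioning(prompts):
        # SD.Next's prompt_parser lacks SdConditioning; fall back to a plain
        # list, which get_learned_conditioning accepted before it existed.
        SdConditioning = getattr(prompt_parser, "SdConditioning", None)
        if SdConditioning is not None:
            return SdConditioning(prompts)
        return list(prompts)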

Seems not to work with all checkpoints

Tested with your examples and the results are random... "Gothic" just becomes not-gothic, and in your other example the magical dandy stays a woman. The concept is interesting, but in practice it just doesn't work like the examples given.

support for Textual Embeddings?

Edited: I'm not sure if it's an actual fix, but it works fine after I added with devices.autocast():

                with devices.autocast():
                    cond = prompt_parser.get_learned_conditioning(shared.sd_model,input,p.steps)

=====

If you try to put a negative weight on an embedding, it shows this error:

      File "H:\AItest\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 268, in denoiser_callback
        for step, regions in self.hr_conds_all[0] if self.hrp and self.hr else  self.conds_all[0]:
    TypeError: 'NoneType' object is not subscriptable
   Traceback (most recent call last):
      File "H:\AItest\stable-diffusion-webui\modules\scripts.py", line 808, in process_batch
        script.process_batch(p, *script_args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 213, in process_batch
        self.conds_all = calcconds(nip)
      File "H:\AItest\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 205, in calcconds
        conds, contokens = conddealer(targets)
      File "H:\AItest\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 179, in conddealer
        cond = prompt_parser.get_learned_conditioning(shared.sd_model,input,p.steps)
      File "H:\AItest\stable-diffusion-webui\modules\prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "H:\AItest\stable-diffusion-webui\scripts\v2.py", line 36, in get_learned_conditioning_with_prior
        cond = ldm.models.diffusion.ddpm.LatentDiffusion.get_learned_conditioning_original(self, c)
      File "H:\AItest\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "H:\AItest\stable-diffusion-webui\modules\sd_hijack_clip.py", line 276, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "H:\AItest\stable-diffusion-webui\modules\sd_hijack_clip.py", line 331, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
        hidden_states = self.layer_norm1(hidden_states)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\AItest\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 545, in network_LayerNorm_forward
        return originals.LayerNorm_forward(self, input)
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 201, in forward
        return F.layer_norm(
      File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2546, in layer_norm
        return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: expected scalar type Float but found Half

Prompt editing goes wrong

When NegPiP is enabled, prompt editing goes wrong.

Prompt example:
[(Black:-1.0) 1girl:Tiger:999]

When NegPiP is disabled, the prompt "Tiger" of course doesn't affect the output image. This is the correct behavior.
However, when NegPiP is enabled, the output images seem to be affected by the prompt "Tiger".

This issue also occurs without a negative weight, as in "[1girl:Tiger:999]".

Console spam

I get crazy console spam while generating with this extension; merely enabling it doesn't cause this, it only happens when actively using it.
Running on version 1.6.0, python: 3.10.6, torch: 2.0.0+cu118, xformers: 0.0.20.dev526, gradio: 3.41.2
Extension version 7d4defb 2023-09-22

Bracket parsing missed?

For example,

1girl, (open mouth), pink hair, (cat ear:-1)

With NegPiP enabled, this prompt gives the following console output:

[['open mouth, pink hair, cat ear', '-1']]
[]
NegPiP enable, Positive:[9],Negative:None

"open mouth, pink hair" is treated as negative.


ใ„ใคใ‚‚ไพฟๅˆฉใซๆดป็”จใ•ใ›ใฆใ„ใŸใ ใ„ใฆใŠใ‚Šใพใ™ใ€‚
ใ•ใฆใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใฎใƒžใ‚คใƒŠใ‚นๆŒ‡ๅฎšใ™ใ‚‹้ƒจๅˆ†ใ‚ˆใ‚Šๅ‰ใงๅผท่ชฟใ‚ซใƒƒใ‚ณใ‚’ไฝฟใฃใฆใ„ใ‚‹ใจใ€ใใฎๅ…ˆ้ ญไฝ็ฝฎใ‹ใ‚‰ใฎใƒ—ใƒญใƒณใƒ—ใƒˆใŒใ™ในใฆใƒžใ‚คใƒŠใ‚น่งฃ้‡ˆใ•ใ‚Œใฆใ—ใพใ†ใ‚ˆใ†ใชใฎใงใ™ใŒใ€ใ“ใ‚Œใฏไธๅ…ทๅˆใ‹ไป•ๆง˜ใ‹ๅˆคๆ–ญใงใใšIssue่ตท่‰ใ•ใ›ใฆใ„ใŸใ ใใพใ—ใŸใ€‚
ไป–ใฎๆ‹กๅผตๆฉŸ่ƒฝ้กžใฏใ‚ชใƒ•ใซใ—ใฆใ„ใ‚‹ใคใ‚‚ใ‚Šใชใฎใงใ™ใŒใ€ไธ‡ไธ€ใใฎใ‚ใŸใ‚Šใฎๅฝฑ้ŸฟใŒใ‚ใ‚Šใพใ—ใŸใ‚‰ใ”ๅฎน่ตฆใใ ใ•ใ„ใ€‚
ใ‚ˆใ‚ใ—ใใŠ้ก˜ใ„ใ„ใŸใ—ใพใ™ใ€‚

Crashes A1111 on M1 Mac (MPS error)

Only tested in img2img so far, will amend when I play with it later.

loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/810eba08-405a-11ed-86e9-6af958a02716/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<2x6144x320xf32>' and 'tensor<320xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
./webui.sh: line 255: 42012 Abort trap: 6           "${python_cmd}" -u "${LAUNCH_SCRIPT}" "$@"

WebUI was launched with --skip-torch-cuda-test --upcast-sampling --use-cpu interrogate

Error when HiRes Fix is used with the hires step count set to 0?

Thank you as always.

With WebUI 1.7.0 plus today's latest NegPiP, enabling HiRes Fix and setting the hires step count to 0 (reuse the base pass's sampling step count) produced the following console error.

In this case the output itself still completes, but even when a minus-weighted prompt is present, NegPiP seems to stop taking effect.

** Error running process_batch: C:\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui\modules\scripts.py", line 742, in process_batch
        script.process_batch(p, *script_args, **kwargs)
      File "C:\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 160, in process_batch
        if self.hrp: scheduled_hr_p = prompt_parser.get_learned_conditioning_prompt_schedules(p.hr_prompts,p.hr_second_pass_steps if p.hr_second_pass_steps > 0 else p.step)
    AttributeError: 'StableDiffusionProcessingTxt2Img' object has no attribute 'step'

This symptom did not seem to occur while I was using WebUI 1.6.0.
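
(Judging from the traceback alone, the fallback branch reads p.step while StableDiffusionProcessingTxt2Img only exposes p.steps, so the fix may be a single character. Untested guess:)

    if self.hrp:
        scheduled_hr_p = prompt_parser.get_learned_conditioning_prompt_schedules(
            p.hr_prompts,
            p.hr_second_pass_steps if p.hr_second_pass_steps > 0 else p.steps)  # was p.step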

Incompatible with Regional Prompter

Hi,

This plugin is a gamechanger! However, it causes the following crash when used in conjunction with Regional Prompter:

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.

Let me know if more diagnostic info is required.

Extension fails when using Textual Embeddings with negative weights

I saw your tweet about Auto1111 not having the option to disable Negative Prompts in its pipeline, as this extension could perform that role instead. I thought to try that myself, using my usual negative prompts but instead with a negative weight in the positive prompts. When I tried using an embedding with a negative weight though, it spat out an error and all negatively weighted prompts were discarded in the resulting images, even those that weren't embeddings.

This isn't a feature or bug request though, more putting the info out there that this functionality isn't currently supported. I would have posted this as a discussion if the repository had one. It'd be great if this supports embeddings in the future, but it's a plenty great extension as it is. I was only curious if using this with negative_hand could have improved hand generation.

Can it just be put in the settings?

It's a great extension and I keep it enabled at all times, so for me it just takes up space in the t2i and i2i interfaces.
I'd like a way to hide it, or to move it to the settings page and toggle it from there.

Error message between each step (generation still works)

** Error executing callback cfg_denoiser_callback for D:\SD\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py
Traceback (most recent call last):
File "D:\SD\stable-diffusion-webui\modules\script_callbacks.py", line 216, in cfg_denoiser_callback
c.callback(params)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 191, in denoiser_callback
for step, regions in self.conds_all[0]:
TypeError: 'NoneType' object is not subscriptable

IndexError: list index out of range

Thank you for this useful extension.
I've hardly ever used GitHub, so I'm not sure whether this is the right place to write; apologies if I'm doing something odd.

After updating the webui version recently, errors started occurring with hires fix.
(Commit hash: 5ef669de080814067961f28357256e8fe27544f4)
When hires fix is not used, generation completes without errors.
Is this some kind of bug? Sorry for the trouble, but I'd appreciate it if you could take a look.

File "C:\Users\xxxxxx\sd.webui\webui\extensions\sd-webui-negpip\scripts\negpip.py", line 304, in forward
return sub_forward(x, context, mask, additional_tokens, n_times_crossframe_attn_in_self,self.conds[0],self.contokens[0],self.unconds[0],self.untokens[0])
IndexError: list index out of range

When NegPiP activates, embed that information in PNG Info

Thank you as always.

When a minus-weighted token is present in the prompt and NegPiP activates, it would be useful to embed metadata in the generated image's PNG Info indicating that NegPiP fired. That information could later be used to organize images by metadata, or to list the extensions used when registering an image on a posting site.

As an aside: at the moment I post images containing minus-weighted prompts to CivitAI, but I have no way to indicate that NegPiP was used. I suppose nothing can be done about that for now...
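
(A1111 already has a standard channel for this: anything placed in p.extra_generation_params during processing is serialized into the image's parameters text, which PNG Info reads back. A minimal sketch, with the flag name invented for illustration:)

    def process_batch(self, p, *args, **kwargs):
        ...
        if self.active:  # hypothetical flag: a minus-weighted token was found
            # Serialized into the generation parameters, and thus into PNG Info.
            p.extra_generation_params["NegPiP"] = True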

Auto deactivate extension when prompt doesn't contain negative weights

Even if I don't add negative weights, generation speed drops a bit while the extension is active: in my case from 25.5s to 26.3s for a batch of 10 images. Not significant, but still wasteful.

You should check whether the prompt contains negative weights and automatically deactivate the extension for that generation when it doesn't, as sketched below.
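
(A sketch of such a pre-check, assuming the (token:-weight) syntax shown throughout this page; a real implementation would run it over both prompt fields before installing any hooks:)

    import re

    # Matches a ":-number)" tail, e.g. "(cat ear:-1)" or "(word:-1.8)".
    NEG_WEIGHT = re.compile(r":\s*-\d+(?:\.\d+)?\s*\)")

    def has_negative_weight(prompt: str) -> bool:
        return bool(NEG_WEIGHT.search(prompt))

    # has_negative_weight("1girl, (cat ear:-1)")     -> True
    # has_negative_weight("1girl, (open mouth:1.2)") -> False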

Technical explanation?

How does NegPiP work differently from CFG on a technical level? The code is a bit hard to understand.
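
(No authoritative write-up is quoted in this thread, but the tracebacks on this page show the extension appending its minus-weighted conds to the cross-attention context via torch.cat([context, conds], 1). On that reading, the contrast with CFG, which only mixes two whole denoiser passes as out = uncond + cfg_scale * (cond - uncond), can be sketched with a toy single-head attention:)

    import torch

    def attention_with_negpip(q, k, v, k_neg, v_neg):
        # Conceptual sketch only. Minus-weighted tokens join the key/value
        # context, but their values enter with flipped sign, so attending
        # to them pushes the output away from those concepts inside every
        # attention call, rather than once per sampling step as CFG does.
        k_all = torch.cat([k, k_neg], dim=1)
        v_all = torch.cat([v, -v_neg], dim=1)
        attn = torch.softmax(q @ k_all.transpose(-1, -2) / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v_all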

"NoneType" object is not subscriptable

*** Error executing callback cfg_denoiser_callback for D:\stable-diffusion-webui\webui\extensions\sd-webui-negpip\scripts\negpip.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\webui\modules\script_callbacks.py", line 216, in cfg_denoiser_callback
c.callback(params)
File "D:\stable-diffusion-webui\webui\extensions\sd-webui-negpip\scripts\negpip.py", line 268, in denoiser_callback
for step, regions in self.hr_conds_all[0] if self.hrp and self.hr else self.conds_all[0]:
TypeError: 'NoneType' object is not subscriptable

SD web-UI v1.6.0
I'm using RTX 4060ti 16gb and i5-13400F

There are 2 ways I use this extension:
1. Positive prompt: (word:-1.8) *It said NegPiP Active: True but still shows the error*
2. Negative prompt: (word:-1.8) *It said NegPiP Active: True but still shows the error*

I don't know if it still works even if the error happens

Error "Sizes of tensors must match except in dimension" when enabled together with Tiled Diffusion (MultiDiffusion)

When NegPiP and MultiDiffusion (the Tiled Diffusion feature) are enabled at the same time, the following error is raised.
OS: Windows 11
WebUI version: Qiuye all-in-one package 4.4, A1111 WebUI 1.6

To create a public link, set share=True in launch().
[Lobe]: Initializing Lobe
Startup time: 32.6s (prepare environment: 9.0s, import torch: 8.1s, import gradio: 1.5s, setup paths: 0.8s, initialize shared: 0.4s, other imports: 0.7s, setup codeformer: 0.1s, load scripts: 8.5s, create ui: 2.4s, gradio launch: 0.5s, app_started_callback: 0.4s).
Applying attention optimization: xformers... done.
Model loaded in 3.9s (load weights from disk: 0.4s, create model: 0.9s, apply weights to model: 2.3s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.2s).
2023-10-30 14:23:10,197 - AnimateDiff - INFO - Moving motion module to CPU
[Tiled Diffusion] ControlNet found, support is enabled.
MultiDiffusion hooked into 'Restart' sampler, Tile size: 96x96, Tile count: 4, Batch size: 4, Tile batches: 1 (ext: ContrlNet)

CD Tuner Effective : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 0]
NegPiP enable, Positive:[3],Negative:None
*** Error completing request
*** Arguments: ('task(aeicsi7i99ugbex)', 0, 'Exquisite, full body, beautiful, young adult female Anime character, exaggerated features, expressive, pink hair, demon girl, diamond pupils, fluffy tail, cascading hair accessories, (eyeball hair ornament:1.1), beholder eye demon, (color gradient clothes made:1.2), ambient occlusion. Incredibly detailed, Overhead lighting, Cold Colors, Dreamcore, Calotype, Needle sharp, (animal ears:-1)', '[:lora:myself-badhand-v5_neg:0.2], Aissist-neg', [], <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x20ECC6B3A30>, None, None, None, None, None, None, 20, 'Restart', 4, 0, 1, 1, 1, 7, 1.5, 0.7, 0, 1024, 1024, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000020EC67B2E90>, 0, False, '', 0.8, 210015825, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, 
-1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000020ECC6BC2E0>, False, 'u2net', False, False, 10, 240, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, False, -1, -1, False, '1,1', 'Horizontal', '', 2, 1, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020F249E5060>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020F24A06740>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EABFF82B0>, [], [], False, 0, 0.8, 0, 0.8, 0.5, False, False, 0.5, 8192, -1.0, True, False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, '* CFG Scale should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 'Positive', 0, ', ', 'Generate and always save', 32, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, '', '', '', '', '', '', None, 1.0, 1, False, False, '', False, 'Normal', 1, True, 1, 1, 'None', False, False, False, 'YuNet', 512, 1024, 0.5, 1.5, False, 'face close up,', 0.5, 0.5, False, True, '', '', '', '', '', 1, 'None', '', '', 1, 'FirstGen', False, False, 'Current', False, 1 2 3
*** 0 , False, '', False, 1, False, False, 30, '', False, False, False, '', False, '', False, '', False, '', False, '', False, None, None, False, None, None, False, None, None, False, 50, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
File "D:\sd-webui-aki-v4.4\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\sd-webui-aki-v4.4\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\img2img.py", line 208, in img2img
processed = process_images(p)
File "D:\sd-webui-aki-v4.4\modules\processing.py", line 732, in process_images
res = process_images_inner(p)
File "D:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\processing.py", line 867, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\sd-webui-aki-v4.4\modules\processing.py", line 1528, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_common.py", line 261, in launch_sampling
return func()
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 188, in
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_extra.py", line 71, in restart_sampler
x = heun_step(x, old_sigma, new_sigma)
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_extra.py", line 19, in heun_step
denoised = model(x, old_sigma * s_in, **extra_args)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
return fn(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 70, in kdiff_forward
return self.sample_one_step(x_in, org_func, repeat_func, custom_func)
File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 165, in sample_one_step
x_tile_out = repeat_func(x_tile, bboxes)
File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 65, in repeat_func
return self.sampler_forward(x_tile, sigma_tile, cond=cond_tile)
File "D:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
result = forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\tomesd\patch.py", line 63, in _forward
x = u_c(self.attn2(m_c(self.norm2(x)), context=context)) + x
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\extensions\sd-webui-negpip\scripts\negpip.py", line 330, in forward
return sub_forward(x, context, mask, additional_tokens, n_times_crossframe_attn_in_self,self.conds[0],self.contokens[0],self.unconds[0],self.untokens[0])
File "D:\sd-webui-aki-v4.4\extensions\sd-webui-negpip\scripts\negpip.py", line 311, in sub_forward
context = torch.cat([context,conds],1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 1 for tensor number 1 in the list.

็ปๆต‹่ฏ•๏ผŒๅ…ณ้—ญnegpipๅŽ๏ผŒtiled diffusionๅŠŸ่ƒฝๆญฃๅธธๆ— ๆŠฅ้”™

`TypeError: 'NoneType' object is not subscriptable` raised in denoiser_callback

SD v1.5.2 c9c8485bc1e8720aba70f029d25cba1c4abf2b5c, ADetailer v23.9.3, Python v3.10.6

With ADetailer enabled in i2t, the following error occurred repeatedly while ADetailer was running.

*** Error executing callback cfg_denoiser_callback for C:\sd.webui\webui\extensions\sd-webui-negpip\scripts\negpip.py
    Traceback (most recent call last):
      File "C:\sd.webui\webui\modules\script_callbacks.py", line 195, in cfg_denoiser_callback
        c.callback(params)
      File "C:\sd.webui\webui\extensions\sd-webui-negpip\scripts\negpip.py", line 191, in denoiser_callback
        for step, regions in self.conds_all[0]:
    TypeError: 'NoneType' object is not subscriptable

Commenting out the following lines in postprocess made the error stop, though I don't know whether that is the correct fix. Just reporting.

    def postprocess(self, p, processed, *args):
        if hasattr(self,"handle"):
            hook_forwards(self, p.sd_model.model.diffusion_model, remove=True)
            del self.handle
        # self.conds_all = None
        # self.unconds_all = None

Uses wrong path for ui-config.json

The script fails to load because it tries to read ui-config.json from the wrong location. This is likely caused by using a launch parameter to set a different path for the file; the extension doesn't take that into account and uses the default path instead.

*** Error loading script: negpip.py
Traceback (most recent call last):
File "S:\Library\Files\Tools\sd-webui\modules\scripts.py", line 382, in load_scripts
  script_module = script_loading.load_module(scriptfile.path)
File "S:\Library\Files\Tools\sd-webui\modules\script_loading.py", line 10, in load_module
  module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "S:\Library\Files\Tools\sd-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 23, in <module>
  with open(CONFIG, 'r') as json_file:
FileNotFoundError: [Errno 2] No such file or directory: 'ui-config.json'
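
(A1111 exposes the resolved path as shared.cmd_opts.ui_config_file, which honors the --ui-config-file launch parameter. A sketch of a possible fix, with a fallback so a missing file doesn't kill script loading:)

    import json
    from modules import shared

    # Honor --ui-config-file instead of assuming the working directory.
    CONFIG = shared.cmd_opts.ui_config_file

    try:
        with open(CONFIG, "r") as json_file:
            config = json.load(json_file)
    except FileNotFoundError:
        config = {}  # fall back to defaults rather than failing to load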

Incompatibility with SD.Next

As the subject says, the callback fails during generation in this line:

for step, regions in self.conds_all[0]:

Because, at least in my testing with SD.Next, self.conds_all is `None`.

I'm not knowledgeable enough to know if the fault lies in negpip or in SD.Next.

Fails if negative prompt contains only embeddings or is empty

As in the title. If you generate something simple like

Positive prompt: 1girl, closeup
Negative prompt: EasyNegativeV2

with NegPiP enabled, A1111 fails with the following error:

      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "D:\Automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "D:\Automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "D:\Automatic1111\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
        x = self.attn2(self.norm2(x), context=context) + x
    TypeError: unsupported operand type(s) for +: 'NoneType' and 'Tensor'

Images after this fail with a different error, even if NegPiP is disabled:

      File "D:\Automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "D:\Automatic1111\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
        x = self.attn2(self.norm2(x), context=context) + x
      File "D:\Automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Automatic1111\extensions\sd-webui-negpip\scripts\negpip.py", line 298, in forward
        return sub_forward(x, context, mask, additional_tokens, n_times_crossframe_attn_in_self,self.conds[0],self.contokens[0],self.unconds[0],self.untokens[0])
    TypeError: 'NoneType' object is not subscriptable

This error keeps happening until you generate something with NegPiP enabled and a negative prompt filled in.

(On a side note, I'd like to thank you for your Regional Prompter extension - I think it's an essential tool that's just as important as ControlNet. Your other extensions, including NegPiP, look promising too)

Conflict with Tiled Diffusion & VAE

When using NegPiP and Tiled Diffusion & VAE in i2i, the picture cannot be generated.


This issue was also posted on the other project's tracker:
pkuliyi2015/multidiffusion-upscaler-for-automatic1111#334

*** Error completing request
*** Arguments: ('task(txlkh92y0mixds7)', 0, '1girl,(flower:-1),', '', [], <PIL.Image.Image image mode=RGBA size=640x1024 at 0x24FB50ACD00>, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 3, 1.5, 0.5, 0, 1024, 640, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000024F8A4C2320>, 0, False, '', 0.8, 3683106263, False, -1, 0, 0, 0, True, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'SwinIR_4x', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'Use same checkpoint', 'Use same vae', 1, 0, 'None', 'None', False, False, False, 'Use same checkpoint', 'Use same vae', 'txt2img-1pass', 'None', '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 0.5, 0.1, 0.9, False, False, 'Select Model', '', '', 'Use same sampler', 20, 7, 0.75, 4, 0.35, False, 50, 200, 0.5, False, True, 'stretching', 'bottom', 'None', 0.85, 0.75, False, 'Use same checkpoint', True, '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, None, False, 1, False, '', False, False, False, True, True, 4, 3, 0.1, 1, 1, 0, 0.4, 7, False, False, False, 'Score', 1, '', '', '', '', '', '', '', '', '', '', False, 512, 512, 7, 20, 4, 'Use same sampler', 'Only masked', 32, 'BMAB Face(Normal)', 0.4, 4, 0.35, False, 0.26, False, True, False, 'subframe', '', '', 0.4, 7, True, 4, 0.3, 0.1, 'Only masked', 32, '', False, False, False, 0.4, 0.1, 0.9, False, 'Inpaint', 0.85, 0.6, 30, False, True, 'None', 1.5, '', 'None', UiControlNetUnit(enabled=True, module='none', model='control_v11f1e_sd15_tile [a371b31b]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='ControlNet is more important', save_detected_map=True), True, '* CFG Scale should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, 50, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\img2img.py", line 208, in img2img
processed = process_images(p)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\processing.py", line 732, in process_images
res = process_images_inner(p)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 420, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\processing.py", line 1528, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-bmab\sd_bmab\sd_override\samper.py", line 67, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
return func()
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-bmab\sd_bmab\sd_override\samper.py", line 67, in
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
return fn(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 70, in kdiff_forward
return self.sample_one_step(x_in, org_func, repeat_func, custom_func)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 165, in sample_one_step
x_tile_out = repeat_func(x_tile, bboxes)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 65, in repeat_func
return self.sampler_forward(x_tile, sigma_tile, cond=cond_tile)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 827, in forward_webui
raise e
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 824, in forward_webui
return forward(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 731, in forward
h = module(h, emb, context)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
x = self.attn2(self.norm2(x), context=context) + x
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 330, in forward
return sub_forward(x, context, mask, additional_tokens, n_times_crossframe_attn_in_self,self.conds[0],self.contokens[0],self.unconds[0],self.untokens[0])
File "D:\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 311, in sub_forward
context = torch.cat([context,conds],1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 1 for tensor number 1 in the list.

Doesn't work well with prompt editing

The plugin doesn't work well with prompt editing; prompts like the following don't work as expected:
[cat:(abc:-1):0.5]
Normally "abc" should appear only once sampling is halfway through, but with the plugin enabled "abc" appears earlier and interferes with the results.
Another example: [cat:(abc:-1):0.99] should be equivalent to "cat", but with the plugin enabled it behaves like "cat, (abc:-1)".
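
(A plausible reading, not a confirmed cause: WebUI expands prompt editing into per-step schedules before conditioning is computed, so anything collected from the raw prompt text bypasses that schedule and acts from step 1.)

    from modules import prompt_parser

    schedules = prompt_parser.get_learned_conditioning_prompt_schedules(
        ["[cat:(abc:-1):0.5]"], 20)
    # -> [[[10, 'cat'], [20, '(abc:-1)']]]: "abc" should exist only after
    # step 10, matching the behavior with the plugin disabled. If the
    # minus-weighted tokens are instead collected once from the raw prompt
    # (as the console output in the bracket-parsing issue suggests), the
    # schedule is bypassed and "(abc:-1)" acts from step 1.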

Slow rendering speed

When this plugin is not active, rendering speed is normal; with it enabled, rendering becomes very slow. Is there any solution?

Error with fp8 weight option

I use A1111 v1.8.0 with the fp8 weight option enabled, and got the following error:
"RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float8_e4m3fn"

Without the fp8 weight option, the error does not occur.
Without a minus-weighted prompt, the error does not occur.

*** Error running process_batch: E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py
    Traceback (most recent call last):
      File "E:\SDXL\webui\stable-diffusion-webui\modules\scripts.py", line 808, in process_batch
        script.process_batch(p, *script_args, **kwargs)
      File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 213, in process_batch
        self.conds_all = calcconds(nip)
      File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 205, in calcconds
        conds, contokens = conddealer(targets)
      File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 179, in conddealer
        cond = prompt_parser.get_learned_conditioning(shared.sd_model,input,p.steps)
      File "E:\SDXL\webui\stable-diffusion-webui\modules\prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "E:\SDXL\webui\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 276, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 331, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 383, in forward
        hidden_states, attn_weights = self.self_attn(
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 272, in forward
        query_states = self.q_proj(hidden_states) * self.scale
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 500, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float8_e4m3fn
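
(This looks like the same dtype-mismatch family as the embeddings issue above, where wrapping the conditioning call in devices.autocast() was reported as a workaround; whether autocast also covers Float8_e4m3fn weights is untested.)

    from modules import devices

    # The wrap reported as a workaround in the embeddings issue above.
    with devices.autocast():
        cond = prompt_parser.get_learned_conditioning(shared.sd_model, input, p.steps)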
