hako-mikan / sd-webui-negpip
Extension for the Stable Diffusion web UI that enables negative prompts inside the prompt
License: GNU Affero General Public License v3.0
As the subject says, it looks like a __pycache__ folder was committed by mistake. It would be best to remove it.
When negpip and MultiDiffusion (the Tiled Diffusion feature) are enabled at the same time, the following error is reported.
System environment: Windows 11
WebUI version: Aki integrated package 4.4, A1111 WebUI 1.6
To create a public link, set share=True in launch().
[Lobe]: Initializing Lobe
Startup time: 32.6s (prepare environment: 9.0s, import torch: 8.1s, import gradio: 1.5s, setup paths: 0.8s, initialize shared: 0.4s, other imports: 0.7s, setup codeformer: 0.1s, load scripts: 8.5s, create ui: 2.4s, gradio launch: 0.5s, app_started_callback: 0.4s).
Applying attention optimization: xformers... done.
Model loaded in 3.9s (load weights from disk: 0.4s, create model: 0.9s, apply weights to model: 2.3s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.2s).
2023-10-30 14:23:10,197 - AnimateDiff - INFO - Moving motion module to CPU
[Tiled Diffusion] ControlNet found, support is enabled.
MultiDiffusion hooked into 'Restart' sampler, Tile size: 96x96, Tile count: 4, Batch size: 4, Tile batches: 1 (ext: ContrlNet)
CD Tuner Effective : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 0]
NegPiP enable, Positive:[3],Negative:None
*** Error completing request
*** Arguments: ('task(aeicsi7i99ugbex)', 0, 'Exquisite, full body, beautiful, young adult female Anime character, exaggerated features, expressive, pink hair, demon girl, diamond pupils, fluffy tail, cascading hair accessories, (eyeball hair ornament:1.1), beholder eye demon, (color gradient clothes made:1.2), ambient occlusion. Incredibly detailed, Overhead lighting, Cold Colors, Dreamcore, Calotype, Needle sharp, (animal ears:-1)', '[:lora:myself-badhand-v5_neg:0.2], Aissist-neg', [], <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x20ECC6B3A30>, None, None, None, None, None, None, 20, 'Restart', 4, 0, 1, 1, 1, 7, 1.5, 0.7, 0, 1024, 1024, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000020EC67B2E90>, 0, False, '', 0.8, 210015825, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, 
-1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000020ECC6BC2E0>, False, 'u2net', False, False, 10, 240, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, False, -1, -1, False, '1,1', 'Horizontal', '', 2, 1, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020F249E5060>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020F24A06740>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EABFF82B0>, [], [], False, 0, 0.8, 0, 0.8, 0.5, False, False, 0.5, 8192, -1.0, True, False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, '*CFG Scale
should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', 'Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 'Positive', 0, ', ', 'Generate and always save', 32, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, '', '', '', '', '', '', None, 1.0, 1, False, False, '', False, 'Normal', 1, True, 1, 1, 'None', False, False, False, 'YuNet', 512, 1024, 0.5, 1.5, False, 'face close up,', 0.5, 0.5, False, True, '', '', '', '', '', 1, 'None', '', '', 1, 'FirstGen', False, False, 'Current', False, 1 2 3
*** 0 , False, '', False, 1, False, False, 30, '', False, False, False, '', False, '', False, '', False, '', False, '', False, None, None, False, None, None, False, None, None, False, 50, 'Will upscale the image depending on the selected target size type
', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
File "D:\sd-webui-aki-v4.4\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\sd-webui-aki-v4.4\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\img2img.py", line 208, in img2img
processed = process_images(p)
File "D:\sd-webui-aki-v4.4\modules\processing.py", line 732, in process_images
res = process_images_inner(p)
File "D:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\processing.py", line 867, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\sd-webui-aki-v4.4\modules\processing.py", line 1528, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_common.py", line 261, in launch_sampling
return func()
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 188, in
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_extra.py", line 71, in restart_sampler
x = heun_step(x, old_sigma, new_sigma)
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_extra.py", line 19, in heun_step
denoised = model(x, old_sigma * s_in, **extra_args)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
return fn(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 70, in kdiff_forward
return self.sample_one_step(x_in, org_func, repeat_func, custom_func)
File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 165, in sample_one_step
x_tile_out = repeat_func(x_tile, bboxes)
File "D:\sd-webui-aki-v4.4\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 65, in repeat_func
return self.sampler_forward(x_tile, sigma_tile, cond=cond_tile)
File "D:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
result = forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\tomesd\patch.py", line 63, in _forward
x = u_c(self.attn2(m_c(self.norm2(x)), context=context)) + x
File "D:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd-webui-aki-v4.4\extensions\sd-webui-negpip\scripts\negpip.py", line 330, in forward
return sub_forward(x, context, mask, additional_tokens, n_times_crossframe_attn_in_self,self.conds[0],self.contokens[0],self.unconds[0],self.untokens[0])
File "D:\sd-webui-aki-v4.4\extensions\sd-webui-negpip\scripts\negpip.py", line 311, in sub_forward
context = torch.cat([context,conds],1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 1 for tensor number 1 in the list.
Tested: after disabling negpip, the Tiled Diffusion feature works normally with no errors.
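A minimal sketch of the shape clash behind this report, assuming (as the log above suggests) that Tiled Diffusion batches 8 tile latents per UNet call while NegPiP caches its extra conditioning with batch size 1; the tensor shapes here are illustrative, not taken from the code:

import torch

# Illustrative shapes only: 8 tiles' worth of text conditioning vs. NegPiP's
# cached extra tokens computed for a single batch entry.
context = torch.randn(8, 77, 768)
conds = torch.randn(1, 77, 768)

try:
    torch.cat([context, conds], dim=1)  # negpip.py: torch.cat([context, conds], 1)
except RuntimeError as e:
    # "Sizes of tensors must match except in dimension 1. Expected size 8 but
    # got size 1 for tensor number 1 in the list." -- same error as the report
    print(e)

# Expanding the cached cond across the tile batch would make the shapes agree:
merged = torch.cat([context, conds.expand(context.shape[0], -1, -1)], dim=1)
print(merged.shape)  # torch.Size([8, 154, 768])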
I saw your tweet about Auto1111 not having the option to disable Negative Prompts in its pipeline, as this extension could perform that role instead. I thought to try that myself, using my usual negative prompts but instead with a negative weight in the positive prompts. When I tried using an embedding with a negative weight though, it spat out an error and all negatively weighted prompts were discarded in the resulting images, even those that weren't embeddings.
This isn't a feature or bug request though, more putting the info out there that this functionality isn't currently supported. I would have posted this as a discussion if the repository had one. It'd be great if this supported embeddings in the future, but it's a plenty great extension as it is. I was only curious whether using this with negative_hand could have improved hand generation.
I understand that it works by putting a minus in the positive prompt window.
Conversely, if I put a minus in the negative prompt window, does it work as a positive prompt?
The script fails to load due to trying to access ui-config.json from the wrong location. This is likely caused by using a launch parameter to set a different path for the file, but the extension doesn't take this into account and tries to use the default path instead.
*** Error loading script: negpip.py
Traceback (most recent call last):
File "S:\Library\Files\Tools\sd-webui\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "S:\Library\Files\Tools\sd-webui\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "S:\Library\Files\Tools\sd-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 23, in <module>
with open(CONFIG, 'r') as json_file:
FileNotFoundError: [Errno 2] No such file or directory: 'ui-config.json'
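A minimal sketch of one possible fix, assuming (as in recent A1111 versions) that the --ui-config-file launch parameter is exposed as shared.cmd_opts.ui_config_file; the fallback behavior is illustrative, not the extension's actual code:

import json
import os

from modules import shared

# Resolve ui-config.json from the launch options instead of hardcoding the
# default path; fall back to an empty config rather than crashing at import.
CONFIG = getattr(shared.cmd_opts, "ui_config_file", None) or "ui-config.json"

ui_config = {}
if os.path.exists(CONFIG):
    with open(CONFIG, "r") as json_file:
        ui_config = json.load(json_file)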
Can this plug-in be called via an API? Thanks.
Even if I don't add negative weights, generation speed drops a bit while the extension is active. In my case it goes from 25.5s to 26.3s for a batch size of 10 images. It is not significant, but it is still a waste.
You should add a check for whether the prompt contains negative weights, and automatically deactivate the extension for that generation if it doesn't (see the sketch below).
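A rough sketch of the suggested check; the regex and the hook point are illustrative assumptions, not the extension's actual parser:

import re

# Matches A1111 attention syntax with a negative weight, e.g. "(cat ear:-1)"
# or "(word:-1.8)". Deliberately simple; nested parentheses are not handled.
NEGATIVE_WEIGHT_RE = re.compile(r"\(([^():]+):\s*-\d+(?:\.\d+)?\)")

def has_negative_weights(*prompts):
    return any(NEGATIVE_WEIGHT_RE.search(p) for p in prompts if p)

# Hypothetical use at the top of process_batch: skip all NegPiP work when no
# negative weight appears, so ordinary generations pay no overhead.
# if not has_negative_weights(p.prompt, p.negative_prompt):
#     return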
Only tested in img2img so far, will amend when I play with it later.
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/810eba08-405a-11ed-86e9-6af958a02716/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<2x6144x320xf32>' and 'tensor<320xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
./webui.sh: line 255: 42012 Abort trap: 6 "${python_cmd}" -u "${LAUNCH_SCRIPT}" "$@"
WebUI was launched with --skip-torch-cuda-test --upcast-sampling --use-cpu interrogate
I use A1111 v1.8.0 with the fp8 weight option enabled, and an error occurred:
"RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float8_e4m3fn"
Without the fp8 weight option, the error did not occur.
Without a minus prompt, the error did not occur.
*** Error running process_batch: E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py
Traceback (most recent call last):
File "E:\SDXL\webui\stable-diffusion-webui\modules\scripts.py", line 808, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 213, in process_batch
self.conds_all = calcconds(nip)
File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 205, in calcconds
conds, contokens = conddealer(targets)
File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 179, in conddealer
cond = prompt_parser.get_learned_conditioning(shared.sd_model,input,p.steps)
File "E:\SDXL\webui\stable-diffusion-webui\modules\prompt_parser.py", line 188, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "E:\SDXL\webui\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
z = self.process_tokens(tokens, multipliers)
File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 276, in process_tokens
z = self.encode_with_transformers(tokens)
File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 331, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
return self.text_model(
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
encoder_outputs = self.encoder(
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
layer_outputs = encoder_layer(
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 383, in forward
hidden_states, attn_weights = self.self_attn(
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 272, in forward
query_states = self.q_proj(hidden_states) * self.scale
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\SDXL\webui\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 500, in network_Linear_forward
return originals.Linear_forward(self, input)
File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float8_e4m3fn
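A workaround reported further down in this collection is to wrap the conditioning call in the web UI's autocast context. A sketch of how that patch might look around the call in negpip.py's conddealer, assuming modules.devices.autocast() brings the fp8 weights and half-precision activations to a common dtype (`input` and `p` are the variables already in scope there):

from modules import devices, prompt_parser, shared

# Patched form of the call at negpip.py line 179 (a sketch, not the shipped fix):
with devices.autocast():
    cond = prompt_parser.get_learned_conditioning(shared.sd_model, input, p.steps)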
As in the title. If you generate something simple like
Positive prompt: 1girl, closeup
Negative prompt: EasyNegativeV2
with NegPiP enabled, A1111 fails with the following error:
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\Automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "D:\Automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\Automatic1111\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
x = self.attn2(self.norm2(x), context=context) + x
TypeError: unsupported operand type(s) for +: 'NoneType' and 'Tensor'
Images after this fail with a different error, even if NegPiP is disabled:
File "D:\Automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\Automatic1111\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\Automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
x = self.attn2(self.norm2(x), context=context) + x
File "D:\Automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Automatic1111\extensions\sd-webui-negpip\scripts\negpip.py", line 298, in forward
return sub_forward(x, context, mask, additional_tokens, n_times_crossframe_attn_in_self,self.conds[0],self.contokens[0],self.unconds[0],self.untokens[0])
TypeError: 'NoneType' object is not subscriptable
This error keeps happening until you generate something with NegPiP enabled and a negative prompt filled in.
(On a side note, I'd like to thank you for your Regional Prompter extension - I think it's an essential tool that's just as important as ControlNet. Your other extensions, including NegPiP, look promising too)
I activated it, and the toggle button displays two items, "true" and "false". I can't work out what they mean. Could someone please explain?
I don't know if this is a "them" problem or if they plan to fix it themselves, but one of the recent Forge updates broke several extensions, including this one.
*** Error loading script: negpip.py
Traceback (most recent call last):
File "I:\StabilityMatrix\Packages\Stable Diffusion WebUI Forge b9705c5\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "I:\StabilityMatrix\Packages\Stable Diffusion WebUI Forge b9705c5\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "I:\StabilityMatrix\Packages\Stable Diffusion WebUI Forge b9705c5\extensions\sd-webui-negpip\scripts\negpip.py", line 5, in
import ldm.modules.attention as atm
ModuleNotFoundError: No module named 'ldm'
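A defensive sketch, assuming the failure is only this top-level import: Forge no longer ships the bundled ldm package, so guarding the import would at least let the script load instead of dying at module level (the rest of the script would then need to check for None before hooking):

# Guarded import sketch; `atm` stays None on backends without `ldm` (e.g. Forge).
try:
    import ldm.modules.attention as atm
except ModuleNotFoundError:
    atm = None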
*** Error executing callback cfg_denoiser_callback for D:\stable-diffusion-webui\webui\extensions\sd-webui-negpip\scripts\negpip.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\webui\modules\script_callbacks.py", line 216, in cfg_denoiser_callback
c.callback(params)
File "D:\stable-diffusion-webui\webui\extensions\sd-webui-negpip\scripts\negpip.py", line 268, in denoiser_callback
for step, regions in self.hr_conds_all[0] if self.hrp and self.hr else self.conds_all[0]:
TypeError: 'NoneType' object is not subscriptable
SD web-UI v1.6.0
I'm using an RTX 4060 Ti 16GB and an i5-13400F.
There are 2 ways I use this extension:
1. Positive prompt: (word:-1.8) *It said NegPiP Active: True but still shows the error*
2. Negative prompt: (word:-1.8) *It said NegPiP Active: True but still shows the error*
I don't know if it still works even when the error happens.
Thank you for the handy extension.
I've hardly ever used GitHub, so I'm not sure whether this is the right place to write; apologies if I'm doing something odd.
Recently, after updating the webui version, errors started occurring with hires fix.
(Commit hash: 5ef669de080814067961f28357256e8fe27544f4)
When hires fix is not used, generation completes without errors.
Is this some kind of bug? Sorry for the trouble, but I would appreciate it if you could take a look.
File "C:\Users\xxxxxx\sd.webui\webui\extensions\sd-webui-negpip\scripts\negpip.py", line 304, in forward
return sub_forward(x, context, mask, additional_tokens, n_times_crossframe_attn_in_self,self.conds[0],self.contokens[0],self.unconds[0],self.untokens[0])
IndexError: list index out of range
Error running process_batch: D:**\extensions\sd-webui-negpip\scripts\negpip.py
Traceback (most recent call last):
File "D:**\modules\scripts.py", line 469, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "D:**\extensions\sd-webui-negpip\scripts\negpip.py", line 98, in process_batch
self.conds, self.contokens = conddealer(np)
File "D:***\extensions\sd-webui-negpip\scripts\negpip.py", line 80, in conddealer
if start is None: start = cond[0][0].cond[0:1,:]
IndexError: list index out of range
How does negpip work differently from CFG on a technical level? The code is a bit hard to understand
Thanks as always. Thanks to negpip, my range of expression has widened, and it's been fun.
Recently, a feature called IP-Adapter that turns an image into a prompt was added to ControlNet.
Both are extensions that strongly influence the prompt, and the prompt that IP-Adapter produces seems to be getting erased somewhere along the way.
No errors are printed, which is what puzzles me... is there any way to resolve this?
It's a great extension and I keep it enabled at all times, so for me it just takes up space in the t2i and i2i interfaces.
I'd like a way to hide it, or to move it to the settings page and choose whether to activate it from there.
Edited: I'm not sure if it's an actual fix, but it works fine after I wrapped the call in devices.autocast():
with devices.autocast():
    cond = prompt_parser.get_learned_conditioning(shared.sd_model, input, p.steps)
=====
If you try to put a negative weight on an embedding, it shows this error:
File "H:\AItest\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 268, in denoiser_callback
for step, regions in self.hr_conds_all[0] if self.hrp and self.hr else self.conds_all[0]:
TypeError: 'NoneType' object is not subscriptable
Traceback (most recent call last):
File "H:\AItest\stable-diffusion-webui\modules\scripts.py", line 808, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "H:\AItest\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 213, in process_batch
self.conds_all = calcconds(nip)
File "H:\AItest\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 205, in calcconds
conds, contokens = conddealer(targets)
File "H:\AItest\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 179, in conddealer
cond = prompt_parser.get_learned_conditioning(shared.sd_model,input,p.steps)
File "H:\AItest\stable-diffusion-webui\modules\prompt_parser.py", line 188, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "H:\AItest\stable-diffusion-webui\scripts\v2.py", line 36, in get_learned_conditioning_with_prior
cond = ldm.models.diffusion.ddpm.LatentDiffusion.get_learned_conditioning_original(self, c)
File "H:\AItest\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
z = self.process_tokens(tokens, multipliers)
File "H:\AItest\stable-diffusion-webui\modules\sd_hijack_clip.py", line 276, in process_tokens
z = self.encode_with_transformers(tokens)
File "H:\AItest\stable-diffusion-webui\modules\sd_hijack_clip.py", line 331, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
return self.text_model(
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
encoder_outputs = self.encoder(
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
layer_outputs = encoder_layer(
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
hidden_states = self.layer_norm1(hidden_states)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AItest\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 545, in network_LayerNorm_forward
return originals.LayerNorm_forward(self, input)
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 201, in forward
return F.layer_norm(
File "H:\AItest\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2546, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type Float but found Half
** Error executing callback cfg_denoiser_callback for D:\SD\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py
Traceback (most recent call last):
File "D:\SD\stable-diffusion-webui\modules\script_callbacks.py", line 216, in cfg_denoiser_callback
c.callback(params)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 191, in denoiser_callback
for step, regions in self.conds_all[0]:
TypeError: 'NoneType' object is not subscriptable
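Several reports in this collection hit this same crash. A hedged sketch of the kind of guard that would avoid it (not the extension's actual fix):

# denoiser_callback runs for every generation once registered, even when
# NegPiP found nothing to do and left self.conds_all as None.
def denoiser_callback(self, params):
    if self.conds_all is None:
        return  # nothing scheduled for this generation
    for step, regions in self.conds_all[0]:
        ...  # original scheduling logic continues here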
The code relies on something that is not present in modules.prompt_parser from SD.Next.
Backtrace:
12:49:51-811743 ERROR Running script process batch: extensions/sd-webui-negpip/scripts/negpip.py: AttributeError
Traceback (most recent call last):
  File "/notebooks/automatic/modules/scripts.py", line 467, in process_batch
    script.process_batch(p, *args, **kwargs)
  File "/notebooks/automatic/extensions/sd-webui-negpip/scripts/negpip.py", line 149, in process_batch
    self.conds_all = calcconds(nip)
  File "/notebooks/automatic/extensions/sd-webui-negpip/scripts/negpip.py", line 141, in calcconds
    conds, contokens = conddealer(targets)
  File "/notebooks/automatic/extensions/sd-webui-negpip/scripts/negpip.py", line 114, in conddealer
    input = prompt_parser.SdConditioning([f"({target[0]}:{ ...
AttributeError: module 'modules.prompt_parser' has no attribute 'SdConditioning'
When this plugin is not activated, rendering speed is normal; with it enabled, rendering becomes very slow. Is there any solution?
As the subject says, the callback fails during generation at this line:
sd-webui-negpip/scripts/negpip.py, line 243 (commit 573551b)
because, at least in my testing with SD.Next, self.conds_all is None.
I'm not knowledgeable enough to know whether the fault lies in negpip or in SD.Next.
Tested with your examples and the results are random... gothic just becomes not gothic, and in your other example the magical dandy stays a woman. The concept is interesting, but in practice it just doesn't work the same as the examples given.
When using NegPiP and Tiled Diffusion & VAE in i2i, the picture cannot be generated.
This issue was also posted on the other project's tracker:
pkuliyi2015/multidiffusion-upscaler-for-automatic1111#334
*** Error completing request
*** Arguments: ('task(txlkh92y0mixds7)', 0, '1girl,(flower:-1),', '', [], <PIL.Image.Image image mode=RGBA size=640x1024 at 0x24FB50ACD00>, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 3, 1.5, 0.5, 0, 1024, 640, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000024F8A4C2320>, 0, False, '', 0.8, 3683106263, False, -1, 0, 0, 0, True, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'SwinIR_4x', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'Use same checkpoint', 'Use same vae', 1, 0, 'None', 'None', False, False, False, 'Use same checkpoint', 'Use same vae', 'txt2img-1pass', 'None', '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 0.5, 0.1, 0.9, False, False, 'Select Model', '', '', 'Use same sampler', 20, 7, 0.75, 4, 0.35, False, 50, 200, 0.5, False, True, 'stretching', 'bottom', 'None', 0.85, 0.75, False, 'Use same checkpoint', True, '', '', 'Use same sampler', 'BMAB fast', 20, 7, 0.75, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, None, False, 1, False, '', False, False, False, True, True, 4, 3, 0.1, 1, 1, 0, 0.4, 7, False, False, False, 'Score', 1, '', '', '', '', '', '', '', '', '', '', False, 512, 512, 7, 20, 4, 'Use same sampler', 'Only masked', 32, 'BMAB Face(Normal)', 0.4, 4, 0.35, False, 0.26, False, True, False, 'subframe', '', '', 0.4, 7, True, 4, 0.3, 0.1, 'Only masked', 32, '', False, False, False, 0.4, 0.1, 0.9, False, 'Inpaint', 0.85, 0.6, 30, False, True, 'None', 1.5, '', 'None', UiControlNetUnit(enabled=True, module='none', model='control_v11f1e_sd15_tile [a371b31b]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='ControlNet is more important', save_detected_map=True), True, '* CFG Scale
should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '
Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, 50, 'Will upscale the image depending on the selected target size type
', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
When negpip is enabled, prompt editing goes wrong.
Prompt example:
[(Black:-1.0) 1girl:Tiger:999]
When negpip is disabled, the prompt "Tiger" does not, of course, affect the output image. This is the correct behavior.
However, when negpip is enabled, the output images seem to be affected by the prompt "Tiger".
This issue also occurs without a negative prompt, e.g. "[1girl:Tiger:999]".
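For reference, a sketch of what A1111's prompt scheduler should produce for this example, assuming the two-argument form of prompt_parser.get_learned_conditioning_prompt_schedules; the expected output is my reading of the scheduler (switch steps are clamped to the total step count), not a captured log:

from modules import prompt_parser

# The switch step (999) is clamped to the step count (20), so the schedule
# should contain only the pre-switch prompt and "Tiger" is never sampled.
schedules = prompt_parser.get_learned_conditioning_prompt_schedules(
    ["[(Black:-1.0) 1girl:Tiger:999]"], 20
)
print(schedules)  # expected roughly: [[[20, '(Black:-1.0) 1girl']]]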
Thanks as always.
With WebUI 1.7.0 and the latest NegPiP as of today, enabling HiRes Fix with the hires step count set to 0 (i.e. reusing the first pass's sampling step count) produces the following console error.
In this case the output itself still happens, but even when a negative-weight prompt is present, NegPiP seems to stop having any effect.
** Error running process_batch: C:\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules\scripts.py", line 742, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "C:\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 160, in process_batch
if self.hrp: scheduled_hr_p = prompt_parser.get_learned_conditioning_prompt_schedules(p.hr_prompts,p.hr_second_pass_steps if p.hr_second_pass_steps > 0 else p.step)
AttributeError: 'StableDiffusionProcessingTxt2Img' object has no attribute 'step'
I believe this symptom did not occur when I was using WebUI 1.6.0.
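Judging from the traceback alone, the failing branch reads p.step where the processing object only defines p.steps. A hedged reconstruction of the fix around negpip.py line 160:

# Hypothetical patch sketch: the fallback for hr_second_pass_steps == 0 should
# reuse the first pass's step count, which lives in p.steps (p.step does not
# exist on StableDiffusionProcessingTxt2Img).
if self.hrp:
    scheduled_hr_p = prompt_parser.get_learned_conditioning_prompt_schedules(
        p.hr_prompts,
        p.hr_second_pass_steps if p.hr_second_pass_steps > 0 else p.steps,  # was: p.step
    )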
For example, with NegPiP this prompt:
1girl, (open mouth), pink hair, (cat ear:-1)
produces the following console output:
[['open mouth, pink hair, cat ear', '-1']]
[]
NegPiP enable, Positive:[9],Negative:None
That is, "open mouth, pink hair" is treated as negative.
Thank you for an extension I use conveniently all the time.
Now, when I use parentheses for emphasis somewhere before the negatively weighted part of the prompt, everything from that opening position onward seems to get interpreted as negative. I can't judge whether this is a bug or intended behavior, so I'm filing this issue.
I believe my other extensions are turned off, but in case one of them is somehow involved, please forgive me.
Thank you in advance.
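The console output above is consistent with a greedy pattern swallowing everything from the first opening parenthesis. A small illustration (my own regexes, not the extension's code):

import re

prompt = "1girl, (open mouth), pink hair, (cat ear:-1)"

# Greedy body: matches from the first "(" to the last ":-1)", reproducing the
# reported behavior of everything before the weight turning negative.
print(re.findall(r"\((.*):(-\d+(?:\.\d+)?)\)", prompt))
# [('open mouth), pink hair, (cat ear', '-1')]

# Restricting the body to non-parenthesis characters confines the match:
print(re.findall(r"\(([^()]*):(-\d+(?:\.\d+)?)\)", prompt))
# [('cat ear', '-1')]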
Hi,
This plugin is a gamechanger! However, it causes the following crash when used in conjunction with Regional Prompter:
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
Let me know if more diagnostic info is required.
SD v1.5.2 c9c8485bc1e8720aba70f029d25cba1c4abf2b5c, ADetailer v23.9.3, Python v3.10.6
With ADetailer enabled in i2t, the following occurred repeatedly while ADetailer was running.
*** Error executing callback cfg_denoiser_callback for C:\sd.webui\webui\extensions\sd-webui-negpip\scripts\negpip.py
Traceback (most recent call last):
File "C:\sd.webui\webui\modules\script_callbacks.py", line 195, in cfg_denoiser_callback
c.callback(params)
File "C:\sd.webui\webui\extensions\sd-webui-negpip\scripts\negpip.py", line 191, in denoiser_callback
for step, regions in self.conds_all[0]:
TypeError: 'NoneType' object is not subscriptable
It no longer occurs after commenting out the lines below in postprocess. I don't know whether this is the correct fix, but reporting it just in case.
def postprocess(self, p, processed, *args):
    if hasattr(self, "handle"):
        hook_forwards(self, p.sd_model.model.diffusion_model, remove=True)
        del self.handle
    # self.conds_all = None
    # self.unconds_all = None
Thanks as always.
In the latest version, I believe generation is unaffected when there is no negative-weight prompt. Given that, I would like an always-on option (one where, if I quit WebUI with it on, it is still on the next time WebUI starts).
(I make heavy use of WebUI's prompt-styles feature, and when I recall a style containing a negative-weight prompt, I often forget to turn NegPiP on and panic when the output changes.)
The LoRA Block Weight extension can apparently be kept always on; it would be nice if this worked the same way.
I get this crazy console spam while generating with this extension; merely enabling the extension doesn't cause it, it only happens when actively using it.
Running on version 1.6.0 python: 3.10.6 torch: 2.0.0+cu118 xformers: 0.0.20.dev526 gradio: 3.41.2
Extension version 7d4defb 2023-09-22
The plugin doesn't work well with prompt editing, and prompts like the following don't work as expected:
[cat:(abc:-1):0.5]
Normally, "abc" should appear only once sampling is halfway through, but with the plugin enabled "abc" appears earlier and interferes with the results.
Another example: [cat:(abc:-1):0.99] should be equivalent to "cat", but with the plugin enabled it behaves like "cat, (abc:-1)".
Thanks as always.
When a negative weight is present in the prompt and NegPiP activates, could metadata indicating that NegPiP fired be embedded in the generated image's PNG Info? I think it could be useful in the future for organizing information by metadata, or for features that list the extensions used when registering an image on a posting site.
As an aside, I currently post images to CivitAI with the negative-weight prompts left in, but I have no way to indicate that NegPiP was used. I suppose that can't be helped for now...
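For what it's worth, A1111 already gives extensions a hook for this: anything placed in p.extra_generation_params is serialized into the infotext that PNG Info displays. A sketch (the field name and the activity flag are hypothetical, not the extension's actual code):

# Hedged sketch: record NegPiP activation in the generation parameters so it
# shows up in PNG Info alongside the other settings.
def process_batch(self, p, *args, **kwargs):
    if self.active:  # hypothetical flag set when negative weights were found
        p.extra_generation_params["NegPiP Active"] = True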