
kosinkadink / comfyui-advanced-controlnet

386 stars · 10 watchers · 36 forks · 241 KB

ControlNet scheduling and masking nodes with sliding context support

License: GNU General Public License v3.0

Python 100.00%

comfyui-advanced-controlnet's Introduction

ComfyUI-Advanced-ControlNet

Nodes for scheduling ControlNet strength across timesteps and batched latents, as well as applying custom weights and attention masks. The ControlNet nodes here fully support sliding context sampling, like the one used in the ComfyUI-AnimateDiff-Evolved nodes. Currently supports ControlNets, T2IAdapters, ControlLoRAs, ControlLLLite, SparseCtrls, SVD-ControlNets, and Reference.

Custom weights allow replication of the "My prompt is more important" feature of Auto1111's sd-webui ControlNet extension via Soft Weights, and the "ControlNet is more important" feature can be granularly controlled by changing the uncond_multiplier on the same Soft Weights.

ControlNet preprocessors are available through comfyui_controlnet_aux nodes.

Features

  • Timestep and latent strength scheduling
  • Attention masks
  • Replicate "My prompt is more important" feature from sd-webui-controlnet extension via Soft Weights, and allow softness to be tweaked via base_multiplier
  • Replicate "ControlNet is more important" feature from sd-webui-controlnet extension via uncond_multiplier on Soft Weights
    • uncond_multiplier=0.0 gives results identical to auto1111's feature, but values between 0.0 and 1.0 can be used without issue to granularly control the setting (see the sketch after this list).
  • ControlNet, T2IAdapter, and ControlLoRA support for sliding context windows
  • ControlLLLite support (requires model_optional to be passed into and out of Apply Advanced ControlNet node)
  • SparseCtrl support
  • SVD-ControlNet support
    • Stable Video Diffusion ControlNets trained by CiaraRowles: Depth, Lineart
  • Reference support
    • Supports reference_attn, reference_adain, and reference_adain+attn modes. style_fidelity and ref_weight are equivalent to style_fidelity and control_weight in Auto1111, respectively, and the strength of the Apply ControlNet node is the balance between the ref-influenced result and the no-ref result. There is also a Reference ControlNet (Finetune) node that allows adjusting the style_fidelity, weight, and strength of attn and adain separately.
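
As a rough illustration of the Soft Weights mechanics above, the sketch below computes per-layer weights that decay by base_multiplier toward the shallow ControlNet outputs, and scales the control applied to the unconditioned branch by uncond_multiplier. The 13-layer count and decay direction follow sd-webui-controlnet's convention; this repo's actual implementation may differ in details.

# Minimal sketch (assumptions noted above) of Soft Weights behavior:
# deeper ControlNet outputs keep full strength, shallower ones are softened.
def soft_weights(base_multiplier: float = 0.825, layers: int = 13) -> list[float]:
    return [base_multiplier ** (layers - 1 - i) for i in range(layers)]

def control_strength(base: float, uncond_multiplier: float, is_uncond: bool) -> float:
    # uncond_multiplier=0.0 removes control from the unconditioned branch
    # ("ControlNet is more important"); values in (0.0, 1.0) blend the two modes.
    return base * (uncond_multiplier if is_uncond else 1.0)

print([round(w, 3) for w in soft_weights()])  # full strength at the deepest layer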

Scheduling Explanation

The two core concepts for scheduling are Timestep Keyframes and Latent Keyframes.

Timestep Keyframes hold the values that guide the settings for a controlnet, and begin to take effect based on their start_percent, which corresponds to the percentage of the sampling process. They can contain masks for the strengths of each latent, control_net_weights, and latent_keyframes (specific strengths for each latent), all optional.

Latent Keyframes determine the strength of the controlnet for specific latents - all they contain is the batch_index of the latent, and the strength the controlnet should apply for that latent. As a concept, latent keyframes achieve the same effect as a uniform mask with the chosen strength value.
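
To make that equivalence concrete, here is a minimal Python sketch (illustrative only, not this repo's code) that resolves latent keyframes into a per-latent strength vector; indices without a keyframe fall back to a default, mirroring null_latent_kf_strength on the Timestep Keyframe nodes.

# A latent keyframe pins a strength to one batch_index, which acts like a
# uniform mask at that strength for that latent.
def strengths_for_batch(latent_keyframes: dict[int, float],
                        batch_size: int,
                        null_strength: float = 1.0) -> list[float]:
    # null_strength stands in for null_latent_kf_strength (default assumed here)
    return [latent_keyframes.get(i, null_strength) for i in range(batch_size)]

print(strengths_for_batch({0: 0.9, 3: 0.2}, batch_size=4))  # [0.9, 1.0, 1.0, 0.2]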


Nodes

The ControlNet nodes provided here are the Apply Advanced ControlNet and Load Advanced ControlNet Model (or diff) nodes. The vanilla ControlNet nodes are also compatible and can be used almost interchangeably - the only difference is that at least one of these nodes must be present for the Advanced versions of ControlNets to take effect (important for sliding context sampling, like with AnimateDiff-Evolved).

Key:

  • 🟩 - required inputs
  • 🟨 - optional inputs
  • 🟦 - start as widgets, can be converted to inputs
  • 🟥 - optional input/output, but not recommended to use unless needed
  • 🟪 - output

Apply Advanced ControlNet


Same functionality as the vanilla Apply ControlNet (Advanced) node, except with Advanced ControlNet features added to it. Automatically converts any ControlNet from ControlNet loaders into its Advanced version.

Inputs

  • 🟩positive: conditioning (positive).
  • 🟩negative: conditioning (negative).
  • 🟩control_net: loaded controlnet; will be converted to Advanced version automatically by this node, if it's a supported type.
  • 🟩image: images to guide controlnets - if the loaded controlnet requires it, they must be preprocessed images. If one image is provided, it will be used for all latents. If more images are provided, each image will be used for its corresponding latent. If there are not enough images to match the latent count, the images will repeat from the beginning to match vanilla ControlNet functionality (see the sketch after this list).
  • 🟨mask_optional: attention masks to apply to controlnets; basically, decides what part of the image the controlnet should apply to (and the relative strength, if the mask is not binary). As with the image input, if you provide more than one mask, each can apply to a different latent.
  • 🟨timestep_kf: timestep keyframes to guide controlnet effect throughout sampling steps.
  • 🟨latent_kf_override: override for latent keyframes, useful if no other features from timestep keyframes are needed. NOTE: this latent keyframe will be applied to ALL timesteps, regardless of whether other latent keyframes are attached to connected timestep keyframes.
  • 🟨weights_override: override for weights, useful if no other features from timestep keyframes are needed. NOTE: this weight will be applied to ALL timesteps, regardless of whether other weights are attached to connected timestep keyframes.
  • 🟦strength: strength of controlnet; 1.0 is full strength, 0.0 is no effect at all.
  • 🟦start_percent: sampling step percentage at which controlnet should start to be applied - no matter what start_percent is set on timestep keyframes, they won't take effect until this start_percent is reached.
  • 🟦stop_percent: sampling step percentage at which controlnet should stop being applied - no matter what start_percent is set on timestep keyframes, they will no longer take effect once this stop_percent is reached.
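
As referenced in the image input above, here is a minimal sketch of the repeat-from-the-beginning pairing between hint images and latents (illustrative, not this repo's code):

# Latent i is paired with hint image i modulo the image count, so a short
# image list wraps around to cover the whole latent batch.
def hint_for_latent(images: list, latent_index: int):
    return images[latent_index % len(images)]

images = ["img0", "img1", "img2"]
print([hint_for_latent(images, i) for i in range(5)])  # img0 img1 img2 img0 img1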

Outputs

  • 🟪positive: conditioning (positive) with applied controlnets
  • 🟪negative: conditioning (negative) with applied controlnets

Load Advanced ControlNet Model


Loads a ControlNet model and converts it into an Advanced version that supports all the features in this repo. When used with Apply Advanced ControlNet node, there is no reason to use the timestep_keyframe input on this node - use timestep_kf on the Apply node instead.

Inputs

  • 🟥timestep_keyframe: optional and likely unnecessary input to have ControlNet use selected timestep_keyframes - should not be used unless you need to. Useful if this node is not attached to an Apply Advanced ControlNet node but you still want to use a Timestep Keyframe, or to use TK_SHORTCUT outputs from ControlWeights in the same scenario. Will be overridden by the timestep_kf input on the Apply Advanced ControlNet node, if one is provided there.
  • 🟨model: model to plug into the diff version of the node. Some controlnets are designed to receive the model; if you don't know what this does, you probably don't want to use the diff version of the node.

Outputs

  • 🟪CONTROL_NET: loaded Advanced ControlNet

Timestep Keyframe


Scheduling node across timesteps (sampling steps) based on the set start_percent. Chaining Timestep Keyframes allows ControlNet scheduling across sampling steps (percentage-wise), through a timestep keyframe schedule.

Inputs

  • 🟨prev_timestep_kf: used to chain Timestep Keyframes together to create a schedule. The order does not matter - the Timestep Keyframes sort themselves automatically by their start_percent. Any Timestep Keyframe contained in prev_timestep_kf that has the same start_percent as this Timestep Keyframe will be overwritten.
  • 🟨cn_weights: weights to apply to the controlnet while this Timestep Keyframe is in effect. Must be compatible with the loaded controlnet, or an error will explain which weight types are compatible. If inherit_missing is True and no cn_weights are passed in, the last-used weights in the timestep keyframe schedule will be reused. If the Apply Advanced ControlNet node has a weights_override, the weights_override will be used during sampling instead of cn_weights.
  • 🟨latent_keyframe: latent keyframes to apply to the controlnet while this Timestep Keyframe is in effect. If inherit_missing is True and no latent_keyframe is passed in, the last-used latent keyframes in the timestep keyframe schedule will be reused. If the Apply Advanced ControlNet node has a latent_kf_override, the latent_kf_override will be used during sampling instead of latent_keyframe.
  • 🟨mask_optional: attention masks to apply to controlnets; basically, decides what part of the image the controlnet should apply to (and the relative strength, if the mask is not binary). Same as mask_optional on the Apply Advanced ControlNet node, it can apply either one mask to all latents, or individual masks for each latent. If inherit_missing is True and no mask_optional is passed in, the last-used mask_optional in the timestep keyframe schedule will be reused. It is NOT overridden by mask_optional on the Apply Advanced ControlNet node; they will be used together.
  • 🟦start_percent: sampling step percentage at which this Timestep Keyframe qualifies to be used. Acts as the 'key' for the Timestep Keyframe in the timestep keyframe schedule.
  • 🟦strength: strength of the controlnet; multiplies the controlnet by this value, applied alongside the strength on the Apply Advanced ControlNet node. If set to 0.0, the controlnet will have no effect for the duration of this Timestep Keyframe, and sampling will speed up by skipping that work.
  • 🟦null_latent_kf_strength: strength to assign to latents that are unaccounted for in the passed-in latent_keyframes. Has no effect if no latent_keyframes are passed in, or if no batch_indices are unaccounted for in the latent_keyframes during sampling.
  • 🟦inherit_missing: determines whether to reuse values from previous Timestep Keyframes for optional values (cn_weights, latent_keyframe, and mask_optional) that are not included on this Timestep Keyframe. To inherit only specific inputs, use default inputs.
  • 🟦guarantee_steps: when 1 or greater, this Timestep Keyframe will be used for the specified number of steps even if a Timestep Keyframe ahead of it in the schedule has a start_percent closer to the current sampling percentage, before moving on to the next selected Timestep Keyframe in the following step. Whether or not the Timestep Keyframe is used, its inputs still count for inherit_missing purposes (see the selection sketch after this list).
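
The selection sketch referenced above - a minimal, illustrative model of how a timestep keyframe schedule could pick the active keyframe from start_percent and guarantee_steps (this repo's actual logic may differ):

from dataclasses import dataclass

@dataclass
class TimestepKf:
    start_percent: float
    strength: float = 1.0
    guarantee_steps: int = 0

def select_keyframe(schedule, percent, current, steps_on_current):
    # Hold the current keyframe while it still has guaranteed steps left.
    if current is not None and steps_on_current < current.guarantee_steps:
        return current
    # Otherwise take the latest keyframe whose start_percent has been reached.
    eligible = [kf for kf in sorted(schedule, key=lambda k: k.start_percent)
                if kf.start_percent <= percent]
    return eligible[-1] if eligible else current

schedule = [TimestepKf(0.0), TimestepKf(0.5, strength=0.3, guarantee_steps=2)]
print(select_keyframe(schedule, 0.6, schedule[0], steps_on_current=0).strength)  # 0.3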

Outputs

  • 🟪TIMESTEP_KF: the created Timestep Keyframe, which can be chained to another Timestep Keyframe node or plugged into a Timestep Keyframe input.

Timestep Keyframe Interpolation


Allows creating Timestep Keyframes with interpolated strength values across a given percent range. (The first generated keyframe will have guarantee_steps=1; the rest will have guarantee_steps=0.)

Inputs

  • 🟨prev_timestep_kf: used to chain Timestep Keyframes together to create a schedule. The order does not matter - the Timestep Keyframes sort themselves automatically by their start_percent. Any Timestep Keyframe contained in prev_timestep_kf that has the same start_percent as this Timestep Keyframe will be overwritten.
  • 🟨cn_weights: weights to apply to the controlnet while this Timestep Keyframe is in effect. Must be compatible with the loaded controlnet, or an error will explain which weight types are compatible. If inherit_missing is True and no cn_weights are passed in, the last-used weights in the timestep keyframe schedule will be reused. If the Apply Advanced ControlNet node has a weights_override, the weights_override will be used during sampling instead of cn_weights.
  • 🟨latent_keyframe: latent keyframes to apply to the controlnet while this Timestep Keyframe is in effect. If inherit_missing is True and no latent_keyframe is passed in, the last-used latent keyframes in the timestep keyframe schedule will be reused. If the Apply Advanced ControlNet node has a latent_kf_override, the latent_kf_override will be used during sampling instead of latent_keyframe.
  • 🟨mask_optional: attention masks to apply to controlnets; basically, decides what part of the image the controlnet should apply to (and the relative strength, if the mask is not binary). Same as mask_optional on the Apply Advanced ControlNet node, it can apply either one mask to all latents, or individual masks for each latent. If inherit_missing is True and no mask_optional is passed in, the last-used mask_optional in the timestep keyframe schedule will be reused. It is NOT overridden by mask_optional on the Apply Advanced ControlNet node; they will be used together.
  • 🟦start_percent: sampling step percentage at which the first generated Timestep Keyframe qualifies to be used.
  • 🟦end_percent: sampling step percentage at which the last generated Timestep Keyframe qualifies to be used.
  • 🟦strength_start: strength of the Timestep Keyframe at start of range.
  • 🟦strength_end: strength of the Timestep Keyframe at end of range.
  • 🟦interpolation: the method of interpolation.
  • 🟦intervals: the number of keyframes to generate in total - the first will have its start_percent equal to start_percent, the last will have its start_percent equal to end_percent (see the sketch after this list).
  • 🟦null_latent_kf_strength: strength to assign to latents that are unaccounted for in the passed-in latent_keyframes. Has no effect if no latent_keyframes are passed in, or if no batch_indices are unaccounted for in the latent_keyframes during sampling.
  • 🟦inherit_missing: determines whether to reuse values from previous Timestep Keyframes for optional values (cn_weights, latent_keyframe, and mask_optional) that are not included on this Timestep Keyframe. To inherit only specific inputs, use default inputs.
  • 🟦print_keyframes: if True, will print the Timestep Keyframes generated by this node for debugging purposes.
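
A minimal sketch of the generation this node performs, assuming linear interpolation (other interpolation methods would change the easing curve; illustrative only, not this repo's code):

def interpolate_keyframes(start_percent, end_percent,
                          strength_start, strength_end, intervals):
    # Produces (start_percent, strength) pairs spread evenly across the range;
    # the first pair lands on start_percent, the last on end_percent.
    keyframes = []
    for i in range(intervals):
        t = i / (intervals - 1) if intervals > 1 else 0.0
        keyframes.append((start_percent + t * (end_percent - start_percent),
                          strength_start + t * (strength_end - strength_start)))
    return keyframes

print(interpolate_keyframes(0.0, 1.0, 1.0, 0.0, intervals=5))
# [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]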

Outputs

  • 🟪TIMESTEP_KF: the created Timestep Keyframe, which can be chained to another Timestep Keyframe node or plugged into a Timestep Keyframe input.

Timestep Keyframe From List


Allows creating Timestep Keyframes via a list of floats, such as with Batch Value Schedule from the ComfyUI_FizzNodes nodes. (The first generated keyframe will have guarantee_steps=1; the rest will have guarantee_steps=0.)

Inputs

  • 🟨prev_timestep_kf: used to chain Timestep Keyframes together to create a schedule. The order does not matter - the Timestep Keyframes sort themselves automatically by their start_percent. Any Timestep Keyframe contained in prev_timestep_kf that has the same start_percent as this Timestep Keyframe will be overwritten.
  • 🟨cn_weights: weights to apply to the controlnet while this Timestep Keyframe is in effect. Must be compatible with the loaded controlnet, or an error will explain which weight types are compatible. If inherit_missing is True and no cn_weights are passed in, the last-used weights in the timestep keyframe schedule will be reused. If the Apply Advanced ControlNet node has a weights_override, the weights_override will be used during sampling instead of cn_weights.
  • 🟨latent_keyframe: latent keyframes to apply to the controlnet while this Timestep Keyframe is in effect. If inherit_missing is True and no latent_keyframe is passed in, the last-used latent keyframes in the timestep keyframe schedule will be reused. If the Apply Advanced ControlNet node has a latent_kf_override, the latent_kf_override will be used during sampling instead of latent_keyframe.
  • 🟨mask_optional: attention masks to apply to controlnets; basically, decides what part of the image the controlnet should apply to (and the relative strength, if the mask is not binary). Same as mask_optional on the Apply Advanced ControlNet node, it can apply either one mask to all latents, or individual masks for each latent. If inherit_missing is True and no mask_optional is passed in, the last-used mask_optional in the timestep keyframe schedule will be reused. It is NOT overridden by mask_optional on the Apply Advanced ControlNet node; they will be used together.
  • 🟩float_strengths: a list of floats that will correspond to the strength of each Timestep Keyframe; the first will be assigned to start_percent, the last to end_percent, and the rest spread linearly between them.
  • 🟦start_percent: sampling step percentage at which the first generated Timestep Keyframe qualifies to be used.
  • 🟦end_percent: sampling step percentage at which the last generated Timestep Keyframe qualifies to be used.
  • 🟦null_latent_kf_strength: strength to assign to latents that are unaccounted for in the passed-in latent_keyframes. Has no effect if no latent_keyframes are passed in, or if no batch_indices are unaccounted for in the latent_keyframes during sampling.
  • 🟦inherit_missing: determines whether to reuse values from previous Timestep Keyframes for optional values (cn_weights, latent_keyframe, and mask_optional) that are not included on this Timestep Keyframe. To inherit only specific inputs, use default inputs.
  • 🟦print_keyframes: if True, will print the Timestep Keyframes generated by this node for debugging purposes.

Outputs

  • 🟪TIMESTEP_KF: the created Timestep Keyframe, which can be chained to another Timestep Keyframe node or plugged into a Timestep Keyframe input.

Latent Keyframe


A singular Latent Keyframe that selects the strength for a specific batch_index. If the batch_index is not present during sampling, it will simply have no effect. Can be chained with any other Latent Keyframe-type node to create a latent keyframe schedule.

Inputs

  • 🟨prev_latent_kf: used to chain Latent Keyframes together to create a schedule. If a Latent Keyframe contained in prev_latent_kf has the same batch_index as this Latent Keyframe, it will take priority over this node's value.
  • 🟦batch_index: index of latent in batch to apply controlnet strength to. Acts as the 'key' for the Latent Keyframe in the latent keyframe schedule.
  • 🟦strength: strength of controlnet to apply to the corresponding latent.

Outputs

  • 🟪LATENT_KF: the created Latent Keyframe, which can be chained to another Latent Keyframe node or plugged into a Latent Keyframe input.

Latent Keyframe Group


Allows creating Latent Keyframes via individual indices or python-style ranges.

Inputs

  • 🟨prev_latent_kf: used to chain Latent Keyframes together to create a schedule. If any Latent Keyframes contained in prev_latent_kf have the same batch_index as one of this node's Latent Keyframes, they will take priority over this node's version.
  • 🟨latent_optional: the latents expected to be passed in for sampling; only required if you wish to use negative indices (they will be automatically converted to real values).
  • 🟦index_strengths: string list of indices or python-style ranges of indices to assign strengths to. If latent_optional is passed in, it can contain negative indices or ranges that contain negative numbers, python-style. The different indices must be comma separated. Individual latents can be specified by batch_index=strength, like 0=0.9. Ranges can be specified by start_index_inclusive:end_index_exclusive=strength, like 0:8=strength. Negative indices are possible when latent_optional has an input, with a string such as 0,-4=0.25 (see the parsing sketch after this list).
  • 🟦print_keyframes: if True, will print the Latent Keyframes generated by this node for debugging purposes.
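
The parsing sketch referenced above - an illustrative parser for index_strengths strings (not this repo's code; edge-case handling is assumed):

def parse_index_strengths(spec: str, batch_size: int | None = None) -> dict[int, float]:
    # Entries are comma separated: "index=strength" for single latents and
    # "start:end=strength" for python-style half-open ranges; negative indices
    # are resolved against the batch size when latents are provided.
    def resolve(i: int) -> int:
        if i < 0:
            if batch_size is None:
                raise ValueError("negative indices need latent_optional (batch size)")
            i += batch_size
        return i

    out: dict[int, float] = {}
    for entry in spec.split(","):
        indices, strength = entry.strip().split("=")
        if ":" in indices:
            start, end = (resolve(int(x)) for x in indices.split(":"))
            for i in range(start, end):
                out[i] = float(strength)
        else:
            out[resolve(int(indices))] = float(strength)
    return out

print(parse_index_strengths("0=0.9,2:4=0.5,-1=0.25", batch_size=8))
# {0: 0.9, 2: 0.5, 3: 0.5, 7: 0.25}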

Outputs

  • 🟪LATENT_KF: the created Latent Keyframe, which can be chained to another Latent Keyframe node or plugged into a Latent Keyframe input.

Latent Keyframe Interpolation


Allows creating Latent Keyframes with interpolated values across a range.

Inputs

  • 🟨prev_latent_kf: used to chain Latent Keyframes together to create a schedule. If any Latent Keyframes contained in prev_latent_kf have the same batch_index as one of this node's Latent Keyframes, they will take priority over this node's version.
  • 🟦batch_index_from: starting batch_index of range, included.
  • 🟦batch_index_to: end batch_index of range, excluded (python-style range).
  • 🟦strength_from: starting strength of interpolation.
  • 🟦strength_to: end strength of interpolation.
  • 🟦interpolation: the method of interpolation.
  • 🟦print_keyframes: if True, will print the Latent Keyframes generated by this node for debugging purposes.

Outputs

  • 🟪LATENT_KF: the created Latent Keyframe, which can be chained to another Latent Keyframe node or plugged into a Latent Keyframe input.

Latent Keyframe From List


Allows creating Latent Keyframes via a list of floats, such as with Batch Value Schedule from the ComfyUI_FizzNodes nodes.

Inputs

  • 🟨prev_latent_kf: used to chain Latent Keyframes together to create a schedule. If any Latent Keyframes contained in prev_latent_kf have the same batch_index as one of this node's Latent Keyframes, they will take priority over this node's version.
  • 🟩float_strengths: a list of floats that will correspond to the strength of each Latent Keyframe; the batch_index is the index of each float value in the list.
  • 🟦print_keyframes: if True, will print the Latent Keyframes generated by this node for debugging purposes.

Outputs

  • 🟪LATENT_KF: the created Latent Keyframe, which can be chained to another Latent Keyframe node or plugged into a Latent Keyframe input.

There are more nodes to document and show usage - will add this soon! TODO

comfyui-advanced-controlnet's People

Contributors

alexbofa, dorotaluna, harelc, kijai, kosinkadink, robinjhuang, tungnguyensipher


comfyui-advanced-controlnet's Issues

[ Feature Request ] Requesting the Multi-Controlnet feature for use with AnimateDiff Comfyui

I noticed that AnimateDiff for ComfyUI works well when using just a single Controlnet. However, when I tried using other nodes to combine multiple Controlnets, I couldn't get the AnimateDiff node to work. It's really disappointing that I can only use it with a single Controlnet. So, I would like to ask if you have any plans to develop a Multi-Controlnet Node for ComfyUI AnimateDiff. It's truly unfortunate that Multi-Controlnet doesn't seem to work for AnimateDiff.

Question

Where do you attach the Weight?

Error occurred when executing KSampler: 'NoneType' object has no attribute 'strength'

Hello! I can't resolve this error on many workflows with ControlNet Advanced nodes... It works without ControlNet Advanced...
But how can I use it? I need to use Controlnets...

2 - Vid2Vid Multi-ControlNet_my.json

The error:

Error occurred when executing KSampler:

'NoneType' object has no attribute 'strength'

File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\nodes.py", line 1237, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 259, in animatediff_sample
return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 197, in wrapped_function
return function_to_wrap(*args, **kwargs)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 709, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 615, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 554, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "D:\Stable-Diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\k_diffusion\sampling.py", line 580, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\Stable-Diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 275, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "D:\Stable-Diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 265, in forward
return self.apply_model(*args, **kwargs)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 262, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 627, in sliding_sampling_function
cond, uncond = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 478, in calc_cond_uncond_batch
c['control'] = control.get_control(input_x, timestep, c, len(cond_or_uncond))
File "D:\Stable-Diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\control\control.py", line 359, in get_control_inject
if self.strength == 0.0 or self.current_timestep_keyframe.strength == 0.0:

[Errno 2] No such file or directory: 'pinokio/api/comfyui.git/app/main.py'

The default interactive shell is now zsh.
To update your account to use zsh, please run chsh -s /bin/zsh.
For more details, please visit https://support.apple.com/kb/HT208050.
bash-3.2$ source /Volumes/2T/pinokio/api/comfyui.git/app/env/bin/activate /Volumes/2T/pinokio/api/comfyui.git/app/env && python main.py --cpu
python: can't open file '/Volumes/2T/pinokio/api/comfyui.git/app/main.py': [Errno 2] No such file or directory
(env) bash-3.2$

How can this be fixed on Mac?

KeyError: 'control_model.out.2.weight' following ComfyUI 2024-01-02 update

Hi,
First of all, happy new year and all the best.

I would need your help.

Following the latest ComfyUI update, I get the attached error when using controlnet nodes in workflows that worked just fine before the update.
Whatever the adapter I load, the message is the same.

Versions :
ComfyUI: 1872[8e2c99] - (2024-01-02)
Manager: V1.13.7

I also updated ComfyUI-Advanced-ControlNet custom node but that doesn't change anything.

Has anyone a clue of what could be the issue?
Thank you


module 'comfy.ops' has no attribute 'disable_weight_init'

Node won't import properly because of this:

Traceback (most recent call last):
  File "D:\Program Files\Visions of Chaos\Machine Learning Files\Text To Image\ComfyUI\ComfyUI\nodes.py", line 1800, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\Program Files\Visions of Chaos\Machine Learning Files\Text To Image\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\__init__.py", line 1, in <module>
    from .control.nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
  File "D:\Program Files\Visions of Chaos\Machine Learning Files\Text To Image\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\control\nodes.py", line 6, in <module>
    from .control import load_controlnet, convert_to_advanced, is_advanced_controlnet
  File "D:\Program Files\Visions of Chaos\Machine Learning Files\Text To Image\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\control\control.py", line 12, in <module>
    from .control_sparsectrl import SparseControlNet, SparseCtrlMotionWrapper, SparseMethod, SparseSettings, SparseSpreadMethod, PreprocSparseRGBWrapper
  File "D:\Program Files\Visions of Chaos\Machine Learning Files\Text To Image\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\control\control_sparsectrl.py", line 30, in <module>
    from .utils import TimestepKeyframeGroup, disable_weight_init_clean_groupnorm, prepare_mask_batch
  File "D:\Program Files\Visions of Chaos\Machine Learning Files\Text To Image\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\control\utils.py", line 228, in <module>
    class disable_weight_init_clean_groupnorm(comfy.ops.disable_weight_init):
AttributeError: module 'comfy.ops' has no attribute 'disable_weight_init'

Cannot import D:\Program Files\Visions of Chaos\Machine Learning Files\Text To Image\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet module for custom nodes: module 'comfy.ops' has no attribute 'disable_weight_init'
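
This error means the installed ComfyUI predates the comfy.ops.disable_weight_init API this node pack relies on; updating ComfyUI itself (not just the custom node) is what resolves it. For illustration only, a defensive guard a custom node could use against this kind of API drift:

import comfy.ops

# Prefer the modern API; fail with a clear message on an old ComfyUI.
if hasattr(comfy.ops, "disable_weight_init"):
    ops_base = comfy.ops.disable_weight_init
else:
    raise ImportError(
        "ComfyUI is too old for this node pack; update ComfyUI to get "
        "comfy.ops.disable_weight_init")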

The current ControlNet Advance backend code is not complete; there is no corresponding code for the KSampler handling of the pose or the forward prompt (in the source code named "data_api_packing.py"), so it is equivalent to having no effect. In the current version, is it not possible to specify a multiple-skeleton pose image, so that the protagonist can follow this pose control in a video or photo? If this code does not exist, when might it be available?


"compute_indices_weights_nearest" not implemented for 'Half'

File "/root/acc/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/control.py", line 59, in sliding_get_control
self.cond_hint = comfy.utils.common_upscale(self.cond_hint_original, x_noisy.shape[3] * 8, x_noisy.shape[2] * 8, 'nearest-exact', "center").to(torch.fp32).to(self.device)
File "/root/ComfyUI/comfy/utils.py", line 402, in common_upscale
return torch.nn.functional.interpolate(s, size=(height, width), mode=upscale_method)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/functional.py", line 3938, in interpolate
return torch._C._nn._upsample_nearest_exact2d(input, output_size, scale_factors)
RuntimeError: "compute_indices_weights_nearest" not implemented for 'Half'

When using Apply Advanced ControlNet, muting that node, my workflow appears to remain cached

I was previously using the vanilla Apply ControlNet (Advanced) node. I have a workflow that, using the rgthree contexts, allows me to mute groups to enable/disable various mutations (such as the positive/negative conditioning from controlnets).

Though, switching to this repo's Apply Advanced ControlNet, it seems that even when this node is muted (and the seed is fixed), the workflow remains cached, and I end up with the image that was just made (when that controlnet was active).


I've verified the behavior by restarting ComfyUI between runs. I was successfully able to generate 2 very different images (one when controlnet was enabled, another disabled). But muting or unmuting this node doesn't seem to do anything. (For clarity, I am in fact muting, not bypassing).

Load SparseCtrl Model cause error

I used Load SparseCtrl Model with animateDiff_v3_sd15_sparsectl_scibble.ckpt, downloaded from ComfyUI Manager, but it causes an error when the KSampler node runs.
It shows the error "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!".

How could I fix it ?

[Errno 2] No such file or directory

Hi,

I have one error in my ComfyUI installation:
[Errno 2] No such file or directory: 'C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\control/control.py\__init__.py'

There is no control folder; if I create it manually, the problem is not fixed. I tried updating, but nothing changed.
Can this be ignored? And what about the forward and double backward slashes in combination with single forward slashes?

Thanks!

Question [NOT A BUG]

Sorry to bother you, but I can't find the info I need. I mainly like to do vid2vid with AnimateDiff (in ComfyUI). I am trying to change both the strength and the ending step of the controlnet for about 2/3 of my frames, but I don't know what nodes I should use. I was thinking about the latent batch thing, but I am just not sure. This is what I have come up with, but I am not sure it is correct; any help would be most welcome. Also, following the readme, I can't find some of the nodes that you are talking about (they don't have the same inputs, and before you ask, I am on the latest update).

new Controlnet nodes not compatible with animatediff v3 sparseControl model

When upgrading AnimateDiff to v3, which contains the SparseCtrl model, the KSampler throws an error after applying the controlnet to the KSampler nodes, as follows:

'NoneType' object has no attribute 'strength'

File "/home/ubuntu/ComfyUI/execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/ubuntu/ComfyUI/execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/ubuntu/ComfyUI/execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/ubuntu/ComfyUI/nodes.py", line 1389, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
File "/home/ubuntu/ComfyUI/nodes.py", line 1325, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 299, in motion_sample
latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_utils.py", line 205, in wrapped_function
return function_to_wrap(*args, **kwargs)
File "/home/ubuntu/ComfyUI/comfy/sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI_smZNodes/init.py", line 129, in KSampler_sample
return _KSampler_sample(*args, **kwargs)
File "/home/ubuntu/ComfyUI/comfy/samplers.py", line 716, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI_smZNodes/init.py", line 138, in sample
return _sample(*args, **kwargs)
File "/home/ubuntu/ComfyUI/comfy/samplers.py", line 622, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/home/ubuntu/ComfyUI/comfy/samplers.py", line 561, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/ComfyUI/comfy/samplers.py", line 285, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/ComfyUI/comfy/samplers.py", line 275, in forward
return self.apply_model(*args, **kwargs)
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 998, in apply_model
out = super().apply_model(*args, **kwargs)
File "/home/ubuntu/ComfyUI/comfy/samplers.py", line 272, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 649, in sliding_sampling_function
cond, uncond = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 506, in calc_cond_uncond_batch
c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
File "/home/ubuntu/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/utils.py", line 402, in get_control_inject
if self.strength == 0.0 or self.current_timestep_keyframe.strength == 0.0:

workflow:

'ControlNet' object has no attribute 'load_device'

Hi @Kosinkadink,
I appreciate your contributions to animation tooling; all of your ComfyUI plugins are very helpful.

I ran into some errors while loading Advanced ControlNet and have no clue what causes them. Could you please help with this?

  • ComfyUI is already up to date with the latest version.
  • ComfyUI-Advanced-ControlNet ( 296a9ef )

error log

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/home/kalijason/git/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kalijason/git/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kalijason/git/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kalijason/git/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/control/nodes.py", line 87, in load_controlnet
    controlnet = load_controlnet(controlnet_path, timestep_keyframe)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kalijason/git/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/control/control.py", line 728, in load_controlnet
    return convert_to_advanced(control, timestep_keyframe=timestep_keyframe)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kalijason/git/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/control/control.py", line 737, in convert_to_advanced
    return ControlNetAdvanced.from_vanilla(v=control, timestep_keyframe=timestep_keyframe)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kalijason/git/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/control/control.py", line 630, in from_vanilla
    global_average_pooling=v.global_average_pooling, device=v.device, load_device=v.load_device, manual_cast_dtype=v.manual_cast_dtype)
                                                                                  ^^^^^^^^^^^^^
AttributeError: 'ControlNet' object has no attribute 'load_device'

LLLite CNs don't work correctly with Hotshot

I haven't had any success using ControlNet-LLLite CNs together with Hotshot. It just turns into a bunch of junk. The same CNs work well without the motion module. I haven't tested with ADXL.

Running SparseCtrl gives a tensor error, with the model split between cuda and cpu

When running the custom workflow using the latest Steerable Motion workflow, this error keeps happening in the sampler node. It relates to a tensor mismatch, with something staying split between cuda and cpu that does not get resolved.
Can you look into this? Thanks

loading in lowvram mode 64.0
  0%|                                                                                                                   | 0/25 [00:02<?, ?it/s]
2024-01-30 12:40:11,658 - root - ERROR - !!! Exception during processing !!!
2024-01-30 12:40:11,837 - root - ERROR - Traceback (most recent call last):
  File "D:\AI\comfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 373, in new_func
    res_value = old_func(*final_args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 2175, in sample_adv
    return super().sample(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 700, in sample
    samples, images, gifs, preview = process_latent_image(model, seed, steps, cfg, sampler_name, scheduler,
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 537, in process_latent_image
    samples = KSamplerAdvanced().sample(model, add_noise, seed, steps, cfg, sampler_name, scheduler,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\nodes.py", line 1409, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\nodes.py", line 1345, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 334, in motion_sample
    latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 216, in wrapped_function
    return function_to_wrap(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI_smZNodes\__init__.py", line 130, in KSampler_sample
    return _KSampler_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\samplers.py", line 712, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI_smZNodes\__init__.py", line 149, in sample
    return _sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\samplers.py", line 618, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\samplers.py", line 557, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\k_diffusion\sampling.py", line 154, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\samplers.py", line 281, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\samplers.py", line 271, in forward
    return self.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 1030, in apply_model
    out = super().apply_model(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\samplers.py", line 268, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 370, in evolved_sampling_function
    cond_pred, uncond_pred = sliding_calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 478, in sliding_calc_cond_uncond_batch
    sub_cond_out, sub_uncond_out = comfy.samplers.calc_cond_uncond_batch(model, sub_cond, sub_uncond, sub_x, sub_timestep, model_options)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\.patches.py", line 4, in calc_cond_uncond_batch
    return calc_cond_uncond_batch_original_tiled_diffusion_96d2ed19(model, cond, uncond, x_in, timestep, model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\samplers.py", line 197, in calc_cond_uncond_batch
    c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 411, in get_control_inject
    return self.get_control_advanced(x_noisy, t, cond, batched_number)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 237, in get_control_advanced
    control_prev = self.previous_controlnet.get_control(x_noisy, t, cond, batched_number)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 411, in get_control_inject
    return self.get_control_advanced(x_noisy, t, cond, batched_number)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 310, in get_control_advanced
    control = self.control_model(x=x_noisy.to(dtype), hint=self.cond_hint, timesteps=timestep.float(), context=context.to(dtype), y=y)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_sparsectrl.py", line 72, in forward
    h = module(h, emb, context)
        ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 59, in forward
    return forward_timestep_embed(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 106, in forward_timestep_embed
    x = layer(x)
        ^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_sparsectrl.py", line 494, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_sparsectrl.py", line 644, in forward
    hidden_states = block(
                    ^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_sparsectrl.py", line 735, in forward
    attention_block(
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_sparsectrl.py", line 873, in forward
    hidden_states = self.pos_encoder(hidden_states).to(hidden_states.dtype)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\comfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_sparsectrl.py", line 774, in forward
    x = x + self.pe[:, : x.size(1)]
        ~~^~~~~~~~~~~~~~~~~~~~~~~~~
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Prompt executed in 46.15 seconds
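For anyone hitting the same wall: the failing line adds a positional-encoding buffer to the hidden states, and the two tensors live on different devices (the buffer stayed on CPU while the hidden states are on cuda:0). A minimal sketch of the failure mode and the usual fix, with an illustrative stand-in module rather than the repo's actual code:

```python
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    # Illustrative stand-in for the pos_encoder in control_sparsectrl.py.
    def __init__(self, dim: int, max_len: int = 64):
        super().__init__()
        # register_buffer keeps self.pe on whatever device the module was built on.
        self.register_buffer("pe", torch.zeros(1, max_len, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # If x is on cuda:0 but self.pe is still on cpu, `x + self.pe[...]`
        # raises "Expected all tensors to be on the same device".
        # Moving the slice to x's device avoids the mismatch:
        return x + self.pe[:, : x.size(1)].to(x.device)

if torch.cuda.is_available():
    enc = PositionalEncoding(dim=8)            # buffer created on cpu
    h = torch.randn(1, 16, 8, device="cuda")   # hidden states on the GPU
    print(enc(h).shape)                        # works thanks to .to(x.device)
```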

StableCascade ControlNets do not work with ACN_AdvancedControlNetApply

Hi!

StableCascade ControlNet models are now supported by ComfyUI's built-in nodes.
But as soon as I try to run them with ACN_AdvancedControlNetApply (in my case the canny-cn model), I get the following error message at the start of the sampling process (so I guess loading and applying the Cascade ControlNet works):

!!! Exception during processing !!!
Traceback (most recent call last):
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1368, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1338, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 248, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 703, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 608, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 547, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 154, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 285, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 272, in forward
    return self.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 269, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 249, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 197, in calc_cond_uncond_batch
    c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 468, in get_control_inject
    return self.get_control_advanced(x_noisy, t, cond, batched_number)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 122, in get_control_advanced
    return super().get_control(x_noisy, t, cond, batched_number)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 494, in get_control
    return self.control_merge(control_input, mid, control_prev, x_noisy.dtype)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 562, in control_merge_inject
    x *= self.strength * self.calc_weight(i, x, len(control_input))
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 485, in calc_weight
    return self.weights.get(idx=idx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 58, in get
    return self.weights[idx]
           ~~~~~~~~~~~~^^^^^
IndexError: list index out of range

Prompt executed in 1.47 seconds

Please find attached the workflow that works with the ComfyUI built-in Controlnet-Node but not with the ACN_AdvancedControlNetApply.

Is there a way to fix it?
Idk if it has something to do with #78

Thanks for your great work and kind regards

SC-Controlnet-cannytest.json
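For context on the IndexError above: the traceback ends in calc_weight indexing a per-output-block weights list (self.weights[idx]), and a Stable Cascade ControlNet can produce a different number of control outputs than the SD models that list was sized for. A hedged sketch of the failure and a clamping workaround; the list length and values below are assumptions, not the node's real defaults:

```python
# Soft Weights style: one multiplier per controlnet output block,
# sized here for a typical SD controlnet (length and base are examples).
weights = [0.825 ** i for i in range(13)]

def calc_weight(idx: int) -> float:
    return weights[idx]  # IndexError once idx >= len(weights)

# If the model emits more control tensors than the list covers,
# clamping to the last entry avoids the crash:
def calc_weight_clamped(idx: int) -> float:
    return weights[min(idx, len(weights) - 1)]

print(calc_weight_clamped(15))  # falls back to weights[12]
```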

Sampler crash when using Color Palette and T2I color adapter

Hej :) Thanks for the extension, this implementation is really amazing!

I had really good result using Color Palette and the T2I color adapter (https://civitai.com/models/17220/controlnet-t2i-adapter-models).

But it seems to crash when doing more than 16 frames with the AnimateDiff Sampler,
so I thought I'd leave you the error message here.

Cheers!

Error occurred when executing AnimateDiffSampler:

index 1 is out of bounds for dimension 0 with size 1

File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sampler.py", line 295, in animatediff_sample
return super().sample(
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1237, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 74, in sample
return orig_comfy_sample(model, *args, **kwargs, callback=callback)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 97, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 781, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 686, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 638, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
File "E:\stable_diffusion\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "E:\stable_diffusion\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 326, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "E:\stable_diffusion\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 129, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 155, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 314, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 466, in sampling_function
cond, uncond = sliding_calc_cond_uncond_batch(
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 442, in sliding_calc_cond_uncond_batch
sub_cond_out, sub_uncond_out = calc_cond_uncond_batch(
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 314, in calc_cond_uncond_batch
c["control"] = control.get_control(input_x, timestep
, c, len(cond_or_uncond))
File "E:\stable_diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\control\control.py", line 321, in get_control
self.cond_hint_original = full_cond_hint_original[self.sub_idxs]
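The last frame of the traceback is the sliding-context code slicing the original cond hint by the context window's sub_idxs; with a single hint image (batch size 1) and more than 16 frames, the window indexes past dimension 0, which matches the "index 1 is out of bounds for dimension 0 with size 1" message. A rough sketch of the mismatch with assumed shapes, not the actual node code:

```python
import torch

full_cond_hint_original = torch.randn(1, 3, 512, 512)  # a single hint image
sub_idxs = [16, 17, 18, 19]                             # frames in one context window

try:
    window_hint = full_cond_hint_original[sub_idxs]
except IndexError as e:
    print(e)  # index 16 is out of bounds for dimension 0 with size 1

# Workaround: repeat the single hint across all frames before slicing,
# so every context window finds a matching hint.
num_frames = 32
expanded = full_cond_hint_original.expand(num_frames, -1, -1, -1)
window_hint = expanded[sub_idxs]  # shape (4, 3, 512, 512)
```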

[Update your ComfyUI to fix if you haven't updated since before Nov. 1st] Error occurred when executing KSampler: 'NoneType' object has no attribute 'strength'

Hello, I was using this without any problem until this morning, when I started getting this error log. I think the problem is the Load Advanced ControlNet Model node; when I try with the Load ControlNet Model node instead, the KSampler works well.

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/content/drive/MyDrive/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/ComfyUI/nodes.py", line 1236, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/content/drive/MyDrive/ComfyUI/nodes.py", line 1206, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 163, in animatediff_sample
return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_utils.py", line 185, in wrapped_function
return function_to_wrap(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/sample.py", line 97, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 785, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 690, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 630, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/sampling.py", line 613, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 323, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/external.py", line 125, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 311, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 537, in sliding_sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 410, in calc_cond_uncond_batch
c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet-main/control/control.py", line 359, in get_control_inject
if self.strength == 0.0 or self.current_timestep_keyframe.strength == 0.0:
AttributeError: 'NoneType' object has no attribute 'strength'
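As the issue title says, updating ComfyUI fixes this. The underlying pattern: current_timestep_keyframe is only populated once the sampling pre-run hooks fire, so an out-of-date sampling path reaches the strength check while the attribute is still None. A defensive sketch of the guard, with hypothetical names mirroring the traceback rather than the shipped fix:

```python
class ControlNetLike:
    def __init__(self, strength: float = 1.0):
        self.strength = strength
        # Populated by a pre-run hook; None until sampling actually starts.
        self.current_timestep_keyframe = None

    def should_skip_control(self) -> bool:
        # Guard the keyframe access so a skipped pre-run doesn't crash:
        keyframe_strength = (
            self.current_timestep_keyframe.strength
            if self.current_timestep_keyframe is not None
            else 1.0  # assume full strength while no keyframe is active
        )
        return self.strength == 0.0 or keyframe_strength == 0.0

print(ControlNetLike().should_skip_control())  # False, no crash
```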

Error when running ComfyUI in fp8 mode

When I use Load SparseCtrl with the Apply ControlNet node while running ComfyUI in fp8 mode via --fp8_e4m3fn-text-enc --fp8_e4m3fn-unet, it shows an error in the KSampler node:

  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\sampler_nodes.py", line 43, in sample
    latent_image = sampler.sample(model, add_noise, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, i, i+1, noise_mode, return_with_leftover_noise)[0]
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\a1111_compat.py", line 124, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\a1111_compat.py", line 42, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 346, in motion_sample
    latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 360, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\samplers.py", line 712, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\samplers.py", line 618, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\samplers.py", line 557, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\k_diffusion\sampling.py", line 154, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\samplers.py", line 281, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\samplers.py", line 271, in forward
    return self.apply_model(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\samplers.py", line 268, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 383, in evolved_sampling_function
    cond_pred, uncond_pred = comfy.samplers.calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\samplers.py", line 197, in calc_cond_uncond_batch
    c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 468, in get_control_inject
    return self.get_control_advanced(x_noisy, t, cond, batched_number)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 263, in get_control_advanced
    control = self.control_model(x=x_noisy.to(dtype), hint=self.cond_hint, timesteps=timestep.float(), context=context.to(dtype), y=y)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_sparsectrl.py", line 88, in forward
    h = module(h, emb, context)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 59, in forward
    return forward_timestep_embed(self, *args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 94, in forward_timestep_embed
    x = layer(x, emb)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 229, in forward
    return checkpoint(
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 189, in checkpoint
    return func(*inputs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 242, in _forward
    h = self.in_layers(x)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
    input = module(input)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 244, in forward
    return F.group_norm(
  File "C:\Users\admin\Downloads\apps\stabe_diffusion\ComfyUI\venvCuda118\lib\site-packages\torch\nn\functional.py", line 2558, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

Could the dev look into this error? Thanks.
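The crash comes from F.group_norm receiving an input in one dtype while the norm's weight and bias are stored in another (low-precision parameters from the fp8 flags versus a float input). A hedged sketch of the mismatch and the usual cast-to-a-common-dtype workaround; the repo's injected forward in utils.py may differ:

```python
import torch
import torch.nn.functional as F

def group_norm_safe(x, num_groups, weight=None, bias=None, eps=1e-5):
    # torch.group_norm requires input, weight and bias to share a dtype;
    # with low-precision parameters and a float32 input this raises
    # "mixed dtype (CPU): expect parameter to have scalar type of Float".
    w = weight.to(x.dtype) if weight is not None else None
    b = bias.to(x.dtype) if bias is not None else None
    return F.group_norm(x, num_groups, w, b, eps)

x = torch.randn(2, 8, 4, 4)                   # float32 activations
weight = torch.ones(8, dtype=torch.float16)   # stand-in low-precision params
bias = torch.zeros(8, dtype=torch.float16)
print(group_norm_safe(x, num_groups=4, weight=weight, bias=bias).shape)
```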

(IMPORT FAILED) ComfyUI-Advanced-ControlNet Nodes

Hello,

I'm having problems importing ComfyUI-Advanced-ControlNet Nodes

1 Kosinkadink (IMPORT FAILED) ComfyUI-Advanced-ControlNet Nodes: ControlNetLoaderAdvanced, DiffControlNetLoaderAdvanced, ScaledSoftControlNetWeights, SoftControlNetWeights, CustomControlNetWeights, SoftT2IAdapterWeights, CustomT2IAdapterWeights

This happens both on Google Colab and on my local ComfyUI install on Linux.

I tried disabling/enabling and uninstalling/reinstalling.

I don't know what else to try.

Does anyone have an idea how to fix it?

AttributeError: 'ControlNetAdvanced' object has no attribute 'model_sampling_current'

Does anyone have the same problem as me?
System: Linux

ERROR:root:Traceback (most recent call last):
File "/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/data/chenyiran/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/ComfyUI/custom_nodes/comfyui-animatediff/animatediff/sampler.py", line 295, in animatediff_sample
return super().sample(
File "/ComfyUI/nodes.py", line 1237, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/ComfyUI/nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/ComfyUI/comfy/sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/ComfyUI/comfy/samplers.py", line 728, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/ComfyUI/comfy/samplers.py", line 633, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/ComfyUI/comfy/samplers.py", line 589, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
File "/env_webui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "/env_webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "//ComfyUI/comfy/samplers.py", line 287, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "env_webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "/ComfyUI/comfy/k_diffusion/external.py", line 129, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/ComfyUI/comfy/k_diffusion/external.py", line 155, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/ComfyUI/comfy/samplers.py", line 275, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "/ComfyUI/comfy/samplers.py", line 253, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, model_options)
File "/ComfyUI/comfy/samplers.py", line 206, in calc_cond_uncond_batch
c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
File "/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/control/control.py", line 185, in get_control
return self.sliding_get_control(x_noisy, t, cond, batched_number)
File "/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/control/control.py", line 190, in sliding_get_control
control_prev = self.previous_controlnet.get_control(x_noisy, t, cond, batched_number)
File "/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/control/control.py", line 185, in get_control
return self.sliding_get_control(x_noisy, t, cond, batched_number)
File "/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/control/control.py", line 238, in sliding_get_control
timestep = self.model_sampling_current.timestep(t)
AttributeError: 'ControlNetAdvanced' object has no attribute 'model_sampling_current'
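The traceback goes through the older comfyui-animatediff extension's custom sampling path, and model_sampling_current is an attribute that newer ComfyUI code assigns to a controlnet during its pre-run preparation; a mismatched combination of versions reaches sliding_get_control before that assignment ever happens. Updating ComfyUI and the controlnet/sampler nodes is the practical fix; purely as illustration, a defensive shape of the access would be:

```python
class ControlNetAdvancedLike:
    def __init__(self):
        # Normally assigned during pre-run, before sampling begins.
        self.model_sampling_current = None

    def timestep_for(self, t):
        ms = getattr(self, "model_sampling_current", None)
        if ms is None:
            raise RuntimeError(
                "controlnet was never prepared for sampling - "
                "update ComfyUI and the controlnet/sampler nodes"
            )
        return ms.timestep(t)
```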

Is this the correct way to apply conditioning ONLY for the first n frames?

image

In my understanding, this will apply to frames 0 (batch_index_from) through 15 (batch_index_to_excl - 1). Since the strength goes from 1 to 0, it will immediately stop applying and have no influence on frame 17.

The one thing I don't understand is what exactly start_percent means. If I set it to 1 it applies through my whole sequence.
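For reference: that reading of batch_index_from/batch_index_to_excl is correct, and frames past the interpolation range simply get no latent keyframe, so the controlnet stops influencing them. start_percent, on the other hand, is a percentage of the sampling (denoising) process, not a frame index; it decides when during the steps the keyframe takes effect rather than which frames it touches. A small sketch of the frame-strength ramp under that reading (illustrative math, not the node's exact code):

```python
def interpolated_strengths(index_from: int, index_to_excl: int,
                           strength_from: float, strength_to: float) -> dict:
    """Linear strength ramp over batch indices, like a Latent Keyframe
    Interpolation going from 1.0 down to 0.0."""
    count = index_to_excl - index_from
    return {
        index_from + i: strength_from
        + (strength_to - strength_from) * (i / max(count - 1, 1))
        for i in range(count)
    }

# Frames 0..15 ramp from 1.0 to 0.0; frames 16+ have no keyframe at all,
# so the controlnet has no influence on them.
print(interpolated_strengths(0, 16, 1.0, 0.0))
```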

Error occurred when executing UltimateSDUpscale: 'NoneType' object has no attribute 'to'

I'm attempting to load a ControlLLLite model using the Advanced-ControlNet nodes together with the UltimateSDUpscale plugin, but it still does not work.

Full Error Info:

Error occurred when executing UltimateSDUpscale:

'NoneType' object has no attribute 'to'

File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\nodes.py", line 125, in upscale
processed = script.run(p=sdprocessing, _=None, tile_width=tile_width, tile_height=tile_height,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 553, in run
upscaler.process()
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 136, in process
self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 243, in start
return self.linear_process(p, image, rows, cols)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 178, in linear_process
processed = processing.process_images(p)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\modules\processing.py", line 122, in process_images
(samples,) = common_ksampler(p.model, p.seed, p.steps, p.cfg, p.sampler_name,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1325, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 241, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\__init__.py", line 130, in KSampler_sample
return _KSampler_sample(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 712, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\__init__.py", line 149, in sample
return _sample(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 618, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 557, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 613, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 281, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 271, in forward
return self.apply_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 1030, in apply_model
out = super().apply_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 268, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 248, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 222, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 85, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\SeargeSDXL\modules\custom_sdxl_ksampler.py", line 70, in new_unet_forward
x0 = old_unet_forward(self, x, timesteps, context, y, control, transformer_options, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 847, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 43, in forward_timestep_embed
x = layer(x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 613, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 440, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 189, in checkpoint
return func(*inputs)
^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 479, in _forward
n, context_attn1, value_attn1 = p(n, context_attn1, value_attn1, extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_lllite.py", line 72, in __call__
q = q + self.modules[module_pfx_to_q](q, self.control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_lllite.py", line 173, in forward
cx = self.conditioning1(control.cond_hint.to(x.device, dtype=x.dtype))

Here is my workflow:
xlUSDbug.json

Maybe it needs a fix on the UltimateSDUpscale side.
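The failing line is conditioning1(control.cond_hint.to(...)), so the ControlLLLite patch's cond_hint is still None when UltimateSDUpscale's inner sampling reaches it, presumably because the hint is only prepared for the outer sampling pass. A hedged guard sketch with a hypothetical structure; the actual control_lllite.py module differs:

```python
import torch

class LLLiteModuleLike:
    def __init__(self):
        self.cond_hint = None  # set when the hint image is prepared for sampling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.cond_hint is None:
            # Tiled upscalers re-enter sampling per tile; if the hint was
            # never prepared for this pass, contribute nothing instead of
            # calling .to() on None.
            return torch.zeros_like(x)
        cx = self.cond_hint.to(x.device, dtype=x.dtype)
        return cx  # stand-in for conditioning1(cx) and the downstream layers

m = LLLiteModuleLike()
print(m.forward(torch.randn(1, 4, 8, 8)).sum())  # zeros fallback, no crash
```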

SparseCtrl multiple keyframes input support

How can I implement the keyframe interpolation or video interpolation mentioned here?
image
I have played around with your tool for a while but still haven't figured out how to do it; all I could manage is a single-frame workflow.
[Image removed by Kosinkadink lol]

Error occurred when executing ControlNetLoaderAdvanced

Hi,
I got an error today that wasn't happening yesterday, and I don't know where the problem is.
Today I upgraded ComfyUI, and then this error was reported.

Error occurred when executing ControlNetLoaderAdvanced:

T2IAdapter.__init__() missing 2 required positional arguments: 'compression_ratio' and 'upscale_algorithm'

File "H:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\nodes.py", line 90, in load_controlnet
controlnet = load_controlnet(controlnet_path, timestep_keyframe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 555, in load_controlnet
return convert_to_advanced(control, timestep_keyframe=timestep_keyframe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 570, in convert_to_advanced
return T2IAdapterAdvanced.from_vanilla(v=control, timestep_keyframe=timestep_keyframe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 141, in from_vanilla
return T2IAdapterAdvanced(t2i_model=v.t2i_model, timestep_keyframes=timestep_keyframe, channels_in=v.channels_in, device=v.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 95, in __init__
super().__init__(t2i_model=t2i_model, channels_in=channels_in, device=device)

screenshot
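The traceback pins it down: a ComfyUI update added two required parameters, compression_ratio and upscale_algorithm, to T2IAdapter.__init__, and the then-current T2IAdapterAdvanced.__init__ did not pass them on; updating ComfyUI-Advanced-ControlNet resolves it. A minimal sketch of the signature mismatch and the forward-the-arguments fix (the classes are simplified and the argument values are placeholders, not ComfyUI's real defaults):

```python
class T2IAdapter:
    # Newer ComfyUI signature (simplified): two new required parameters.
    def __init__(self, t2i_model, channels_in, compression_ratio, upscale_algorithm):
        self.t2i_model = t2i_model
        self.channels_in = channels_in
        self.compression_ratio = compression_ratio
        self.upscale_algorithm = upscale_algorithm

class T2IAdapterAdvanced(T2IAdapter):
    def __init__(self, t2i_model, timestep_keyframes, channels_in,
                 compression_ratio=8, upscale_algorithm="nearest-exact"):
        self.timestep_keyframes = timestep_keyframes
        # The old call site omitted the two new arguments and raised
        # "missing 2 required positional arguments"; forwarding them fixes it.
        super().__init__(t2i_model=t2i_model, channels_in=channels_in,
                         compression_ratio=compression_ratio,
                         upscale_algorithm=upscale_algorithm)

adapter = T2IAdapterAdvanced(t2i_model=None, timestep_keyframes=None, channels_in=3)
print(adapter.compression_ratio, adapter.upscale_algorithm)
```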

Where do I put ControlNet models?

Dumb question but where do I put ControlNet models?

I've tried a bunch of different locations. There is no models folder inside the ComfyUI-Advanced-ControlNet folder, even though that is where every other extension stores its models. I just see "undefined" in the Load Advanced ControlNet Model node.

Seems like a super cool extension and I'd like to use it, thank you for your work!

Error occurred when executing ControlNetApplyAdvanced:

Screenshot 2024-01-02 160047
Error occurred when executing ControlNetApplyAdvanced:

'NoneType' object has no attribute 'copy'

File "D:\comfy ui\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\comfy ui\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\comfy ui\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\comfy ui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 750, in apply_controlnet
c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent))
^^^^^^^^^^^^^^^^
Does anyone know about this ComfyUI error? Please tell me how to fix it.

ControlLLLite issue with SDXL Animatediff

If you don't connect the model output, it works, but the result doesn't seem to use ControlLLLite.
Screenshot_185

If you connect the model output, you get this error:
Screenshot_186
Screenshot_187

I'm not sure if I'm connecting it incorrectly, but when using ControlLLLite, how exactly do I connect the model's in and out? Is there an example? I am using SDXL.

Thanks!

When bypassing a group node containing an Apply Advanced ControlNet 🛂🅐🅒🅝, the positive and negative conditioning get switched.

This is quite simple. The steps to reproduce simply consist of creating a group node with Apply Advanced ControlNet 🛂🅐🅒🅝, then bypassing that group node.
When bypassed, the positive and negative conditioning switch places: the positive gets linked to the negative and vice versa, which leads to... pretty creepy but sometimes hilarious stuff.

This is what we get when bypassing Apply Advanced ControlNet 🛂🅐🅒🅝 when ungrouped, so this is also what is expected when bypassing a grouped node: https://ibb.co/WWw3ZBT
But this is what we get when bypassing a grouped node containing Apply Advanced ControlNet 🛂🅐🅒🅝 (Warning: it hurts the eyes a little bit): https://ibb.co/x569F61

A Workflow example is much needed

image
Right now I don't know how to use it, but it does need functionality similar to the ControlNet plugin in sd-webui.
