fannovel16 / comfyui_controlnet_aux

ComfyUI's ControlNet Auxiliary Preprocessors

License: Apache License 2.0

Python 97.58% Batchfile 0.01% C++ 0.78% Cuda 1.63% Shell 0.01%

comfyui_controlnet_aux's People

Contributors

aiplayuser, cdsama, d8ahazard, drjkl, fannovel16, frantic, haohaocreates, huchenlei, hwiese1980, jetthu, kagevazquez, kbs0429, layer-norm, loner233, mijuku233, neo, prodalor, qkdm, scottnealon, sphantix, tequeter, tungnguyensipher, xingren23


comfyui_controlnet_aux's Issues

comfyui_controlnet_aux errors when ReActor or roop is installed alongside it

[comfyui_controlnet_aux] | STATUS -> Some nodes failed to load:
Failed to import module bae because TypeError: A Message class can only inherit from Message
Failed to import module hed because TypeError: A Message class can only inherit from Message
Failed to import module leres because TypeError: A Message class can only inherit from Message
Failed to import module lineart because TypeError: A Message class can only inherit from Message
Failed to import module lineart_anime because TypeError: A Message class can only inherit from Message
Failed to import module manga_line because TypeError: A Message class can only inherit from Message
Failed to import module midas because TypeError: A Message class can only inherit from Message
Failed to import module mlsd because TypeError: A Message class can only inherit from Message
Failed to import module openpose because TypeError: A Message class can only inherit from Message
Failed to import module pidinet because TypeError: A Message class can only inherit from Message
Failed to import module scribble because TypeError: A Message class can only inherit from Message
Failed to import module simple_cv2 because TypeError: A Message class can only inherit from Message
Failed to import module zoe because TypeError: A Message class can only inherit from Message

Check that you properly installed the dependencies.
If you think this is a bug, please report it on the github page (https://github.com/Fannovel16/comfyui_controlnet_aux/issues)

NameError: name 'os' is not defined

Getting this error when starting ComfyUI:

Traceback (most recent call last):
  File "C:\AI\ComfyUI\ComfyUI\nodes.py", line 1688, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\__init__.py", line 11, in <module>
    for pkg_name in os.listdir(str(Path(here, "src"))):
NameError: name 'os' is not defined

Running latest build of ComfyUI on Windows 11, Python 3.10.11.
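
The fix is a one-line missing import. A minimal sketch of what the top of __init__.py presumably needs (the listdir loop is quoted from the traceback; everything else is an assumption):

    import os                     # the missing import behind the NameError
    from pathlib import Path

    here = Path(__file__).parent  # assumed definition of `here`
    for pkg_name in os.listdir(str(Path(here, "src"))):
        pass  # the real module registers each bundled package here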

Workflow Debug: Inconsistent problems with "Dimension 1 Mismatch" / divisible-by-64 error.

First off, thanks for your excellent nodes!

Rather than retyping the issue, I first tried to figure this out in ComfyAnon's discussions over there:

comfyanonymous/ComfyUI#1524

I'm using the "Anime Lineart" preprocessor node to generate lines for the softedge controlnet. It works well when it works, but other times it doesn't and I can't figure out why. I know that the "dimension mismatch" is typically a thing where the input image dimensions aren't divisible by 64. I also have another workflow that used P2LDGan, which required images to be sub-1024, so I am doing that here as well.

I thought at first my math was wrong, but it seems to do the right calculations and resizes to those 64-divisible dimensions, so I am not sure what else might be the issue. I also thought that when it did work, that the reason was because the images were ALREADY conformant to what "Anime Lineart" needed, but then I note that the She-Hulk image I drew (which is natively not divisible by 64) processes as expected.

What am I missing?
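
A small hedged helper for the divisible-by-64 constraint described above, useful for double-checking the math; the name and the rounding-down choice are illustrative, not from the repo:

    def snap_to_64(width: int, height: int) -> tuple[int, int]:
        # Round each dimension down to the nearest multiple of 64; the latent
        # is 1/8 scale, so multiples of 64 keep every downsampling stage integral.
        return (width // 64) * 64, (height // 64) * 64

    print(snap_to_64(1000, 750))  # -> (960, 704)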

Canny line is blurred

The lines processed by Canny are blurred and unclear, and neither enlarging the original image nor modifying the size changes the result.
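
For reference, OpenCV's Canny produces a hard binary edge map, so blur usually comes from rescaling the map afterwards rather than from the detector itself. A quick standalone check with plain cv2 (not the node's exact code path):

    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 100, 200)  # output pixels are strictly 0 or 255
    cv2.imwrite("edges.png", edges)   # inspect this before any resizing step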

No module named 'timm' with Comfyui Windows Portable

Modules are not loading at startup because the timm library is missing.

[comfyui_controlnet_aux] | INFO -> Some nodes failed to load:
Failed to import module bae because ModuleNotFoundError: No module named 'timm'
Failed to import module hed because ModuleNotFoundError: No module named 'timm'
Failed to import module leres because ModuleNotFoundError: No module named 'timm'
Failed to import module lineart because ModuleNotFoundError: No module named 'timm'
Failed to import module lineart_anime because ModuleNotFoundError: No module named 'timm'
Failed to import module manga_line because ModuleNotFoundError: No module named 'timm'
Failed to import module midas because ModuleNotFoundError: No module named 'timm'
Failed to import module mlsd because ModuleNotFoundError: No module named 'timm'
Failed to import module openpose because ModuleNotFoundError: No module named 'timm'
Failed to import module pidinet because ModuleNotFoundError: No module named 'timm'
Failed to import module scribble because ModuleNotFoundError: No module named 'timm'
Failed to import module simple_cv2 because ModuleNotFoundError: No module named 'timm'
Failed to import module zoe because ModuleNotFoundError: No module named 'timm'

Check that you properly installed the dependencies.

--

Also tried the install.bat; it threw an error on the timm reference in requirements.txt:

ERROR: Invalid requirement: 'timm==0.6.13 #isl-org/MiDaS#215 (comment)'
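
A hedged workaround until that requirements line is fixed: install timm directly with the same interpreter ComfyUI uses (for the portable build, python_embeded\python.exe). For example:

    # Run with the interpreter ComfyUI itself uses; the version pin matches
    # the requirements.txt line quoted above.
    import importlib.util, subprocess, sys

    if importlib.util.find_spec("timm") is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", "timm==0.6.13"])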

Error occurred when executing FakeScribblePreprocessor:

Hello, I have a problem only with the Fake Scribble Lines (aka scribble_hed) node. comfy_controlnet_preprocessors is deleted and everything is updated.
[screenshot]

in console

got prompt
!!! Exception during processing !!!
Traceback (most recent call last):
  File "E:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\hed.py", line 43, in execute
    model = HEDdetector.from_pretrained(HF_MODEL_NAME, cache_dir=annotator_ckpts_path).to(model_management.get_torch_device())
NameError: name 'HEDdetector' is not defined

Prompt executed in 0.03 seconds

It always tries to connect to huggingface.co and can't work offline

'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /yzd-v/DWPose/resolve/main/yolox_l.onnx (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002551770DC00>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx
WARNING:huggingface_hub.utils._http:'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /yzd-v/DWPose/resolve/main/yolox_l.onnx (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002551770DC00>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx
'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002551770D450>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx
WARNING:huggingface_hub.utils._http:'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002551770D450>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx
Processing interrupted
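
If the model files are already in the local cache, huggingface_hub's documented offline switch should stop these HEAD requests. A minimal sketch; the variable must be set before huggingface_hub is imported:

    import os

    # Documented huggingface_hub environment variable: with the files already
    # cached, lookups are served locally and no connection is attempted.
    os.environ["HF_HUB_OFFLINE"] = "1"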

Error on second CN generation

Getting the following error after running two consecutive generations using the same CN module. The first one works normally; the second attempt gives the error. I'm using the workflows from https://huggingface.co/stabilityai/control-lora/tree/main/comfy-control-LoRA-workflows. The error happens for all workflows that use the custom nodes from here. After the first generation, if I change any parameter on the CN custom node, such as the thresholds for Canny, the error doesn't occur. If I generate once using Canny, then clear and load Depth, for example, the error does not occur. It seems that when the CN-processed image stays the same, the error occurs.

  • Using fresh install of Comfyui, manual install from Github.
  • custom nodes pulled from here via Git, and requirements installed
  • Windows 10 and python 3.11
  • RTX 2070, 8 GB VRAM
  • Any arguments used during comfy launch, such as lowvram, do not change the behavior

Tensor on device cuda:0 is not on the expected device meta!

File "C:\Users\jesse\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\nodes.py", line 1206, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\nodes.py", line 1176, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\sample.py", line 93, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\samplers.py", line 733, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\k_diffusion\sampling.py", line 154, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\samplers.py", line 323, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\samplers.py", line 311, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\samplers.py", line 289, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\samplers.py", line 241, in calc_cond_uncond_batch
c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\sd.py", line 809, in get_control
control = self.control_model(x=x_noisy, hint=self.cond_hint, timesteps=t, context=context, y=y)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\cldm\cldm.py", line 307, in forward
h = self.middle_block(h, emb, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 43, in forward
x = layer(x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\ldm\modules\attention.py", line 696, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\ldm\modules\attention.py", line 528, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 123, in checkpoint
return func(*inputs)
^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\ldm\modules\attention.py", line 628, in _forward
n = self.attn2(n, context=context_attn2, value=value_attn2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\ldm\modules\attention.py", line 421, in forward
q = self.to_q(x)
^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\ComfyUI\comfy\sd.py", line 862, in forward
return torch.nn.functional.linear(input, self.weight + (torch.mm(self.up.flatten(start_dim=1), self.down.flatten(start_dim=1))).reshape(self.weight.shape).type(self.weight.dtype), self.bias)

File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\_prims_common\wrappers.py", line 220, in _fn
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\_prims_common\wrappers.py", line 130, in _fn
result = fn(**bound.arguments)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\_refs\__init__.py", line 975, in add
return prims.add(a, b)
^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\_ops.py", line 287, in __call__
return self._op(*args, **kwargs or {})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\_prims\__init__.py", line 346, in _elementwise_meta
utils.check_same_device(*args_, allow_cpu_scalar_tensors=True)
File "C:\Users\jesse\anaconda3\envs\comfy\Lib\site-packages\torch\_prims_common\__init__.py", line 596, in check_same_device
raise RuntimeError(msg)
RuntimeError: Tensor on device cuda:0 is not on the expected device meta!

openpose json data

Hi,

Would it be possible to also output the openpose json data?

I would sometimes like to adjust the detected pose when it gets something wrong in the openpose editor, but currently I can only estimate and rebuild the pose from the image.
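
For illustration, a hypothetical sketch of the kind of output being asked for, serializing detected keypoints into OpenPose-style JSON. The `poses` structure here is an assumption, not the node's actual API:

    import json

    def poses_to_json(poses):
        # `poses` is assumed to be a list of detections, each carrying body
        # keypoints with x, y and a confidence score.
        return json.dumps({
            "people": [
                {"pose_keypoints_2d": [v for kp in pose["body"]
                                       for v in (kp["x"], kp["y"], kp["score"])]}
                for pose in poses
            ]
        })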

Thanks in advance!

top_pool_forward miss in module _ext

[screenshot]
Error occurred when executing AIO_Preprocessor:

top_pool_forward miss in module _ext

File "D:\Blender_ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\Blender_ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\Blender_ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux_init_.py", line 102, in execute
return getattr(aux_class(), aux_class.FUNCTION)(**params)
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\uniformer.py", line 15, in semantic_segmentate
from controlnet_aux.uniformer import UniformerSegmentor
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\uniformer_init_.py", line 2, in
from .inference import init_segmentor, inference_segmentor, show_result_pyplot
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\uniformer\inference.py", line 8, in
from custom_mmpkg.custom_mmseg.models import build_segmentor
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\custom_mmseg\models_init_.py", line 4, in
from .decode_heads import * # noqa: F401,F403
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\custom_mmseg\models\decode_heads_init_.py", line 4, in
from .cc_head import CCHead
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\custom_mmseg\models\decode_heads\cc_head.py", line 7, in
from custom_mmpkg.custom_mmcv.ops import CrissCrossAttention
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\custom_mmcv\ops_init_.py", line 10, in
from .corner_pool import CornerPool
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\custom_mmcv\ops\corner_pool.py", line 8, in
ext_module = ext_loader.load_ext('_ext', [
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\custom_mmcv\utils\ext_loader.py", line 15, in load_ext
assert hasattr(ext, fun), f'{fun} miss in module {name}'

Preprocessor resolution is always 512x512

Hello,
It seems that regardless of the input image resolution, the lineart preprocessor always runs at 512x512; the image is then upscaled to whatever the input resolution is set to after it has been processed. You can see this by saving out the processed image and looking at the upscaled pixels.

I have tested this by changing the default resolution properties in class LineartDetector line 126.
def __call__(self, input_image, coarse=False, detect_resolution=512, image_resolution=512, output_type="pil", **kwargs):

Setting the default 512 resolutions here to higher values such as 1024 will process at 1024 instead of 512. I guess the resolutions are never set so they are always default.
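
If that diagnosis is right, passing the resolutions explicitly at call time should sidestep the defaults. A hedged sketch using the signature quoted above (the from_pretrained call is modeled on tracebacks elsewhere in this tracker, not verified against the vendored copy):

    from PIL import Image
    from controlnet_aux import LineartDetector

    lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")
    detected_map = lineart(Image.open("input.png"),
                           detect_resolution=1024,   # resolution the model sees
                           image_resolution=1024)    # resolution of the returned map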

Thanks for your time and making such great tools!

Error with MIDAS/Fake Scribble - positional argument: 'resolution'

got prompt
3
!!! Exception during processing !!!
Traceback (most recent call last):
File "S:\Ai\Repos\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "S:\Ai\Repos\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "S:\Ai\Repos\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
TypeError: MIDAS_Depth_Map_Preprocessor.execute() missing 1 required positional argument: 'resolution'

OF-COCO and OF-ADE20K error

All the models for the preprocessors can be downloaded automatically, except for these two. If I want to download them manually, where should I store them? The error message is as follows.

Error occurred when executing OneFormer-COCO-SemSegPreprocessor:

module 'PIL.Image' has no attribute 'LINEAR'

File "D:\AI\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\oneformer.py", line 15, in semantic_segmentate
from controlnet_aux.oneformer import OneformerSegmentor
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\oneformer_init_.py", line 2, in
from .api import make_detectron2_model, semantic_run
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\oneformer\api.py", line 7, in
from custom_detectron2.projects.deeplab import add_deeplab_config
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\projects\deeplab_init_.py", line 4, in
from .resnet import build_resnet_deeplab_backbone
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\projects\deeplab\resnet.py", line 6, in
from custom_detectron2.modeling import BACKBONE_REGISTRY
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\modeling_init_.py", line 20, in
from .meta_arch import (
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\modeling\meta_arch_init_.py", line 6, in
from .panoptic_fpn import PanopticFPN
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\modeling\meta_arch\panoptic_fpn.py", line 14, in
from .rcnn import GeneralizedRCNN
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\modeling\meta_arch\rcnn.py", line 9, in
from custom_detectron2.data.detection_utils import convert_image_to_rgb
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\data_init_.py", line 2, in
from . import transforms # isort:skip
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\data\transforms_init_.py", line 4, in
from .transform import *
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\data\transforms\transform.py", line 36, in
class ExtentTransform(Transform):
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2\data\transforms\transform.py", line 46, in ExtentTransform
def init(self, src_rect, output_size, interp=Image.LINEAR, fill=0):

Most line preprocessors report errors

  1. White lines on a black background can be extracted normally, but part of the result is automatically cropped into the upper-right corner.

  2. After rebuilding the ControlNet workflow, most line preprocessors report errors during the sampling phase.

Error occurred when executing KSamplerAdvanced:

tuple index out of range

File "D:\AI\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ComfyUI\ComfyUI\nodes.py", line 1270, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
File "D:\AI\ComfyUI\ComfyUI\nodes.py", line 1206, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\hacky.py", line 9, in informative_sample
return original_sample(*args, **kwargs)
File "D:\AI\ComfyUI\ComfyUI\comfy\sample.py", line 93, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\AI\ComfyUI\ComfyUI\comfy\samplers.py", line 740, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
File "D:\AI\ComfyUI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\ComfyUI\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "D:\AI\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\ComfyUI\ComfyUI\comfy\samplers.py", line 321, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "D:\AI\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "D:\AI\ComfyUI\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\AI\ComfyUI\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\AI\ComfyUI\ComfyUI\comfy\samplers.py", line 309, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "D:\AI\ComfyUI\ComfyUI\comfy\samplers.py", line 287, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
File "D:\AI\ComfyUI\ComfyUI\comfy\samplers.py", line 241, in calc_cond_uncond_batch
c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
File "D:\AI\ComfyUI\ComfyUI\comfy\controlnet.py", line 153, in get_control
self.cond_hint = comfy.utils.common_upscale(self.cond_hint_original, x_noisy.shape[3] * 8, x_noisy.shape[2] * 8, 'nearest-exact', "center").to(self.control_model.dtype).to(self.device)
File "D:\AI\ComfyUI\ComfyUI\comfy\utils.py", line 351, in common_upscale
old_width = samples.shape[3]
IndexError: tuple index out of range

ComfyUI portable version, installing the latest git repo: got a dependency error

DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at pypa/pip#12063

Error on controlnet_aux\dwpose

Hi

I get an error after generating about 90-130 images using the API!
I have enough memory, CPU and VRAM!

Speed: 15.6ms preprocess, 15.6ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 640)
!!! Exception during processing !!!
Traceback (most recent call last):
File "D:\sl\win\Control\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\sl\win\Control\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\sl\win\Control\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\sl\win\Control\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py", line 28, in estimate_pose
out = common_annotator_call(model, image, include_hand=detect_hand, include_face=detect_face, include_body=detect_body)
File "D:\sl\win\Control\custom_nodes\comfyui_controlnet_aux\utils.py", line 28, in common_annotator_call
np_result = model(np_image, output_type="np", **kwargs)
File "D:\sl\win\Control\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose_init_.py", line 204, in call
poses = self.detect_poses(input_image)
File "D:\sl\win\Control\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose_init_.py", line 180, in detect_poses
keypoints_info = self.dw_pose_estimation(oriImg.copy())
File "D:\sl\win\Control\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose\wholebody.py", line 33, in call
keypoints, scores = inference_pose(self.session_pose, det_result, oriImg)
File "D:\sl\win\Control\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose\cv_ox_pose.py", line 352, in inference_pose
outputs = inference(session, resized_img)
File "D:\sl\win\Control\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose\cv_ox_pose.py", line 70, in inference
outputs = sess.forward(outNames)
cv2.error: bad allocation

Thanks
Best regards

Some preprocessors create maps with an offset.

Since around 07.09.2023, HED and depth maps have been generated with an offset; when used together with Canny, this results in doubled edges with an offset.

It works well on commit 360fcefcff250e9245db9eede6515139ac5bb4e5; the issue appears somewhere after that commit.

Reason for using custom_mmpkg?

Hello, I have a question about the use of custom_mmpkg. In the src folder it uses 'custom_mmpkg/mmcv'; why embed this in src instead of something like 'pip install mmcv'? Because it is embedded in the src folder, I get a conflict with the mmcv version used by another module in custom_nodes.

[ERROR!!!] FakeScribblePreprocessor - 'HEDdetector' is not defined

[screenshot]
This is an old error, perhaps because HED v1 is blocked but the node is not connected to HED v1.1?

Error occurred when executing FakeScribblePreprocessor:

name 'HEDdetector' is not defined

File "D:\Blender_ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\Blender_ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\Blender_ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\hed.py", line 43, in execute
model = HEDdetector.from_pretrained(HF_MODEL_NAME, cache_dir=annotator_ckpts_path).to(model_management.get_torch_device())

Error occurred when executing Zoe-DepthMapPreprocessor

Running this on macOS with MPS support in nightly PyTorch.

!!! Exception during processing !!!
Traceback (most recent call last):
  File "~/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "~/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "~/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/zoe.py", line 18, in execute
    return (common_annotator_call(model, image), )
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/utils.py", line 28, in common_annotator_call
    np_result = model(np_image, output_type="numpy", **kwargs)
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/__init__.py", line 56, in __call__
    depth = self.model.infer(image_depth)
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/zoedepth/models/depth_model.py", line 126, in infer
    return self.infer_with_flip_aug(x, pad_input=pad_input, **kwargs)
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/zoedepth/models/depth_model.py", line 110, in infer_with_flip_aug
    out = self._infer_with_pad_aug(x, pad_input=pad_input, **kwargs)
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/zoedepth/models/depth_model.py", line 88, in _infer_with_pad_aug
    out = self._infer(x)
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/zoedepth/models/depth_model.py", line 55, in _infer
    return self(x)['metric_depth']
  File "~/.pyenv/versions/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "~/.pyenv/versions/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/zoedepth/models/zoedepth/zoedepth_v1.py", line 144, in forward
    rel_depth, out = self.core(x, denorm=denorm, return_rel_depth=True)
  File "~/.pyenv/versions/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "~/.pyenv/versions/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/zoedepth/models/base_models/midas.py", line 263, in forward
    x = self.prep(x)
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/zoedepth/models/base_models/midas.py", line 187, in __call__
    return self.normalization(self.resizer(x))
  File "~/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux/zoe/zoedepth/models/base_models/midas.py", line 174, in __call__
    return nn.functional.interpolate(x, (height, width), mode='bilinear', align_corners=True)
  File "~/.pyenv/versions/comfyui/lib/python3.10/site-packages/torch/nn/functional.py", line 3926, in interpolate
    raise TypeError(
TypeError: expected size to be one of int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], but got size with types [<class 'numpy.int64'>, <class 'numpy.int64'>]

I have no idea if this is related, but I noticed that in custom_nodes/comfyui_controlnet_aux/node_wrappers/zoe.py, the import of controlnet_aux fails.

More likely I'm thinking this must have something to do with the way torch is casting integers.
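
That guess matches the traceback: torch.nn.functional.interpolate rejects numpy.int64 sizes on this build. A self-contained reproduction plus the usual fix, casting to plain Python ints (presumably at the midas.py call site shown above):

    import numpy as np
    import torch
    import torch.nn.functional as F

    x = torch.rand(1, 3, 10, 10)
    height, width = np.int64(384), np.int64(512)   # numpy ints, as in the resizer
    # F.interpolate(x, (height, width), ...) raises the TypeError above;
    # casting to int is the conventional fix:
    out = F.interpolate(x, (int(height), int(width)),
                        mode="bilinear", align_corners=True)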

Error: "Message class can only inherit from Message"

I know that this issue also existed in previous versions of the extension and I never managed to get it working with the methods mentioned earlier, so I'm going to leave this here.

What I already tried:

  • Do Comfy install in a venv
  • Clean venv and install everything manually
  • Clean install python and reinstall everything

My setup (something might mess things up and that's why it's not working):

  • Pyenv-win (used to manage python versions)
  • venv
  • manual ComfyUI install (via git clone)
  • manual extension install (via install.bat file)

The error:

Full error log from comfyui_controlnet_aux:
Traceback (most recent call last):
  File "C:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux\__init__.py", line 26, in load_nodes
    module = importlib.import_module(
  File "C:\Users\janlu\.pyenv\pyenv-win\versions\3.10.11\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\mediapipe_face.py", line 2, in <module>
    from controlnet_aux.mediapipe_face import MediapipeFaceDetector
  File "C:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mediapipe_face\__init__.py", line 9, in <module>
    from .mediapipe_face_common import generate_annotation
  File "C:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mediapipe_face\mediapipe_face_common.py", line 5, in <module>
    import mediapipe as mp
  File "C:\AI\ComfyUI\venv\lib\site-packages\mediapipe\__init__.py", line 16, in <module>
    import mediapipe.python.solutions as solutions
  File "C:\AI\ComfyUI\venv\lib\site-packages\mediapipe\python\solutions\__init__.py", line 17, in <module>
    import mediapipe.python.solutions.drawing_styles
  File "C:\AI\ComfyUI\venv\lib\site-packages\mediapipe\python\solutions\drawing_styles.py", line 20, in <module>
    from mediapipe.python.solutions.drawing_utils import DrawingSpec
  File "C:\AI\ComfyUI\venv\lib\site-packages\mediapipe\python\solutions\drawing_utils.py", line 24, in <module>
    from mediapipe.framework.formats import detection_pb2
  File "C:\AI\ComfyUI\venv\lib\site-packages\mediapipe\framework\formats\detection_pb2.py", line 14, in <module>
    from mediapipe.framework.formats import location_data_pb2 as mediapipe_dot_framework_dot_formats_dot_location__data__pb2
  File "C:\AI\ComfyUI\venv\lib\site-packages\mediapipe\framework\formats\location_data_pb2.py", line 14, in <module>
    from mediapipe.framework.formats.annotation import rasterization_pb2 as mediapipe_dot_framework_dot_formats_dot_annotation_dot_rasterization__pb2
  File "C:\AI\ComfyUI\venv\lib\site-packages\mediapipe\framework\formats\annotation\rasterization_pb2.py", line 20, in <module>
    _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'mediapipe.framework.formats.annotation.rasterization_pb2', _globals)
  File "C:\AI\ComfyUI\venv\lib\site-packages\google\protobuf\internal\builder.py", line 108, in BuildTopDescriptorsAndMessages
    module[name] = BuildMessage(msg_des)
  File "C:\AI\ComfyUI\venv\lib\site-packages\google\protobuf\internal\builder.py", line 82, in BuildMessage
    create_dict[name] = BuildMessage(nested_msg)
  File "C:\AI\ComfyUI\venv\lib\site-packages\google\protobuf\internal\builder.py", line 85, in BuildMessage
    message_class = _reflection.GeneratedProtocolMessageType(
TypeError: A Message class can only inherit from Message



[comfyui_controlnet_aux] | STATUS -> Some nodes failed to load:
        Failed to import module mediapipe_face because TypeError: A Message class can only inherit from Message
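
Not a fix, but a pointer: this TypeError is commonly reported when the installed google.protobuf version doesn't match what mediapipe's generated _pb2 files expect. A quick check of what is actually loaded (the compatible range depends on your mediapipe release):

    import google.protobuf

    # Compare this against the protobuf requirement in mediapipe's package
    # metadata (e.g. what `pip show mediapipe` lists).
    print("protobuf:", google.protobuf.__version__)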

Nodes don't appear after I installed this repo

Traceback (most recent call last):
File "E:\SDUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1725, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\SDUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux-main\__init__.py", line 2, in <module>
from .utils import here, create_node_input_types
File "E:\SDUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux-main\utils.py", line 4, in <module>
import cv2
ModuleNotFoundError: No module named 'cv2'

Cannot import E:\SDUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux-main module for custom nodes: No module named 'cv2'

Models are recreated every time - Hugely inefficient

Hi, I was investigating why these nodes are so slow compared to my own code, and it seems that the pipelines/models are instantiated anew on each run. I have 24 GB of VRAM and would prefer to keep everything loaded for massive speedups instead. I believe this should be possible; ComfyUI does it with its model loaders.
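
A minimal sketch of the kind of caching being requested, keeping detector instances alive between runs; the names and the from_pretrained call are assumptions modeled on tracebacks elsewhere in this tracker, not the repo's actual loader:

    _DETECTOR_CACHE: dict = {}

    def get_detector(detector_cls, repo_id, **kwargs):
        # Instantiate each (class, repo, options) combination once and reuse
        # it instead of rebuilding the pipeline on every node execution.
        key = (detector_cls, repo_id, tuple(sorted(kwargs.items())))
        if key not in _DETECTOR_CACHE:
            _DETECTOR_CACHE[key] = detector_cls.from_pretrained(repo_id, **kwargs)
        return _DETECTOR_CACHE[key]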

Cheers

Sam model selection option

I noticed that the automatically downloaded SAM model is the mobile one (only around 40 MB) and the segmentation result is not very good. Is it possible to use another SAM model, or to have an option to select which SAM model is used? Thank you.

Latest commit broke all my annotators

Various errors, all relating to cv2 not having the specified modules (?) that were being called.

As I had updated many different things at once, I spent hours troubleshooting thinking it was something else, reinstalling OpenCV-Python, NumPy, pip... Finally I tried rolling this repo back to the previous commit and all problems are gone now (though I can't run the new LoRA ControlNets, but I see no reason yet to think that's related).

I'm not at my computer anymore, but I still have many of the error messages on screen, which I'll add later, both for general information and so others can find this post/solution if they hit the same errors.

General info:
GTX 1660 Ti, 6 GB VRAM
Windows 11

Cannot find the nodes in the menu

I installed this custom node with the Manager.
When I try to find the openpose node, I can't find it (or any other node) in the menu...
Is there some post-installation step to do?

Keypose Preprocessor?

I was wondering if there are any keypose preprocessor nodes. With OpenPose, there are preprocessors that allow me to extract the stick-figure image from a photo of a person and then apply that as ControlNet conditioning. But with keypose, I cannot find any such preprocessor, which means that "T2I Adapter Keypose" is not usable yet.

TypeError: AIO_Preprocessor.execute() got an unexpected keyword argument 'resolution'

!!! Exception during processing !!!
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
TypeError: AIO_Preprocessor.execute() got an unexpected keyword argument 'resolution'

[screenshot]

ImportError: attempted relative import beyond top-level package

It seems that the relative pathing for the HWC3/resize_image import is not working properly.

Using Ubuntu 22.

Traceback (most recent call last):
File "/home/apps/ComfyUI/nodes.py", line 1647, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/apps/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/__init__.py", line 66, in <module>
import impact.impact_server  # to load server api
File "/home/apps/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_server.py", line 10, in <module>
import impact.core as core
File "/home/apps/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/core.py", line 2, in <module>
from segment_anything import SamPredictor
File "/home/apps/ComfyUI/custom_nodes/comfy_controlnet_aux/src/controlnet_aux/segment_anything/__init__.py", line 17, in <module>
from ..util import HWC3, resize_image
ImportError: attempted relative import beyond top-level package

Cannot import /home/apps/ComfyUI/custom_nodes/ComfyUI-Impact-Pack module for custom nodes: attempted relative import beyond top-level package
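
A hedged sketch of one conventional workaround for this class of error: try the package-absolute import first and fall back to the relative one, so the module still resolves when another extension imports it from outside its package root. This is a hypothetical rewrite of the failing line, not the repo's fix:

    # In segment_anything/__init__.py (per the traceback above):
    try:
        from controlnet_aux.util import HWC3, resize_image
    except ImportError:
        from ..util import HWC3, resize_image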

Error occurred when executing DWPreprocessor: Unknown C++ exception from OpenCV code | comfyui portable version

When I run the workflow from the following file I get the following error:

This error appears in the console:

got prompt
C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose\__init__.py:175: UserWarning: Currently DWPose doesn't support CUDA out-of-the-box.
warnings.warn("Currently DWPose doesn't support CUDA out-of-the-box.")

Error occurred when executing DWPreprocessor:
Unknown C++ exception from OpenCV code

File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py", line 28, in estimate_pose
out = common_annotator_call(model, image, include_hand=detect_hand, include_face=detect_face, include_body=detect_body)
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\utils.py", line 28, in common_annotator_call
np_result = model(np_image, output_type="np", **kwargs)
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose_init_.py", line 204, in call
poses = self.detect_poses(input_image)
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose_init_.py", line 180, in detect_poses
keypoints_info = self.dw_pose_estimation(oriImg.copy())
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose\wholebody.py", line 29, in call
det_result = inference_detector(self.session_det, oriImg)
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\dwpose\cv_ox_det.py", line 103, in inference_detector
output = session.forward(outNames)

example.zip

Merge opportunity

I have implemented the LaMa preprocessor from the inpaint_only+lama preprocessor in Automatic1111.
Please help me finish the implementation of the LaMa preprocessor in my repo. I'm missing some final step on how to use the generated image as an efficient control; it seems to give either too much control or too little.
When the preprocessor is ready, I could merge it into this repo to finally achieve promptless in- and outpainting in Comfy.

ModuleNotFoundError for handful of nodes

Saw the other repo was archived, so I just now replaced it with this one. When I try to use any of the nodes, though, I get an error. So I looked back at the startup log and saw this. I get a Traceback for each module, but I'm just showing the last one here for brevity (they're all the same ModuleNotFoundError issue).

I installed via ComfyUI-Manager. After I saw the errors I manually installed the requirements.txt file, too, to rule that out as an issue. The error tells me to make sure I installed the dependencies correctly, but I don't see any more dependencies to install beyond this.

Traceback (most recent call last):
  File "/home/user/Work/StableDiffusion/ComfyUI/custom_nodes/comfyui_controlnet_aux/__init__.py", line 22, in load_nodes
    module = importlib.import_module(
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/user/Work/StableDiffusion/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/zoe.py", line 2, in <module>
    from controlnet_aux.zoe import ZoeDetector
ModuleNotFoundError: No module named 'controlnet_aux.zoe'



[comfyui_controlnet_aux] | INFO -> Some nodes failed to load:
        Failed to import module manga_line because ModuleNotFoundError: No module named 'controlnet_aux.manga_line'
        Failed to import module openpose because ModuleNotFoundError: No module named 'controlnet_aux.dwpose'
        Failed to import module oneformer because ModuleNotFoundError: No module named 'controlnet_aux.oneformer'
        Failed to import module scribble because ModuleNotFoundError: No module named 'controlnet_aux.scribble'
        Failed to import module leres because ModuleNotFoundError: No module named 'controlnet_aux.leres'
        Failed to import module uniformer because ModuleNotFoundError: No module named 'controlnet_aux.uniformer'
        Failed to import module zoe because ModuleNotFoundError: No module named 'controlnet_aux.zoe'

Torch versions:

torch==2.1.0.dev20230726+rocm5.6
torchaudio==2.1.0.dev20230727+rocm5.6
torchvision==0.16.0.dev20230727+rocm5.6

When I look in the ckpts folder I see ckpts/models--lllyasviel--Annotators, which seems wrong? Especially since elsewhere in the code I see references to the path lllyasviel/Annotators. Could it be there's a Windows problem in some of this, so there's a pathing issue where ckpts/models\lllyasviel\Annotators is translating wrong and messing up the imports?
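
For what it's worth, models--lllyasviel--Annotators looks like the standard huggingface_hub cache layout ("models--{org}--{repo}") rather than a Windows pathing bug. A hedged way to see where a file really lands; the filename here is illustrative, not necessarily the one the failing node wants:

    from huggingface_hub import hf_hub_download

    # Downloads into the same models--{org}--{repo} structure seen in ckpts.
    path = hf_hub_download("lllyasviel/Annotators", "ControlNetHED.pth",
                           cache_dir="ckpts")
    print(path)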

Error occurred when executing UniFormer-SemSegPreprocessor

!!! Exception during processing !!!
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\uniformer.py", line 18, in semantic_segmentate
out = common_annotator_call(model, image)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\utils.py", line 28, in common_annotator_call
np_result = model(np_image, output_type="np", **kwargs)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\uniformer_init_.py", line 83, in call
detected_map = self.inference(input_image)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\uniformer_init
.py", line 61, in _inference
result = inference_segmentor(self.model, img)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\uniformer\inference.py", line 94, in inference_segmentor
data = collate([data], samples_per_gpu=1)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\parallel\collate.py", line 79, in collate
return {
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\parallel\collate.py", line 80, in
key: collate([d[key] for d in batch], samples_per_gpu)
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\parallel\collate.py", line 77, in collate
return [collate(samples, samples_per_gpu) for samples in transposed]
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\parallel\collate.py", line 77, in
return [collate(samples, samples_per_gpu) for samples in transposed]
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\parallel\collate.py", line 84, in collate
return default_collate(batch)
File "E:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\data_utils\collate.py", line 265, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
File "E:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\data_utils\collate.py", line 150, in collate
raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'custom_mmpkg.mmcv.parallel.data_container.DataContainer'>

The SD1.5 version of ControlNet reported an error when loading the model. What should I do?

Error occurred when executing ControlNetLoader:

module 'comfy.sd' has no attribute 'ModelPatcher'

File "E:\comonAI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\comonAI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\comonAI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\comonAI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 602, in load_controlnet
controlnet = comfy.controlnet.load_controlnet(controlnet_path)
File "E:\comonAI\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 394, in load_controlnet
control = ControlNet(control_model, global_average_pooling=global_average_pooling)
File "E:\comonAI\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIT\AITemplate\AITemplate.py", line 347, in __init__
self.control_model_wrapped = comfy.sd.ModelPatcher(self.control_model, load_device=comfy.model_management.get_torch_device(), offload_device=comfy.model_management.unet_offload_device())

No module named 'timm'

I'm running ComfyUI on Google Colab and trying to run the comfyui_controlnet_aux test_cn_aux_full.json workflow (the test with all the variations). I'm getting four errors, all "No module named 'timm'":
Error occurred when executing SAMPreprocessor:
No module named 'timm'
https://pastebin.com/UHd3Dq6Z

Error occurred when executing Zoe-DepthMapPreprocessor:
No module named 'timm'
https://pastebin.com/KqM5bK6u

Error occurred when executing MiDaS-NormalMapPreprocessor:
No module named 'timm'
https://pastebin.com/V6aWDtth

Error occurred when executing MiDaS-DepthMapPreprocessor:
No module named 'timm'
https://pastebin.com/8Nnf4NX6

Also, I'm getting two errors, both "No module named 'fvcore'":
Error occurred when executing OneFormer-COCO-SemSegPreprocessor:
No module named 'fvcore'
https://pastebin.com/iLpnCP8G

Error occurred when executing OneFormer-ADE20K-SemSegPreprocessor:
No module named 'fvcore'
https://pastebin.com/teJcyQJT

And one more:
Error occurred when executing FakeScribblePreprocessor:
name 'HEDdetector' is not defined
https://pastebin.com/JnFP0LxF

Hope this helps.
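A hedged note: timm and fvcore are ordinary pip packages (both named in the errors above), so on Colab installing them into the same interpreter that runs ComfyUI usually clears these import errors; the FakeScribblePreprocessor failure ("'HEDdetector' is not defined") looks like a knock-on effect of an earlier failed import. A minimal sketch, with no versions pinned:

# Install the missing packages into the interpreter currently running
# ComfyUI (works from a Colab cell or any Python session).
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "timm", "fvcore"])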

The AUX Tile preprocessor's outputs are very blurry.

Some downsampling comparisons against the older ControlNet preprocessors repo. From left to right: base image > value 1 > 2 > 3.

New (AUX): [image: "new"]

Old: [image: "old"]

As you can see, in the old tile ControlNet preprocessor, pyrUp_iters 1 = base image.
In AUX, 1 is slightly blurrier than the base and actually closer to pyrUp_iters 2 in the old preprocessor.

Also, can we choose the downsampling rate as a decimal number? (See the sketch below.)
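For context, a minimal sketch of the two behaviours being compared, assuming the old preprocessor applied integer pyrDown/pyrUp iterations while a decimal rate would need a plain resize. This is illustrative only, not this repo's actual implementation, and the function names are made up:

import cv2

def tile_pyr(img, pyrup_iters: int):
    # Integer-only: each iteration halves with pyrDown then re-doubles with
    # pyrUp, which low-pass filters (blurs) the image. Assumes even image
    # dimensions so the sizes round-trip exactly.
    for _ in range(pyrup_iters):
        img = cv2.pyrUp(cv2.pyrDown(img))
    return img

def tile_fractional(img, rate: float):
    # A decimal downsampling rate needs resize instead of pyramid steps.
    h, w = img.shape[:2]
    small = cv2.resize(img, (max(1, round(w / rate)), max(1, round(h / rate))),
                       interpolation=cv2.INTER_AREA)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)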

No module named 'mmcv' and 'addict'

I installed the node and checked my dependencies. ComfyUI itself runs, but I see this error in my terminal:

Full error log from comfyui_controlnet_aux:
Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\uniformer\inference.py", line 5, in <module>
    import mmcv as mmcv
ModuleNotFoundError: No module named 'mmcv'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\__init__.py", line 23, in load_nodes
    module = importlib.import_module(
  File "importlib\__init__.py", line 126, in import_module
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\uniformer.py", line 2, in <module>
    from controlnet_aux.uniformer import UniformerSegmentor
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\uniformer\__init__.py", line 2, in <module>
    from .inference import init_segmentor, inference_segmentor, show_result_pyplot
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\uniformer\inference.py", line 11, in <module>
    import custom_mmpkg.mmcv as mmcv
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\__init__.py", line 4, in <module>
    from .fileio import *
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\fileio\__init__.py", line 2, in <module>
    from .file_client import BaseStorageBackend, FileClient
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\fileio\file_client.py", line 15, in <module>
    from custom_mmpkg.mmcv.utils.misc import has_method
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\utils\__init__.py", line 3, in <module>
    from .config import Config, ConfigDict, DictAction
  File "C:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg\mmcv\utils\config.py", line 16, in <module>
    from addict import Dict
ModuleNotFoundError: No module named 'addict'

[comfyui_controlnet_aux] | STATUS -> Some nodes failed to load:
Failed to import module uniformer because ModuleNotFoundError: No module named 'addict'

Check that you properly installed the dependencies.
If you think this is a bug, please report it on the github page (https://github.com/Fannovel16/comfyui_controlnet_aux/issues)
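A hedged suggestion: 'addict' is an ordinary pip dependency pulled in by this repo's requirements.txt, so installing the requirements into the portable build's embedded Python should fix the uniformer import. The path below assumes the standard portable layout:

# Run from the ComfyUI_windows_portable directory; uses whichever
# interpreter executes this snippet (the embedded one, for the portable
# build) to install the node's requirements.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-r",
    r"ComfyUI\custom_nodes\comfyui_controlnet_aux\requirements.txt",
])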

Resolution always 512px

Hello,

I'm inputting a ~3k file, but the preprocessors are always returning 512px results. Am I doing something wrong?
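A hedged answer: these preprocessor nodes expose a resolution input that defaults to 512, and the detected map is resized to that value regardless of the input size, so raising that input is the usual fix. The call signature below is an assumption based on this repo's node wrappers, not verified code:

# Illustrative only: the wrapper resizes the detected map to `resolution`,
# so a ~3k source still comes back at the default 512 unless you raise it.
np_result = common_annotator_call(model, image, resolution=2048)  # assumed signature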

MaxRetryError when requesting ZoeD_M12_N.pt from huggingface.co

Description

When I use the Zoe depth map preprocessor to get the image depth, it sometimes fails with a MaxRetryError.

Expected behavior:

The depth image is generated successfully.

Details below:

'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /lllyasviel/Annotators/resolve/main/ZoeD_M12_N.pt (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x00000207DC87E050>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: d6facb74-7953-4e15-a629-1ab29edfe873)')' thrown while requesting HEAD https://huggingface.co/lllyasviel/Annotators/resolve/main/ZoeD_M12_N.pt
!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\code\python\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\code\python\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\code\python\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\code\python\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\zoe.py", line 18, in execute
    model = ZoeDetector.from_pretrained(HF_MODEL_NAME, cache_dir=annotator_ckpts_path).to(model_management.get_torch_device())
  File "D:\code\python\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\zoe\__init__.py", line 36, in to
    self.model.to(device)
  File "D:\code\python\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\zoe\zoedepth\models\depth_model.py", line 42, in to
    return super().to(device)
  File "D:\code\python\stable-diffusion-webui\venv-3060\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\code\python\stable-diffusion-webui\venv-3060\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\code\python\stable-diffusion-webui\venv-3060\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\code\python\stable-diffusion-webui\venv-3060\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "D:\code\python\stable-diffusion-webui\venv-3060\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\code\python\stable-diffusion-webui\venv-3060\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Questions

  • How does comfyui_controlnet_aux save or cache the Hugging Face model locally? Does it still request huggingface.co when
    ZoeD_M12_N.pt has already been downloaded and saved to the correct location?

I have downloaded ZoeD_M12_N.pt and pasted it into the correct location for ComfyUI: comfyui\ckpts\models--lllyasviel--Annotators\snapshots\982e7edaec38759d914a963c48c4726685de7d96\ZoeD_M12_N.pt

I changed the default directory in my comfyui\custom_nodes\comfyui_controlnet_aux\config.yaml:

annotator_ckpts_path: ../../ckpts
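On the caching question, a hedged note: the "thrown while requesting HEAD ..." line is huggingface_hub checking for a newer revision even when ZoeD_M12_N.pt is already cached; when that check times out it falls back to the local file, which is why it surfaces as a warning rather than a download failure (the CUDA error afterwards is a separate problem). Forcing offline mode skips the network round-trip entirely; this is standard huggingface_hub behaviour, not code specific to this repo:

# Force huggingface_hub to use only the local cache. The variable must be
# in the environment before ComfyUI (and huggingface_hub) starts, e.g.
# `set HF_HUB_OFFLINE=1` on Windows; setting it in-process only works if
# done before huggingface_hub is imported.
import os

os.environ["HF_HUB_OFFLINE"] = "1"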
