
comfyui.pinokio's People

Contributors

cocktailpeanut

comfyui.pinokio's Issues

3 open tickets, not one answered?

The developer just doesn't care, or doesn't know...

How can we add custom arguments to the launch, since this doesn't pull from the usual run_nvidia_gpu.bat?
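
A hedged workaround, given that the Pinokio install ships no run_nvidia_gpu.bat: launch main.py yourself from the Python environment Pinokio created. The paths below are assumptions based on the default Pinokio layout visible in the tracebacks elsewhere on this page; adjust them to your install.

cd ~/pinokio/api/comfyui.pinokio.git/ComfyUI   # assumed location of the ComfyUI checkout under Pinokio
source env/bin/activate                        # "env" is the virtualenv Pinokio creates; env\Scripts\activate on Windows
python main.py --listen 127.0.0.1 --port 8188  # append any ComfyUI arguments here, just as a .bat launcher would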

VS will not install

When attempting to install on my Windows 11 machine, the last package (VS Build Tools) will not install because the folder for my user has a space in it. As you can see in the error below, it refers to my first name, Richard, as the folder instead of Richard West.

Error: Command failed: start /wait C:\Users\Richard West\pinokio\bin\vs_buildtools.exe --passive --wait --includeRecommended --nocache --add Microsoft.VisualStudio.Workload.VCTools The system cannot find the file C:\Users\Richard. at C:\Users\Richard West\AppData\Local\Programs\Pinokio\resources\app.asar\node_modules\sudo-prompt-programfiles-x86\index.js:562:25 at FSReqCallback.readFileAfterClose [as oncomplete] (node:internal/fs/read_file_context:68:3)

Launch flags

How can I add a flag to launch ComfyUI with Pinokio?

For example: python3 main.py --force-fp16
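
The same manual-launch approach works for this flag: with the virtualenv that Pinokio created activated (assumed to be the "env" folder inside the ComfyUI directory), any ComfyUI argument can be appended to main.py.

python main.py --help          # prints every flag ComfyUI accepts, --force-fp16 included
python main.py --force-fp16    # start ComfyUI with fp16 forced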

Tried to Update/Install ComfyUI with SVD, now only Install button is available

I tried to update so I could use the new SVD. The update did not work; I did not investigate and instead deleted the whole comfyui.pinokio.git directory.
Now only the Install button is available. I have already installed it three times with no success.
I am getting:

DISK:\\pinokio\api\comfyUI.pinokio.git\ComfyUI>{{pip.install.torch}} "pip install - requiremetns.txt" '{{pip.install.torch}}' is not recognized as an internal or external command, operable program or batch file

Then the safetensors are downloaded again and again, even when they are already present in the directory, and then:

DISK:\\pinokio\api\comfyUI.pinokio.git\workflows>"git clone https://github.com/comfyanonymous/ComfyUI_examples"
The filename, directory name, or volume label syntax is incorrect.

Then it reports Install Success.
Still only the Install button is shown; restarting or deleting the folder does not help, and the whole process just repeats.
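
The literal {{pip.install.torch}} in the log suggests Pinokio never substituted its template variable, so the installer ran a placeholder instead of a real pip command. Below is a hedged manual recovery that mirrors what the script appears to attempt; the path, the "env" virtualenv, and the cu121 wheel index are assumptions, so swap in your own drive, environment, and the index matching your driver (or the CPU wheels).

cd /path/to/pinokio/api/comfyUI.pinokio.git/ComfyUI   # use your actual drive and install path
source env/bin/activate                               # env\Scripts\activate on Windows, if the env folder was created
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
cd ../workflows
git clone https://github.com/comfyanonymous/ComfyUI_examples   # run unquoted, which is what the failing quoted command was meant to do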

set specific GPU

Hello, I would like to run ComfyUI on my second GPU. I tried editing the run_nvidia_gpu.bat that I found, adding the "--cuda-device 1" flag, but it doesn't work. Is it possible to run it from Pinokio and specify the GPU?
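
There is no run_nvidia_gpu.bat in a Pinokio install to edit, but ComfyUI itself accepts a device flag. A hedged sketch, assuming the default Pinokio layout; adjust the path to your install:

cd ~/pinokio/api/comfyui.pinokio.git/ComfyUI   # assumed install location
source env/bin/activate                        # env\Scripts\activate on Windows
python main.py --cuda-device 1                 # ComfyUI's own flag for selecting a CUDA device index
CUDA_VISIBLE_DEVICES=1 python main.py          # alternative: expose only the second GPU to PyTorch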

Torch not compiled with CUDA enabled - Nvidia - Linux

I get the following error when trying to start a fresh install on Fedora Linux with an Nvidia GPU (4090):

(env) (base) [allen@pandora ComfyUI]$ python3 main.py
** ComfyUI start up time: 2023-11-27 20:54:01.685928

Prestartup times for custom nodes:
0.0 seconds: /home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/custom_nodes/ComfyUI-Manager

Traceback (most recent call last):
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/main.py", line 72, in
import execution
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/execution.py", line 12, in
import nodes
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/nodes.py", line 20, in
import comfy.diffusers_load
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/comfy/diffusers_load.py", line 4, in
import comfy.sd
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/comfy/sd.py", line 5, in
from comfy import model_management
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/comfy/model_management.py", line 114, in
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/comfy/model_management.py", line 83, in get_torch_device
return torch.device(torch.cuda.current_device())
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/env/lib/python3.10/site-packages/torch/cuda/init.py", line 769, in current_device
_lazy_init()
File "/home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/env/lib/python3.10/site-packages/torch/cuda/init.py", line 289, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
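
The traceback shows torch loaded from env/lib/python3.10/site-packages, so the virtualenv most likely received a CPU-only torch wheel. A hedged fix using the paths from the traceback; the cu121 index is an assumption, so pick the CUDA series that matches your driver:

source /home/allen/pinokio/api/comfyui.pinokio.git/ComfyUI/env/bin/activate
pip uninstall -y torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
python -c "import torch; print(torch.cuda.is_available())"   # should print True once a CUDA build is installed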

can't copy from the server

Hello, I would like to copy an error message from the server window when I need to google it, but it is not possible.
Is there a way to make this possible?

Thank you

Could not run 'torchvision::nms' with arguments from the 'CUDA' backend.

I recently encountered an issue while using the 'Impact Pack' custom node in ComfyUI installed through Pinokio: the node fails to work. Notably, when I attempt the same workflow with the portable version of ComfyUI, it works correctly, which leads me to believe the problem is specific to Pinokio.

This issue arose after updating to version 1.0.
I performed a Factory Reset twice and carried out multiple reinstallations and updates. Eventually, I uninstalled and reinstalled Pinokio itself, but the issue persists.

Error occurred when executing ImpactSimpleDetectorSEGS_for_AD:

Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:53 [backend fallback]
AutogradCPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:57 [backend fallback]
AutogradCUDA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:65 [backend fallback]
AutogradXLA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:69 [backend fallback]
AutogradMPS: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:77 [backend fallback]
AutogradXPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:61 [backend fallback]
AutogradHPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:90 [backend fallback]
AutogradLazy: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:73 [backend fallback]
AutogradMeta: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:81 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:296 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:382 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:249 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:710 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:157 [backend fallback]

File "C:\Users\paras\pinokio\api\comfyui.git\app\execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\paras\pinokio\api\comfyui.git\app\execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\paras\pinokio\api\comfyui.git\app\execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\paras\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Impact-Pack\modules\impact\detectors.py", line 458, in doit
return SimpleDetectorForAnimateDiff.detect(bbox_detector, image_frames, bbox_threshold, bbox_dilation, crop_factor, drop_size,
File "C:\Users\paras\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Impact-Pack\modules\impact\detectors.py", line 324, in detect
segs = bbox_detector.detect(image, bbox_threshold, bbox_dilation, crop_factor, drop_size)
File "C:\Users\paras\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Impact-Pack\impact_subpack\impact\subcore.py", line 106, in detect
detected_results = inference_bbox(self.bbox_model, core.tensor2pil(image), threshold)
File "C:\Users\paras\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Impact-Pack\impact_subpack\impact\subcore.py", line 33, in inference_bbox
pred = model(image, conf=confidence, device=device)
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\ultralytics\engine\model.py", line 101, in call
return self.predict(source, stream, **kwargs)
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\ultralytics\engine\model.py", line 274, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\ultralytics\engine\predictor.py", line 204, in call
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\torch\utils_contextlib.py", line 35, in generator_context
response = gen.send(None)
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\ultralytics\engine\predictor.py", line 290, in stream_inference
self.results = self.postprocess(preds, im, im0s)
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\ultralytics\models\yolo\detect\predict.py", line 25, in postprocess
preds = ops.non_max_suppression(
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\ultralytics\utils\ops.py", line 278, in non_max_suppression
i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
File "C:\Users\paras\pinokio\api\comfyui.git\app\env\lib\site-packages\torch_ops.py", line 692, in call
return self._op(*args, **kwargs or {})
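
A common cause of this error is a torchvision wheel built without CUDA support, or one that does not match the installed torch. A hedged check-and-reinstall sketch, run with the Pinokio app environment activated (roughly app\env\Scripts\activate, judging by the paths above); the cu121 index is an assumption, so use the series matching your driver:

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"
pip uninstall -y torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
python -c "import torch, torchvision; print(torchvision.ops.nms(torch.tensor([[0.,0.,1.,1.]]).cuda(), torch.tensor([0.9]).cuda(), 0.5))"   # exercises the CUDA nms kernel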
