heshengtao / comfyui_llm_party

Dify in comfyui is compatible with Omost, ChatTTS, and the FLUX prompt generator; provides access to Feishu and Discord; and adapts to all models with OpenAI-like interfaces, such as ollama, qwen, GLM, deepseek, moonshot, and doubao. Adapted to local models such as llama / Peach-9B / qwen / GLM; links to a neo4j knowledge graph and implements GraphRAG; supports a variety of RAG.

License: GNU Affero General Public License v3.0

Python 99.68% Batchfile 0.03% JavaScript 0.29%
comfyui comfyui-nodes llm openai workflow stable-diffusion agent dify flowise macos

comfyui_llm_party's Introduction


Comfyui_llm_party aims to develop a complete set of nodes for LLM workflow construction, using comfyui as the front end. It allows users to quickly and conveniently build their own LLM workflows and easily integrate them into their existing image workflows.

Effect display

EN.mp4

Project Overview

ComfyUI LLM Party covers a wide range of use cases. It spans from the most basic LLM multi-tool calls and role setting for quickly building your own exclusive AI assistant, to industry-specific word-vector RAG and GraphRAG for locally managed industry knowledge bases; from a single agent pipeline to the construction of complex radial and ring interaction modes between agents; from the social-app access (QQ, Feishu, Discord) that individual users need, to the one-stop LLM + TTS + ComfyUI workflow that streaming-media workers need; and from the simple first LLM application an ordinary student needs, to the parameter-debugging interfaces and model adaptation that researchers commonly use. For all of this, you can find the answer in ComfyUI LLM Party.

Latest update

  1. Added a FLUX prompt generator mask node, which can generate prompts in styles such as Hearthstone cards, Yu-Gi-Oh! cards, posters, and comics, so the FLUX model can produce them directly. Reference workflow: FLUX prompt word
  2. You can use the LLM tool maker to generate LLM tools automatically: save the generated tool code as a python file, copy it into the custom_tool folder, and a new node is created (see the sketch after this list). Example workflow: LLM tool generator.
  3. Supports duckduckgo search, but with significant limitations: apparently only English keywords can be entered, and a keyword cannot contain multiple concepts. The advantage is that there are no API-key restrictions.
  4. Supports calling multiple knowledge bases separately; you can specify in the prompt which knowledge base should be used to answer a question. Example workflow: multiple knowledge bases are called separately.
  5. Supports extra LLM input parameters, including advanced parameters such as json_out. Example workflows: LLM input extra parameters, Separate prompt words with json_out.
  6. Added the function of connecting the agent to discord. (Still testing.)
  7. Added the function of connecting the agent to Feishu. Many thanks to guobalove for the contribution! Reference workflow: Feishu robot.
  8. Added a universal API-call node and a large number of auxiliary nodes for constructing the request body and extracting information from the response.
  9. Added an empty-model node, so you can unload the LLM from video memory at any point in the workflow!
  10. Added the chatTTS node. Many thanks to guobalove for the contribution! The model_path parameter can be empty! It is recommended to load the model in HF mode: the model will be downloaded automatically from hugging face, with no manual download needed. If using local loading, put the model's asset and config folders in the root directory (Baidu cloud address, extraction code: qyhu); if using custom-mode loading, put the model's asset and config folders under model_path.
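
A minimal sketch of what a file dropped into custom_tool might look like (for item 2 above). It follows standard ComfyUI custom-node conventions (INPUT_TYPES, RETURN_TYPES, NODE_CLASS_MAPPINGS); the actual interface comfyui_llm_party expects is defined by the examples shipped in the custom_tool folder, and the class name, category, and tool logic here are illustrative assumptions:

    # Illustrative sketch of a node that could live in custom_tool/.
    # Follows generic ComfyUI node conventions; names are assumptions,
    # so check the examples shipped in custom_tool for the real interface.
    import json
    from datetime import datetime, timezone

    class TimeToolExample:
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {"timezone_name": ("STRING", {"default": "UTC"})}}

        RETURN_TYPES = ("STRING",)
        FUNCTION = "tool"
        CATEGORY = "llm_party_example"

        def tool(self, timezone_name):
            # Return a JSON string that the LLM can consume as a tool result.
            now = datetime.now(timezone.utc).isoformat()
            return (json.dumps({"timezone": timezone_name, "utc_now": now}),)

    NODE_CLASS_MAPPINGS = {"TimeToolExample": TimeToolExample}
    NODE_DISPLAY_NAME_MAPPINGS = {"TimeToolExample": "Time Tool (example)"}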

User Guide

  1. For instructions on using the nodes, please refer to: how to use nodes

  2. If there are any issues with the plugin or you have other questions, feel free to join the QQ group: 931057213.

  3. For a workflow tutorial, please refer to: Workflow Tutorial. Thanks to HuangYuChuh for the contribution!

  4. Advanced workflow showcase account: openart

  5. For more workflows, please refer to the workflow folder.

Video tutorials

  1. Building a Modular AI with ComfyUI×LLM: A Step-by-Step Tutorial (Super Easy!)

  2. Teach you GPT-4o access to comfyui | Make workflow call another workflow | Make LLM a tool

  3. Disguise your workflow as GPT to access WeChat | Omost compatible! Flexibly create your own dalle3

  4. How to play interactive fiction games in comfyui

  5. AI girlfriend, shaped as you like | implementing graphRAG with linked neo4j in comfyui | accessing a streamlit front-end from a comfyui workflow

Model support

  1. Supports all API calls in the OpenAI format (combined with oneapi it can call almost all LLM APIs; all transit APIs are also supported). For base_url selection, refer to config.ini.example. Tested so far:
  2. Most of the local models supported by the transformers library's AutoModelForCausalLM class (if you don't know which model type to choose on the local model node, choose llama, which will most likely work). Tested so far:
  3. Model download

Download

  • You can configure the language in config.ini; currently only Chinese (zh_CN) and English (en_US) are available, and the default is your system language.
  • Install using one of the following methods:

Method 1:

  1. Search for comfyui_LLM_party in the comfyui manager and install it with one click.
  2. Restart comfyui.

Method 2:

  1. Navigate to the custom_nodes subfolder under the ComfyUI root folder.
  2. Clone this repository with git clone https://github.com/heshengtao/comfyui_LLM_party.git.

Method 3:

  1. Click CODE in the upper right corner.
  2. Click download zip.
  3. Unzip the downloaded package into the custom_nodes subfolder under the ComfyUI root folder.

Environment Deployment

  1. Navigate to the comfyui_LLM_party project folder.
  2. Enter pip install -r requirements.txt in the terminal to deploy the third-party libraries required by the project into the comfyui environment. Please ensure you are installing within the comfyui environment and pay attention to any pip errors in the terminal.
  3. If you are using the comfyui launcher, you need to enter path_in_launcher_configuration\python_embeded\python.exe -m pip install -r requirements.txt in the terminal to install. The python_embeded folder is usually at the same level as your ComfyUI folder.
  4. If you have some environment configuration problems, you can try to use the dependencies in requirements_fixed.txt.

Configuration

The API key can be configured using one of the following methods:

Method 1:

  1. Open the config.ini file in the comfyui_LLM_party project folder.
  2. Enter your openai_api_key and base_url in config.ini.
  3. If you are using an ollama model, fill in http://127.0.0.1:11434/v1/ for base_url, ollama for openai_api_key, and your model name for model_name, for example: llama3.
  4. If you want to use the Google search or Bing search tools, enter your google_api_key and cse_id, or your bing_api_key, in config.ini.
  5. If you want to use image input to the LLM, it is recommended to use the imgbb image host and enter your imgbb_api in config.ini.
  6. Each model can be configured separately in the config.ini file; fill it in by referring to the config.ini.example file. Once configured, just enter the model_name on the node. A sketch follows below.
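
A minimal config.ini sketch for the steps above. The real section and key names should be copied from config.ini.example shipped with the project, so treat the section header and values here as placeholders:

    ; Placeholder sketch - copy the real layout from config.ini.example.
    [API_KEYS]
    openai_api_key = sk-your-key-here
    base_url = https://api.openai.com/v1/

    ; For a local ollama model instead:
    ; openai_api_key = ollama
    ; base_url = http://127.0.0.1:11434/v1/
    ; model_name = llama3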

Method 2:

  1. Open the comfyui interface.
  2. Create a Large Language Model (LLM) node and enter your openai_api_key and base_url directly in the node.
  3. If you use an ollama model, use the LLM_api node: fill in http://127.0.0.1:11434/v1/ for base_url, ollama for api_key, and your model name for model_name, for example: llama3.
  4. If you want to use image input to the LLM, it is recommended to use the imgbb image host and enter your imgbb_api_key on the node.

Changelog

  1. You can right-click in the comfyui interface, select llm from the context menu, and you will find the nodes for this project: how to use nodes
  2. Supports API integration and local large-model integration, with a modular implementation for tool invocation. When entering the base_url, please use a URL that ends with /v1/. You can use ollama to manage your models; then enter http://127.0.0.1:11434/v1/ for the base_url, ollama for the api_key, and your model name for the model_name, for example: llama3.
  3. Local knowledge base integration with RAG support. Sample workflow: Knowledge Base RAG Search
  4. Ability to invoke code interpreters.
  5. Enables online queries, including Google search support. Sample workflow: movie query workflow
  6. Implements conditional statements within ComfyUI to categorize user queries and provide targeted responses. Sample workflow: intelligent customer service
  7. Supports looping links for large models, allowing two large models to engage in debates. Sample workflow: Tram Challenge Debate
  8. Attach any persona mask and customize prompt templates.
  9. Supports various tool invocations, including weather lookup, time lookup, knowledge base, code execution, web search, and single-page search.
  10. Use an LLM as a tool node. Sample workflow: LLM Matryoshka dolls
  11. Rapidly develop your own web applications using API + Streamlit.
  12. Added a dangerous omnipotent interpreter node that allows the large model to perform any task.
  13. It is recommended to use the show_text node under the function submenu of the right-click menu as the display output for the LLM node.
  14. Supports the visual features of GPT-4o! Sample workflow: GPT-4o
  15. A new workflow intermediary has been added, which allows your workflow to call other workflows! Sample workflow: Invoke another workflow
  16. Adapted to all models with an OpenAI-like interface, such as Tongyi Qianwen/QWEN, Zhipu Qingyan/GLM, DeepSeek, and Kimi/Moonshot. Fill in the base_url, api_key, and model_name of these models on the LLM node to call them.
  17. Added an LVM loader. Now you can call LVM models locally; it supports the llava-llama-3-8b-v1_1-gguf model, and other LVM models in GGUF format should theoretically also run. An example workflow can be found here: start_with_LVM.json.
  18. I wrote a fastapi.py file, and if you run it directly, you get an OpenAI-compatible interface on http://127.0.0.1:8817/v1/. Any application that can call GPT can now invoke your comfyui workflow! I will create a tutorial demonstrating the details (see the client sketch after this list).
  19. I've separated the LLM loader and the LLM chain, dividing model loading from model configuration. This allows models to be shared across different LLM nodes!
  20. macOS and mps devices are now supported! Thanks to bigcat88 for their contribution!
  21. You can build your own interactive novel game and reach different endings according to the user's choices! Example workflow reference: interactive_novel
  22. Adapted to OpenAI's whisper and tts functions, so voice input and output can be realized. Example workflow reference: voice_input&voice_output
  23. Compatible with Omost!!! Please download omost-llama-3-8b-4bits to experience it now! Sample workflow reference: start_with_OMOST
  24. Added LLM tools to send messages to WeCom, DingTalk, and Feishu, as well as external functions to call.
  25. Added a new text iterator, which outputs only part of the characters at a time. The text is split safely by carriage returns and chunk size, never from the middle of a line; chunk_overlap is how many characters adjacent chunks overlap. This lets you feed in very long text in batches: just keep clicking, or enable loop execution in comfyui, and it will run automatically. Remember to turn on the is_locked attribute, which automatically locks the workflow when the input is exhausted so it does not keep executing. Example workflow: text iteration input
  26. Added a model name attribute to the local LLM loader and local llava loader. If it is empty, the model is loaded from the local paths set in the node. If it is not empty, the model is loaded using the path parameters you filled in config.ini. If it is not empty but not present in config.ini, the model is downloaded from huggingface or loaded from the huggingface model cache directory. To download from huggingface, use a format such as: THUDM/glm-4-9b-chat. Attention! Models loaded this way must be compatible with the transformers library.
  27. Adapted to CosyVoice. You can now use the TTS function directly without downloading any model or any API key. Currently the interface is only adapted to Chinese.
  28. Added a JSON file parsing node and a JSON value node, which let you get the value of a key from a file or from text. Thanks to guobalove for the contribution!
  29. Improved the tool-call code. LLMs without a native tool-calling function can now also enable the is_tools_in_sys_prompt attribute (local LLMs do not need to enable it; it adapts automatically). When enabled, the tool information is added to the system prompt so the LLM can call the tools. Related paper on the implementation principle: Achieving Tool Calling Functionality in LLMs Using Only Prompt Engineering Without Fine-Tuning
  30. Added a custom_tool folder to store the code of custom tools. Refer to the code in the custom_tool folder, put your custom tool's code into it, and the LLM can then call the custom tool.
  31. Added a Knowledge Graph tool so that the LLM and the knowledge graph can interact perfectly. The LLM can modify the knowledge graph according to your input and reason over it to get the answers you need. Example workflow reference: graphRAG_neo4j
  32. Added a personality AI function: develop your own girlfriend AI or boyfriend AI with zero code, with unlimited dialogue, permanent memory, and a stable personality. Example workflow reference: Mylover Personality AI
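
A usage sketch for item 18 above: since fastapi.py exposes an OpenAI-compatible endpoint, the standard openai Python client (>= 1.0) can talk to it. The api_key and model values below are placeholders; use whatever the server actually expects:

    # Call a comfyui workflow through the OpenAI-compatible endpoint
    # served by fastapi.py (item 18). api_key and model are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://127.0.0.1:8817/v1/", api_key="placeholder")
    response = client.chat.completions.create(
        model="comfyui",  # hypothetical model name
        messages=[{"role": "user", "content": "Hello, workflow!"}],
    )
    print(response.choices[0].message.content)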

Next Steps Plan:

  1. More model adaptations, at least covering the API interfaces of mainstream large models and local calls of mainstream open-source models, as well as more LVM model adaptations. Currently, I have only adapted the visual function calls of GPT-4;
  2. More ways to build agents. The work I have completed in this area includes importing an LLM as a tool to another LLM, achieving radial construction of LLM workflows, and importing one workflow as a node into another workflow. I might develop some cooler functions in this area in the future.
  3. More automation features. In the future, I will introduce more nodes that automatically push images, text, videos, and audio to other applications, as well as listening nodes that implement automatic replies to mainstream social software and forums.
  4. More knowledge base management functions. The project already supports local file search and web search. In the future, I will introduce knowledge graph search and long-term memory search. This will allow agents to think logically about professional knowledge and always remember certain key information when conversing with users.
  5. More tools and more personas. This part is the easiest to do but also requires the most accumulation. I hope that in the future this project can have as many custom nodes as comfyui itself, with a multitude of tools and personas.

Disclaimer:

This open-source project and its contents (hereinafter referred to as "Project") are provided for reference purposes only and do not imply any form of warranty, either expressed or implied. The contributors of the Project shall not be held responsible for the completeness, accuracy, reliability, or suitability of the Project. Any reliance you place on the Project is strictly at your own risk. In no event shall the contributors of the Project be liable for any indirect, special, or consequential damages or any damages whatsoever resulting from the use of the Project.


Support:

Join the community

If there is a problem with the plugin or you have any other questions, please join our community.

  1. Discord: discord link
  2. QQ group: 931057213
  3. WeChat group: Choo-Yong (join the group after adding the assistant on WeChat)

Follow us

  1. If you want to continue to pay attention to the latest features of this project, please follow the Bilibili account: Party host BB machine
  2. The OpenArt account is continuously updated with the most useful party workflows: openart

Donation support

If my work has brought value to your day, consider fueling it with a coffee! Your support not only energizes the project but also warms the heart of the creator. ☕💖 Every cup makes a difference!

Star History

Star History Chart

comfyui_llm_party's People

Contributors

bigcat88, guobalove, heshengtao, huangyuchuh, pre-commit-ci[bot]


comfyui_llm_party's Issues

Error on CentOS: ComfyUI/custom_nodes/comfyui_LLM_party module for custom nodes: [Errno 2] No such file or directory: 'dpkg'

How can I resolve this error? It's urgent.

Cannot find the llama_cpp module
Traceback (most recent call last):
File "/aigc-nas01/renwenyi/ComfyUI/nodes.py", line 1941, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/aigc-nas01/renwenyi/ComfyUI/custom_nodes/comfyui_LLM_party/__init__.py", line 1, in
from .install import (
File "/aigc-nas01/renwenyi/ComfyUI/custom_nodes/comfyui_LLM_party/install.py", line 391, in
install_portaudio()
File "/aigc-nas01/renwenyi/ComfyUI/custom_nodes/comfyui_LLM_party/install.py", line 337, in install_portaudio
result = subprocess.run(["dpkg", "-s", "libportaudio2"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "/home/work/anaconda3/envs/animate_anyone/lib/python3.10/subprocess.py", line 503, in run
with Popen(*popenargs, **kwargs) as process:
File "/home/work/anaconda3/envs/animate_anyone/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/home/work/anaconda3/envs/animate_anyone/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'dpkg'

Why not add G4F?

I've looked at many API-based approaches. Could G4F also be added, so that users without an API key can also use the AI-assisted features?

import g4f

ModuleNotFoundError: No module named 'server'

ComfyUI-Manager: EXECUTE => ['/home/Anaconda/anaconda3/envs/comfyui/bin/python', 'install.py']

[!] /home/ComfyUI/custom_nodes/comfyui_LLM_party/install.py:11: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
[!] import pkg_resources
[!] Traceback (most recent call last):
[!] File "/home/ComfyUI/custom_nodes/comfyui_LLM_party/install.py", line 14, in
[!] from server import PromptServer
[!] ModuleNotFoundError: No module named 'server'
install script failed: https://github.com/heshengtao/comfyui_LLM_party

Installation error: it says there is no server module. What is this server module? Can it be installed directly via pip, or is it a specific module?

Feature Request: LangChainDeprecationWarning and Requirements

1. LangChainDeprecationWarning: tools/(wikipedia.py & load_ebd.py & check_web.py)

LangChainDeprecationWarning: Importing HuggingFaceBgeEmbeddings from langchain.embeddings is deprecated. Please replace deprecated imports:

>> from langchain.embeddings import HuggingFaceBgeEmbeddings

with new imports of:

>> from langchain_community.embeddings import HuggingFaceBgeEmbeddings
  • a: from langchain.embeddings import HuggingFaceBgeEmbeddings
  • b: from langchain_community.embeddings import HuggingFaceBgeEmbeddings

Will using (b) make the results different or worse?

2. Maybe we should not check and install Python modules when starting comfyui:

  • We can install manually using requirements.txt from cmd or a terminal.

macOS support

A quick search did not find where auto-gptq is used.

If you remove it, then this project can at least be installed on macOS (auto-gptq does not yet support macOS, since it very interestingly depends on specific versions of PyTorch).

Or did I miss something, and auto-gptq is used somewhere here?

Can the local ollama interface be called?

It would be great if this were possible. ollama manages the models, and it should be doable - the comfyui ollama plugin, for example, calls the ollama interface to use LLMs in comfyui.

Excel Parsing of info from columns

Is your feature request related to a problem? Please describe.
How can I select the prompt and suffix from columns A and B of an Excel sheet and concatenate them?

Describe the solution you'd like
When queueing, a specified row and column should be concatenated into a prompt without these kinds of tokens ({}:") - see the sketch at the end of this issue.

ex:
{"Prompt": "deer", "Suffix": "high quality"}

Should result in:

deer, high quality

Describe alternatives you've considered
I've used a few WAS nodes to do search-and-replace and concatenation, but it gets messy with the large number of nodes needed.

Additional context
Here's a gif showing what I mean

llmpartyexcel
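
A possible lightweight alternative to chaining WAS nodes: read the two columns with pandas (already in this project's requirements) and join them in one step. A sketch, assuming the sheet has "Prompt" and "Suffix" headers over columns A and B, and that prompts.xlsx is a placeholder file name:

    # Sketch: build "deer, high quality" from columns A ("Prompt") and B ("Suffix").
    import pandas as pd

    df = pd.read_excel("prompts.xlsx")   # hypothetical file; needs openpyxl installed
    row = df.iloc[0]                     # the "specified line" being queued
    prompt = f"{row['Prompt']}, {row['Suffix']}"
    print(prompt)                        # -> deer, high quality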

failed to import dpkg

If you are using an Arch-based system and got this error:
/ComfyUI/custom_nodes/comfyui_LLM_party module for custom nodes: [Errno 2] No such file or directory: 'dpkg'

you can replace the function at line 329 of install.py in the comfyui_LLM_party folder with this function as a fix:

def install_portaudio():
    try:
        if os.name == "posix":
            if sys.platform == "linux" or sys.platform == "linux2":
                result = subprocess.run(["cat", "/etc/os-release"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
                if "EndeavourOS" in result.stdout or "Arch" in result.stdout:
                    result = subprocess.run(["pacman", "-Q", "portaudio"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                    if result.returncode != 0:
                        os.system("sudo pacman -Sy")
                        os.system("sudo pacman -S --noconfirm portaudio")
                    else:
                        print("portaudio is already installed.")
                else:
                    result = subprocess.run(["dpkg", "-s", "libportaudio2"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                    if result.returncode != 0:
                        os.system("sudo apt-get update")
                        os.system("sudo apt-get install -y libportaudio2 libasound-dev")
                    else:
                        print("libportaudio2 is already installed.")
            elif sys.platform == "darwin":
                # macOS
                result = subprocess.run(["brew", "list", "portaudio"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                if result.returncode != 0:
                    subprocess.check_call(["brew", "install", "portaudio"])
                else:
                    print("portaudio is already installed.")
        elif os.name == "nt":
            pass
        else:
            print("Unsupported operating system")
    except subprocess.CalledProcessError as e:
        print(f"Error installing PortAudio library: {e}")

The fastapi.py file name collides with the name of the imported package

Problem:
After running pip install -r requirements.txt as documented, the installation in comfyui also succeeded.

But executing fastapi.py raises: ImportError: cannot import name 'FastAPI' from partially initialized module 'fastapi'

Solution:
Rename your fastapi.py so that it no longer collides with the fastapi package imported by from fastapi import FastAPI, HTTPException, Request, Depends.
The custom file name shadowing the package of the same name is presumably the cause.

TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

Using deterministic algorithms for pytorch
Total VRAM 11264 MB, total RAM 32509 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : native
VAE dtype: torch.float32
Using pytorch cross attention
Adding extra search path checkpoints E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/Stable-diffusion
Adding extra search path configs E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/Stable-diffusion
Adding extra search path vae E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/VAE
Adding extra search path loras E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/Lora
Adding extra search path loras E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/LyCORIS
Adding extra search path upscale_models E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/ESRGAN
Adding extra search path upscale_models E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/RealESRGAN
Adding extra search path upscale_models E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/SwinIR
Adding extra search path embeddings E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/embeddings
Adding extra search path hypernetworks E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/hypernetworks
Adding extra search path controlnet E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/ControlNet
Adding extra search path controlnet E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/extensions/sd-webui-controlnet/models
Adding extra search path mmdets E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/mmdets/bbox
Adding extra search path ultralytics E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/ultralytics/bbox
Adding extra search path ultralytics E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/ultralytics/segm
Adding extra search path sams E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/sams
Adding extra search path seecoders E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/seecoders
Adding extra search path deepbump E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/deepbump
Adding extra search path insightface E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/insightface
Adding extra search path face_restore E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/face_restore
Adding extra search path FILM E:/AI/SD/webui/QiuYe/sd-webui-aki-v4.3/models/FILM

Import times for custom nodes:
0.0 seconds: E:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
6.2 seconds: E:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_LLM_party

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
!!! Exception during processing!!! stat: path should be string, bytes, os.PathLike or integer, not NoneType
Traceback (most recent call last):
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_LLM_party\llm.py", line 831, in chatbot
self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_path, trust_remote_code=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 865, in from_pretrained
return tokenizer_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\tokenization_utils_base.py", line 2110, in from_pretrained
return cls._from_pretrained(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\tokenization_utils_base.py", line 2336, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\tokenization_chatglm.py", line 109, in __init__
self.tokenizer = SPTokenizer(vocab_file)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\tokenization_chatglm.py", line 17, in __init__
assert os.path.isfile(model_path), model_path
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 30, in isfile
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
Note: The Python runtime threw an exception. Please check the troubleshooting page.

Prompt executed in 0.13 seconds

Please, add local LVM model support and example [feature request]

Any model will do, even a simple one for start.

Most people in ComfyUI will be interested in a model that can determine the GENDER of an object: boy or girl/man or woman

This is very useful for InstantID and IP-Adapter workflows, where you want to regenerate a picture.

Also, VLM nodes would be very useful for upscaler workflows and style-transfer ones.

As I am very interested in LVM nodes, I can try to create one and open a PR for this. I will have some time in the evenings during the week, so I can probably open a PR next weekend.

AttributeError: 'Llava15ChatHandler' object has no attribute 'clip_ctx'

[Startup log identical to the previous issue.]
got prompt
!!! Exception during processing!!! [WinError -529697949] Windows Error 0xe06d7363
Traceback (most recent call last):
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_LLM_party\llm.py", line 1412, in load_llava_checkpoint
clip = Llava15ChatHandler(clip_model_path = clip_path, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama_chat_format.py", line 1072, in __init__
self.clip_ctx = self._llava_cpp.clip_model_load(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llava_cpp.py", line 174, in clip_model_load
return _libllava.clip_model_load(fname, verbosity)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError -529697949] Windows Error 0xe06d7363
Note: The Python runtime threw an exception. Please check the troubleshooting page.

Prompt executed in 0.14 seconds
got prompt
!!! Exception during processing!!! [WinError -529697949] Windows Error 0xe06d7363
Traceback (most recent call last):
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_LLM_party\llm.py", line 1412, in load_llava_checkpoint
clip = Llava15ChatHandler(clip_model_path = clip_path, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama_chat_format.py", line 1072, in __init__
self.clip_ctx = self._llava_cpp.clip_model_load(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llava_cpp.py", line 174, in clip_model_load
return _libllava.clip_model_load(fname, verbosity)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError -529697949] Windows Error 0xe06d7363
Note: The Python runtime threw an exception. Please check the troubleshooting page.

Prompt executed in 0.01 seconds
Exception ignored in: <function Llava15ChatHandler.__del__ at 0x000002202AA14040>
Note: The Python runtime threw an exception. Please check the troubleshooting page.
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama_chat_format.py", line 1078, in __del__
if self.clip_ctx is not None and self._clip_free is not None:
^^^^^^^^^^^^^
AttributeError: 'Llava15ChatHandler' object has no attribute 'clip_ctx'
Note: The Python runtime threw an exception. Please check the troubleshooting page.

something went wrong: Response does not contain codes!

got prompt
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:44<00:00, 22.17s/it]
Some weights of the model checkpoint at /Users/alex/ComfyUI/models/omost were not used when initializing LlamaForCausalLM: ['model.layers.13.mlp.up_proj.weight.quant_map', 'model.layers.26.self_attn.q_proj.weight.nested_absmax', 'model.layers.10.self_attn.o_proj.weight.nested_quant_map',
- This IS expected if you are initializing LlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
'PreTrainedTokenizerFast' object has no attribute 'apply_chat_template'
!!! Exception during processing!!! Response does not contain codes!
Traceback (most recent call last):
  File "/Users/alex/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/alex/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/alex/ComfyUI/execution.py", line 65, in map_node_over_list
    results.append(getattr(obj, func)(**input_data_all))
  File "/Users/alex/ComfyUI/custom_nodes/comfyui_LLM_party/tools/omost.py", line 46, in notify
    canvas = omost_canvas.from_bot_response(text[0])
  File "/Users/alex/ComfyUI/custom_nodes/comfyui_LLM_party/lib_omost/canvas.py", line 134, in from_bot_response
    assert matched, 'Response does not contain codes!'
AssertionError: Response does not contain codes!

Can llama 3.1 also be enabled to use tools?

Error code: 400 - {'error': {'message': 'llama3.1:8b-instruct-q6_K does not support tools', 'type': 'api_error', 'param': None, 'code': None}}

The new version of ollama already supports tool calling.
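
Until ollama's native tool support is wired in, item 29 under Changelog above describes a workaround this project already implements: with is_tools_in_sys_prompt enabled, tool definitions are injected into the system prompt instead of the tools API. A rough sketch of that idea (the tool schema and wording here are illustrative):

    # Sketch of tool calling via prompt engineering (Changelog item 29):
    # describe the tools in the system prompt and ask the model to answer
    # with a JSON tool call. Schema and wording are illustrative.
    import json

    tools = [{
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {"city": "string"},
    }]

    system_prompt = (
        "You can call tools. To call one, reply ONLY with JSON like "
        '{"tool": "<name>", "arguments": {...}}.\n'
        "Available tools: " + json.dumps(tools)
    )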

LLM_local: Separate `device` and `dtype` in the node

elif device == "cuda-fp16":
    qwen_device = "cuda"
    qwen_model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).half().cuda()

Can we also separate the device and the tensor type (dtype)?

This is a screenshot from the SUPIR node:

image

Otherwise you will have to make a huge list like:

cuda-fp32
cuda-bf16
cuda-fp16
cuda-fp8

mps-fp32
mps-bf16
mps-fp16

and so on, probably for other hardware accelerators (like xpu) too.

Selecting device and dtype separately would be the best option, imho.

Also, the node should usually use the same device that ComfyUI uses - maybe we can add an "auto" option for the device and set it as the default. (A sketch follows below.)
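
A sketch of what separate device/dtype selection could look like with standard torch/transformers APIs (the option names, defaults, and "auto" behavior are illustrative):

    # Sketch: independent "device" and "dtype" options instead of combined
    # "cuda-fp16"-style strings. Names and defaults are illustrative.
    import torch
    from transformers import AutoModelForCausalLM

    DTYPES = {"fp32": torch.float32, "fp16": torch.float16, "bf16": torch.bfloat16}

    def load_model(model_path: str, device: str = "auto", dtype: str = "fp16"):
        if device == "auto":  # follow whatever accelerator is available
            if torch.cuda.is_available():
                device = "cuda"
            elif torch.backends.mps.is_available():
                device = "mps"
            else:
                device = "cpu"
        model = AutoModelForCausalLM.from_pretrained(
            model_path, trust_remote_code=True, torch_dtype=DTYPES[dtype]
        )
        return model.to(device)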

Solved

I've installed comfyui_LLM_party via ComfyUI Manager, and nothing shows up in Install Missing Custom Nodes except the installed comfyui_LLM_party with an update button -

image

failed to add config settings (like api keys) after the first time running the app

Describe the bug
Settings added to config.ini (such as the api key) no longer take effect on the ComfyUI_LLM_party nodes after the first run of the app.

To Reproduce

  1. Follow the readme, add settings (api keys, etc.) in config.ini, and run ComfyUI.
  2. A new file "config" of type "configuration settings" is generated under the comfyui_LLM_party folder.
  3. The settings work.
  4. Stop comfyui, modify config.ini, and run the app again.
  5. The new settings don't work; errors appear such as "Incorrect API key provided: sk-XXXXX".

Screenshots
image

Solutions

  • The first run of the app generates the "configuration settings" config file, and ComfyUI reads settings from it.
  • After the first run, modify the settings in this file instead of config.ini.
  • Or delete it, modify config.ini, and run the app again; a new "configuration settings" file will be generated.
  • The nodes then work without extra information (such as the api key) needing to be filled in.

TypeError: install_package() got an unexpected keyword argument 'custom_command'

Traceback (most recent call last):
File "C:\ai\ComfyUI-aki-v1.3\ComfyUI-aki-v1.3\nodes.py", line 1941, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "C:\ai\ComfyUI-aki-v1.3\ComfyUI-aki-v1.3\custom_nodes\comfyui_LLM_party\__init__.py", line 1, in
from .install import (
File "C:\ai\ComfyUI-aki-v1.3\ComfyUI-aki-v1.3\custom_nodes\comfyui_LLM_party\install.py", line 364, in
install_llama(system_info)
File "C:\ai\ComfyUI-aki-v1.3\ComfyUI-aki-v1.3\custom_nodes\comfyui_LLM_party\install.py", line 181, in install_llama
install_package("llama-cpp-python", custom_command=custom_command)
TypeError: install_package() got an unexpected keyword argument 'custom_command'

After installing the node, an error occurs when opening comfyui.

ModuleNotFoundError: No module named 'jax.numpy'; 'jax' is not a package

When starting the latest version, it shows this error message. The jax in my ComfyUI venv is 0.4.30, and I have reinstalled jax & jaxlib, but I still get the same error.

Traceback (most recent call last):
File "C:\ComfyUI\nodes.py", line 1931, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "C:\ComfyUI\custom_nodes\comfyui_LLM_party\__init__.py", line 8, in
from .llm import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "C:\ComfyUI\custom_nodes\comfyui_LLM_party\llm.py", line 1873, in
load_custom_tools()
File "C:\ComfyUI\custom_nodes\comfyui_LLM_party\llm.py", line 1851, in load_custom_tools
spec.loader.exec_module(module)
File "C:\ComfyUI\custom_nodes\comfyui_LLM_party\custom_tool\chatTTS_node.py", line 4, in
import ChatTTS
File "C:\ComfyUI\python\lib\site-packages\ChatTTS\__init__.py", line 1, in
from .core import Chat
File "C:\ComfyUI\python\lib\site-packages\ChatTTS\core.py", line 18, in
from .model import DVAE, GPT, gen_logits
File "C:\ComfyUI\python\lib\site-packages\ChatTTS\model\__init__.py", line 1, in
from .dvae import DVAE
File "C:\ComfyUI\python\lib\site-packages\ChatTTS\model\dvae.py", line 9, in
from vector_quantize_pytorch import GroupedResidualFSQ
File "C:\ComfyUI\python\lib\site-packages\vector_quantize_pytorch\__init__.py", line 2, in
from vector_quantize_pytorch.residual_vq import ResidualVQ, GroupedResidualVQ
File "C:\ComfyUI\python\lib\site-packages\vector_quantize_pytorch\residual_vq.py", line 18, in
from einx import get_at
File "C:\ComfyUI\python\lib\site-packages\einx\__init__.py", line 5, in
from . import backend
File "C:\ComfyUI\python\lib\site-packages\einx\backend\__init__.py", line 1, in
from .register import register_for_module, register, get, backends, numpy
File "C:\ComfyUI\python\lib\site-packages\einx\backend\register.py", line 53, in
register_for_module("jax", _jax.create)
File "C:\ComfyUI\python\lib\site-packages\einx\backend\register.py", line 19, in register_for_module
register(backend_factory())
File "C:\ComfyUI\python\lib\site-packages\einx\backend\_jax.py", line 11, in create
import jax.numpy as jnp
ModuleNotFoundError: No module named 'jax.numpy'; 'jax' is not a package

UnicodeEncodeError: 'locale' codec can't encode character '\u5e74' in position 2: encoding error

After updating Comfy and all of my nodes yesterday, I got this error:

!!! Exception during processing!!! 'locale' codec can't encode character '\u5e74' in position 2: encoding error
Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 148, in recursive_execute
obj = class_def()
^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_LLM_party\llm.py", line 448, in init
self.id = current_time.strftime("%Y年%m月%d日%H时%M分%S秒")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'locale' codec can't encode character '\u5e74' in position 2: encoding error

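
The crash comes from strftime pushing the non-ASCII characters 年/月/日 through a locale codec that cannot encode them on this system; a locale-independent, ASCII-only format string avoids it. A sketch of that kind of fix:

    # Sketch: locale-independent ID format instead of "%Y年%m月%d日%H时%M分%S秒".
    from datetime import datetime

    node_id = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")  # e.g. "2024-07-24_07-28-00"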

Can't Install Requirements

pip install -r requirements.txt
Requirement already satisfied: beautifulsoup4 in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 1)) (4.12.3)
Collecting docx2txt (from -r requirements.txt (line 2))
Downloading docx2txt-0.8.tar.gz (2.8 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting langchain (from -r requirements.txt (line 3))
Downloading langchain-0.2.6-py3-none-any.whl.metadata (7.0 kB)
Collecting langchain_community (from -r requirements.txt (line 4))
Downloading langchain_community-0.2.6-py3-none-any.whl.metadata (2.5 kB)
Collecting langchain_text_splitters (from -r requirements.txt (line 5))
Downloading langchain_text_splitters-0.2.2-py3-none-any.whl.metadata (2.1 kB)
Requirement already satisfied: openai in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 6)) (1.30.5)
Collecting openpyxl (from -r requirements.txt (line 7))
Downloading openpyxl-3.1.5-py2.py3-none-any.whl.metadata (2.5 kB)
Requirement already satisfied: pandas in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 8)) (2.2.2)
Requirement already satisfied: pytz in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 9)) (2024.1)
Requirement already satisfied: Requests in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 10)) (2.32.2)
Collecting xlrd (from -r requirements.txt (line 11))
Downloading xlrd-2.0.1-py2.py3-none-any.whl.metadata (3.4 kB)
Collecting faiss-cpu (from -r requirements.txt (line 12))
Downloading faiss_cpu-1.8.0.post1-cp310-cp310-win_amd64.whl.metadata (3.8 kB)
Requirement already satisfied: websocket-client in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 13)) (1.8.0)
Collecting streamlit (from -r requirements.txt (line 14))
Downloading streamlit-1.36.0-py2.py3-none-any.whl.metadata (8.5 kB)
Collecting virtualenv (from -r requirements.txt (line 15))
Downloading virtualenv-20.26.3-py3-none-any.whl.metadata (4.5 kB)
Collecting tiktoken (from -r requirements.txt (line 16))
Downloading tiktoken-0.7.0-cp310-cp310-win_amd64.whl.metadata (6.8 kB)
Requirement already satisfied: transformers in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 17)) (4.41.1)
Collecting transformers_stream_generator (from -r requirements.txt (line 18))
Downloading transformers-stream-generator-0.0.5.tar.gz (13 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting optimum (from -r requirements.txt (line 19))
Downloading optimum-1.20.0-py3-none-any.whl.metadata (19 kB)
Collecting pdfplumber (from -r requirements.txt (line 20))
Downloading pdfplumber-0.11.1-py3-none-any.whl.metadata (39 kB)
Collecting wikipedia (from -r requirements.txt (line 21))
Downloading wikipedia-1.4.0.tar.gz (27 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting arxiv (from -r requirements.txt (line 22))
Downloading arxiv-2.1.3-py3-none-any.whl.metadata (6.1 kB)
Requirement already satisfied: bitsandbytes in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 23)) (0.43.1)
Requirement already satisfied: accelerate in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 24)) (0.30.1)
Requirement already satisfied: fastapi in c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages (from -r requirements.txt (line 25)) (0.111.0)
Collecting py-cpuinfo (from -r requirements.txt (line 26))
Downloading py_cpuinfo-9.0.0-py3-none-any.whl.metadata (794 bytes)
Collecting diskcache (from -r requirements.txt (line 27))
Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Collecting requests_toolbelt (from -r requirements.txt (line 28))
Downloading requests_toolbelt-1.0.0-py2.py3-none-any.whl.metadata (14 kB)
Collecting playsound (from -r requirements.txt (line 29))
Downloading playsound-1.3.0.tar.gz (7.7 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
Traceback (most recent call last):
File "c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in
main()
File "c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "c:\users\tanzi\appdata\local\programs\python\python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\tanzi\AppData\Local\Temp\pip-build-env-u_hkwway\overlay\Lib\site-packages\setuptools\build_meta.py", line 327, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
File "C:\Users\tanzi\AppData\Local\Temp\pip-build-env-u_hkwway\overlay\Lib\site-packages\setuptools\build_meta.py", line 297, in _get_build_requires
self.run_setup()
File "C:\Users\tanzi\AppData\Local\Temp\pip-build-env-u_hkwway\overlay\Lib\site-packages\setuptools\build_meta.py", line 497, in run_setup
super().run_setup(setup_script=setup_script)
File "C:\Users\tanzi\AppData\Local\Temp\pip-build-env-u_hkwway\overlay\Lib\site-packages\setuptools\build_meta.py", line 313, in run_setup
exec(code, locals())
File "", line 6, in
File "c:\users\tanzi\appdata\local\programs\python\python310\lib\inspect.py", line 1139, in getsource
lines, lnum = getsourcelines(object)
File "c:\users\tanzi\appdata\local\programs\python\python310\lib\inspect.py", line 1121, in getsourcelines
lines, lnum = findsource(object)
File "c:\users\tanzi\appdata\local\programs\python\python310\lib\inspect.py", line 958, in findsource
raise OSError('could not get source code')
OSError: could not get source code
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
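
A possible workaround: the build that fails here is playsound 1.3.0, while this project's requirements pin playsound<=1.2.2 (see the version-pin discussion below), so installing the pinned version first, for example pip install "playsound<=1.2.2", and then re-running the requirements install may sidestep this build error.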

About some workflow updates

Some nodes throw errors when using the workflows. After each node update, some of the old workflows no longer work.

Could the workflows be updated in sync with the new nodes? Otherwise they are really hard to use.

Please also release new video tutorials to walk us through the new usage. Thank you.

Add new tool: duckduckgo/ searXNG websearch, no API key required

Is your feature request related to a problem? Please describe.
No, but it would be a good addition for people who do not have an API key for web search (Google, Bing).

Describe the solution you'd like
Similar to Google or Bing search, but with no API key required (none at all with duckduckgo, or just a URL with SearXNG).

Describe alternatives you've considered

Additional context

Error importing amap: cannot import name 'language' from 'config'

After updating to the latest nodes, these error messages appear; I'm not sure whether it's just my setup. Why is it reading the config inside ComfyUI-BiRefNet-ZHO?

llama-cpp installed
py-cord[voice] is already installed
Error importing amap: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing amap_decode: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing amap_encode: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing arxiv: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing jax. Please check your jax installation.
Error importing chatTTS_node: 'NoneType' object has no attribute 'ndarray'
Error importing discord_bot: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing discord_monitor: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing ebd_tool: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing example_tool: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing extra_parameters: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing feishu_download: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing feishu_download_img: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing feishu_get_history_msg: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing feishu_send_msg: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing file_exist: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing file_online_delete: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing file_online_storage: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing imges2imge: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing json_parser: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing omost_json2py: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing openai_ebd: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing text2json: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
Error importing url2image: cannot import name 'language' from 'config' (C:\ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO\config.py)
llama-cpp installed
py-cord[voice] is already installed

image

Is there an example workflow for the Doubao platform? After configuring config.ini, I don't know how to wire up the workflow.

Remove the version pins on packages in requirements.txt

bitsandbytes==0.43.1
accelerate==0.30.1
playsound<=1.2.2
tenacity>=8.1.0,<8.4.0

I suggest removing the version pins on packages like these and writing them simply as
bitsandbytes,
accelerate, and so on.
If functionality is affected, please look for alternatives.

This would let more people install painlessly, with no worries when installing other plugins later.

I really like your plugin - keep it up!

ModuleNotFoundError: No module named 'arxiv'

I have installed it, it appears in pip list, and the Python path is correct, but this error persists.
