metauto-ai / gptswarm
GPTSwarm: LLM agents as (Optimizable) Graphs
Home Page: https://gptswarm.org
License: MIT License
2024-05-01 17:00:25.721 | ERROR | swarm.graph.node:execute:141 - Node DirectAnswer failed to execute due to: RetryError: RetryError[<Future at 0x314e4fcd0 state=finished raised OpenAIError>]
2024-05-01 17:03:04.397 | ERROR | swarm.graph.node:execute:141 - Node DirectAnswer failed to execute due to: RetryError: RetryError[<Future at 0x314e4ff40 state=finished raised OpenAIError>]
2024-05-01 17:07:03.721 | ERROR | swarm.graph.node:execute:141 - Node DirectAnswer failed to execute due to: RetryError: RetryError[<Future at 0x314e84100 state=finished raised OpenAIError>]
I see the above error when running the examples, even though the local OpenAI connection is fine.
Hello GPTSwarm authors,
Thank you for such interesting work on the multi-agent communication network!
I am very interested in the idea of optimizing the graph structure, and I have a question about the cost of the reinforcement optimization in your MMLU and Mini Crosswords experiments.
In the paper, it seems the optimization requires 200 iterations with 4 graph samples per iteration for MMLU. This leads to a total of 800 evaluations of the multi-agent LLM system on the utility function, which is defined as the accuracy on a 10% validation split (153 questions).
I haven't reproduced the experimental results yet, but that sounds like a significant cost to obtain the optimized edge probabilities. Could you please share some information about the training cost during your experiments?
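The back-of-the-envelope arithmetic behind this concern can be sketched as follows (the figures are the ones quoted above, treated as assumptions rather than confirmed numbers from the authors):

```python
# Rough cost estimate for REINFORCE-style edge optimization on MMLU,
# using the figures quoted in the question (assumptions, not official numbers).
iterations = 200        # optimization iterations
samples_per_iter = 4    # graph samples per iteration
val_questions = 153     # 10% validation split used as the utility function

utility_evals = iterations * samples_per_iter   # full-swarm evaluations
total_queries = utility_evals * val_questions   # question-level swarm runs

print(utility_evals)   # 800
print(total_queries)   # 122400
```

So the 800 utility evaluations actually translate into over a hundred thousand question-level runs of the swarm, each of which may involve multiple LLM calls, which is why the training cost matters.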
Many thanks!
Leon
Hi,
After testing, the `demo_swarm` can be run in a newly created Colab notebook after `pip install gptswarm`, but cannot be run in a newly created Python 3.10 environment set up by Miniconda after `pip install gptswarm`. Please refer to the pictures:
I suspect the reason might be: in Colab, although the default Python version is 3.10, the default numpy version is 1.25.2. However, in a newly created conda Python 3.10 environment, `pip install gptswarm` installs numpy 2.0.0.
I wonder whether this issue can be fixed by adding a numpy version restriction in the `pyproject.toml` file, or whether it is caused by something in the demo code, or by something else; I'm not entirely sure.
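If the numpy 2.0 release is indeed the cause, one possible fix (a sketch, assuming the project uses a standard PEP 621 `[project]` table; the exact dependency list in GPTSwarm's `pyproject.toml` may differ) would be to pin numpy below 2.0:

```toml
[project]
dependencies = [
    # Hypothetical pin: numpy 2.0 changed its ABI and removed APIs,
    # which breaks packages built against the 1.x series.
    "numpy>=1.21,<2.0",
]
```

This mirrors what Colab effectively does by shipping numpy 1.25.2 by default.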
Thank you very much!
The detailed output (after `conda create -n tmpenv python=3.10`, then `pip install gptswarm`, then running the `demo_swarm`):
GPTSWARM_ROOT = find_project_root()
@mczhuge FYI this does not work with packages.
output: Foo Bar Asy
2024-02-27 15:46:46.445 | ERROR | swarm.graph.node:execute:141 - Node FinalDecision failed to execute due to: AttributeError: 'list' object has no attribute 'strip'
2024-02-27 15:46:46.446 | INFO | swarm.graph.node:log:160 - Memory Records for ID 3BUm:
operation: GenerateQuery
files: ['datasets/demos/agi.txt']
subtask: # Information Gathering for Question Resolution
@Obs01ete @Wenyi-AI-Wang Hi, Dmitrii and Wenyi. Thanks for your effort!
These are the remaining work items for us.
This is one of the most approachable agent frameworks I have used. Thank you.
Would you mind giving me some advice on state management? I just need direction, as there seem to be several ways of doing the same thing, and I don't want to start down the wrong (or unintended) path.
Question
I need agents to manage a hierarchy of objects, which implies a user-navigable state machine.
[edit] Simplified the question
A `DirectAnswer`-type agent may suffice? Furthermore, I imagine a swarm would suffice (versus a DAG like `CoT` or `ToT`).
Per `agent.id`: would you keep said state (somehow) in `globalMemory`, or as an `inputs[x]`/`outputs[x]` state that is flowed around, as extra properties in `executions {}`, or in an `env` (per your crossword example)?
Thanks
Hi,
Thanks for this useful framework!
There is an issue when running multi-agent commands that use the LMStudio API asynchronously. For instance, when running `PYTHONPATH=. python experiments/run_mmlu.py --num-truthful-agents=3 --mode=OptimizedSwarm`, the outputs of LMStudio seem to be assigned not to the relevant question but to random input questions.
I have not tested this with the OpenAI API to see whether this is specifically an incompatibility with LMStudio.
My current temporary workaround is to avoid the asynchronous implementation, but this is annoyingly slow. Have you encountered this? Is there a way to keep using your asynchronous implementation without hitting this issue?
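For reference, this kind of answer-to-question mismatch is often caused by collecting async results in completion order instead of submission order. A minimal illustration of the pitfall and the safe pattern (this is not GPTSwarm's actual code; `fake_llm_call` and the question names are hypothetical):

```python
import asyncio
import random

async def fake_llm_call(question: str) -> str:
    # Simulate a backend whose responses come back in arbitrary order.
    await asyncio.sleep(random.random() * 0.01)
    return f"answer to {question}"

async def main():
    questions = [f"q{i}" for i in range(5)]

    # Buggy pattern: as_completed yields results in *completion* order,
    # so zipping them back onto `questions` can mispair answers.
    tasks = [asyncio.create_task(fake_llm_call(q)) for q in questions]
    completed_order = [await t for t in asyncio.as_completed(tasks)]

    # Safe pattern: gather preserves submission order, so answers
    # line up index-by-index with the questions that produced them.
    submission_order = await asyncio.gather(
        *(fake_llm_call(q) for q in questions)
    )
    return questions, completed_order, submission_order

questions, completed_order, submission_order = asyncio.run(main())
print(submission_order)
```

If the LMStudio code path collects responses as they complete, that would explain why answers land on random questions while a synchronous loop works.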
Thanks so much!