
metamind-ai / autogen-agi

AutoGen AGI: Advancing AI agents using AutoGen towards AGI capabilities. Explore cutting-edge enhancements in group chat dynamics, decision-making, and complex task proficiency. Join our journey in shaping AI's future!

Home Page: https://www.metamindsolutions.ai/

License: MIT License

Language: Python 100.00%
Topics: agi, ai, autogen, machine-learning

autogen-agi's Introduction

autogen-api logo

AutoGen AGI focuses on advancing the AutoGen framework for multi-agent conversational systems, with an eye towards characteristics of Artificial General Intelligence (AGI). This project introduces modifications to AutoGen, enhancing group chat dynamics among autonomous agents and increasing their proficiency in robustly handling complex tasks. The aim is to explore and incrementally advance agent behaviors, aligning them more closely with elements reminiscent of AGI.

Features

  • Enhanced Group Chat 💬: Modified AutoGen classes for advanced group chat functionalities.
  • Agent Council 🧙: Utilizes a council of agents for decision-making and speaker/actor selection, based on a prompting technique explored in this blog post.
  • Conversation Continuity 🔄: Supports loading and continuing chat histories.
  • Agent Team Awareness 👥: Each agent is aware of its role and the roles of its peers, enhancing team-based problem-solving.
  • Advanced RAG 📚: Built-in Retrieval Augmented Generation (RAG) leveraging RAG-fusion and LLM re-ranking, implemented via llama_index.
  • Domain Discovery 🔍: Built-in domain discovery for knowledge outside of LLM training data.
  • Custom Agents 🌟: A growing list of customized agents.
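The RAG-fusion feature above boils down to: generate several variations of the user query, retrieve for each one, then merge the ranked result lists before a final LLM re-rank. A common way to do the merge is reciprocal rank fusion; the sketch below is a minimal, library-free illustration of that step, not the project's actual API:

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked result lists into one, scoring each
    document by sum(1 / (k + rank)) over the lists it appears in."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Retrieval results for three variations of the same query:
fused = reciprocal_rank_fusion([
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_b", "doc_c", "doc_a"],
])
print(fused[0])  # doc_b: it ranked first in two of the three lists
```

Documents that rank highly across many query variations float to the top, which is the point of fusing before re-ranking.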

Demo Transcript 📜

The following link shows example output of the demo task, in which one team of agents is asked to write and execute another team of AutoGen agents:

agent council demo

🧙 Example transcript of an "Agent Council" discussion 🧙

WARNING ⚠️

This project uses agents that can execute code locally. In addition, it relies on the extended context window of gpt-4-turbo, which can be costly. Proceed at your own risk.

Installation 🛠️

  • Clone the project:
git clone git@github.com:metamind-ai/autogen-agi.git
cd autogen-agi
  • (optional) Create a conda environment:
conda create --name autogen-agi python=3.11
conda activate autogen-agi
  • Install dependencies:
pip install -r requirements.txt
  • Add environment variables:
    • Copy .env.example to .env and fill in your values:
      cp .env.example .env
    • Copy OAI_CONFIG_LIST.json.example to OAI_CONFIG_LIST.json and fill in your OPENAI_API_KEY (most likely needed for the example task):
      cp OAI_CONFIG_LIST.json.example OAI_CONFIG_LIST.json
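For reference, OAI_CONFIG_LIST.json uses AutoGen's standard config-list format. A minimal single-entry example (the model name is only an illustration; use whichever model your key can access):

```json
[
    {
        "model": "gpt-4-1106-preview",
        "api_key": "YOUR_OPENAI_API_KEY"
    }
]
```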

All set! 🎉✨


Getting Started 🚀

  • To attempt to reproduce the functionality seen in the demo:
python autogen_modified_group_chat.py
  • If you would first like to see an example of the research/domain discovery functionality:
python example_research.py
  • If you want to see an example of the RAG functionality:
python example_rag.py
  • If you want to compare the demo functionality to standard autogen:
python autogen_standard_group_chat.py

Methodology 🔍

The evolution of this project has followed a simple methodology so far:

  1. Test increasingly complex tasks.
  2. Observe the current limitations of the agents/framework.
  3. Add specific agents/features to overcome those limitations.
  4. Generalize features to be more scalable.

For an example of a possible future evolution: discover which team of agents is most successful at accomplishing increasingly complex tasks, then provide those agent prompts as few-shot examples to a dynamic agent-generation prompt.
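That future step could be sketched as assembling a prompt from previously successful agent definitions. Everything below is hypothetical, not current project code; the message shape and field names are illustrative:

```python
def build_agent_generation_prompt(task, successful_agents):
    """Assemble a dynamic agent-generation prompt, using past
    successful agent definitions as few-shot examples."""
    examples = "\n\n".join(
        f"Task: {agent['task']}\nAgent prompt: {agent['prompt']}"
        for agent in successful_agents
    )
    return (
        "You design AI agents. Here are agent prompts that succeeded "
        "on earlier tasks:\n\n"
        f"{examples}\n\n"
        f"Task: {task}\nAgent prompt:"
    )

prompt = build_agent_generation_prompt(
    "Summarize a research paper",
    [{"task": "Write unit tests",
      "prompt": "You are a meticulous QA engineer who writes pytest suites."}],
)
```

The returned string would then be sent to the LLM, which completes the trailing "Agent prompt:" with a new agent definition tailored to the task.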

Contributing 🤝

Contributions are welcome! Please read our contributing guidelines for instructions on how to make a contribution.

TODO 📝

  • Expand research and discovery to support more resources (such as arxiv) and select the resource dynamically.
  • Support chat history overflow. This would resemble a MemGPT-like system where the overflow history stays summarized in the context, with relevant overflow data pulled in (via RAG) as needed.
  • If possible, support smaller context windows and open source LLMs.
  • Add ability to dynamically inject agents as needed.
  • Add ability to spawn off agent teams as needed.
  • Add support for communication and resource sharing between agent teams.
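The chat-history-overflow item above might look something like this in outline: when the history exceeds a token budget, move the oldest messages to an archive, keep a running summary in context, and pull archived messages back in via RAG when relevant. A toy sketch, with token counting approximated by word count and summarization stubbed out:

```python
def trim_history(history, max_tokens, summarize):
    """Move the oldest messages out of context until the history fits,
    returning (kept_messages, overflow_summary, archived_messages)."""
    count = lambda msg: len(msg["content"].split())  # crude token proxy
    archived = []
    while history and sum(map(count, history)) > max_tokens:
        archived.append(history.pop(0))  # evict oldest first
    summary = summarize(archived) if archived else ""
    return history, summary, archived

history = [
    {"content": "alpha beta gamma"},
    {"content": "delta"},
    {"content": "epsilon zeta"},
]
kept, summary, archived = trim_history(
    history,
    max_tokens=3,
    summarize=lambda msgs: f"{len(msgs)} older message(s) archived",
)
```

In a real system the summary would be LLM-generated and the archived messages indexed for retrieval, but the eviction loop is the core of the idea.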

Support ⭐

Love what we're building with AutoGen AGI? Star this project on GitHub! Your support motivates us, and each star brings more collaborators to the venture. More collaboration accelerates our journey towards advanced AI and brings us closer to AGI. Let's push the boundaries of AI together! ⭐

News 📰

License

MIT License

Copyright (c) 2023 MetaMind Solutions

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

autogen-agi's People

Contributors

jkheadley


autogen-agi's Issues

Using autogen-agi with ollama (orca-mini) results in an error, please help.

I have already installed ollama (from https://github.com/jmorganca/ollama) on Ubuntu Server 18.04 LTS.

ollama list

NAME                    ID              SIZE    MODIFIED
orca-mini:latest        2dbd9f439647    2.0 GB  2 hours ago

Testing with curl works:

curl http://127.0.0.1:11434/api/generate -d '{
>   "model": "orca-mini",
>   "prompt": "Why is the sky blue?"
> }'
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.937363861Z","response":" The","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.963548218Z","response":" sky","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.98967223Z","response":" appears","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:55.015511833Z","response":" blue","done":false}
..........

{"model":"orca-mini","created_at":"2024-01-05T16:46:57.631165519Z","response":"","done":true,"context":[31822,13,8458,31922,3244,31871,13,3838,397,363,7421,8825,342,5243,10389,5164,828,31843,9530,362,988,362,365,473,31843,13,13,8458,31922,9779,31871,13,12056,322,266,7661,4842,31902,13,13,8458,31922,13166,31871,13,347,7661,4725,4842,1177,266,1124,906,287,260,1249,1676,6697,27554,27289,31843,1408,21062,16858,266,4556,31876,31829,7965,31844,357,19322,8634,12285,859,362,11944,291,22329,16450,31843,1872,16450,640,3304,266,1954,288,484,11468,31844,504,266,13830,4842,23893,31829,685,18752,541,4083,661,266,3002,2729,23893,31829,31843,672,1901,342,662,382,871,550,389,266,7661,31844,382,820,541,287,266,4842,23893,31829,661,266,2729,3688,31844,540,1988,266,7661,2024,4842,289,459,31843],"total_duration":4769137102,"load_duration":1973124460,"prompt_eval_count":46,"prompt_eval_duration":126935000,"eval_count":96,"eval_duration":2667461000}

Then I installed autogen-agi in a Python 3.11 environment via Anaconda:

conda create --name autogen-agi python=3.11
conda activate autogen-agi
python --version
Python 3.11.5

pip --version
pip 23.3.1 from /home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/pip (python 3.11)

Test with Autogen-AGI

In .env file

OPENAI_API_KEY=openai-api-key

GOOGLE_SEARCH_API_KEY=google-search-api-key
GOOGLE_CUSTOM_SEARCH_ENGINE_ID=google-custom-search-engine-id
GITHUB_PERSONAL_ACCESS_TOKEN=github-personal-access-token

SERP_API_KEY=serp-api-key

# Recommended engine: google or serpapi
SEARCH_ENGINE=ddg

# Uncomment below if you want to use ollama on a remote host like google collab
# See: https://www.youtube.com/watch?v=Qa1h7ygwQq8&t=329s&ab_channel=TechwithMarco
OLLAMA_HOST=http://127.0.0.1:11434

In file OAI_CONFIG_LIST.json

[
    {
        "model": "orca-mini",
        "base_url": "http://127.0.0.1:11434/api/generate"
    }
]

Then I run python autogen_test.py

It raises an error:

llm_config_user_proxy: {'config_list': [{'model': 'orca-mini', 'base_url': 'http://127.0.0.1:11434/api/generate'}]}
llm_config_assistant: {'config_list': [{'model': 'orca-mini', 'base_url': 'http://127.0.0.1:11434/api/generate'}]}
user_proxy (to assistant):

Please execute a python script that prints 10 dad jokes.

--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/ssd-disk3-data/devteam/autogen-agi/autogen_test.py", line 51, in <module>
    user_proxy.initiate_chat(
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 544, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 344, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 475, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 887, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 619, in generate_oai_reply
    response = client.create(
               ^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/oai/client.py", line 244, in create
    response = self._completions_create(client, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/oai/client.py", line 314, in _completions_create
    response = completions.create(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_utils/_utils.py", line 299, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 556, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 1055, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 834, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 877, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: 404 page not found

Please suggest how to fix this issue. Thank you.
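(A likely explanation, offered as an assumption rather than something confirmed in this thread: AutoGen talks to the model through the OpenAI Python client, which appends the chat-completions path to base_url. Pointing it at ollama's native /api/generate endpoint therefore sends every request to a path ollama never serves, producing the 404. The URL mismatch can be shown without a running server:)

```python
def chat_completions_url(base_url):
    """Mimic how an OpenAI-style client joins base_url with the
    chat-completions route before POSTing."""
    return base_url.rstrip("/") + "/chat/completions"

# With the base_url from OAI_CONFIG_LIST.json above, the client posts to:
url = chat_completions_url("http://127.0.0.1:11434/api/generate")
print(url)  # .../api/generate/chat/completions -- a route ollama does not serve
```

The usual fix is to put an OpenAI-compatible endpoint in base_url (e.g. via a translation proxy such as litellm) rather than ollama's native generate API.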

Resuming chat when the last call was to the archive bot function duplicates embeddings

If the last call before a rate limit error (or another error that terminates execution) was a call to the archive bot, the embeddings remain in the prompt when the chat is resumed, but the agents are still waiting for the archive bot to return as completed. If the agent council is instructed to run the bot again, the embeddings are duplicated.

Ideal behaviour: embeddings are evaluated for retention in context outside of the chat history to ensure no duplication, or embedding de-duplication is performed on the chat history as part of the archive bot's normal function.
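(One way the de-duplication half of that could work: fingerprint each embedded chunk and drop repeats when the history is re-assembled on resume. A rough sketch; the message shape and the `is_embedding` flag are hypothetical, not the project's actual data model:)

```python
import hashlib

def dedupe_embedded_chunks(messages):
    """Drop repeated embedded-content chunks from a resumed chat
    history, keeping only the first occurrence of each chunk."""
    seen, deduped = set(), []
    for msg in messages:
        digest = hashlib.sha256(msg["content"].encode()).hexdigest()
        if msg.get("is_embedding") and digest in seen:
            continue  # duplicate chunk injected by a re-run archive call
        seen.add(digest)
        deduped.append(msg)
    return deduped

history = [
    {"content": "chunk A", "is_embedding": True},
    {"content": "please continue"},
    {"content": "chunk A", "is_embedding": True},  # re-injected after resume
]
assert len(dedupe_embedded_chunks(history)) == 2
```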

Error when running example_rag.py

Hi there - first congrats on putting this together, looks like a very promising project.

I tried running example_rag with some of my own PDFs, and it seemed to run fine as far as ingesting the documents and sending the info to ChatGPT - but an error occurred in whatever the next step is, specifically "'OpenAIWrapper' object has no attribute 'extract_text_or_function_call'":

DEBUG:openai._base_client:HTTP Request: POST https://api.openai.com/v1/chat/completions "200 OK"
ERROR:utils.rag_tools:Error in RAG fusion: 'OpenAIWrapper' object has no attribute 'extract_text_or_function_call'
Traceback (most recent call last):
  File "C:\DEV\AutoGenProjects\autogen-agi\example_rag.py", line 55, in <module>
    main()
  File "C:\DEV\AutoGenProjects\autogen-agi\example_rag.py", line 39, in main
    answer = get_informed_answer(
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\rag_tools.py", line 536, in get_informed_answer
    nodes = get_retrieved_nodes(
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\rag_tools.py", line 339, in get_retrieved_nodes
    query_variations = rag_fusion(query_str, query_context)
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\rag_tools.py", line 288, in rag_fusion
    rag_fusion_response = light_gpt4_wrapper_autogen(
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\misc.py", line 142, in light_gpt4_wrapper_autogen
    return light_gpt_wrapper_autogen(client, query, return_json, system_message)
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\misc.py", line 128, in light_gpt_wrapper_autogen
    response = client.extract_text_or_function_call(response)
AttributeError: 'OpenAIWrapper' object has no attribute 'extract_text_or_function_call'

This could be my error, as I noticed you were relying on autogen v0.2.0b4 - I actually installed v0.2.2, but there doesn't seem to be much difference in that function between 0.2.0b4 and 0.2.2. My config JSON file has gpt-4-1106-preview and gpt-3.5-turbo in it. I see in rag_tools you're defining LLM_CONFIGS and then referencing 'llm4' in the get_informed_answer function.

Is my use of 0.2.2 possibly the issue, or do I need to define an LLM config key/value pair named just 'gpt-4'?

Local LLMs

Allow specifying a URL so it can be run with local LLMs like Ooga Booga.

Rate limit error ungracefully handled

openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4-1106-preview in organization org-BLAHBLAHBLAH on tokens_usage_based per day: Limit 1500000, Used 1497556, Requested 20136. Please try again in 16m59.059s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens_usage_based', 'param': None, 'code': 'rate_limit_exceeded'}}

The error above terminates execution. Ideal behaviour: query the user for a sleep duration before resuming execution; no input defaults to 3x the time given in the message, so here 45+ minutes.
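(The suggested behaviour could be sketched as: parse the "Please try again in ..." duration out of the error message, then sleep a multiple of it when the user gives no input. A rough sketch of the parsing step; the message format is whatever OpenAI happens to emit and may change:)

```python
import re

def suggested_sleep_seconds(error_message, multiplier=3):
    """Pull the 'try again in XmY.Zs' duration out of a rate-limit
    message and scale it by a safety multiplier."""
    match = re.search(r"try again in (?:(\d+)m)?([\d.]+)s", error_message)
    if not match:
        return None  # unrecognized format; fall back to asking the user
    minutes = int(match.group(1) or 0)
    seconds = float(match.group(2))
    return (minutes * 60 + seconds) * multiplier

wait = suggested_sleep_seconds("... Please try again in 16m59.059s. ...")
# wait is about 3057 seconds, i.e. 3x the quoted 16m59s (roughly 51 minutes)
```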
