metamind-ai / autogen-agi

AutoGen AGI: Advancing AI agents using AutoGen towards AGI capabilities. Explore cutting-edge enhancements in group chat dynamics, decision-making, and complex task proficiency. Join our journey in shaping AI's future!

Home Page: https://www.metamindsolutions.ai/

License: MIT License

Python 100.00%
agi ai autogen machine-learning

autogen-agi's Issues

Using autogen-agi with ollama (orca-mini) results in an error, please help.

I have already installed ollama (from https://github.com/jmorganca/ollama) on Ubuntu Server 18.04 LTS.

ollama list

NAME                    ID              SIZE    MODIFIED
orca-mini:latest        2dbd9f439647    2.0 GB  2 hours ago

Testing with curl works fine:

curl http://127.0.0.1:11434/api/generate -d '{
>   "model": "orca-mini",
>   "prompt": "Why is the sky blue?"
> }'
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.937363861Z","response":" The","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.963548218Z","response":" sky","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:54.98967223Z","response":" appears","done":false}
{"model":"orca-mini","created_at":"2024-01-05T16:46:55.015511833Z","response":" blue","done":false}
..........

{"model":"orca-mini","created_at":"2024-01-05T16:46:57.631165519Z","response":"","done":true,"context":[31822,13,8458,31922,3244,31871,13,3838,397,363,7421,8825,342,5243,10389,5164,828,31843,9530,362,988,362,365,473,31843,13,13,8458,31922,9779,31871,13,12056,322,266,7661,4842,31902,13,13,8458,31922,13166,31871,13,347,7661,4725,4842,1177,266,1124,906,287,260,1249,1676,6697,27554,27289,31843,1408,21062,16858,266,4556,31876,31829,7965,31844,357,19322,8634,12285,859,362,11944,291,22329,16450,31843,1872,16450,640,3304,266,1954,288,484,11468,31844,504,266,13830,4842,23893,31829,685,18752,541,4083,661,266,3002,2729,23893,31829,31843,672,1901,342,662,382,871,550,389,266,7661,31844,382,820,541,287,266,4842,23893,31829,661,266,2729,3688,31844,540,1988,266,7661,2024,4842,289,459,31843],"total_duration":4769137102,"load_duration":1973124460,"prompt_eval_count":46,"prompt_eval_duration":126935000,"eval_count":96,"eval_duration":2667461000}

Then I installed autogen-agi in a Python 3.11 environment via Anaconda:

conda create --name autogen-agi python=3.11
conda activate autogen-agi
python --version
Python 3.11.5

pip --version
pip 23.3.1 from /home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/pip (python 3.11)

Test with Autogen-AGI

In the .env file:

OPENAI_API_KEY=openai-api-key

GOOGLE_SEARCH_API_KEY=google-search-api-key
GOOGLE_CUSTOM_SEARCH_ENGINE_ID=google-custom-search-engine-id
GITHUB_PERSONAL_ACCESS_TOKEN=github-personal-access-token

SERP_API_KEY=serp-api-key

# Recommended engine: google or serpapi
SEARCH_ENGINE=ddg

# Uncomment below if you want to use ollama on a remote host like google collab
# See: https://www.youtube.com/watch?v=Qa1h7ygwQq8&t=329s&ab_channel=TechwithMarco
OLLAMA_HOST=http://127.0.0.1:11434

In the file OAI_CONFIG_LIST.json:

[
    {
        "model": "orca-mini",
        "base_url": "http://127.0.0.1:11434/api/generate"
    }
]

Then I run python autogen_test.py, and it raises an error:

llm_config_user_proxy: {'config_list': [{'model': 'orca-mini', 'base_url': 'http://127.0.0.1:11434/api/generate'}]}
llm_config_assistant: {'config_list': [{'model': 'orca-mini', 'base_url': 'http://127.0.0.1:11434/api/generate'}]}
user_proxy (to assistant):

Please execute a python script that prints 10 dad jokes.

--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/ssd-disk3-data/devteam/autogen-agi/autogen_test.py", line 51, in <module>
    user_proxy.initiate_chat(
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 544, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 344, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 475, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 887, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 619, in generate_oai_reply
    response = client.create(
               ^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/oai/client.py", line 244, in create
    response = self._completions_create(client, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/autogen/oai/client.py", line 314, in _completions_create
    response = completions.create(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_utils/_utils.py", line 299, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 556, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 1055, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 834, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/devteam/anaconda3/envs/autogen-agi/lib/python3.11/site-packages/openai/_base_client.py", line 877, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: 404 page not found

Please suggest how to fix this issue. Thank you.
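A likely cause of the 404: base_url points at Ollama's native /api/generate endpoint, but autogen's OpenAI client posts to an OpenAI-style chat/completions path, which does not exist under /api/generate. As a sketch (assuming an Ollama build recent enough to expose its OpenAI-compatible API under /v1; older builds need a proxy layer such as LiteLLM in front instead), the config could point at /v1:

```python
# Sketch: point autogen at Ollama's OpenAI-compatible endpoint.
# Assumes Ollama serves an OpenAI-style API under /v1 on this host;
# the api_key value is a placeholder because Ollama ignores it.
config_list = [
    {
        "model": "orca-mini",
        "base_url": "http://127.0.0.1:11434/v1",  # /v1, not /api/generate
        "api_key": "ollama",  # dummy value; the client requires some key
    }
]

llm_config = {"config_list": config_list}
```

The same dict could be written into OAI_CONFIG_LIST.json; the key detail is the path portion of base_url.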

Rate limit error ungracefully handled

openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4-1106-preview in organization org-BLAHBLAHBLAH on tokens_usage_based per day: Limit 1500000, Used 1497556, Requested 20136. Please try again in 16m59.059s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens_usage_based', 'param': None, 'code': 'rate_limit_exceeded'}}

The error above terminates execution. Ideal behaviour: query the user for a sleep duration before resuming execution; no input defaults to 3x the time given in the message, so here 45+ minutes.
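The proposed default could be sketched by parsing the "Please try again in 16m59.059s" hint out of the error message and sleeping three times that long. The helper names here are hypothetical, not part of autogen-agi:

```python
import re
import time

def parse_retry_seconds(message: str) -> float:
    """Extract the 'try again in 16m59.059s' hint from a rate-limit message."""
    match = re.search(r"try again in (?:(\d+)m)?([\d.]+)s", message)
    if not match:
        return 60.0  # fallback when no hint is present
    minutes = int(match.group(1) or 0)
    seconds = float(match.group(2))
    return minutes * 60 + seconds

def wait_for_rate_limit(message: str, multiplier: float = 3.0, sleep=time.sleep) -> float:
    """Sleep for `multiplier` times the suggested wait, then return the duration."""
    duration = parse_retry_seconds(message) * multiplier
    sleep(duration)
    return duration
```

For the message above, parse_retry_seconds returns roughly 1019 seconds (about 17 minutes), so the 3x default waits just over 50 minutes, consistent with the "45+ minutes" estimate.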

Error when running example_rag.py

Hi there - first congrats on putting this together, looks like a very promising project.

I tried running example_rag with some of my own PDFs, and it seemed to run fine as far as ingesting the documents and sending the info to ChatGPT, but an error occurred in the next step, specifically "'OpenAIWrapper' object has no attribute 'extract_text_or_function_call'":

DEBUG:openai._base_client:HTTP Request: POST https://api.openai.com/v1/chat/completions "200 OK"
ERROR:utils.rag_tools:Error in RAG fusion: 'OpenAIWrapper' object has no attribute 'extract_text_or_function_call'
Traceback (most recent call last):
  File "C:\DEV\AutoGenProjects\autogen-agi\example_rag.py", line 55, in <module>
    main()
  File "C:\DEV\AutoGenProjects\autogen-agi\example_rag.py", line 39, in main
    answer = get_informed_answer(
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\rag_tools.py", line 536, in get_informed_answer
    nodes = get_retrieved_nodes(
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\rag_tools.py", line 339, in get_retrieved_nodes
    query_variations = rag_fusion(query_str, query_context)
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\rag_tools.py", line 288, in rag_fusion
    rag_fusion_response = light_gpt4_wrapper_autogen(
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\misc.py", line 142, in light_gpt4_wrapper_autogen
    return light_gpt_wrapper_autogen(client, query, return_json, system_message)
  File "C:\DEV\AutoGenProjects\autogen-agi\utils\misc.py", line 128, in light_gpt_wrapper_autogen
    response = client.extract_text_or_function_call(response)
AttributeError: 'OpenAIWrapper' object has no attribute 'extract_text_or_function_call'

This could be my error, as I noticed you were relying on autogen v0.2.0b4 while I actually installed v0.2.2; however, there doesn't seem to be much difference in that function between 0.2.0b4 and 0.2.2. My config JSON file has gpt-4-1106-preview and gpt-3.5-turbo in it. I see in rag_tools you're defining LLM_CONFIGS and then referencing 'llm4' in the get_informed_answer function.

Is my use of 0.2.2 possibly the issue, or do I need to define an LLM config key/value pair named just 'gpt-4'?

Local LLMs

Allow specifying the URL used for inference so autogen-agi can run with local LLM backends like oobabooga's text-generation-webui.
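As a sketch of what that could look like, assuming the local server exposes an OpenAI-compatible API (the environment variable name, default port, and model name here are all assumptions, not existing options):

```python
import os

# Sketch: let users point autogen at any local OpenAI-compatible server
# via an environment variable, falling back to a local default.
base_url = os.environ.get("LOCAL_LLM_BASE_URL", "http://127.0.0.1:5000/v1")

config_list = [
    {
        "model": "local-model",
        "base_url": base_url,
        "api_key": "not-needed",  # local servers typically ignore the key
    }
]
```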

Resume chat when last call was to archive bot function duplicates embeddings

If the last call before a rate-limit (or other) error terminated execution was a call to the archive bot, the embeddings remain in the prompt when the chat is resumed, but the agents are still waiting for the archive bot to return as completed. If the agent council is then instructed to run the bot again, the embeddings are duplicated.

Ideal behaviour: embeddings are evaluated for retention in context outside of the chat history to ensure no duplication, or embedding de-duplication is performed on the chat history as part of the archive bot's normal function.
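The second option could be sketched as a content-hash pass over the chat history before resuming: keep the first occurrence of each content block, drop verbatim repeats. The function name and message shape are assumptions, not autogen-agi code:

```python
import hashlib

def dedupe_retrieved_chunks(messages):
    """Drop verbatim-repeated content blocks from a chat history.

    `messages` is a list of {"role": ..., "content": ...} dicts; the first
    occurrence of each content block is kept and later exact repeats are
    removed, so re-running the archive bot cannot double up embeddings.
    """
    seen = set()
    deduped = []
    for message in messages:
        digest = hashlib.sha256(message["content"].encode("utf-8")).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        deduped.append(message)
    return deduped
```

A real implementation would likely restrict the pass to messages tagged as retrieval output, since short ordinary replies ("ok", "continue") can legitimately repeat.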
