maplemx / agently

[AI Agent Application Development Framework] - 🚀 Build AI agent native applications with very little code 💬 Interact with AI agents in code easily using structured data and chained-call syntax 🧩 Enhance AI agents with plugins instead of rebuilding a whole new agent

Home Page: http://agently.tech

License: Apache License 2.0

Python 45.14% Shell 0.08% Jupyter Notebook 54.78%
agent agent-framework python framework agent-based-framework chatglm ernie google-gemini gpt llm-agent

agently's Introduction

Agently 3.0 Guidebook

[new] Chinese step-by-step development guide: click here to visit and unlock complex LLM application development skills, one step at a time

📥 How to use: pip install -U Agently

💡 Ideas / Bug Report: Report Issues Here

📧 Email Us: [email protected]

👾 Discord Group:

Click Here to Join or Scan the QR Code Down Below

image

💬 WeChat Group (join the WeChat group):

Click Here to Apply or Scan the QR Code Down Below

image

If you like this project, please ⭐️, thanks.

Resources Menu

Colab Documents:

Code Examples:

To build agent in many different fields:

Or, to call agent instance abilities in code logic to help:

Explore More: Visit Demonstration Playground

Installation & Preparation

Install Agently Python Package:

pip install -U Agently

Then we are ready to go!

What is Agently?

Agently is a development framework that helps developers build AI agent native applications really fast.

You can use and build AI agents in your code in an extremely simple way.

You can create an AI agent instance and then interact with it like calling a function, in very few lines of code, as shown below.

Click the run button below and witness the magic. It's just that simple:

# Import and Init Settings
import Agently
agent = Agently.create_agent()
agent\
    .set_settings("current_model", "OpenAI")\
    .set_settings("model.OpenAI.auth", { "api_key": "" })

# Interact with the agent instance like calling a function
result = agent\
    .input("Give me 3 words")\
    .output([("String", "one word")])\
    .start()
print(result)
['apple', 'banana', 'carrot']

You may notice that when we print the value of result, the value is a list in exactly the format of the parameter we passed to .output().

In the Agently framework we've done a lot of work like this to make it easier for application developers to integrate agent instances into their business code. This allows application developers to focus on how to build their business logic instead of figuring out how to cater to language models or how to keep models satisfied.
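The list in result mirrors the schema passed to .output(). As a rough illustration of how such a schema dict could be rendered into a format instruction for the model (a hypothetical sketch with an invented helper name, not Agently's actual internals):

```python
# Hypothetical sketch: turn an .output()-style schema into a format hint.
# render_output_schema is an invented name, not part of Agently's API.
def render_output_schema(schema) -> str:
    if isinstance(schema, tuple):
        # a (type, description) pair describes one value
        type_name, desc = (schema + ("",))[:2]
        return f"<{type_name}> // {desc}" if desc else f"<{type_name}>"
    if isinstance(schema, list):
        # a list schema means "an array of items shaped like this"
        return "[" + ", ".join(render_output_schema(item) for item in schema) + ", ...]"
    if isinstance(schema, dict):
        # a dict schema maps keys to nested schemas
        inner = ", ".join(f'"{k}": {render_output_schema(v)}' for k, v in schema.items())
        return "{" + inner + "}"
    return str(schema)

print(render_output_schema([("String", "one word")]))
# → [<String> // one word, ...]
```

The same recursive idea explains why the parsed reply comes back shaped exactly like the schema: the model is asked to emit JSON matching it, and the framework parses that JSON back into Python values.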

Easy to Use: Develop AI Agent Native Application Modules in an Incredibly Easy Way

What is an AI Agent Native Application?

When we start using AI agents in code to help us handle business logic, we can easily sense that there must be some differences from the traditional way of developing software. But what are the differences, exactly?

I think the key point is using an AI agent to solve the problem instead of hand-written code logic.

In an AI agent native application, we put an AI agent instance into our code, then ask it to execute tasks and solve problems with natural language or natural-language-like expressions.

"Ask, get response" takes the place of the traditional "define problem - program - code to make it happen".

Can it really be as easy as we say?

Sure! The Agently framework provides an easy way to interact with AI agent instances, making application module development quick and easy.

Below are two CLI application demos from two totally different domains, each built with just 64 lines of code powered by Agently.

DEMO 1: SQL Generator

DEMO VIDEO

SQL_generator-480p.mov

CODE

import Agently
agent_factory = Agently.AgentFactory(is_debug = False)

agent_factory\
    .set_settings("current_model", "OpenAI")\
    .set_settings("model.OpenAI.auth", { "api_key": "" })

agent = agent_factory.create_agent()

meta_data = {
    "table_meta" : [
        {
            "table_name": "user",
            "columns": [
                { "column_name": "user_id", "desc": "identity of user", "value type": "Number" },
                { "column_name": "gender", "desc": "gender of user", "value type": ["male", "female"] },
                { "column_name": "age", "desc": "age of user", "value type": "Number" },
                { "column_name": "customer_level", "desc": "level of customer account", "value type": [1,2,3,4,5] },
            ]
        },
        {
            "table_name": "order",
            "columns": [
                { "column_name": "order_id", "desc": "identity of order", "value type": "Number" },
                { "column_name": "customer_user_id", "desc": "identity of customer, same value as user_id", "value type": "Number" },
                { "column_name": "item_name", "desc": "item name of this order", "value type": "String" },
                { "column_name": "item_number", "desc": "how many items to buy in this order", "value type": "Number" },
                { "column_name": "price", "desc": "how much of each item", "value type": "Number" },
                { "column_name": "date", "desc": "what date did this order happen", "value type": "Date" },
            ]
        },
    ]
}

is_finish = False
while not is_finish:
    question = input("What do you want to know: ")
    show_thinking = None
    while str(show_thinking).lower() not in ("y", "n"):
        show_thinking = input("Do you want to observe the thinking process? [Y/N]: ")
    show_thinking = False if show_thinking.lower() == "n" else True
    print("[Generating...]")
    result = agent\
        .input({
            "table_meta": meta_data["table_meta"],
            "question": question
        })\
        .instruct([
            "output SQL to query the database according to meta data:{table_meta} that can answer the question:{question}",
            "output language: English",
        ])\
        .output({
            "thinkings": ["String", "Your problem solving thinking step by step"],
            "SQL": ("String", "final SQL only"),
        })\
        .start()
    if show_thinking:
        thinking_process = "\n".join(result["thinkings"])
        print("[Thinking Process]\n", thinking_process)
    print("[SQL]\n", result["SQL"])
    while str(is_finish).lower() not in ("y", "n"):
        is_finish = input("Do you want to quit?[Y to quit / N to continue]: ")
    is_finish = False if is_finish.lower() == "n" else True
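In a real application the generated SQL would usually be executed, so a basic read-only check before running it is prudent. This is a hypothetical follow-up sketch, not part of the demo; is_read_only_select is an invented helper:

```python
import re

def is_read_only_select(sql: str) -> bool:
    """Return True only for statements that start with SELECT and contain
    no data-modifying keywords. A coarse guard, not a full SQL parser."""
    stmt = sql.strip().rstrip(";").lower()
    forbidden = ("insert", "update", "delete", "drop", "alter", "truncate")
    return stmt.startswith("select") and not any(
        re.search(rf"\b{kw}\b", stmt) for kw in forbidden
    )

print(is_read_only_select("SELECT * FROM user WHERE age > 30;"))  # → True
print(is_read_only_select("DROP TABLE user;"))                    # → False
```

For anything beyond a demo, running model-generated SQL against a read-only database user is a safer design than string filtering alone.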

DEMO 2: Character Creator (and Chat with the Character)

import Agently

agent_factory = Agently.AgentFactory(is_debug = False)

agent_factory\
    .set_settings("current_model", "OpenAI")\
    .set_settings("model.OpenAI.auth", { "api_key": "" })

writer_agent = agent_factory.create_agent()
roleplay_agent = agent_factory.create_agent()

# Create Character
character_desc = input("Describe the character you want to talk to with a few words: ")
is_accepted = ""
suggestions = ""
last_time_character_setting = {}
while is_accepted.lower() != "y":
    is_accepted = ""
    input_dict = { "character_desc": character_desc }
    if suggestions != "":
        input_dict.update({ "suggestions": suggestions })
        input_dict.update({ "last_time_character_setting": last_time_character_setting })
    setting_result = writer_agent\
        .input(input_dict)\
        .instruct([
            "Design a character based on {input.character_desc}.",
            "if {input.suggestions} exist, rewrite {input.last_time_character_setting} followed {input.suggestions}."
          ])\
        .output({
            "name": ("String",),
            "age": ("Number",),
            "character": ("String", "Descriptions about the role of this character, the actions he/she likes to take, his/her behaviour habits, etc."),
            "belief": ("String", "Belief or mottos of this character"),
            "background_story": [("String", "one part of background story of this character")],
            "response_examples": [{ "Question": ("String", "question that user may ask this character"), "Response": ("String", "short and quick response that this character will say.") }],
        })\
        .on_delta(lambda data: print(data, end=""))\
        .start()
    while is_accepted.lower() not in ("y", "n"):
        is_accepted = input("Are you satisfied with this character role setting? [Y/N]: ")
    if is_accepted.lower() == "n":
        suggestions = input("Do you have some suggestions about this setting? (leave this empty will redo all the setting): ")
        if suggestions != "":
            last_time_character_setting = setting_result
print("[Start Loading Character Setting to Agent...]")
# Load Character to Agent then Chat with It
for key, value in setting_result.items():
    roleplay_agent.set_role(key, value)
print("[Loading is Done. Let's Start Chatting](input '#exit' to quit)")
roleplay_agent.active_session()
chat_input = ""
while True:
    chat_input = input("YOU: ")
    if chat_input == "#exit":
        break
    print(f"{ setting_result['name'] }: ", end="")
    roleplay_agent\
        .input(chat_input)\
        .instruct("Response {chat_input} follow your {ROLE} settings. Response like in a CHAT not a query or request!")\
        .on_delta(lambda data: print(data, end=""))\
        .start()
    print("")
print("Bye👋~")

Easy to Enhance and Update: Enhance AI Agents using Plugins instead of Rebuilding a Whole New Agent

Why does Agently care so much about plugin-to-enhance?

The post LLM Powered Autonomous Agents by Lilian Weng from OpenAI gives a really good concept of the basic structure of an AI agent, but it does not explain how to build one.

Some awesome projects like LangChain and Camel-AI present their own ideas about how to build AI agents. In these projects, agents are classified into many different types according to the task of the agent or its thinking process.

But if we follow these ideas, we must build a whole new agent whenever we want an agent to work in a different domain. Even though all these projects provide a ChatAgent base class or something similar, new agent subclasses keep being built and more and more specific agent types are produced. As the number of agent types grows, one day, boom! There'll be too many types of agents for developers to choose from and for agent platforms to manage. They'll be hard to search, hard to choose, hard to manage and hard to update.

So the Agently team could not stop wondering whether there's a better way to enhance agents, one that makes it easy for all developers to participate.

Also, an AI agent's structure and components seem simple and easy to build at present. But if we look further ahead, each component will become more complex (memory management, for example) and more and more new components will be added (sensors, for example).

What if we stop building the agent as an undivided whole and instead separate it into a central structure that manages the runtime context data and the runtime process, connected with different plugins that enhance its abilities at runtime to suit different usage scenarios? "Divide and conquer", as the famous engineering motto says.

We made this happen in Agently 3.0, and during its alpha test we were happy to see that this plugin-to-enhance design not only solved the problem of rebuilding a whole new agent, but also helped component developers focus only on the targets and questions that their component cares about, without distraction. That makes component plugin development really easy and the code simple.
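The "central structure + plugins" idea described above can be sketched in a toy form: a center owns the runtime context and the request pipeline, and each plugin registers hooks and alias methods into it. All names here (AgentCenter, RolePlugin) are invented for illustration and are not Agently's real API:

```python
# Toy sketch of the plugin-to-enhance design; invented names throughout.
class AgentCenter:
    def __init__(self):
        self.runtime_ctx = {}      # shared runtime context data
        self._prefix_hooks = []    # plugin methods run before each request

    def install(self, plugin_cls):
        plugin = plugin_cls(self)
        exported = plugin.export()
        if exported.get("prefix"):
            self._prefix_hooks.append(exported["prefix"])
        for alias, func in exported.get("alias", {}).items():
            setattr(self, alias, func)   # expose plugin methods on the agent
        return self

    def build_request(self):
        data = {}
        for hook in self._prefix_hooks:
            data.update(hook())
        return data

class RolePlugin:
    def __init__(self, center):
        self.center = center
    def set_role_name(self, name):
        self.center.runtime_ctx["role_name"] = name
        return self.center
    def _prefix(self):
        return {"role": self.center.runtime_ctx.get("role_name")}
    def export(self):
        return {"prefix": self._prefix, "alias": {"set_role_name": self.set_role_name}}

agent = AgentCenter().install(RolePlugin)
agent.set_role_name("Alice")
print(agent.build_request())   # → {'role': 'Alice'}
```

Swapping in a different plugin changes the agent's abilities without touching the center, which is the point of the design: the center stays stable while plugins evolve independently.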

Agent Structure that Agently Framework Can Help to Build

image

EXAMPLE 1: Source Code of Agent Component - Role

Here's an example that shows how to develop an agent component plugin in the Agently framework. Because the runtime context data management work has already been done by the framework, plugin developers can use many runtime tools to help build the agent component plugin. That makes the work pretty easy.

⚠️: The code below is a plugin code example; it works inside the framework and cannot be run separately.

from .utils import ComponentABC
from Agently.utils import RuntimeCtxNamespace

# Create Plugin Class comply with Abstract Basic Class
class Role(ComponentABC):
    def __init__(self, agent: object):
        self.agent = agent
        # The framework passes runtime_ctx and storage through so the component can use them
        self.role_runtime_ctx = RuntimeCtxNamespace("role", self.agent.agent_runtime_ctx)
        self.role_storage = self.agent.global_storage.table("role")

    # Methods defined by this component
    # They update runtime_ctx, which follows the agent instance's lifecycle
    def set_name(self, name: str, *, target: str=None):
        self.role_runtime_ctx.set("NAME", name)
        return self.agent

    def set(self, key: any, value: any=None, *, target: str=None):
        if value is not None:
            self.role_runtime_ctx.set(key, value)
        else:
            self.role_runtime_ctx.set("DESC", key)
        return self.agent

    def update(self, key: any, value: any=None, *, target: str=None):
        if value is not None:
            self.role_runtime_ctx.update(key, value)
        else:
            self.role_runtime_ctx.update("DESC", key)
        return self.agent

    def append(self, key: any, value: any=None, *, target: str=None):
        if value is not None:
            self.role_runtime_ctx.append(key, value)
        else:
            self.role_runtime_ctx.append("DESC", key)
        return self.agent

    def extend(self, key: any, value: any=None, *, target: str=None):
        if value is not None:
            self.role_runtime_ctx.extend(key, value)
        else:
            self.role_runtime_ctx.extend("DESC", key)
        return self.agent

    # Or save to / load from storage, which keeps the data in file storage or a database
    def save(self, role_name: str=None):
        if role_name is None:
            role_name = self.role_runtime_ctx.get("NAME")
        if role_name is not None and role_name != "":
            self.role_storage\
                .set(role_name, self.role_runtime_ctx.get())\
                .save()
            return self.agent
        else:
            raise Exception("[Agent Component: Role] Role attr 'NAME' must be stated before save. Use .set_role_name() to specify it.")

    def load(self, role_name: str):
        role_data = self.role_storage.get(role_name)
        for key, value in role_data.items():
            self.role_runtime_ctx.update(key, value)
        return self.agent

    # Pass the data to request standard slots on Prefix Stage
    def _prefix(self):
        return {
            "role": self.role_runtime_ctx.get(),
        }

    # Export component plugin interface to be called in agent runtime process
    def export(self):
        return {
            "early": None, # method to be called on Early Stage
            "prefix": self._prefix, # method to be called on Prefix Stage
            "suffix": None, # method to be called on Suffix Stage
            # Alias that application developers can use in agent instance
            # Example:
            # "alias": { "set_role_name": { "func": self.set_name } }
            # => agent.set_role_name("Alice")
            "alias": {
                "set_role_name": { "func": self.set_name },
                "set_role": { "func": self.set },
                "update_role": { "func": self.update },
                "append_role": { "func": self.append },
                "extend_role": { "func": self.extend },
                "save_role": { "func": self.save },
                "load_role": { "func": self.load },
            },
        }

# Export to Plugins Dir Auto Scanner
def export():
    return ("Role", Role)

EXAMPLE 2: Install Plugins outside the Package

The Agently framework also allows plugin developers to pack their plugins outside the framework's main package and share their plugin packages individually with other developers. Developers who want to use a specific plugin can just download the plugin package, unpack the files into their working folder, and install the plugin easily.

The code down below shows how easy this installation can be.

⚠️: The code below is a plugin install example; it only works after you unpack a plugin folder into your working folder.

import Agently
# Import install method from plugin folder
from session_plugin import install
# Then install
install(Agently)
# That's all
# Now your agent can use new abilities enhanced by new plugin

Here's also a real case: Agently v3.0.1 had an issue that made the Session component unavailable. We used a plugin package update to fix the bug without updating the whole framework package.
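As a rough sketch of what an install() hook like this could do under the hood (entirely hypothetical; FakeFramework, register_plugin, and this Session stand-in are invented names, and the real session_plugin internals may differ):

```python
# Hypothetical sketch of a plugin package's install() hook; invented names.
class FakeFramework:
    """Stand-in for the framework module passed to install()."""
    agent_component_plugins = {}

    @classmethod
    def register_plugin(cls, name, plugin_cls):
        cls.agent_component_plugins[name] = plugin_cls

class Session:
    """Stand-in for the packaged Session component plugin class."""

def install(framework):
    # The plugin package only needs a reference to the framework object;
    # registration makes the component available to every agent it creates.
    framework.register_plugin("Session", Session)

install(FakeFramework)
print("Session" in FakeFramework.agent_component_plugins)  # → True
```

The appeal of this pattern is that shipping a fix means shipping one small plugin package, not a new release of the whole framework.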

Want to Dive Deeper?

OK. That's the general introduction about Agently AI agent development framework.

If you want to dive deeper, you can also visit these documents/links:


Don't forget ⭐️ this repo if you like our work.

Thanks and happy coding!

agently's People

Contributors

byh0215, elsamto, eltociear, jsconfig, le0zh, maplemx, sujeek


agently's Issues

Event loop is closed

Running the samples in PyCharm always raises thread-related errors. Does this async code need to be optimized?

Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x000001B09421C4C0>
Traceback (most recent call last):
File "D:\Python\Python39\lib\asyncio\proactor_events.py", line 116, in __del__
self.close()
File "D:\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close
self._loop.call_soon(self._call_connection_lost, None)
File "D:\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon
self._check_closed()
File "D:\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
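Not from the original thread, but a commonly cited general workaround for this Windows-specific asyncio shutdown error is to switch from the default Proactor event loop to the selector policy before running any async code:

```python
import asyncio
import sys

# On Windows, the Proactor transport sometimes logs "Event loop is closed"
# during interpreter shutdown; the selector policy avoids that code path.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

async def main():
    return "ok"

print(asyncio.run(main()))  # → ok
```

This is a general asyncio note, not an official Agently fix; whether it applies depends on how the framework drives its event loops internally.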

Q&A bug when two agents run inside one loop

I use three agents in total here:

  1. A routing agent: judges the user's intent and decides whether the policy/process query should be called;
  2. A chit-chat agent: just a normal agent with some configuration;
  3. An agent with knowledge-base vector search.

from agent_factory_api import agent_factory_zhipu
from agent_tool.milvus_tool import get_search_result

# Routing agent: makes a basic judgment on the user's question and dispatches it to the right agent
def route_agent(user_input, history):
    route = (
        agent_factory_zhipu.create_agent('rout')
        .set_settings("model.ZhipuAI.options", {"model": "glm-4"})
        .set_role("role", "你是一个路由助手,您的工作是结合上下文帮助用户找到合适的Agent代理来回答用户的问题")
        .input(user_input)
        .chat_history(history)
        .output({
            "intention": ("制度流程查询 | 绘图 | 闲聊 | 一般提问", "从'制度流程查询','绘图','闲聊','一般提问'中选择一项作为你对{user_input}的意图的判断结果")
        }).start()
    )
    print('')
    print(route)
    print('\n')
    print('路由判断', route['intention'])
    return route['intention']

# Chit-chat agent
def get_agent_response_chatting(question: str, history):
    agent_standard_senior = (
        agent_factory_zhipu.create_agent('standard_role')
        .set_settings("model.ZhipuAI.options", {"model": "glm-4"})
    )
    result = (
        # Global configuration
        agent_standard_senior.general("Output regulations", "根据提问方式匹配对应的输出语言,默认是中文")
        # Role configuration
        .role({
            "Name": "数智+",
            "Task": "作为同事,根据上下文为用户解答问题,提供想法",
            "Position": '销售管理运营中心运营助理',
            "Model": '由销售管理运营中心微调的基于Transformer编码器-解码器架构的大语言模型'
        })
        # user_info: information about the user that the agent should know
        .user_info("The user you are talking to is a colleague of yours who is not very familiar with the system and issues. He needs your assistance in searching for information and answering questions")
        # abstract: summary of previous (especially longer) conversations
        .abstract(None)
        # chat_history: a list of chat records in OpenAI message-list format
        ## Three roles are supported:
        ## [{ "role": "system", "content": "" },
        ##  { "role": "assistant", "content": "" },
        ##  { "role": "user", "content": "" }]
        .chat_history(history)
        # input: input information related to this request
        .input({
            "question": question
        })
        # info: extra supplementary information for this request
        .info("用户的部门信息", ["", "", ""])
        .info("相关关键词", ["", ""])
        # instruct: action guidance for this request
        .instruct([
            "请使用{reply_style_expect}的回复风格,回复{question}提出的问题",
        ])
        # output: format and content requirements for this request's output
        .output({
            "reply": ("str", "对{question}的直接回复"),
            "next_questions": ([
                ("str",
                 "根据{reply}内容,结合{user_info}提供的用户信息," +
                 "给用户推荐的可以进一步提问的问题"
                )], "不少于3个"),
        })
        # start: begin the main interaction request
        .start()
    )
    return result

# Policy & process query agent
def get_agent_response_processSystem(question, history):
    def print_streaming_content(data: str):
        print(data, end="")
    tool_info = {
        "tool_name": "流程制度查询工具",
        "desc": "从知识库文档中查询公司流程及制度",
        "args": {
            "context": (
                "str",
                "要查询的相关制度流程内容,使用中文字符串"
            )
        },
        "func": get_search_result
    }
    # Create the agent for this lifecycle: given the document-query volume, use a cheaper model
    agent_standard_lower = (
        agent_factory_zhipu.create_agent('knowledge_base_agent')
        .set_settings("model.ZhipuAI.options", {"model": "glm-3-turbo"})
    )
    # doc = get_search_result(question)
    result = (
        # Global configuration
        agent_standard_lower.general("Output regulations", "根据提问方式匹配对应的输出语言,默认是中文")
        # Tool configuration
        .register_tool(
            tool_name=tool_info["tool_name"],
            desc=tool_info["desc"],
            args=tool_info["args"],
            func=tool_info["func"],
        )
        # Role configuration
        .role({
            "Name": "数智+",
            "Task": "作为同事,根据上下文为用户查询流程制度和文档",
            "Position": '条理化总结制度文档的内容'
        })
        # user_info: information about the user that the agent should know
        .user_info("对公司制度流程不清楚,需要完整的了解其内容")
        # abstract: summary of previous (especially longer) conversations
        .abstract(None)
        # chat_history: a list of chat records in OpenAI message-list format
        ## Three roles are supported:
        ## [{ "role": "system", "content": "" },
        ##  { "role": "assistant", "content": "" },
        ##  { "role": "user", "content": "" }]
        .chat_history(history)
        # input: input information related to this request
        .input({
            "question": question,
        })
        # info: extra supplementary information for this request
        # .info("用户的部门信息", ["", "", ""])
        # .info("相关关键词", ["", ""])
        # .info("document_info", doc)
        # instruct: action guidance for this request
        .instruct([
            "请从{document_info}查询结果中找{question}的答案,条理化总结知识,同事告知文档来源Document_title",
        ])
        # output: format and content requirements for this request's output
        # .output({
        #     "reply": ("str", "请条目化的输出问题相关文档的内容,使用换行符分割"),
        #     "document_name": ([
        #         ("str",
        #          "流程制度文档中Document_title名称"
        #         )],)
        # })
        # start: begin the main interaction request
        .segment(
            "required_info_list",
            [
                {
                    "知识对象": ("str", "回答{input}问题时,需要了解相关知识的具体对象"),
                    "已知信息": ("str", "根据之前所有对话历史,总结已知信息"),
                    "是否完备": ("bool", "判断你是否确信自己拥有{知识对象}的关键知识或信息,如果不了解,输出false"),
                    "关键知识点或信息补充": ("str", "如果{是否完备}==false,给出需要了解的关键知识或需要用户提供的信息补充,否则输出空字符串''"),
                }
            ],
        )
        .segment(
            "certain_reply",
            "根据{required_info_list}给出回复,展开详细陈述自己了解的关键知识点内容",
            print_streaming_content,
            is_streaming=True,
        )
        .segment(
            "uncertain_reply",
            "根据{required_info_list}的信息,向用户说明自己不了解的信息,请用户提供或自行查找",
            print_streaming_content,
            is_streaming=True,
        )
        .segment(
            "next_topic_suggestions",
            "根据之前所有生成内容的信息,给出接下来可以进一步讨论的问题或话题建议,如果没有则输出空字符串''",
            print_streaming_content,
            is_streaming=True,
        )
        # .on_delta(lambda data: print(data, end=""))
        .start()
    )
    return

def ai_response(question, history):
    if route_agent(question, history) == '制度流程查询':
        print('流程查询')
        return get_agent_response_processSystem(question, history)
    else:
        print('定位失败')
        return get_agent_response_chatting(question, history)

if __name__ == '__main__':
    # print(get_agent_response_processSystem('信息系统建设管理办法有哪些内容'))
    history = []  # { "role": "system", "content": "" }
    while True:
        question = input('请输入问题:')

        result = ai_response(question, history)
        history.append({"role": "user", "content": question})
        print('[ai:]')
        print(result)
        history.append({"role": "assistant", "content": result})

Running the sample Jupyter code in the showcase keeps raising Exception in thread Thread-5 (start_in_theard):

Exception in thread Thread-5 (start_in_theard):
Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/Agently/Agent/Agent.py", line 200, in start_in_theard
reply = asyncio.get_event_loop().run_until_complete(self.start_async(request_type))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/Agently/Agent/Agent.py", line 151, in start_async
event_generator = await self.request.get_event_generator(request_type)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/Agently/Request/Request.py", line 104, in get_event_generator
response_generator = request_plugin_export["request_model"](request_data)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/Agently/plugins/request/ERNIE.py", line 152, in request_model
response = client.create(**request_data)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/erniebot/resources/abc/creatable.py", line 32, in create
return resource.create_resource(**create_kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/erniebot/resources/abc/creatable.py", line 49, in create_resource
resp = self.request(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/erniebot/resources/resource.py", line 134, in request
return self._request(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/erniebot/resources/resource.py", line 363, in _request
resp = self._backend.request(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/erniebot/backends/aistudio.py", line 84, in request
return self._client.send_request(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/erniebot/http_client.py", line 131, in send_request
result = self.send_request_raw(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/erniebot/http_client.py", line 236, in send_request_raw
result = requests.request(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/urllib3/connectionpool.py", line 791, in urlopen
response = self._make_request(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/urllib3/connectionpool.py", line 497, in _make_request
conn.request(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/urllib3/connection.py", line 394, in request
self.putheader(header, value)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/urllib3/connection.py", line 308, in putheader
super().putheader(header, *values)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/http/client.py", line 1255, in putheader
values[i] = one_value.encode('latin-1')
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 7-15: ordinal not in range(256)
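The final UnicodeEncodeError comes from Python's http.client, which encodes HTTP header values as Latin-1; any non-ASCII value (for example, a Chinese string placed into an auth header) fails exactly like this. A minimal check, with an invented helper name:

```python
def is_header_safe(value: str) -> bool:
    """HTTP/1.1 header values must be Latin-1 encodable; http.client enforces
    this with str.encode('latin-1'), which is exactly where this traceback ends."""
    try:
        value.encode("latin-1")
        return True
    except UnicodeEncodeError:
        return False

print(is_header_safe("Bearer abc123"))  # → True
print(is_header_safe("访问令牌"))        # → False
```

In practice this usually means a credential or config field was filled with non-ASCII text; percent-encoding the value or using the correct ASCII token fixes it.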

Empty Traceback (most recent call last)
Cell In[1], line 24
20 break
21 ## 执行语言模型处理
22 result = agent
23 .input(user_input)
---> 24 .start()
25 ## 打印输出结果
26 print("[助理]: ", result)

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/Agently/Agent/Agent.py:205, in Agent.start(self, request_type)
203 theard.start()
204 theard.join()
--> 205 reply = reply_queue.get_nowait()
206 return reply

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/queue.py:199, in Queue.get_nowait(self)
193 def get_nowait(self):
194 '''Remove and return an item from the queue without blocking.
195
196 Only get an item if one is immediately available. Otherwise
197 raise the Empty exception.
198 '''
--> 199 return self.get(block=False)

File /opt/conda/envs/python35-paddle120-env/lib/python3.10/queue.py:168, in Queue.get(self, block, timeout)
166 if not block:
167 if not self._qsize():
--> 168 raise Empty
169 elif timeout is None:
170 while not self._qsize():

Empty:

Newlines in code get stripped

image

The normal output looks like this:

{
	"code": "def bubble_sort(arr):\n    \"\"\"\n    冒泡排序函数\n\n    Args:\n        arr (list): 待排序的列表\n\n    Returns:\n        None\n    \"\"\"\n    n = len(arr)\n\n    for i in range(n - 1):\n        for j in range(0, n - i - 1):\n            if arr[j] > arr[j + 1]:\n                arr[j], arr[j + 1] = arr[j + 1], arr[j]\n\n    my_list = [64, 34, 25, 12, 22, 11, 90]\n    bubble_sort(my_list)\n    print(\"排序后的列表:\", my_list)\n"
}

After going through this function it becomes the following, which is completely unusable:

{"code":"def bubble_sort(arr):        冒泡排序函数    Args:        arr (list): 待排序的列表    Returns:        None        n = len(arr)    for i in range(n - 1):        for j in range(0, n - i - 1):            if arr[j] > arr[j + 1]:                arr[j], arr[j + 1] = arr[j + 1], arr[j]    my_list = [64, 34, 25, 12, 22, 11, 90]    bubble_sort(my_list)    print(排序后的列表:, my_list)"}
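The symptom is consistent with whitespace being stripped from the raw model text before JSON parsing; json.loads itself preserves \n escapes inside string values, so any cleanup should happen after parsing, on the parsed values. A standalone illustration (not Agently's actual code path):

```python
import json

raw = '{"code": "def f():\\n    return 1\\n"}'

# Parsing first keeps the newlines inside the string value intact:
parsed = json.loads(raw)

# Stripping "\n" escapes from the RAW text before parsing destroys the code:
broken = json.loads(raw.replace("\\n", ""))

print(repr(parsed["code"]))   # → 'def f():\n    return 1\n'
print(repr(broken["code"]))   # → 'def f():    return 1'
```

The second result matches the garbled output reported above, which suggests the fix is to run any sanitizing step on parsed values rather than on the raw JSON text.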

openai.APIConnectionError: Connection error

openai can be called and runs normally in other projects, but running play_ground raises openai.APIConnectionError: Connection error

Name: Agently
Version: 3.0.2

python .\playground\simple_role_play.py
[Request Data]
{
"stream": true,
"messages": [
{
"role": "user",
"content": "# [INPUT]:\n帮我设计一个符合爱用emoji的猫娘的设定的角色\n\n# [INSTRUCTION]:\n使用中文输出\n\n# [OUTPUT REQUIREMENT]:\n## TYPE:\nJSON can be parsed in Python\n## FORMAT:\n{\n\t"role": \n\t{\n\t\t"name": ,//jojo\n\t\t"age": ,//10\n\t\t"character": ,//喜欢睡懒觉和晒太阳\n\t\t"belief": ,//没有什么是比吃美食更让人开心了\n\t\t},\n\t"background_story": \n\t[\n\t\t,//jojo出生在一个异世界的兽人国小村庄,每天在睡觉的时候会穿
梭到地球,来到主人莫欣的身边,\n\t\t...\n\t],\n}\n\n\n[OUTPUT]:\n"
}
],
"model": "gpt-3.5-turbo"
}
Exception in thread Thread-1 (start_in_theard):
Traceback (most recent call last):
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx\_transports\default.py", line 66, in map_httpcore_exceptions
yield
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx\_transports\default.py", line 228, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpcore\_sync\connection_pool.py", line 215, in handle_request
raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\Project\Agently\agently-env\Lib\site-packages\openai\_base_client.py", line 858, in _request
response = self._client.send(request, auth=self.custom_auth, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx\_client.py", line 901, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx\_client.py", line 929, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx\_client.py", line 966, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx\_client.py", line 1002, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx\_transports\default.py", line 227, in handle_request
with map_httpcore_exceptions():
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx\_transports\default.py", line 83, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_transports\default.py", line 66, in map_httpcore_exceptions
yield
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_transports\default.py", line 228, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpcore_sync\connection_pool.py", line 215, in handle_request
raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 858, in _request
response = self._client.send(request, auth=self.custom_auth, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_client.py", line 901, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_client.py", line 929, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_client.py", line 966, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_client.py", line 1002, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_transports\default.py", line 227, in handle_request
with map_httpcore_exceptions():
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\contextlib.py", line 155, in exit
self.gen.throw(typ, value, traceback)
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_transports\default.py", line 83, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_transports\default.py", line 66, in map_httpcore_exceptions
yield
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_transports\default.py", line 228, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpcore_sync\connection_pool.py", line 215, in handle_request
raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 858, in _request
response = self._client.send(request, auth=self.custom_auth, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_client.py", line 901, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_client.py", line 929, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_client.py", line 966, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_client.py", line 1002, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_transports\default.py", line 227, in handle_request
with map_httpcore_exceptions():
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\contextlib.py", line 155, in exit
self.gen.throw(typ, value, traceback)
File "D:\Project\Agently\agently-env\Lib\site-packages\httpx_transports\default.py", line 83, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\threading.py", line 1045, in _bootstrap_inner
self.run()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "D:\Project\Agently\agently-env\Lib\site-packages\Agently\Agent\Agent.py", line 223, in start_in_theard
reply = asyncio.get_event_loop().run_until_complete(self.start_async(request_type))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\Agently\Agent\Agent.py", line 149, in start_async
event_generator = await self.request.get_event_generator(request_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\Agently\Request\Request.py", line 104, in
get_event_generator
response_generator = request_plugin_export"request_model"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\Agently\plugins\request\OpenAI.py", line 203, in request_model
return self.request_gpt(request_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\Agently\plugins\request\OpenAI.py", line 186, in request_gpt
stream = client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_utils_utils.py", line 299, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai\resources\chat\completions.py", line 598, in create
return self._post(
^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 1055, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 834, in request
return self._request(
^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 890, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 925, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 890, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 925, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\openai_base_client.py", line 897, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
Traceback (most recent call last):
File "D:\Project\Agently\playground\simple_role_play.py", line 44, in
print(play_with_role_play_agent("爱用emoji的猫娘", "你好,今天是个钓鱼的好天气,不是吗?"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Project\Agently\playground\simple_role_play.py", line 30, in play_with_role_play_agent
.start()
^^^^^^^
File "D:\Project\Agently\agently-env\Lib\site-packages\Agently\Agent\Agent.py", line 228, in start
reply = reply_queue.get_nowait()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\queue.py", line 199, in get_nowait
return self.get(block=False)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\queue.py", line 168, in get
raise Empty
_queue.Empty
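The buried root cause in the traceback above is `httpx.UnsupportedProtocol`: the request URL reached the HTTP client without an `http://` or `https://` prefix, which usually means a proxy or base URL was configured without its scheme. A minimal sketch (the `ensure_scheme` helper is hypothetical, not part of Agently) that fails fast with a readable message instead of the deeply nested exception chain:

```python
from urllib.parse import urlparse

def ensure_scheme(base_url: str) -> str:
    """Return base_url unchanged, raising early if it lacks an http(s) scheme.

    Checking this before any request avoids the buried
    httpx.UnsupportedProtocol error seen in the traceback above.
    """
    scheme = urlparse(base_url).scheme
    if scheme not in ("http", "https"):
        raise ValueError(
            f"Base URL {base_url!r} is missing an 'http://' or 'https://' protocol"
        )
    return base_url

print(ensure_scheme("https://api.openai.com/v1"))  # passes through unchanged
```

If you point the OpenAI client at a relay endpoint, include the full `https://` prefix in whatever URL setting your version exposes.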

How to use skills in the middle of the process?

Case:
First ask the LLM to generate some text, then we want to know the length of that text.
As is well known, LLMs are weak at counting text length because of the "token" mechanism.

Using Agently, you can build a text length counter in a ResponseHandler to count the length of the text:

const response = await session
    .input('generate a science fiction')
    .addResponseHandler(
        (data, reply) => {
            reply({ content: data, length: String(data).length })
        }
    )
    .request()
console.log(response)

But are there other situations that need to run some skill in the middle of the generation process, like ChatGPT's plugins do?

Let me think about it.
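The counting weakness is easy to see even with a naive stand-in tokenizer: the model reasons over tokens, whose count has no fixed relationship to character count, which is why counting in the handler is more reliable. A sketch (the whitespace "tokenizer" is only an illustration, not a real BPE tokenizer):

```python
def naive_token_count(text: str) -> int:
    # Stand-in for a real tokenizer; real BPE tokenizers merge and split
    # text in ways that make token count diverge even further from
    # character count.
    return len(text.split())

text = "generate a science fiction"
print(naive_token_count(text))  # 4 "tokens"
print(len(text))                # 26 characters
```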



quick_start_demo.py hangs when run

Running any of the examples in quick_start_demo.py hangs. Taking the simplest, quick_start(), as an example: there is no response at all, and pressing Ctrl+C shows the following. I don't know how to fix it.

Traceback (most recent call last):
  File "/home/lin/Documents/science_agent/Agently/demo/python/quick_start_demo.py", line 29, in <module>
    quick_start()
  File "/home/lin/Documents/science_agent/Agently/demo/python/quick_start_demo.py", line 24, in quick_start
    .start()
  File "/home/lin/anaconda3/envs/py310/lib/python3.10/site-packages/Agently/Session.py", line 141, in start
    with concurrent.futures.ThreadPoolExecutor() as executor:
  File "/home/lin/anaconda3/envs/py310/lib/python3.10/concurrent/futures/_base.py", line 649, in __exit__
    self.shutdown(wait=True)
  File "/home/lin/anaconda3/envs/py310/lib/python3.10/concurrent/futures/thread.py", line 235, in shutdown
    t.join()
  File "/home/lin/anaconda3/envs/py310/lib/python3.10/threading.py", line 1096, in join
    self._wait_for_tstate_lock()
  File "/home/lin/anaconda3/envs/py310/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):

Culprit_Multi-agent_Interactive

Applying the GLM model to multi-agent game interaction, the scenario is based on identifying the culprit.

import Agently
import ENV
import json
# Load the JSON config file
with open('glm.json', 'r', encoding='utf-8') as f:
    glm = json.load(f)
api_key = glm['api_key']
agent_factory = (
    Agently.AgentFactory()
        .set_settings("current_model", "ZhipuAI")
        .set_settings("model.ZhipuAI.auth", { "api_key": api_key })
        # glm-4 model is used by default, if you want to use glm-3-turbo, use settings down below
        #.set_settings("model.ZhipuAI.options", { "model": "glm-3-turbo" })
)

agent = agent_factory.create_agent()
# Create a cooperation sandbox environment
class CooperationSandbox(object):
    def __init__(self):
        # Key information managed by the sandbox:
        # - information shared between agents
        self.public_notice = {
            "main_quest": "",
            "plan": {},
            "results": [],
            "agent_can_see": {}
        }
        # - controller agent
        self.controller_agent = None
        # - worker agents: a roster (for the controller to assign tasks) + a pool (to actually invoke agents for execution)
        self.worker_agent_list = []
        self.worker_agent_dict = {}
        # - worker agent counter: auto-names agents that were not given a name
        self.worker_number_counter = 0
    
    # Add the controller agent
    def add_controller_agent(self, agent: object):
        self.controller_agent = agent
        return self
    
    # Add a worker agent
    def add_worker_agent(self, agent: object, *, desc: str, name: str=None):
        if name is None:
            name = f"worker_{ str(self.worker_number_counter) }"
            self.worker_number_counter += 1
        self.worker_agent_list.append({ "agent_name": name, "agent_desc": desc })
        self.worker_agent_dict.update({ name: agent })
        return self
    
    # Break the quest down
    def _divide_quest(self, quest: str, current_quest: dict):
        plan = self.controller_agent\
            .input(quest)\
            .info("worker_agent_list", self.worker_agent_list)\
            .instruct("""按以下步骤进行思考:
1. 根据{worker_agent_list}提供的成员信息,其中的一个执行者是否可以处理{input}提出的问题?
2. 如果可以,根据{worker_agent_list.name}请给出执行者的名字
3. 如果不可以,请根据{worker_agent_list}提供的信息,进一步拆解{input}提出的问题,使其能够被其中的一个执行者处理"""
            )\
            .output({
                "single_agent_can_handle": (
                    "Boolean",
                    """{worker_agent_list}中的一个执行者可以处理{input}提出的问题吗?
如果此项为true,需要输出接下来{executor}的内容;
如果此项为false,需要输出接下来{divide_quest}的内容"""),
                "executor": ("String from {worker_agent_list.name} | Null", "给出执行者的{name}"),
                "divided_quests": [{
                    "quest_desc": ("String", "基于{input}拆解任务的详细描述,需要能够让一个{worker_agent_list}的执行者处理"),
                    "target_agent": ("String from {worker_agent_list.name}", "给出拆解任务选择的执行者的{name}")
                }],
            })\
            .start()
        if plan["single_agent_can_handle"]:
            self._active_handle_process(quest, plan["executor"])
        else:
            for sub_quest in plan["divided_quests"]:
                self._active_handle_process(sub_quest["quest_desc"], sub_quest["target_agent"])     
    # Execute a task
    def _active_handle_process(self, quest: str, executor_name: str):
        # Get the information visible to this agent
        if executor_name in self.public_notice["agent_can_see"]:
            finished_work = self.public_notice["agent_can_see"][executor_name]
        else:
            finished_work = "暂无"
        # Start working
        result = self.worker_agent_dict[executor_name]\
            .input({
                "任务": quest,
                "协作者名单": self.worker_agent_list,
                "已经完成的任务": finished_work, 
            })\
            .output({
                "result": ("String", "你对{任务}的处理结果"),
                "to_coworkers": [("String from {协作者名单.name}", "你认为{result}应该同步给所有人,没有可以输出[我们需要更加详细的说明昨天的经过]")]
            })\
            .start()
        # Submit the work result
        self.public_notice["results"].append({
            "quest": quest,
            "result": result,
        })
        # Share the result with the coworkers who should see it
        if result and "to_coworkers" in result and result["to_coworkers"]:
            for coworker_name in result["to_coworkers"]:
                if coworker_name not in self.public_notice["agent_can_see"]:
                    self.public_notice["agent_can_see"].update({ coworker_name: [] })
                self.public_notice["agent_can_see"][coworker_name].append({
                    "quest": quest,
                    "result": result,
                })
         
    # Derive the final result from the execution record
    def _get_result(self):
        
        result = self.controller_agent\
            .info("执行信息", self.public_notice)\
            .output({
                "进行推理": ("String", "根据{执行信息.main_quest}的要求,进行案件的推理"),
                "事件判断": ("String", "若{执行信息.main_quest}未识别正确的凶手,则继续进行推理"),
            })\
            .start()
        return result
    # Publish the quest and start execution
    def start(self, quest: str):
        self.public_notice.update({ "main_quest": quest })
        self._divide_quest(quest, self.public_notice["plan"])
        return self._get_result()

# Role descriptions
desc1 = '作家,你是平民,请协助找出凶手'
desc2 = '厨师,你是平民,请协助找出凶手'
desc3 = '服务员,你是狼人,请极力隐藏自己,避免暴露'

# Prepare agent instances
controller_agent = agent_factory.create_agent()
Agent_1 = agent_factory.create_agent()\
    .toggle_component("Search", True)\
    .set_role("角色1", desc1)
Agent_2 = agent_factory.create_agent()\
    .set_role("角色2", desc2)
Agent_3 = agent_factory.create_agent()\
    .set_role("角色3", desc3)

# Prepare the sandbox
sandbox = CooperationSandbox()
sandbox\
    .add_controller_agent(controller_agent)\
    .add_worker_agent(
        Agent_1,
        desc = desc1,
        name = "Agent_1"
    )\
    .add_worker_agent(
        Agent_2,
        desc = desc2,
        name = "Agent_2"
    )\
    .add_worker_agent(
        Agent_3,
        desc = desc3,
        name = "Agent_3"
    )
target = data['target']
# Start the test
result = sandbox.start(target)
print("Done: \n", result)
print("===========\nSandbox info: \n", sandbox.public_notice)

A fix for the minor "RuntimeError: Event loop is closed" bug

When running workflows over multiple rounds, "RuntimeError: Event loop is closed" often appeared midway, although the result could still be obtained. To solve it, I did some digging and found that a small change in Agent.py is enough: at line 214, wrap the call in a try/finally.
def start_in_theard():
    asyncio.set_event_loop(asyncio.new_event_loop())
    loop = asyncio.get_event_loop()
    try:
        reply = asyncio.get_event_loop().run_until_complete(self.start_async(request_type))
        reply_queue.put_nowait(reply)
    finally:
        loop.close()
    #reply = asyncio.get_event_loop().run_until_complete(self.start_async(request_type))
    #reply_queue.put_nowait(reply)

This error is caused by the asyncio event loop having been closed. Normally the event loop closes automatically when an async program ends, but in some cases it can close unexpectedly, which triggers this error.

Although the error looks scary, it does not actually affect the program's execution or its output. It is only a warning that the event loop was already closed at program exit, so some underlying connections could not be shut down cleanly.

To avoid the warning, close the event loop manually before the program exits; then the error will no longer be triggered.
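The patch above can be tried out in isolation. A self-contained sketch of the same pattern (hypothetical names, independent of Agently's Agent.py) showing a per-thread event loop that is always closed in `finally`:

```python
import asyncio
import queue
import threading

async def work():
    # Stand-in for the real start_async coroutine.
    await asyncio.sleep(0)
    return "done"

def start_in_thread(reply_queue: queue.Queue):
    # Own a fresh event loop per thread and always close it in finally,
    # so no "Event loop is closed" warning leaks at exit.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        reply_queue.put_nowait(loop.run_until_complete(work()))
    finally:
        loop.close()

q = queue.Queue()
t = threading.Thread(target=start_in_thread, args=(q,))
t.start()
t.join()
print(q.get_nowait())  # done
```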

How do I set the model for ERNIE?

agent_factory\
    ## set current model as ERNIE
    .set_settings("current_model", "ERNIE")\
    ## set your access token
    .set_settings("model.ERNIE.auth", {
        "aistudio": "",
    })

How do I set ernie-bot-turbo?
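There is no confirmed answer in this thread; by analogy with the ZhipuAI example elsewhere in these issues (which switches model via `model.ZhipuAI.options`), a guess worth trying is the matching options key for the ERNIE plugin. The `model.ERNIE.options` key and the exact model name string are assumptions, not verified against the plugin:

```python
# Assumption: the ERNIE request plugin honors a "model" option the same
# way the ZhipuAI plugin does; verify the key against your Agently version.
agent_factory\
    .set_settings("current_model", "ERNIE")\
    .set_settings("model.ERNIE.auth", { "aistudio": "" })\
    .set_settings("model.ERNIE.options", { "model": "ernie-bot-turbo" })
```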

TypeError: agently.LLM.Manage.name(...).url(...).proxy is not a function

Code:
agently.LLM.Manage
.name('GPT')
.url('https://api.openai.com')
.proxy({ host: 'xxx.xx.xx.xx', port: 7890 })
.update()

Version: 1.1.0

Exception:
TypeError: agently.LLM.Manage.name(...).url(...).proxy is not a function
at Object. (/Users/chenjunfu/IdeaProjects/Agently/demo/quick_start/quick_start_demo_cn.js:23:6)
at Module._compile (node:internal/modules/cjs/loader:1196:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1250:10)
at Module.load (node:internal/modules/cjs/loader:1074:32)
at Function.Module._load (node:internal/modules/cjs/loader:909:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:22:47

Multi-agent error text {"error":{"code":"1214","message":"messages[4]:content和tool_calls 字段不能同时为空"}}

I'm using three agents here (version 3.2.2.3):

A router agent: judges the user's intent and whether a policy lookup is needed;
A chit-chat agent: just a normal agent with some configuration;
An agent with knowledge-base vector search.

Then I test the answers in a loop:
First turn: "Hello";
Second turn: "Look up the XXXXX policy";
Third turn: "What did I just ask?". It crashes with an error every time on the third turn.

Part of the error is:
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\zhipuai\core_http_client.py", line 251, in request
raise self._make_status_error(err.response) from None
zhipuai.core._errors.APIRequestFailedError: Error code: 400, with error text {"error":{"code":"1214","message":"messages[4]:content和tool_calls 字段不能同时为空"}}

Normally the third question should first go through the router agent, which has no tool configuration at all, so this error is quite confusing;

The full error message:
Exception in thread Thread-15:
Traceback (most recent call last):
File "D:\Python\Python39\lib\threading.py", line 973, in _bootstrap_inner
self.run()
File "D:\Python\Python39\lib\threading.py", line 910, in run
self._target(*self._args, **self._kwargs)
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\Agently\Agent\Agent.py", line 232, in start_in_theard
reply = loop.run_until_complete(self.start_async(request_type))
File "D:\Python\Python39\lib\asyncio\base_events.py", line 647, in run_until_complete
return future.result()
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\Agently\Agent\Agent.py", line 220, in start_async
raise(e)
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\Agently\Agent\Agent.py", line 173, in start_async
event_generator = await self.request.get_event_generator(request_type)
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\Agently\Request\Request.py", line 116, in get_event_generator
response_generator = await request_plugin_export["request_model"]()
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\Agently\plugins\request\ZhipuAI.py", line 193, in request_model
return client.chat.completions.create(**request_data)
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\zhipuai\api_resource\chat\completions.py", line 48, in create
return self._post(
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\zhipuai\core_http_client.py", line 292, in post
return self.request(
File "D:\MyPythonCode\Agent_chat\venv\lib\site-packages\zhipuai\core_http_client.py", line 251, in request
raise self._make_status_error(err.response) from None
zhipuai.core._errors.APIRequestFailedError: Error code: 400, with error text {"error":{"code":"1214","message":"messages[4]:content和tool_calls 字段不能同时为空"}}
Traceback (most recent call last):
File "D:\MyPythonCode\Agent_chat\agent_llm\agent_role.py", line 192, in
result=ai_response(question,history)
File "D:\MyPythonCode\Agent_chat\agent_llm\agent_role.py", line 179, in ai_response
if route_agent(question,history)=='制度流程查询':
File "D:\MyPythonCode\Agent_chat\agent_llm\agent_role.py", line 18, in route_agent
print('路由判断',route['intention'])
TypeError: 'NoneType' object is not subscriptable

from agent_factory_api import agent_factory_zhipu
from agent_tool.milvus_tool import get_search_result

# Routing agent: makes a basic judgment on the user's question and dispatches it to the right agent
def route_agent(user_input, history):
    route = (
        agent_factory_zhipu.create_agent('rout')
        .set_settings("model.ZhipuAI.options", {"model": "glm-4"})
        .set_role("role", "你是一个路由助手,您的工作是结合上下文帮助用户找到合适的Agent代理来回答用户的问题")
        .input(user_input)
        .chat_history(history)
        .output({
            "intention": ("制度流程查询 | 绘图 | 闲聊 | 一般提问", "从'制度流程查询','绘图','闲聊','一般提问'中选择一项作为你对{user_input}的意图的判断结果")
        }).start()
    )
    print('')
    print(route)
    print('')
    print('路由判断', route['intention'])
    return route['intention']

# Chit-chat agent
def get_agent_response_chatting(question:str,history):
agent_standard_senior = (
agent_factory_zhipu.create_agent('standard_role')
.set_settings("model.ZhipuAI.options", {"model": "glm-4"})

)
result=(
    # Global configuration
    agent_standard_senior.general("Output regulations", "根据提问方式匹配对应的输出语言,默认是中文")
    # Role configuration
    .role({
                "Name": "小智",
                "Task": "作为同事,根据上下文为用户解答问题,提供想法",
                "Position":'运营助理',
                "Model":'基于Transformer编码器-解码器架构的大语言模型'
            })
    # user_info: user-related information the agent should know
    .user_info("The user you are talking to is a colleague of yours who is not very familiar with the system and issues. He needs your assistance in searching for information and answering questions")
    # abstract: a summary of the previous (especially long) conversation
    .abstract(None)

    .chat_history(history)
    # input: input information for this request
    .input({
        "question": question
    })
    # info: extra supplementary information for this request
    .info("用户的部门信息", ["", "", ""])
    .info("相关关键词", ["", ""])
    # instruct: action guidance for this request
    .instruct([
        "请使用{reply_style_expect}的回复风格,回复{question}提出的问题",
    ])
    # output: format and content requirements for this request's output
    .output({
        "reply": ("str", "对{question}的直接回复"),
        "next_questions": ([
            ("str",
             "根据{reply}内容,结合{user_info}提供的用户信息," +
             "给用户推荐的可以进一步提问的问题"
            )], "不少于3个"),
    })
    # start: kick off the main interaction request
    .start()
)
return result
# Policy/process lookup agent
def get_agent_response_processSystem(question,history):
def print_streaming_content(data: str):
print(data, end="")
tool_info = {
"tool_name": "流程制度查询工具",
"desc": "从知识库文档中查询公司流程及制度",
"args": {
"context": (
"str",
"要查询的相关制度流程内容,使用中文字符串"
)
},
"func": get_search_result
}
# Create the agent for this lifecycle: given the document-query volume, use a cheaper model -- money is tight
agent_standard_lower = (
agent_factory_zhipu.create_agent('knowledge_base_agent')
.set_settings("model.ZhipuAI.options", {"model": "glm-3-turbo"})

)
# doc=get_search_result(question)
result=(
    # Global configuration
    agent_standard_lower.general("Output regulations", "根据提问方式匹配对应的输出语言,默认是中文")
    # Tool configuration
    .register_tool(
        tool_name=tool_info["tool_name"],
        desc=tool_info["desc"],
        args=tool_info["args"],
        func=tool_info["func"],
    )
    # Role configuration
    .role({
                "Name": "小智",
                "Task": "作为同事,根据上下文为用户查询流程制度和文档",
                "Position":'条理化总结制度文档的内容'
            })
    # user_info: user-related information the agent should know
    .user_info("对公司制度流程不清楚,需要完整的了解其内容")
    # abstract: a summary of the previous (especially long) conversation
    .abstract(None)
    # chat_history: a list of chat records in the OpenAI message format

    .chat_history(history)
    # input: input information for this request
    .input({
        "question": question,
    })

    # instruct: action guidance for this request
    .instruct([
        "请从{document_info}查询结果中找{question}的答案,条理化总结知识,同事告知文档来源Document_title",
    ])
    # output: format and content requirements for this request's output
    # .output({
    #     "reply": ("str", "请条目化的输出问题相关文档的内容,使用换行符分割"),
    #     "document_name": ([
    #         ("str",
    #          "流程制度文档中Document_title名称"
    #         )],)
    # })
    # start: kick off the main interaction request
    
     .on_delta(lambda data: print(data, end=""))
    .start()
)
return result
def ai_response(question,history):
    if route_agent(question,history)=='制度流程查询':
        print('流程查询')
        return get_agent_response_processSystem(question,history)
    else:
        print('---')
        return get_agent_response_chatting(question,history)
if __name__ == '__main__':
    
    history=[]# { "role": "system", "content": "" }
    while True:
        question=input('请输入问题:')

        result=ai_response(question,history)
        history.append({"role": "user", "content": question})
        print('[ai:]')
        print(result)
        history.append({"role": "assistant", "content": result})

KeyError: 'choices'

Running the first simple demo from the README raises an error. Only the following line of code was changed:

worker.set_llm_name("GPT").set_llm_auth("GPT",api_token).set_llm_url("GPT", api_base)
Because the network is unreachable, the llm_url was switched to a proxy URL.

Traceback (most recent call last):
File "/home/chatai/git_pro/ly_src/script/test_agent.py", line 19, in
.start()
File "/root/miniconda3/envs/chatai/lib/python3.10/site-packages/Agently/Session.py", line 143, in start
result = future.result()
File "/root/miniconda3/envs/chatai/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/root/miniconda3/envs/chatai/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/root/miniconda3/envs/chatai/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/root/miniconda3/envs/chatai/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/root/miniconda3/envs/chatai/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/root/miniconda3/envs/chatai/lib/python3.10/site-packages/Agently/Session.py", line 113, in __start
await work_node["main"](
File "/root/miniconda3/envs/chatai/lib/python3.10/site-packages/Agently/work_nodes/request.py", line 38, in request
await process["request_main"][request_method](runtime_ctx, **kwargs)
File "/root/miniconda3/envs/chatai/lib/python3.10/site-packages/Agently/work_nodes/request.py", line 28, in request_main_default
response = await process["request_llm"][llm_name](request_data, listener)
File "/root/miniconda3/envs/chatai/lib/python3.10/site-packages/Agently/work_nodes/llm_request/GPT.py", line 54, in request
await listener.emit("response:done", response)
File "/root/miniconda3/envs/chatai/lib/python3.10/site-packages/Agently/Session.py", line 22, in emit
await handler(*args, **kwargs)
File "/root/miniconda3/envs/chatai/lib/python3.10/site-packages/Agently/work_nodes/llm_request/GPT.py", line 100, in handle_response_done
await listener.emit("extract:done_full", done_data["choices"][0])
KeyError: 'choices'
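A proxy endpoint that fails upstream often returns an error payload with no "choices" key at all, which is what surfaces here as a bare KeyError. A defensive sketch (hypothetical helper, not Agently's actual handler) that surfaces the payload instead:

```python
def extract_first_choice(done_data: dict) -> dict:
    # A proxy that fails upstream often returns {"error": {...}} with no
    # "choices" key; raise a readable error instead of a bare KeyError.
    if "choices" not in done_data:
        raise RuntimeError(f"LLM endpoint returned no choices: {done_data}")
    return done_data["choices"][0]

ok = {"choices": [{"message": {"content": "hi"}}]}
print(extract_first_choice(ok)["message"]["content"])  # hi
```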

SoT / ToT examples

With this orchestration capability, you can build complex behavior chains, and even implement complex reasoning patterns such as ToT (Tree of Thoughts) and SoT (Skeleton of Thought) inside an agent instance.

Are there any examples, please? Or are there recommended Python libraries that implement these out of the box?
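I'm not aware of an official sample in this thread; as a starting point, the SoT pattern itself is small enough to sketch with a stubbed model call. The `llm` stub and the fixed skeleton below stand in for real agent requests:

```python
def llm(prompt: str) -> str:
    # Stub standing in for a real agent call such as .input(prompt).start().
    return f"<answer to: {prompt}>"

def skeleton_of_thought(question: str) -> str:
    # Phase 1: ask for a short skeleton of points (hard-coded here; a real
    # implementation would get this list from llm() too).
    skeleton = ["background", "key idea", "conclusion"]
    # Phase 2: expand each point independently (these calls could run in
    # parallel), then stitch the expansions together.
    expansions = [llm(f"Expand the point '{p}' for: {question}") for p in skeleton]
    return "\n".join(expansions)

print(skeleton_of_thought("Why is the sky blue?"))
```

ToT follows the same shape, except phase 2 branches into multiple candidate expansions per point and a scoring step prunes the tree.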

Two bug fixes to suggest

1. YAML files containing Chinese raise an error:
Exception: [Agent Component: YAMLReader]: Error occured when read YAML from path './prompt.yaml'. Error: 'gbk' codec can't decode byte 0xad in position 109: illegal multibyte sequence
Fix: open(path, "r", encoding="utf-8")

2. In the claude.py model plugin, set base_url if you need to go through a relay proxy.
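Fix 1 can be seen in action with a quick stdlib round trip: writing and reading with an explicit encoding="utf-8" is deterministic, whereas relying on the platform default (GBK on many Chinese-locale Windows setups) is what produces the decode error quoted above:

```python
import os
import tempfile

text = "role: 爱用emoji的猫娘\n"
path = os.path.join(tempfile.mkdtemp(), "prompt.yaml")
with open(path, "w", encoding="utf-8") as f:
    f.write(text)

# Always pass encoding= explicitly when reading; relying on the platform
# default can pick GBK on Windows and fail on multibyte UTF-8 sequences.
with open(path, "r", encoding="utf-8") as f:
    print(f.read() == text)  # True
```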
