
AIOS: LLM Agent Operating System

License: MIT

The goal of AIOS is to build a large language model (LLM) agent operating system, which embeds large language models into the operating system as the brain of the OS. AIOS is designed to address problems (e.g., scheduling, context switching, memory management) that arise during the development and deployment of LLM-based agents, creating a better ecosystem for agent developers and users.

๐Ÿ  Architecture of AIOS

AIOS provides the LLM kernel as an abstraction on top of the OS kernel. The kernel facilitates the installation, execution and usage of agents. Furthermore, the AIOS SDK facilitates the development and deployment of agents.

📰 News

  • [2024-07-10] 📖 AIOS documentation template is up: Code and Website.
  • [2024-07-03] 🛠️ The AIOS GitHub issue template is now available.
  • [2024-06-20] 🔥 Function calling for open-source LLMs (native huggingface, vllm, ollama) is supported.
  • [2024-05-20] 🚀 More agents with ChatGPT-based tool calling have been added (i.e., MathAgent, RecAgent, TravelAgent, AcademicAgent, and CreationAgent); their profiles and workflows can be found in OpenAGI.
  • [2024-05-13] 🛠️ Local models (diffusion models) from HuggingFace are integrated as tools.
  • [2024-05-01] 🛠️ Agent creation in AIOS has been refactored; it can be found in our OpenAGI package.
  • [2024-04-05] 🛠️ AIOS now supports external tool calls (google search, wolframalpha, rapid API, etc.).
  • [2024-04-02] 🤝 The AIOS Discord Community is up. You are welcome to join the community for discussions, brainstorming, development, or just random chats! For how to contribute to AIOS, please see CONTRIBUTE.
  • [2024-03-25] ✈️ Our paper AIOS: LLM Agent Operating System is released and the AIOS repository is officially launched!
  • [2023-12-06] 📋 After several months of work, our perspective paper LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem is officially released.

โœˆ๏ธ Getting Started

Please see our ongoing documentation for more information.

Installation

Git clone AIOS

git clone https://github.com/agiresearch/AIOS.git
conda create -n AIOS python=3.11
conda activate AIOS
cd AIOS

If you have a GPU environment, you can install the dependencies using

pip install -r requirements-cuda.txt

otherwise, you can install the dependencies using

pip install -r requirements.txt

Quickstart

Tip (💡): Configuring LLM endpoints may require setting up multiple API keys. We provide .env.example for easier configuration: copy .env.example to .env and fill in the keys you need.
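As a sketch, a filled-in .env might look like the following (the key names are taken from the sections below; the exact set of variables AIOS reads may differ, so check .env.example):

```shell
# Sketch of a .env file; set only the keys for the backends you use.
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
HF_AUTH_TOKENS=<YOUR_HUGGINGFACE_TOKEN>
```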

Use with OpenAI API

You need to get your OpenAI API key from https://platform.openai.com/api-keys. Then set up your OpenAI API key as an environment variable

export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>

Then run main.py with a model provided by the OpenAI API

python main.py --llm_name gpt-3.5-turbo # use gpt-3.5-turbo for example

Use with Gemini API

You need to get your Gemini API key from https://ai.google.dev/gemini-api

export GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>

Then run main.py with a model provided by the Gemini API

python main.py --llm_name gemini-1.5-flash # use gemini-1.5-flash for example

If you want to use open-source models from huggingface, we provide three options:

  • Use with ollama
  • Use with native huggingface models
  • Use with vllm

Use with ollama

You need to download ollama from https://ollama.com/.

Then start the ollama server, either from the ollama app or with the following command in the terminal

ollama serve

To use models provided by ollama, you need to pull the available models from https://ollama.com/library

ollama pull llama3:8b # use llama3:8b for example

ollama supports CPU-only environments, so if you do not have CUDA, you can still run aios with ollama models by

python main.py --llm_name ollama/llama3:8b --use_backend ollama # use ollama/llama3:8b for example

However, if you have a GPU environment, you can also pass GPU-related parameters to speed up inference using the following command

python main.py --llm_name ollama/llama3:8b --use_backend ollama --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256

Use with native huggingface models

Some huggingface models require authentication. If you want to use all of the models, create an access token at https://huggingface.co/settings/tokens and set it as an environment variable using the following command

export HF_AUTH_TOKENS=<YOUR_TOKEN_ID>

Then you can run with

python main.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256

By default, huggingface downloads models to the ~/.cache directory. If you want to designate the download directory, you can set it using the following command

export HF_HOME=<YOUR_HF_HOME>
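Before launching main.py, it can be handy to check which of the environment variables from the sections above are actually set. A minimal sketch (the variable names are the ones used in this README; adjust them to the backends you actually use):

```python
import os

# Map each backend to the env vars this README asks you to export.
# These names come from the sections above; they are not an official
# AIOS API, just a convenience check.
REQUIRED_BY_BACKEND = {
    "openai": ["OPENAI_API_KEY"],
    "gemini": ["GEMINI_API_KEY"],
    "huggingface": ["HF_AUTH_TOKENS"],
}

def missing_vars(backend: str) -> list[str]:
    """Return the env vars for `backend` that are not set."""
    return [v for v in REQUIRED_BY_BACKEND.get(backend, [])
            if not os.environ.get(v)]

if __name__ == "__main__":
    for backend in REQUIRED_BY_BACKEND:
        gaps = missing_vars(backend)
        print(backend, "ready" if not gaps else f"missing: {', '.join(gaps)}")
```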

Use with vllm

If you want to speed up the inference of huggingface models, you can use vllm as the backend.

Note(๐Ÿ“): It is important to note that vllm currently only supports linux and GPU-enabled environment. So if you do not have the environment, you need to choose other options.

Since vllm itself does not support passing designated GPU ids, you need to either set the environment variable,

export CUDA_VISIBLE_DEVICES="0" # replace with your designated gpu ids

Then run the command

python main.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --use_backend vllm --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256

or pass CUDA_VISIBLE_DEVICES as a prefix

CUDA_VISIBLE_DEVICES=0 python main.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --use_backend vllm --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256

Supported LLM Endpoints

๐Ÿ–‹๏ธ References

@article{mei2024aios,
  title={AIOS: LLM Agent Operating System},
  author={Mei, Kai and Li, Zelong and Xu, Shuyuan and Ye, Ruosong and Ge, Yingqiang and Zhang, Yongfeng},
  journal={arXiv:2403.16971},
  year={2024}
}
@article{ge2023llm,
  title={LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem},
  author={Ge, Yingqiang and Ren, Yujie and Hua, Wenyue and Xu, Shuyuan and Tan, Juntao and Zhang, Yongfeng},
  journal={arXiv:2312.03815},
  year={2023}
}

🚀 Contributions

For how to contribute, see CONTRIBUTE. If you would like to contribute to the codebase, issues or pull requests are always welcome!

๐ŸŒ AIOS Contributors

AIOS contributors

๐Ÿค Discord Channel

If you would like to join the community, ask questions, chat with fellows, learn about or propose new features, and participate in future developments, join our Discord Community!

📪 Contact

For issues related to AIOS development, we encourage submitting issues, pull requests, or initiating discussions in the AIOS Discord Channel. For other issues, please feel free to contact Kai Mei ([email protected]) and Yongfeng Zhang ([email protected]).

