
ScaleLLM: An efficient LLM Inference solution


ScaleLLM is a cutting-edge inference system engineered for large language models (LLMs), meticulously designed to meet the demands of production environments. It extends its support to a wide range of popular open-source models, including Llama3, Gemma, Bloom, GPT-NeoX, and more.

ScaleLLM is currently undergoing active development. We are fully committed to consistently enhancing its efficiency while also incorporating additional features. Feel free to explore our Roadmap for more details.


Supported Models

| Models | Tensor Parallel | Quantization | Chat API | HF models examples |
| --- | :---: | :---: | :---: | --- |
| Aquila | Yes | Yes | Yes | BAAI/Aquila-7B, BAAI/AquilaChat-7B |
| Bloom | Yes | Yes | No | bigscience/bloom |
| Baichuan | Yes | Yes | Yes | baichuan-inc/Baichuan2-7B-Chat |
| ChatGLM3 | Yes | Yes | Yes | THUDM/chatglm3-6b |
| Gemma | Yes | Yes | Yes | google/gemma-2b |
| GPT_j | Yes | Yes | No | EleutherAI/gpt-j-6b |
| GPT_NeoX | Yes | Yes | No | EleutherAI/gpt-neox-20b |
| GPT2 | Yes | Yes | No | gpt2 |
| InternLM | Yes | Yes | Yes | internlm/internlm-7b |
| Llama3/2 | Yes | Yes | Yes | meta-llama/Meta-Llama-3-8B-Instruct, meta-llama/Meta-Llama-3-8B, meta-llama/Llama-2-7b |
| Mistral | Yes | Yes | Yes | mistralai/Mistral-7B-v0.1 |
| MPT | Yes | Yes | Yes | mosaicml/mpt-30b |
| Phi2 | Yes | Yes | No | microsoft/phi-2 |
| Qwen | Yes | Yes | Yes | Qwen/Qwen-72B-Chat |
| Yi | Yes | Yes | Yes | 01-ai/Yi-6B, 01-ai/Yi-34B-Chat-4bits, 01-ai/Yi-6B-200K |

If your model is not included in the supported list, we are more than willing to assist you. Please feel free to create a request for adding a new model on GitHub Issues.

Getting Started

The easiest way to get started with our project is by using the official Docker images. If you don't have Docker installed, please follow the installation instructions for your platform. Below, you will find a list of all available Docker images for our project:

| Docker Image | CUDA 12.1 | CUDA 11.8 |
| --- | :---: | :---: |
| scalellm | Yes | No |
| scalellm_cu118 | No | Yes |
| scalellm-gateway | - | - |
| chatbot-ui | - | - |

Docker Installation

You can download and install Docker from the official website: Docker Installation. To use GPUs in docker, you also need to install the NVIDIA Container Toolkit.
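To verify that Docker can access your GPUs, a quick check is to run nvidia-smi in a throwaway CUDA container (the image tag here is only an example):

docker run --rm --gpus=all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi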

ScaleLLM server

Once you have Docker installed, you can run the ScaleLLM Docker container with the latest image using the following command:

docker pull docker.io/vectorchai/scalellm:latest
docker run -it --gpus=all --net=host --shm-size=1g \
  -v $HOME/.cache/huggingface/hub:/models \
  -e HF_MODEL_ID=meta-llama/Meta-Llama-3-8B-Instruct \
  -e DEVICE=cuda:0 \
  docker.io/vectorchai/scalellm:latest --logtostderr

This command starts the Docker container with GPU support and various configuration options.

  • HF_MODEL_ID specifies which Hugging Face model you want to run.
  • HF_MODEL_REVISION specifies which Hugging Face model revision you want to run. By default, it is set to "main".
  • DEVICE specifies the device on which the model should run. By default, it is set to "auto", which uses all available GPUs. You can also select specific GPUs with "cuda:0,cuda:1", or run on CPU with "cpu".
  • HF_MODEL_ALLOW_PATTERN specifies which types of files are allowed to be downloaded. By default, it will be configured automatically based on tensor type. Only use this option if the default configuration is not working for you.
  • HUGGING_FACE_HUB_TOKEN specifies the Hugging Face access token for gated models, passed as -e HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN; see the combined example below.
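For example, to serve a gated model on a specific GPU with a pinned revision, the options above can be combined as follows (the revision value is illustrative):

docker run -it --gpus=all --net=host --shm-size=1g \
  -v $HOME/.cache/huggingface/hub:/models \
  -e HF_MODEL_ID=meta-llama/Meta-Llama-3-8B-Instruct \
  -e HF_MODEL_REVISION=main \
  -e DEVICE=cuda:0 \
  -e HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN \
  docker.io/vectorchai/scalellm:latest --logtostderr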

Warning

  • The Docker image tagged 'latest' may point to a new version after each release. Re-pull to pick up the newest image, or pull a specific tag to pin a version.
  • Two versions of the Docker image are provided, for CUDA 12.1 and CUDA 11.8. Please choose the right image for your environment.
  • NCCL might fall back to using the host memory if NVLink or PCI is not available. To allow NCCL to use the host memory, we added '--shm-size=1g' to the docker run command.
  • Although ScaleLLM supports both CPU and GPU, we recommend using GPU for better performance. CPU support is mainly for debugging and testing purposes, so the performance might be sub-optimal.

Ports and Endpoints

After running the Docker container, two ports are exposed:

  1. Port 8888 for gRPC Server:

    The gRPC server is served on 0.0.0.0:8888 by default. You can use gRPC to interact with the service.

  2. Port 9999 for HTTP Server:

    A simple HTTP server for instrumentation is served on 0.0.0.0:9999 by default. This server provides various endpoints for managing and monitoring the service:

    • Use curl localhost:9999/health to check the health status of the service.
    • Use curl localhost:9999/metrics to export Prometheus metrics.
    • Use curl localhost:9999/gflags to list all available gflags for configuration.
    • More endpoints are on the way.

Rest API Server

You can also start a REST API gateway with the latest image using the following command:

docker pull docker.io/vectorchai/scalellm-gateway:latest
docker run -it --net=host \
  docker.io/vectorchai/scalellm-gateway:latest --logtostderr

The REST API Server is available on localhost:8080. You can use REST API requests to interact with the system. Check out the Usage Examples section for more details.
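The gateway exposes an OpenAI-compatible API (the Python examples below rely on this), so you can, for instance, list the models it serves, assuming the standard /v1/models route:

curl http://localhost:8080/v1/models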

Chatbot UI

A local Chatbot UI is also available on localhost:3000. You can start it with the latest image using the following command:

docker pull docker.io/vectorchai/chatbot-ui:latest
docker run -it --net=host \
  -e OPENAI_API_HOST=http://127.0.0.1:8080 \
  -e OPENAI_API_KEY=YOUR_API_KEY \
  docker.io/vectorchai/chatbot-ui:latest

Docker Compose

Using Docker Compose is the easiest way to run ScaleLLM with all the services together. If you don't have Docker Compose installed, please follow the installation doc for your platform.

curl https://raw.githubusercontent.com/vectorch-ai/ScaleLLM/main/scalellm.yml -sSf > scalellm_compose.yml
HF_MODEL_ID=meta-llama/Meta-Llama-3-8B-Instruct DEVICE=cuda docker compose -f ./scalellm_compose.yml up

You will get the following running services (verification commands follow the list):

  • Chatbot UI on port 3000: localhost:3000
  • ScaleLLM gRPC server on port 8888: localhost:8888
  • ScaleLLM HTTP server for monitoring on port 9999: localhost:9999
  • ScaleLLM REST API server on port 8080: localhost:8080
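Once the stack is up, you can verify the monitoring endpoints described earlier:

curl localhost:9999/health
curl localhost:9999/metrics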

Usage Examples

Chat Completions

You can get chat completions with the following example:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
Alternatively, you can use the OpenAI Python client (the examples below use the legacy openai<1.0 SDK):

import openai

# Point the client at the local REST API gateway.
openai.api_base = "http://localhost:8080/v1"
# Set a placeholder API key; replace it if your deployment enforces authentication.
openai.api_key = "EMPTY"

# List and print the available models
print("==== Available models ====")
models = openai.Model.list()
for m in models["data"]:
    print(m["id"])

model = "meta-llama/Meta-Llama-3-8B-Instruct"

completion = openai.ChatCompletion.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
    max_tokens=256,
    stream=True,
)

print(f"==== Model: {model} ====")
for chunk in completion:
    content = chunk["choices"][0]["delta"].get("content")
    if content:
        print(content, end="")

Completions

For regular completions, you can use this example:

curl http://localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "prompt": "hello",
    "max_tokens": 32,
    "temperature": 0.7,
    "stream": true
  }'
Or, with the legacy openai<1.0 Python SDK:

import openai

# Point the client at the local REST API gateway.
openai.api_base = "http://localhost:8080/v1"
# Set a placeholder API key; replace it if your deployment enforces authentication.
openai.api_key = "EMPTY"

# List and print the available models
print("==== Available models ====")
models = openai.Model.list()
for m in models["data"]:
    print(m["id"])

model = "meta-llama/Meta-Llama-3-8B-Instruct"

completion = openai.Completion.create(
    model=model,
    prompt="hello",
    max_tokens=256,
    temperature=0.7,
    stream=True,
)

print(f"==== Model: {model} ====")
for chunk in completion:
    content = chunk["choices"][0].get("text")
    if content:
        print(content, end="")

Quantization

Quantization is a crucial process for reducing the memory footprint of models. ScaleLLM offers support for two quantization techniques: Accurate Post-Training Quantization (GPTQ) and Activation-aware Weight Quantization (AWQ), with seamless integration into the following libraries: autogptq, exllama, exllamav2, and awq.

By default, exllamav2 is employed for GPTQ 4-bit quantization. However, you can choose a specific implementation by configuring the "--qlinear_gptq_impl" option, which accepts exllama, exllamav2, or auto.
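For example, to force the exllama implementation, the flag can be appended to the server arguments in the same way as --logtostderr in the commands above (a sketch; it assumes the flag is accepted on the command line like the other gflags the server exposes):

docker run -it --gpus=all --net=host --shm-size=1g \
  -v $HOME/.cache/huggingface/hub:/models \
  -e HF_MODEL_ID=meta-llama/Meta-Llama-3-8B-Instruct \
  docker.io/vectorchai/scalellm:latest --logtostderr --qlinear_gptq_impl=exllama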

Limitations

There are several known limitations we are looking to address in the coming months, including:

  • Only GPUs newer than the Turing architecture are supported.

Contributing

If you have any questions or want to contribute, please don't hesitate to ask in our "Discussions" forum or join our "Discord" chat room. We welcome your input and contributions to make ScaleLLM even better. Please follow the Contributing.md to get started.

Acknowledgements

The following open-source projects have been used in this project, either in their original form or modified to meet our needs:

License

This project is released under the Apache 2.0 license.

