This project forked from hpcaitech/energonai


Large-scale model inference.

License: Apache License 2.0


Energon-AI


A large-scale model inference system. Energon-AI provides three levels of abstraction for large-scale model inference:

  • Runtime - tensor-parallel operations, a pipeline-parallel wrapper, a distributed message queue, distributed checkpoint loading, and customized CUDA kernels.
  • Engine - encapsulates single-instance-multiple-devices (SIMD) execution behind remote procedure calls, so that it behaves like single-instance-single-device (SISD) execution.
  • Serving - batches requests and manages engines.
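
The Engine idea above — many devices working in parallel, but the caller seeing one ordinary call — can be sketched in a few lines. This is a toy illustration only: all names are hypothetical, and it uses threads where EnergonAI uses RPC across CUDA devices.

```python
# Toy sketch of the Engine abstraction: several shards execute in
# parallel (SIMD), but the caller makes a single plain call (SISD).
# Hypothetical names throughout; not the EnergonAI API.
from concurrent.futures import ThreadPoolExecutor


class ToyEngine:
    def __init__(self, num_shards: int):
        self.num_shards = num_shards
        self.pool = ThreadPoolExecutor(max_workers=num_shards)

    def _shard_forward(self, shard_id: int, tokens: list) -> list:
        # Stand-in for one device's tensor-parallel partial result.
        return [t * (shard_id + 1) for t in tokens]

    def run(self, tokens: list) -> list:
        # Fan out to every shard, then reduce; the caller never sees this.
        futures = [self.pool.submit(self._shard_forward, i, tokens)
                   for i in range(self.num_shards)]
        partials = [f.result() for f in futures]
        return [sum(col) for col in zip(*partials)]


engine = ToyEngine(num_shards=4)
print(engine.run([1, 2, 3]))  # → [10, 20, 30] — looks like a single-device call
```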

Models trained with Colossal-AI can be seamlessly transferred to Energon-AI. Single-device models require manual coding work to introduce tensor parallelism and pipeline parallelism.

At present, we provide distributed Bert, GPT, and ViT models.
GPT scales up to 175B parameters (GPT-3).
For Bert, Google reported a 481B-parameter Bert in the MLPerf-Training v1.1 open division, indicating that Bert can also scale to large sizes.

Installation

$ git clone git@github.com:hpcaitech/EnergonAI.git
$ cd EnergonAI
$ pip install -r requirements.txt
$ pip install .

Huggingface GPT2 Generation Task Case

# Download the checkpoint
$ wget https://huggingface.co/gpt2/resolve/main/pytorch_model.bin
# Download the tokenizer files
$ wget https://huggingface.co/gpt2/resolve/main/tokenizer.json
$ wget https://huggingface.co/gpt2/resolve/main/vocab.json
$ wget https://huggingface.co/gpt2/resolve/main/merges.txt

# Launch the service
export PYTHONPATH=~/EnergonAI/examples/hf_gpt2
energonai service init --config_file=~/EnergonAI/examples/hf_gpt2/hf_gpt2_config.py

# Request the service
Method 1:
    FastAPI provides automatic API docs; open
    http://127.0.0.1:8005/docs and make requests through the graphical interface.
Method 2:
    curl -X 'GET' \
    'http://127.0.0.1:8005/run_hf_gpt2/I%20do%20not?max_seq_length=16' \
    -H 'accept: application/json'
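
The same request can be made from Python with only the standard library. The host, port, and route below are taken from the curl example above; adjust them if your service is configured differently.

```python
# Python equivalent of the curl request above (standard library only).
import urllib.parse
import urllib.request


def build_url(prompt: str, max_seq_length: int = 16,
              base: str = "http://127.0.0.1:8005") -> str:
    # The prompt travels in the URL path, so it must be percent-encoded.
    return (f"{base}/run_hf_gpt2/{urllib.parse.quote(prompt)}"
            f"?max_seq_length={max_seq_length}")


def run_hf_gpt2(prompt: str, max_seq_length: int = 16) -> str:
    req = urllib.request.Request(build_url(prompt, max_seq_length),
                                 headers={"accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()


# With the service running:
# print(run_hf_gpt2("I do not"))
```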

Large-scale Model Inference Performance

Scaling Ability

A 12-layer GPT-3 model in FP16 is used, running on a node with eight A100 80 GB GPUs that are fully connected via NVLink.
Energon-AI adopts redundant-computation elimination, a method first proposed in EffectiveTransformer; our implementation follows TurboTransformer.
The sequence length is set to half of the padding length.
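
The benchmark setting of sequence length equal to half the padding length makes the benefit of redundant-computation elimination easy to quantify. A back-of-envelope sketch, with illustrative numbers only:

```python
# With fixed-length padding, every pad token still costs compute:
# linear layers waste work in proportion to the pad fraction, and
# attention (quadratic in length) wastes even more.
def linear_waste(seq_len: int, pad_len: int) -> float:
    # Fraction of per-token (linear-layer) work spent on padding.
    return (pad_len - seq_len) / pad_len


def attention_waste(seq_len: int, pad_len: int) -> float:
    # Fraction of quadratic attention work spent on padding.
    return 1 - (seq_len / pad_len) ** 2


# The benchmark sets the sequence length to half the padding length:
print(linear_waste(512, 1024))     # → 0.5
print(attention_waste(512, 1024))  # → 0.75
```

So in this setting roughly half of the token-level work, and three quarters of the attention work, is spent on padding unless it is eliminated.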


Latency

GPT-3 in FP16 is used, running on a node with eight A100 80 GB GPUs; every two GPUs are connected via NVLink.
When the redundant-computation elimination method is enabled (labeled Energon-AI(RM)), the sequence length is set to half of the padding length.
FasterTransformer is used for comparison; it does not support redundant-computation elimination in distributed execution.


Batching

Energon-AI dynamically selects the batch with the highest priority, scored on waiting time, batch size, and the possibility of batch expansion (based on sentence length after padding). Our dynamic batching method is inspired by the DP algorithm from TurboTransformer.
FIFO batching is used as the baseline for comparison.
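
A toy version of this priority-based selection is sketched below. The scoring weights are made up for illustration; the real scheduler uses a DP formulation inspired by TurboTransformer, not these constants.

```python
# Toy priority-based batch selection: score each candidate batch on
# waiting time, size, and padding efficiency, and run the best first.
# Weights are illustrative only.
from dataclasses import dataclass


@dataclass
class Batch:
    wait_time: float      # seconds the oldest request has waited
    size: int             # number of requests in the batch
    pad_len: int          # padded sequence length
    total_tokens: int     # sum of real (unpadded) token counts


def priority(b: Batch) -> float:
    # efficiency == 1.0 means no padding waste at all.
    efficiency = b.total_tokens / (b.size * b.pad_len)
    return 2.0 * b.wait_time + 0.5 * b.size + 3.0 * efficiency


def select_batch(candidates: list) -> Batch:
    return max(candidates, key=priority)


batches = [
    Batch(wait_time=0.1, size=8, pad_len=128, total_tokens=640),
    Batch(wait_time=1.5, size=2, pad_len=64, total_tokens=120),
]
# The small batch wins: it has waited longer and pads less.
print(select_batch(batches))
```

Compared with FIFO, which always runs the oldest batch regardless of how much padding it introduces, this kind of scoring trades a little latency fairness for throughput.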


Contributing

If you are interested in contributing to the project, please refer to Contributing for guidance.

Thanks so much!

Technical Overview

(architecture diagram)

