hetu-galvatron's Introduction

Galvatron-2

Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). It leverages advanced automatic parallelism techniques to deliver exceptional training efficiency. This repository houses the official implementation of Galvatron-2, our latest version enriched with several new features.

Key Features

(1) Enhanced Efficiency via Automatic Parallelism

Enlarged Parallelism Search Space

Galvatron incorporates multiple popular parallelism dimensions of distributed training, including DP (Data Parallelism), SDP (Sharded Data Parallelism, supporting both ZeRO-2 & ZeRO-3), PP (Pipeline Parallelism, supporting both GPipe & Pipedream-flush / 1F1B-flush), and TP (Tensor Parallelism). It also incorporates CKPT (Activation Checkpointing) as a special parallelism dimension.

Fine-grained Hybrid Parallelism

For each Transformer layer, Galvatron supports flexible and fine-grained hybrid parallelism strategies, which contributes to enhanced training efficiency.

Efficient Automatic Parallelism Optimization

For any given Transformer model, Galvatron automatically and efficiently searches for the optimal parallelism strategy, providing the optimal training efficiency.

(2) Versatility

Galvatron is suitable for a wide range of Transformer architectures, including language models, LLMs, vision models, multi-modal models, etc.

(3) User-Friendly Interface

Easy to use, even for those new to distributed training.

What's New in Galvatron-2

  • Support CKPT (Activation Checkpointing)
  • Support Mixed Precision (FP16, BF16)
  • Support more pipeline schedules (GPipe and Pipedream-flush / 1F1B-flush)
  • Support PyTorch-2 (currently supports 2.0.1)
  • Support FlashAttention-2 for more efficient attention kernel
  • Provide a new Galvatron Profiler that conveniently profiles the model's computation and memory consumption
  • Provide new Galvatron Search Engine with enhanced efficiency of parallelism optimization
  • Optimized user-friendly interfaces
  • Support more Transformer models (more models are coming soon...)

System Architecture

Galvatron consists of four modules: an automatic Galvatron Profiler, a strategy cost estimator, the Galvatron Search Engine that performs parallelism optimization, and the Galvatron runtime framework. To train Transformer models over multiple GPUs using automatic parallelism with Galvatron, users only need to provide the hardware environment and the Transformer model configuration.

Installation

Requirements:

  • PyTorch 2.0.1 (we will support newer versions of PyTorch soon)

To install Galvatron:

pip install hetu-galvatron

Alternatively, you can install Galvatron from source with pip install .

To use FlashAttention-2 features in Galvatron-2, you can either:

  • Install FlashAttention-2 manually and then run pip install hetu-galvatron.
  • Alternatively, you can install Galvatron-2 with FlashAttention-2 as follows:
  1. Make sure that PyTorch, packaging (pip install packaging), and ninja are installed.
  2. Install Galvatron-2 with FlashAttention-2:
GALVATRON_FLASH_ATTN_INSTALL=TRUE pip install hetu-galvatron
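
For example, a minimal sketch of the two options (the flash-attn install command follows the FlashAttention project's own instructions and may need to be adapted to your CUDA setup):

# Option 1: install FlashAttention-2 manually, then Galvatron
pip install packaging ninja
pip install flash-attn --no-build-isolation
pip install hetu-galvatron

# Option 2: let the Galvatron installation build FlashAttention-2
pip install packaging ninja
GALVATRON_FLASH_ATTN_INSTALL=TRUE pip install hetu-galvatron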

Usage

Profiling with Galvatron

The first step to use Galvatron is to profile the hardware environment and the model computation time. Galvatron will automatically save the profiled results into config files.

(1) First, to profile the hardware environment, cd galvatron/profile_hardware, write the host address into hostfile, set NUM_NODES, NUM_GPUS_PER_NODE, and MPI_PATH in scripts/profile_hardware.sh, and run:

sh scripts/profile_hardware.sh

Galvatron will call nccl-tests to profile the communication bandwidth.
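
A minimal sketch of this setup for a hypothetical two-node cluster (host names, paths, and values below are placeholders, and the exact variable layout of scripts/profile_hardware.sh may differ):

cd galvatron/profile_hardware

# hostfile: one host address per line (placeholder addresses)
cat > hostfile <<EOF
node-0
node-1
EOF

# example values to set in scripts/profile_hardware.sh
NUM_NODES=2
NUM_GPUS_PER_NODE=8
MPI_PATH=/usr/local/mpi/

sh scripts/profile_hardware.sh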

(2) Second, to profile the model computation time, cd galvatron/models/model_name and run:

sh scripts/profile_computation.sh
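
For example, assuming a hypothetical gpt model directory under galvatron/models (the directory name and the output location are assumptions for illustration):

cd galvatron/models/gpt
sh scripts/profile_computation.sh
# the profiled computation time is saved into config files, e.g. under ./configs
ls configs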

Parallelism Optimizing with Galvatron

After profiling the environment, Galvatron is able to automatically optimize the parallelism strategy for the given Transformer model. Given a memory budget, Galvatron provides the fine-grained hybrid parallelism strategy with the maximum throughput. The optimized parallelism strategy will be saved in galvatron/models/model_name/configs for training. Users can train the model with the provided optimal strategy to obtain the optimal throughput.

To conduct parallelism optimization, cd galvatron/models/model_name, customize NUM_NODES, NUM_GPUS_PER_NODE, and MEMORY in scripts/search_dist.sh, and run:

sh scripts/search_dist.sh
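
A minimal sketch with example values (the model directory name is hypothetical, and the unit of MEMORY is assumed to be GB per GPU):

cd galvatron/models/gpt        # hypothetical model directory
# example values to customize in scripts/search_dist.sh
NUM_NODES=2
NUM_GPUS_PER_NODE=8
MEMORY=24                      # per-GPU memory budget (assumed to be in GB)

sh scripts/search_dist.sh
# the searched strategy is saved under galvatron/models/<model_name>/configs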

See more usage details of the customized parallelism optimization in Galvatron Model Usage.

Training with Galvatron

Galvatron provides a simple way to train Transformer models in a fine-grained hybrid parallelism fashion. Users can either train Transformer models with the searched optimal parallelism strategy by specifying the argument galvatron_config_path to obtain the optimal throughput, or use any parallelism strategy they like. Galvatron supports two hybrid parallel config modes, including JSON config mode and GLOBAL config mode. Users can specify parallelism strategies by modifying only a few arguments.
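
As a rough illustration of the JSON config mode only: a hybrid parallel config file (such as the galvatron_config_xxx.json mentioned in the issues below) specifies the per-layer strategy. The sketch below writes such a file; the field names and value formats are assumptions and may not match the actual schema:

# sketch of a JSON-mode hybrid parallel config (field names are assumptions;
# real configs contain more fields than shown here)
cat > my_galvatron_config.json <<EOF
{
    "pp_deg": 2,
    "tp_sizes_enc": "2,2,2,2"
}
EOF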

To train the model with Galvatron, cd galvatron/models/model_name, set NUM_NODES, NUM_GPUS_PER_NODE, MASTER_ADDR, MASTER_PORT, NODE_RANK, and run:

sh scripts/train_dist.sh
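
A minimal sketch of a training launch with the searched strategy (the model directory name, the example values, and the way galvatron_config_path is passed inside scripts/train_dist.sh are assumptions for illustration):

cd galvatron/models/gpt        # hypothetical model directory
# example values to set in scripts/train_dist.sh
NUM_NODES=2
NUM_GPUS_PER_NODE=8
MASTER_ADDR=node-0             # address of the rank-0 node
MASTER_PORT=29500
NODE_RANK=0                    # index of the current node

# to train with the searched optimal strategy, point the script's
# galvatron_config_path argument at the saved config, e.g.
# galvatron_config_path=configs/galvatron_config_xxx.json
sh scripts/train_dist.sh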

See detailed guidance and more customized training options in Galvatron Model Usage.

hetu-galvatron's People

Contributors

afdwang, fizzmy


hetu-galvatron's Issues

How to use it?

"Could you please tell me how to use this library? I don't see the construct_parallel_models() API."

question about the config file

"I saw the galvatron_config_xxx.json files of the example models; the pp_degs and tp_sizes_enc are all "1". Are these config files the searched optimal parallelism strategies?"
