Alpa is a system for training and serving large-scale neural networks.
Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
The key features of Alpa include:
- 💻 Automatic Parallelization. Alpa automatically parallelizes users' single-device code on distributed clusters with data, operator, and pipeline parallelism (see the sketch after this list).
- 🚀 Excellent Performance. Alpa achieves linear scaling when training models with billions of parameters on distributed clusters.
- ✨ Tight Integration with Machine Learning Ecosystem. Alpa is backed by open-source, high-performance, and production-ready libraries such as Jax, XLA, and Ray.
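As a hedged illustration of how these parallelism forms are requested, `parallelize` accepts a `method` argument; the class names below follow the Alpa documentation, but the API has evolved across versions, so treat this as a sketch rather than a definitive recipe:

```python
import alpa

# Intra-operator parallelism only (data and operator parallelism)
@alpa.parallelize(method=alpa.ShardParallel())
def train_step(model_state, batch):
    ...

# Add inter-operator (pipeline) parallelism: split the model into stages
# and feed micro-batches through the resulting pipeline
@alpa.parallelize(method=alpa.PipeshardParallel(num_micro_batches=16))
def pipelined_train_step(model_state, batch):
    ...
```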
Alpa provides a free, unlimited OPT-175B text generation service. Try our service at https://opt.alpa.ai/ and share your prompting results!
Join the Alpa Slack and let us know which new features you want!
Use Alpa's decorator `@parallelize` to scale your single-device training code to distributed clusters.
```python
import alpa
import jax.numpy as jnp
from jax import grad

# Parallelize the training step in Jax by simply using a decorator
@alpa.parallelize
def train_step(model_state, batch):
    def loss_func(params):
        out = model_state.forward(params, batch["x"])
        return jnp.mean((out - batch["y"]) ** 2)

    grads = grad(loss_func)(model_state.params)
    new_model_state = model_state.apply_gradient(grads)
    return new_model_state

# The training loop now automatically runs on your designated cluster
model_state = create_train_state()
for batch in data_loader:
    model_state = train_step(model_state, batch)
```
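To run the loop above on a multi-node cluster rather than on local devices, Alpa is typically connected to a Ray cluster first; a minimal sketch, assuming a Ray head node is already running (see the installation section below) and following the `alpa.init` call from the Alpa documentation:

```python
import ray
import alpa

# Connect to the running Ray cluster (started with `ray start --head`)
ray.init(address="auto")
# Hand the GPUs managed by Ray over to Alpa
alpa.init(cluster="ray")
```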
Check out the Alpa Documentation site for installation instructions, tutorials, examples, and more.
The quickest way to get started with Alpa is via pip. We push the latest Alpa wheels to the PyPI index and the Alpa-modified jaxlib to our GitHub-hosted index.
```bash
# Install alpa
pip install alpa

# Install the alpa-jaxlib compatible with CUDA >= 11.1 and cuDNN >= 8.0.5
pip install jaxlib==0.3.5+cuda111.cudnn805 -f https://alpa-projects.github.io/wheels.html

# You can install for other CUDA versions via:
pip install jaxlib==0.3.5+cuda{cuda_version}.cudnn{cudnn_version} -f https://alpa-projects.github.io/wheels.html
```
All supported CUDA and cuDNN versions are listed on the index page.
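If you are unsure which wheel matches your machine, you can usually query the local CUDA toolkit and cuDNN versions first; note that the cuDNN header path below is a common default and may differ on your system:

```bash
# CUDA toolkit version
nvcc --version
# cuDNN version (the header location varies by installation)
grep -A 2 "#define CUDNN_MAJOR" /usr/include/cudnn_version.h
```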
After installation, verify that Alpa works as expected:
```bash
# Start the Ray cluster first
ray start --head

# Run the installation test
python -m alpa.test_install
```
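When you are done experimenting, the local Ray processes can be shut down with the standard Ray command:

```bash
# Stop the Ray processes started above
ray stop
```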
You can also install Alpa from source for development or for other CUDA versions. Follow this detailed guide to install Alpa from source, or to troubleshoot any errors you run into during the process.
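For orientation, the from-source path typically begins by cloning the repository with its submodules; this sketch omits building the Alpa-modified jaxlib, which the guide covers in detail:

```bash
# Clone Alpa together with its submodules (e.g., the modified TensorFlow/XLA)
git clone --recursive https://github.com/alpa-projects/alpa.git
cd alpa

# Install Alpa in editable mode; the Alpa-modified jaxlib still needs to be
# installed separately as described in the guide
pip install -e .
```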
- Alpa OSDI 2022 paper
- Google AI blog
- Alpa talk slides
- ICML 2022 Big Model Tutorial slides
- ICML 2022 Big Model Tutorial video recording
- Prof. Ion Stoica introduces the Alpa system
- Please read the contributor guide if you are interested in contributing to Alpa.
- Connect with Alpa contributors via the Alpa Slack.
Alpa is licensed under the Apache-2.0 license.