intel / auto-round

SOTA Weight-only Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".

Home Page: https://arxiv.org/abs/2309.05516

License: Apache License 2.0

Python 97.54% Shell 2.46%
awq gptq int4 neural-compressor quantization rounding weight-only

auto-round's Introduction

AutoRound

Advanced Weight-Only Quantization Algorithm for LLMs


AutoRound is an advanced weight-only quantization algorithm for low-bit LLM inference. It is tailored to a wide range of models and consistently delivers noticeable improvements, often significantly outperforming SignRound, at the cost of more tuning time for quantization.

Our method adopts signed gradient descent to fine-tune the rounding values and min-max values of weights in just 200 steps, which competes impressively against recent methods without introducing any additional inference overhead. The image below presents an overview of AutoRound.
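
To make this concrete, below is a minimal, self-contained sketch of the core idea under our own simplifying assumptions (a toy linear layer, per-row asymmetric quantization, a plain MSE reconstruction loss, and no min-max tuning). It only illustrates signed gradient descent on rounding offsets and is not the library's implementation.

import torch

torch.manual_seed(0)
w = torch.randn(64, 64)                      # toy full-precision weight
x = torch.randn(128, 64)                     # toy calibration activations
bits = 4
qmax = 2 ** bits - 1
wmin = w.min(dim=1, keepdim=True).values
wmax = w.max(dim=1, keepdim=True).values
scale = (wmax - wmin) / qmax                 # per-row asymmetric scale
zp = torch.round(-wmin / scale)              # per-row zero point

def fake_quant(w, v):
    # Quantize-dequantize with a learnable rounding offset v; the
    # straight-through trick lets gradients flow through round/clamp.
    q = w / scale + zp + v
    q = q + (torch.clamp(torch.round(q), 0, qmax) - q).detach()
    return (q - zp) * scale

v = torch.zeros_like(w, requires_grad=True)  # rounding offset, kept in [-0.5, 0.5]
lr = 1.0 / 200                               # default lr is 1/iters
target = x @ w.T                             # full-precision layer output

for _ in range(200):
    loss = torch.nn.functional.mse_loss(x @ fake_quant(w, v).T, target)
    loss.backward()
    with torch.no_grad():
        v -= lr * v.grad.sign()              # signed gradient descent step
        v.clamp_(-0.5, 0.5)
        v.grad.zero_()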

What's New

Prerequisites

  • Python 3.9 or higher

Installation

Build from Source

pip install -r requirements.txt
python setup.py install

Install from PyPI

pip install auto-round

Model quantization

Gaudi2 / CPU / GPU

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

from auto_round import AutoRound

bits, group_size, sym = 4, 128, False
# device: Optional["auto", None, "hpu", "cpu", "cuda"]
autoround = AutoRound(model, tokenizer, bits=bits, group_size=group_size, sym=sym, device=None)
autoround.quantize()
output_dir = "./tmp_autoround"
autoround.save_quantized(output_dir)

Detailed Hyperparameters

  • model: The PyTorch model to be quantized.

  • tokenizer: An optional tokenizer for processing input data. If none, a dataset must be provided.

  • bits (int): Number of bits for quantization (default is 4).

  • group_size (int): Size of the quantization group (default is 128).

  • sym (bool): Whether to use symmetric quantization (default is False).

  • enable_quanted_input (bool): Whether to use the output of the previous quantized block as the input for the current block for tuning (default is True).

  • enable_minmax_tuning (bool): Whether to enable weight min-max tuning (default is True).

  • iters (int): Number of tuning iterations (default is 200).

  • lr (float): The learning rate for the rounding values (default is None; it will be set to 1.0/iters automatically).

  • minmax_lr (float): The learning rate for min-max tuning (default is None, it will be set to lr automatically).

  • n_samples (int): Number of samples for tuning (default is 512).

  • seqlen (int): Data length of the sequence for tuning (default is 2048).

  • batch_size (int): Batch size for training (default is 8).

  • scale_dtype (str): The data type of the quantization scale (default is "float16"); different kernels support different choices.

  • amp (bool): Whether to use automatic mixed precision (default is True).

  • n_blocks (int): Number of blocks to pack and tune together (default is 1).

  • gradient_accumulate_steps (int): Number of gradient accumulation steps (default is 1).

  • low_gpu_mem_usage (bool): Whether to save GPU memory at the cost of ~20% more tuning time (default is True).

  • dataset Union[str, list, tuple, torch.utils.data.DataLoader]: The dataset name for tuning (default is "NeelNanda/pile-10k"). Local JSON files and combinations of datasets are supported, e.g. "./tmp.json,NeelNanda/pile-10k:train, mbpp:train+validation+test".

  • weight_config (dict): Configuration for weight quantization (default is an empty dictionary), mainly for mixed bits or mixed precision; see the sketch after this list.

  • device: The device to be used for tuning. The default is set to 'auto', allowing for automatic detection.
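
For illustration, here is a sketch that passes several of these hyperparameters explicitly, reusing the model and tokenizer loaded in the quantization example above. The layer name and inner fields of weight_config are assumptions for demonstration; check the library for the exact schema.

from auto_round import AutoRound

# Hypothetical mixed-precision override: keep lm_head at higher precision.
# The inner keys ("bits", "group_size") are assumed; verify against the docs.
weight_config = {
    "lm_head": {"bits": 8, "group_size": 32},
}

autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    sym=False,
    iters=200,
    n_samples=512,
    seqlen=2048,
    dataset="./tmp.json,NeelNanda/pile-10k:train",  # local JSON combined with an HF dataset
    weight_config=weight_config,
)
autoround.quantize()
autoround.save_quantized("./tmp_autoround")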

Model inference

Please run the quantization code first.

CPU

# Install the latest intel-extension-for-transformers from source first: https://github.com/intel/intel-extension-for-transformers
from intel_extension_for_transformers.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

quantized_model_path = "./tmp_autoround"
model = AutoModelForCausalLM.from_pretrained(quantized_model_path, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path, use_fast=True)
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))

GPU

from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_path = "./tmp_autoround"
model = AutoModelForCausalLM.from_pretrained(quantized_model_path, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path, use_fast=True)
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))

Support List

Model Supported
Intel/neural-chat-7b-v3-3 HF-int4-model, accuracy, recipe, example
Intel/neural-chat-7b-v3-1 HF-int4-model, accuracy, recipe, example
mistralai/Mistral-7B-v0.1 HF-int4-model, accuracy, recipe, example
microsoft/phi-2 HF-int4-model, accuracy, recipe, example
tiiuae/falcon-7b HF-int4-model, accuracy, recipe, example
google/gemma-2b HF-int4-model, accuracy, recipe, example
mistralai/Mistral-7B-Instruct-v0.2 HF-int4-model (under review), accuracy, recipe, example
google/gemma-7b HF-int4-model (under review), accuracy, recipe, example
google/gemma-7b-it HF-int4-model (under review), accuracy, recipe, example
mistralai/Mixtral-8x7B-Instruct-v0.1 HF-int4-model (under review), accuracy, recipe, example
mistralai/Mixtral-8x7B-v0.1 HF-int4-model (under review), accuracy, recipe, example
meta-llama/Meta-Llama-3-8B-Instruct accuracy, recipe, example
meta-llama/Llama-2-7b-chat-hf accuracy, recipe, example
Qwen/Qwen1.5-7B-Chat accuracy, sym recipe, asym recipe , example
baichuan-inc/Baichuan2-7B-Chat accuracy, recipe, example
01-ai/Yi-6B-Chat accuracy, recipe, example
facebook/opt-2.7b accuracy, recipe, example
bigscience/bloom-3b accuracy, recipe, example
EleutherAI/gpt-j-6b accuracy, recipe, example
Salesforce/codegen25-7b-multi example
huggyllama/llama-7b example
mosaicml/mpt-7b example
THUDM/chatglm3-6b example
MBZUAI/LaMini-GPT-124M example
EleutherAI/gpt-neo-125m example
databricks/dolly-v2-3b example
stabilityai/stablelm-base-alpha-3b example

Comparison with other methods

We provide a comprehensive analysis against other methods in our accuracy data section. In summary, our approach outperforms GPTQ in 30/32 settings, AWQ in 27/32, HQQ in 15/16, and OmniQuant in 16/16 (a perfect score), across LLaMA-v1/LLaMA-v2/Mistral-7B at W4G-1, W4G128, W3G128, and W2G128, based on the average accuracy of 11 zero-shot tasks.

Tips

1 Consider increasing tuning steps to achieve better results, albeit with increased tuning time.

2 Setting 'enable_quanted_input' to False has been observed to occasionally yield improved results.

3 Setting 'minmax_lr' to 2.0/iters has been observed to occasionally yield improved results (see the sketch below, which combines these settings).
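
A sketch combining these tips (parameter names are taken from the hyperparameter list above; the values are only examples, and model/tokenizer are the objects from the quantization example):

from auto_round import AutoRound

iters = 1000
autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    iters=iters,                  # tip 1: more tuning steps
    enable_quanted_input=False,   # tip 2: occasionally improves results
    minmax_lr=2.0 / iters,        # tip 3: occasionally improves results
)
autoround.quantize()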

Reference

If you find SignRound useful for your research, please cite our paper:

@article{cheng2023optimize,
  title={Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}

auto-round's People

Contributors

chensuyue, hshen14, lkk12014402, pre-commit-ci[bot], pursure-d, rdower, weiweizhang1, wenhuach21, yiliu30, yintong-lu


auto-round's Issues

OPT model quantize_lm_head clarification

While testing OPT with quant_lm_head=True, here are the resulting weights post-quantization:

weight keys: ['lm_head.g_idx', 'lm_head.qweight', 'lm_head.qzeros', 'lm_head.scales', 'model.decoder.embed_positions.weight', 'model.decoder.embed_tokens.weight', ...

model.decoder.embed_tokens.weight is not quantized but lm_head is. Unfortunately, the vLLM model code (and maybe HF transformers too) ignores this lm_head layer during weight load? I confirmed this for vLLM but am not 100% sure for transformers.

But OPT's lm_head is actually the same as (soft-linked to) model.decoder.embed_tokens in the vLLM code, and this appears to be true in transformers as well. I checked the original weights: lm_head exists, but its size/values are exactly the same as embed_tokens, so the model authors appear to think lm_head should be ignored on load.

https://github.com/huggingface/transformers/blob/0ae789e04330e15a90e34cd723c851a8ab8d7ec5/src/transformers/models/opt/modeling_opt.py#L1001

In vLLM's model loading code for OPT, the lm_head weights are skipped and soft-linked to the embeddings. This appears to be the same for HF transformers as well.

https://github.com/vllm-project/vllm/blob/26f2fb51133c85ad8a57a87c8037f750dda757f4/vllm/model_executor/models/opt.py#L288

So my naive question is: who is correct? Is AutoRound correctly finding and quantizing the lm_head layer while this layer is actually ignored by model loaders? ={

This is in relation to the testing I am doing for the vLLM PR: vllm-project/vllm#4442 (comment)

This becomes an issue when loading the quant, as vLLM completely skips the lm_head layers (pre- or post-quant), since I assume the model code writer figured there is no need to load equivalent weights twice when the tensor sizes and values are exactly the same.

I am new to all the layers/modules so forgive me if my question itself is based on false premises. Thank you! I hope to have intel/autoround model support merged into vllm soon.

Here are the original weights before quantization:

https://huggingface.co/facebook/opt-125m

key model.decoder.embed_tokens.weight torch.Size([50272, 768])
key lm_head.weight torch.Size([50272, 768])

So in the original OPT-125M model weights, model.decoder.embed_tokens.weight and lm_head.weight both exist, and the sizes and even the values of the tensors are exactly the same!
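
For reference, a quick sketch to check this tying in transformers (it should print True when lm_head shares the embedding tensor; behavior depends on the transformers version and config.tie_word_embeddings):

from transformers import AutoModelForCausalLM

m = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
# True when transformers ties lm_head to the input embedding table
print(m.lm_head.weight is m.model.decoder.embed_tokens.weight)
print(m.config.tie_word_embeddings)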

@robertgshaw2-neuralmagic Is this a bug in the vLLM OPT model code? Why is it skipping the lm_head layer when it actually exists (even though it is a duplicate of embed_tokens)?

@wenhuach21 @WeiweiZhang1

GPU memory usage

How much GPU memory and CPU memory are required to quantize the ChatGLM3-6B model? I used an A100-40G but got a "killed" error.

8-bit quantization support

I was wondering if it supports 8-bit quantization. The example command is below.

python main.py \
  --model_name "" \
  --bits 8 \
  --group_size 128 \
  --train_bs 8 \
  --gradient_accumulate_steps 8 \
  --deployment_device 'gpu' \
  --output_dir "./save_ckpt"

Quantization/layer speed is very slow

Currently testing PR #87 and running into very slow quants for a TinyLlama 1.1B test model.

I am getting ~96s per layer during quantization on a 4090 GPU with n_blocks = 1 and ~75s per layer with n_blocks = 2.

Is this the norm? I am new to AutoRound so I don't have a baseline metric to go by.

Is GPU quant faster or slower than CPU quant? Are there optimizations to make it faster other than using a smaller model for testing? Thanks!

Env:
Ubuntu 22.04
Torch 2.2.2
CPU: AMD Zen 3

env CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=5 python3 main.py \
--model_name /local/TinyLlama-1.1B-intermediate-step-1341k-3T \
--device 0 \
--group_size 128 \
--bits 4 \
--iters 1000 \
--use_quant_input \
--quant_lm_head \
--n_blocks 4 \
--deployment_device 'gpu' \
--disable_low_gpu_mem_usage \
--output_dir "./tmp_autoround"

Merge dataloader to dataset

There are two args for handling the calibration dataset, which may confuse users. Let's move all the capability of dataloader into dataset and delete dataloader.

Set the default scale_dtype to FP16

There's no necessity to use an FP32 scale for packing with the AutoGPTQ Triton backend. We can instead set FP16 as the default scale dtype. Nonetheless, it's essential to validate accuracy for some models.
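
For reference, a sketch of overriding the scale dtype through the documented scale_dtype argument today (model and tokenizer as in the quantization example above; the accepted string values are an assumption, so check the current signature):

from auto_round import AutoRound

# "fp16"/"fp32" vs "float16"/"float32" naming is assumed; verify before use.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, scale_dtype="fp16")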
