
Comments (17)

RonanKMcGovern commented on August 17, 2024

If you merge your LoRA into a quantized (transformers) model, the result stays a 4- or 8-bit model, which you can't then run AWQ on.

Instead, you would need to reload the base model in 16-bit and merge your LoRA into that (using merge_and_unload). Then you can run AWQ on that merged model. More info in this vid.
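A minimal sketch of that workflow, assuming a PEFT LoRA adapter saved on disk; the base model id, paths, and quantization settings below are only placeholders:

```python
# Sketch: merge a LoRA adapter into a 16-bit base model, then quantize with AutoAWQ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from awq import AutoAWQForCausalLM

base_id = "meta-llama/Llama-2-7b-hf"   # placeholder base model
adapter_dir = "./my-lora-adapter"      # placeholder LoRA adapter
merged_dir = "./merged-fp16"
quant_dir = "./merged-awq"

# 1. Reload the base model in fp16 (not the 4/8-bit checkpoint you trained on).
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# 2. Attach the LoRA adapter and fold its weights into the base model.
model = PeftModel.from_pretrained(base, adapter_dir)
model = model.merge_and_unload()
model.save_pretrained(merged_dir)
tokenizer.save_pretrained(merged_dir)

# 3. Quantize the merged fp16 model with AutoAWQ.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
awq_model = AutoAWQForCausalLM.from_pretrained(merged_dir)
awq_model.quantize(tokenizer, quant_config=quant_config)
awq_model.save_quantized(quant_dir)
tokenizer.save_pretrained(quant_dir)
```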

RanchiZhao commented on August 17, 2024

Hi, I'm also interested to know whether LoRA + AWQ is already available now. Thanks!

@RicardoHalak see this, it's runnable: huggingface/transformers#28987
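A rough sketch of what that looks like once the transformers + PEFT support is in place; the AWQ checkpoint id and LoRA hyperparameters below are only illustrative:

```python
# Sketch: attach LoRA adapters to an AWQ-quantized checkpoint via transformers + PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "TheBloke/Mistral-7B-v0.1-AWQ"  # illustrative AWQ checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```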

casper-hansen commented on August 17, 2024

I would love to add LoRA and make AutoAWQ compatible with PEFT. This is something that I have thought about, but currently it's more important for me to see what I can do about a high-throughput quantized model.

RonanKMcGovern commented on August 17, 2024

Ok cool. I think supporting QLoRA merging is underappreciated though. I don't know of any way to do this, and it means there isn't a good open-source way to serve QLoRA-tuned open-source models.

BTW, when you say high throughput, do you mean batch sizes larger than 8? So a bf16 implementation?

s4rduk4r commented on August 17, 2024

I could probably look into it next week. Maybe the autograd_4bit code from here could be adapted somehow.

casper-hansen commented on August 17, 2024

In general, I think we should integrate with PEFT. From my understanding, this requires our WQLinear modules to produce gradients during a backward pass, so you would have to implement that functionality. It may turn out to be easy enough since autograd works pretty well; maybe look at AutoGPTQ to see how they integrated with PEFT.

K024 commented on August 17, 2024

@casper-hansen AutoGPTQ implements QLinear with various underlying QGEMM implementations (cuda, exllama, qigen, openai/triton), and none of them implement a backward kernel except for triton. The triton kernel is currently the only one that can be used for training a quantized model in AutoGPTQ, though it is not the most optimal.

FYI, the autograd_4bit mentioned above simply unpacks the weights into fp and calls torch.matmul.
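That dequantize-and-matmul approach is easy to express with a custom autograd function. A self-contained toy sketch (not AutoAWQ's actual kernels or packing; int8 tensors stand in for packed int4 weights):

```python
# Toy sketch: forward reconstructs an fp weight from integer values + scales,
# backward only returns the gradient w.r.t. the input, since the quantized
# weights stay frozen when training LoRA adapters on top.
import torch

class QuantLinearFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, int_weight, scales):
        w = int_weight.to(x.dtype) * scales   # "dequantize": (out_features, in_features)
        ctx.save_for_backward(int_weight, scales)
        return x @ w.t()

    @staticmethod
    def backward(ctx, grad_out):
        int_weight, scales = ctx.saved_tensors
        w = int_weight.to(grad_out.dtype) * scales
        grad_x = grad_out @ w                 # activation gradient only
        return grad_x, None, None             # weights/scales receive no gradient

# Tiny usage check
x = torch.randn(2, 8, requires_grad=True)
int_w = torch.randint(-8, 8, (4, 8), dtype=torch.int8)  # stand-in for packed int4
scales = torch.full((4, 1), 0.05)
y = QuantLinearFn.apply(x, int_w, scales)
y.sum().backward()
print(x.grad.shape)  # torch.Size([2, 8])
```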

casper-hansen commented on August 17, 2024

I welcome any work on a backward pass function for AWQ. There are many ways to go about it. Just keep in mind that the AWQ kernel does not scale well with larger batch sizes; above batch size 16 it will be slower than FP16. I found some code where someone did the backward pass:

https://github.com/compressa-ai/llm-awq/tree/dev

K024 commented on August 17, 2024

@casper-hansen FYI, the above one still unpacks and GEMMs everything in fp...

And I noticed the pack order has changed in the llm-awq repo since Sep 7.

I see the change: gemmv2_forward_cuda vs gemm_forward_cuda.

casper-hansen commented on August 17, 2024

Yes, I see that; they dequantize and run in FP16. I'm pretty sure this is normal for training?

I created v2 based on their new GEMM kernel, but it's way slower and only compatible with GEMV where it processes the context. GEMV is 20% faster at small prompts but not great for high throughput or deployments.

Ph0rk0z commented on August 17, 2024

IME, triton was never faster for anything: exclusionary high compute requirements and slower speed, oh my.

The only one who has pulled off merging adapters into quantized models is GGUF. With that alpaca_lora_4bit repo + extensions I can merge LoRAs together, but not into the model.

cassianlewis commented on August 17, 2024

AFAIK you can merge the LoRA weights into the unquantised base model (even if you fine-tuned in 4/8 bit) using model.merge_and_unload(). You can then quantise this model using AWQ and run as normal.

I guess this only really applies if you don't have the VRAM to train the model without PEFT, though.

RonanKMcGovern commented on August 17, 2024

@cassianlewis yeah, in bnb but not GPTQ AFAIK.

Not ideal to merge to unquantised either.

sd3ntato commented on August 17, 2024

AFAIK you can merge the LoRA weights into the unquantised base model (even if you fine-tuned in 4/8 bit) using model.merge_and_unload(). You can then quantise this model using AWQ and run as normal.

I guess this only really applies if you don't have the VRAM to train the model without PEFT, though.

Hi, I'm trying to do it with Mixtral, but I get the following output / error:

Downloading and preparing dataset json/mit-han-lab--pile-val-backup to /root/.cache/huggingface/datasets/mit-han-lab___json/mit-han-lab--pile-val-backup-39bc257d0ce73de2/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading readme: 100%|██████████| 167/167 [00:00<00:00, 112kB/s]
Downloading data files:   0%|          | 0/1 [00:00<?, ?it/s]
Downloading data:  48%|████▊     | 225M/471M [00:01<00:01, 125MB/s]
Extracting data files: 100%|██████████| 1/1 [00:02<00:00,  2.22s/it]
AWQ:   0%|          | 0/32 [00:06<?, ?it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/mit-han-lab___json/mit-han-lab--pile-val-backup-39bc257d0ce73de2/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
AWQ:   0%|          | 0/32 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/opt/ml/code/run_clm_awq.py", line 339, in <module>
    main()
  File "/opt/ml/code/run_clm_awq.py", line 324, in main
    training_function(run, args)
  File "/opt/ml/code/run_clm_awq.py", line 292, in training_function
    model.quantize(
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/awq/models/base.py", line 95, in quantize
    self.quantizer.quantize()
  File "/opt/conda/lib/python3.10/site-packages/awq/quantize/quantizer.py", line 107, in quantize
    module_config: List[Dict] = self.awq_model.get_layers_for_scaling(
  File "/opt/conda/lib/python3.10/site-packages/awq/models/mixtral.py", line 46, in get_layers_for_scaling
    inp=input_feat['self_attn.q_proj'],
KeyError: 'self_attn.q_proj'

Could anyone please help me out with this?

Ph0rk0z commented on August 17, 2024

Having the base model around becomes unmanageable with 70B+ models; that's part of the issue. They're 160GB+.

RanchiZhao commented on August 17, 2024

Hi! Any progress? Is training LoRA modules with AWQ available now?

RicardoHalak commented on August 17, 2024

Hi, I'm also interested to know whether LoRA + AWQ is already available now. Thanks!
