Comments (4)
Let me fix that part and test it on CPU.
from hqq.
The main goals are saving VRAM on GPUs and using fused CUDA/Triton kernels for faster inference, which is why we focus on GPUs. I can adapt the code to support CPU by changing .cuda() calls to .to(device=...) and enabling fp32 during the quantization step, but that's going to be very slow and goes against the main goals of the library. Any particular motivation for using the CPU? RAM is not a big issue, since in the worst case you can just increase the swap.
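For illustration, a minimal sketch of that device-agnostic pattern (the tensor and shapes here are hypothetical, not hqq's actual internals):

```python
import torch

# Pick the device at runtime instead of hard-coding .cuda().
device = "cuda" if torch.cuda.is_available() else "cpu"

# fp16 is fine on GPU; fall back to fp32 on CPU, where half-precision
# kernels are limited and slow.
compute_dtype = torch.float16 if device == "cuda" else torch.float32

W = torch.randn(4096, 4096)
# Before: W = W.half().cuda()
W = W.to(device=device, dtype=compute_dtype)
```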
from hqq.
Thanks for your reply.
I tried to load a model with HQQ and DeepSpeed Zero3 using transformers, and I hit the following error:
File "/workspace/model.py", line 66, in create_model_and_tokenizer
model = AutoModelForCausalLM.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3692, in from_pretrained
) = cls._load_pretrained_model(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 4126, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 889, in _load_state_dict_into_meta_model
hf_quantizer.create_quantized_param(model, param, param_name, param_device, state_dict, unexpected_keys)
File "/usr/local/lib/python3.10/dist-packages/transformers/quantizers/quantizer_hqq.py", line 141, in create_quantized_param
hqq_layer = HQQLinear(
File "/usr/local/lib/python3.10/dist-packages/hqq/core/quantize.py", line 391, in __init__
self.initialize()
File "/usr/local/lib/python3.10/dist-packages/hqq/core/quantize.py", line 399, in initialize
else self.linear_layer.bias.to(self.compute_dtype).cuda(self.device)
RuntimeError: Invalid device, must be cuda device
The error is caused by self.device='cpu': the model is first loaded on CPU. Is there any way to quantize the model on GPU with Zero3?
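A minimal standalone repro of that failing call pattern, using only plain PyTorch:

```python
import torch

bias = torch.zeros(10)
# Tensor.cuda() only accepts CUDA devices, so passing 'cpu' (the value of
# self.device when the model is loaded on CPU) raises the error above:
bias.to(torch.float16).cuda("cpu")
# RuntimeError: Invalid device, must be cuda device
```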
from hqq.
This should do it: b21fe87
I just tested it and I was able to quantize and run inference on CPU. Just make sure you use compute_dtype=torch.float32 and call HQQLinear.set_backend(HQQBackend.PYTORCH), because the default backend is for CUDA.
In fact, I will switch the default to PYTORCH to avoid this kind of issue: 9f32c5a
I also moved the CUDA streams outside __init__ so it doesn't complain when there's no GPU available: d310842
This is also required when there's no GPU available: cc2b944
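Putting those settings together, a minimal sketch of CPU-side quantization with hqq (the exact HQQLinear constructor arguments may differ across versions, so treat this as illustrative):

```python
import torch
from hqq.core.quantize import BaseQuantizeConfig, HQQBackend, HQQLinear

# Use the pure-PyTorch backend; the default targets CUDA.
HQQLinear.set_backend(HQQBackend.PYTORCH)

linear = torch.nn.Linear(4096, 4096)
quant_cfg = BaseQuantizeConfig(nbits=4, group_size=64)

# fp32 compute is required on CPU.
hqq_layer = HQQLinear(linear, quant_cfg, compute_dtype=torch.float32, device="cpu")

out = hqq_layer(torch.randn(1, 4096, dtype=torch.float32))
```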
from hqq.
Related Issues (20)
- Question about quantization. HOT 2
- Is HQQLinearLoRAWithFakeQuant differentiable? HOT 1
- hqq+ lora ValueError || ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' HOT 3
- Activation quantization HOT 9
- Group_Size setting HOT 1
- 1 bit inference HOT 4
- Weird problem in loading quantized_model + lora_adpter
- 2-bit quantization representation HOT 3
- module 'torch.library' has no attribute 'custom_op' HOT 4
- bitblas introduces dependency on CUDA version HOT 3
- OSError: libnvrtc.so.12: cannot open shared object file: No such file or directory HOT 1
- About the implentation of .cpu() HOT 1
- 3-bit quantization weight data type issue HOT 10
- RuntimeError: Expected in.dtype() == at::kInt to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.) HOT 1
- Weight Sharding HOT 1
- Support Gemma quantization HOT 2
- Bug of the saved model when applying zero and scale quantization HOT 1
- Expected in.dtype() == at::kInt to be true, but got false HOT 14
- Easy way to run lm evaluation harness HOT 1
- Warning: failed to import the BitBlas backend