Comments (2)
Hi!
TL;DR: alas, we do not support multi-node calibration. We support multiple GPUs within a single node, but not spreading calibration across multiple nodes.
This is because our calibration is implemented with threading, not multiprocessing. We currently have no bandwidth to implement multi-node support ourselves, but if anyone intends to implement it, we'd gladly accept a pull request. However, we expect that it will need a lot of manual use of torch.distributed to work efficiently.
For instance, we rely on threading so that all workers can reuse the same shared memory, but that won't work across nodes, so it will need a different workaround. In other words, this may be tricky to get right.
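To make the contrast concrete, here is a toy illustration (not AQLM's actual calibration code): worker threads in one process all read the same in-memory tensor for free, whereas workers on different nodes would need explicit torch.distributed communication to see the same data.

```python
# Toy illustration (not AQLM code) of why the current design relies on threads:
# every worker thread sees the same in-process tensor, so a single copy of the
# calibration activations can be shared with no copies and no IPC.
from concurrent.futures import ThreadPoolExecutor
import torch

calibration_batch = torch.randn(8, 4096)  # one shared copy in process memory

def worker(shard: int) -> float:
    # All threads read the same underlying storage.
    return calibration_batch[shard].sum().item()

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(worker, range(8)))

# Across nodes there is no shared address space, so the equivalent would be
# explicit communication, e.g. torch.distributed.broadcast(calibration_batch, src=0)
# after init_process_group() -- the "manual torch.distributed" work mentioned above.
```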
If your model does not fit into a single node, the simplest workaround is to modify model loading so that it only loads one layer at a time. This works because main.py processes one transformer layer at a time, as can be seen in its per-layer for loop. Thus, if you load the appropriate layer (e.g. from disk) at the beginning of that loop and delete it once the quantized layer has been saved, you can quantize very large models in due time.
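A rough sketch of that workaround is below. The file layout and the `quantize_layer` callable are assumptions for illustration, not AQLM's actual API; in practice the loading/freeing would be wired into main.py's existing per-layer loop.

```python
# Sketch: quantize a model too large to hold in memory by streaming layers
# from disk, one at a time. Hypothetical file layout; not AQLM's real API.
import gc
from typing import Callable
import torch

def quantize_model_layer_by_layer(
    num_layers: int,
    layer_dir: str,
    out_dir: str,
    quantize_layer: Callable[[torch.nn.Module], torch.nn.Module],
) -> None:
    for layer_index in range(num_layers):
        # Load only the current transformer layer instead of the whole model.
        layer = torch.load(f"{layer_dir}/layer_{layer_index}.pt", map_location="cuda")

        # Per-layer calibration/quantization step (passed in as a callable here;
        # in AQLM this corresponds to the body of main.py's per-layer loop).
        quantized = quantize_layer(layer)

        # Persist the result, then free both copies so the next layer
        # has the memory to itself.
        torch.save(quantized, f"{out_dir}/layer_{layer_index}.quantized.pt")
        del layer, quantized
        gc.collect()
        torch.cuda.empty_cache()
```

The point is simply that at most one full-precision layer and its quantized counterpart are resident at any time, so peak memory is bounded by a single layer rather than the whole model.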
If time is what you're after, we'd recommend running AQLM on a single node with higher MSE tolerances (i.e. much faster), then running better/longer PV fine-tuning, which does support multi-node hardware: https://github.com/Vahe1994/AQLM/tree/pv-tuning?tab=readme-ov-file#3-refining-quantized-model-with-pv-tuning
Thanks for the suggestion. I'll proceed as suggested then! 😄
Related Issues (20)
- Minor race condition in CPU 2x8 inference code HOT 4
- Finetuning ISTA-DASLab/Mistral-7B-Instruct-v0.2-AQLM-2Bit-2x8: RuntimeError: CUDA error: invalid argument HOT 3
- Actual bitrate of models on github? HOT 5
- Request for the Llama-2-13B with AQLM (2x8 scheme) HOT 3
- How to run perplexity eval on HF hub models? HOT 3
- when load Llama, AutoConfig will occur error. HOT 2
- Request for Nvidia's RAG Implementation of Llama-3-70B "ChatQA 1.5" HOT 9
- Can you please share the *end-to-end* quantization script+config (including data used) for each model you've already quantized? (particularly llama-3 and miqu - i.e. 70B models) HOT 6
- FV tuning based on GPTQ HOT 6
- aqlm/inference_kernels/cuda_kernel.py HOT 2
- NaNs in sequence classifier output HOT 3
- Using pv-tuning on other quantization methods HOT 3
- How to import and use it in my existent code that loads LLMs? HOT 2
- [Feature Request] Gemma2 support & models HOT 2
- Performance issues with ~2bit quantization HOT 6
- Llama mixtral quantaniz
- how to get detailed results about the codebooks and codes parameters HOT 2
- Conda package
- Codebook size for LLama 2 HOT 2