Comments (4)
Thanks for this smart fix. @michael-heinrich: Do you see a similar workaround to provide backwards compatibility for awq/kernels as well, for running VILA with AWQ & TinyChat?
That would help run the video inference demo faster on ancient GPUs ;-)
Refer: https://github.com/mit-han-lab/llm-awq/
At present, the installation of awq/kernels fails with the following error:
Feature '.m16n8k16' requires .target sm_80 or higher
I will definitely look into it. Last night I already spent a few hours trying to get the AWQ quants running, but no luck so far. From the source code and documentation of the transformers library, it appears to have AWQ support built in, and with a few changes to the HF repo I could partially load the AWQ checkpoint using the video inference demo. In the end, however, the tensor shapes did not match. Maybe it's still possible to load it like this, but I was not sure that's even remotely the right direction.
Transformers also allows quantizing a model at load time using bitsandbytes. That might work on an older card, but it would not have the accuracy of an AWQ quant.
I see. Thanks! Could you submit a PR?
Your version of transformers forces LlamaFlashAttention2 in the constructor of LlamaDecoderLayer in transformers/models/llama/modeling_llama.py, which requires Ampere or newer to work. Just by using the old LlamaAttention class instead of LlamaFlashAttention2 there, I could make the video inference demo run on an ancient GTX 1060 (even if it's very slow). The current main branch of transformers uses a mechanism to pick the best compatible attention implementation automatically. If you don't want to backport that, you could use very simple logic to decide which class to use here. Something like this:
```python
# Patch for transformers/models/llama/modeling_llama.py

def is_at_least_ampere():
    if torch.cuda.is_available():
        num_of_gpus = torch.cuda.device_count()
        # Loop over each GPU
        for i in range(num_of_gpus):
            gpu_properties = torch.cuda.get_device_properties(i)
            # Compute capability is in major.minor format;
            # convert it to a float for comparison
            compute_capability = float(f"{gpu_properties.major}.{gpu_properties.minor}")
            # If any GPU is older than Ampere (8.0), return False
            if compute_capability < 8.0:
                return False
        # All GPUs are Ampere or newer
        return True
    else:
        # CUDA is not available
        return False


class LlamaDecoderLayer(nn.Module):
    def __init__(self, config: LlamaConfig):
        super().__init__()
        self.hidden_size = config.hidden_size
        # Fall back to the classic attention implementation on pre-Ampere GPUs
        ampere_or_newer = is_at_least_ampere()
        self.self_attn = (
            LlamaFlashAttention2(config=config)
            if ampere_or_newer
            else LlamaAttention(config=config)
        )
        self.mlp = LlamaMLP(config)
```
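For what it's worth, recent transformers releases expose this choice directly: `from_pretrained` accepts an `attn_implementation` argument, so patching the class becomes unnecessary once you upgrade. A sketch (the helper name and model path are illustrative):

```python
from transformers import AutoModelForCausalLM

def load_without_flash_attention(model_path: str):
    # "eager" selects the classic LlamaAttention path, avoiding the
    # FlashAttention-2 requirement of Ampere (sm_80) or newer GPUs
    return AutoModelForCausalLM.from_pretrained(
        model_path,
        attn_implementation="eager",
    )
```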
Hi!
Thanks for your code snippet. Have you changed anything apart from that? I am encountering an issue when running inference on the Llama-3-VILA1.5-8B model. The error message I receive is:
RuntimeError: FlashAttention only supports Ampere GPUs or newer.
I am using a V100 GPU, which is not an Ampere GPU. Could you please provide guidance on how to disable Flash Attention for this model, and let me know whether there are any other steps besides the ones you have already described? Thanks.