Comments (2)
@Alignment-Lab-AI Thanks for your insightful questions!
Q1: Have you attempted reducing the precision, then training an adapter for each layer and swapping in the adapters on the needed layers during inference?
We don't reduce the precision before training. Could you share more details about this method? If we train in low precision and then swap in the adapters, would that improve performance, or bring other benefits?
Q2: What have you tried so far to sparsify it?
Currently, we don't sparsify the models. In our second version, we introduce a scale layer, and the magnitude of the learned scales may serve as an importance metric for removing unimportant neurons. You could also apply other sparsification methods to reduce the model size.
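As a rough illustration of that idea (not the repo's actual API; all names below are hypothetical), one could rank a layer's hidden units by the magnitude of a learned per-neuron scale and keep only the top fraction:

```python
import torch
import torch.nn as nn

class ScaledLinear(nn.Module):
    """A linear layer followed by a learnable per-output scale (illustrative)."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.scale = nn.Parameter(torch.ones(out_features))

    def forward(self, x):
        return self.linear(x) * self.scale

def prune_by_scale(layer: ScaledLinear, keep_ratio: float = 0.5) -> ScaledLinear:
    """Keep only the output neurons whose |scale| is largest."""
    k = max(1, int(layer.scale.numel() * keep_ratio))
    keep = torch.topk(layer.scale.abs(), k).indices.sort().values
    pruned = ScaledLinear(layer.linear.in_features, k)
    with torch.no_grad():
        pruned.linear.weight.copy_(layer.linear.weight[keep])
        pruned.linear.bias.copy_(layer.linear.bias[keep])
        pruned.scale.copy_(layer.scale[keep])
    return pruned

layer = ScaledLinear(512, 2048)
smaller = prune_by_scale(layer, keep_ratio=0.25)  # 2048 -> 512 hidden units
```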
from llama-adapter.
sorry! i missed the notification! i explained the process poorly: i meant to ask whether you had attempted to quantize the full model and then attach adapters, trained at higher precision, to the important layers during inference.
https://www.deepspeed.ai/tutorials/MoQ-tutorial/
however, this may honestly work better on its own. sorry for the out-of-scope line of questioning, haha. i was working on the outline for my next project, and it is always important to me to make my models as small as possible so i don't have to pay for as many A100s!
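for anyone curious, here is a minimal sketch of the idea being asked about, assuming a simple symmetric int8 scheme and a hypothetical LoRA-style adapter (this is not MoQ's or this repo's API): the frozen base weight is quantized, and a small full-precision adapter is attached only on layers flagged as important.

```python
import torch
import torch.nn as nn

class QuantLinearWithAdapter(nn.Module):
    """Frozen int8 base weight plus an optional full-precision low-rank adapter."""
    def __init__(self, weight: torch.Tensor, rank: int = 8, use_adapter: bool = True):
        super().__init__()
        # Symmetric per-tensor int8 quantization of the frozen base weight.
        scale = weight.abs().max() / 127.0
        self.register_buffer("w_q", torch.round(weight / scale).to(torch.int8))
        self.register_buffer("w_scale", scale)
        out_features, in_features = weight.shape
        self.use_adapter = use_adapter
        if use_adapter:
            # Small fp32 adapter meant to recover accuracy lost to quantization.
            self.down = nn.Linear(in_features, rank, bias=False)
            self.up = nn.Linear(rank, out_features, bias=False)
            nn.init.zeros_(self.up.weight)  # adapter starts as a no-op

    def forward(self, x):
        w = self.w_q.float() * self.w_scale  # dequantize on the fly
        y = x @ w.t()
        if self.use_adapter:
            y = y + self.up(self.down(x))
        return y

# Attach the adapter only on layers judged "important"; leave the rest quantized-only.
w = torch.randn(256, 128)
important_layer = QuantLinearWithAdapter(w, rank=8, use_adapter=True)
plain_layer = QuantLinearWithAdapter(w, rank=8, use_adapter=False)
```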
i'm sure you are quite busy, but i was going to start my own project: a multimodal model inspired by this repository and a few others. would it be appropriate to discuss it?
from llama-adapter.
Related Issues (20)
- How to fine-tune the llama-adapter-v2 for llama-2 7b models. HOT 1
- I have problem with downloading 7B_chinese in imagebind_LLM. HOT 1
- The true weight file not find? HOT 4
- Could you please provide these weight with me? HOT 2
- Unable to produce the result between LLaMA-Adapter V1 and Alpaca HOT 1
- question about Pretrained LLAMA applicable to Llama_adapter model. thanks HOT 1
- I don't know which data to use to reproduce the model llama-adapter-multimodal-v2.
- Does storage space in the paper mean the capacity of checkpoint file? HOT 2
- Inquiry on Loading LLaMa-2 Model Parameters HOT 1
- how to set llama adapter max_seq_len = 4096
- [LLaMA Adapter V2] Evaluation on multiple choice questions. HOT 1
- AssertionError: Loading a checkpoint for MP=0 but world size is 1 HOT 2
- Don't find save path"ADAPTER_PATH" HOT 1
- Getting error "AF_UNIX path too long"
- Loss is nan, stopping training, while trying to reproduce alpaca_finetuning_v1 results. HOT 1
- Simple question about llama adapter v1 transformer forward function
- The get_chinese_llama.py file in imagebind_LLM is missing; could you add it? HOT 1
- Getting weird output for multimodal 7B adapter HOT 3
- Assertation Error start_pos- AdapterV2 Multimodal
- The meaning of C_loss and M_loss HOT 1