Comments (4)
You are right. I can't remember where we got 92.7, and it should be 92.0.
Yes, the bias term is learnable here, even though it is not in the code used in our experiments. This seems to be a good idea in practice and adds minimal overhead. The checkpointing utility functions should take care of saving/loading biases. Please let me know if you encounter any issues :)
from lora.
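To illustrate the point about trainable biases, here is a minimal sketch in plain PyTorch (a toy model, not the repo's actual code) of freezing all weight matrices while leaving the bias terms trainable:

```python
import torch.nn as nn

# Toy model standing in for a pretrained network.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

for name, param in model.named_parameters():
    # Freeze everything except parameters whose name ends in "bias".
    param.requires_grad = name.endswith("bias")

trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
print(trainable)  # ['0.bias', '2.bias']
```

If I recall correctly, the repo's `loralib` exposes a helper along these lines (`mark_only_lora_as_trainable` with a `bias` argument), which also unfreezes the LoRA matrices themselves.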
Thanks for your questions.
- I believe these are validation numbers, since the test set is not public; prior work reports them the same way.
- Nope, that's not a typo. You can verify it with our checkpoint :)
Wow, thanks for your quick response. I have two more questions.
- If I understand correctly, the BitFit numbers in Table 2 were taken from the original paper, but I cannot find some of them there. For example, you report 92.7 for RoBERTa-base (BitFit) on MRPC, while I think the original paper reported 92.0 in its Table 2. Could you give more details on this?
- Do you fine-tune the bias terms? I understand you don't require gradients for the weight terms, but I did not see you turn off gradients for the bias terms (Line 116 in 33b9536).
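On the checkpointing point raised above: a hedged sketch (plain PyTorch, not the repo's actual utilities; `trainable_state_dict` is a hypothetical helper) of saving only the still-trainable parameters, such as biases, and restoring them on top of frozen pretrained weights:

```python
import torch
import torch.nn as nn

def trainable_state_dict(model: nn.Module) -> dict:
    # Keep only parameters that still require gradients (e.g. biases).
    return {n: p.detach().clone()
            for n, p in model.named_parameters() if p.requires_grad}

model = nn.Linear(4, 4)
model.weight.requires_grad = False  # frozen pretrained weight

small = trainable_state_dict(model)
print(sorted(small))  # ['bias']

# Restore on top of the (unchanged) frozen weight; strict=False
# accepts a partial state dict.
model.load_state_dict(small, strict=False)
```

The upshot is that the checkpoint only needs the small trainable subset; the frozen pretrained weights are loaded separately. I believe `loralib` provides a `lora_state_dict` helper with a similar `bias` option, but check the repo for the exact signature.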
Thanks~