microsoft / LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Home Page: https://arxiv.org/abs/2106.09685
License: MIT License
Please avoid using .T
RuntimeError: Exporting the operator numpy_T to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator.
Thanks :)
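For reference, a minimal sketch of the kind of change being requested (my own illustration, not the actual loralib code): replace Tensor.T, which traces to the unsupported numpy_T operator, with an explicit transpose that ONNX opset 12 can export.
import torch

def transpose_weight(w: torch.Tensor, fan_in_fan_out: bool = False) -> torch.Tensor:
    # w.transpose(0, 1) matches w.T for 2-D weights but exports cleanly to ONNX
    return w.transpose(0, 1) if fan_in_fan_out else w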
Conv2D seems broken with groups > 1, for example with Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, r=8)
and an input of shape torch.Size([32, 256, 64, 64])
I get the following error:
File "/home/miniconda3/envs/pytorch2/lib/python3.10/site-packages/loralib/layers.py", line 319, in forward
self.weight + (self.lora_B @ self.lora_A).view(self.weight.shape) * self.scaling,
RuntimeError: shape '[256, 1, 3, 3]' is invalid for input of size 589824
I believe the first dimension should be calculated as out_channels//self.groups*kernel_size
here: https://github.com/microsoft/LoRA/blob/main/loralib/layers.py#L280
Here's a full example:
import torch
import loralib as lora

conv = lora.Conv2d(256, 256, 3, groups=1, r=8)          # works
grouped_conv = lora.Conv2d(256, 256, 3, groups=2, r=8)  # groups > 1
input_tensor = torch.randn((8, 256, 32, 32))
conv(input_tensor)          # fine
grouped_conv(input_tensor)  # raises the shape error above
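For what it's worth, a toy check of the shape fix proposed above (my own sketch, not the upstream code): sizing lora_B as (out_channels // groups * kernel_size, r * kernel_size) lets lora_B @ lora_A be viewed as the grouped convolution weight.
import torch

out_channels, in_channels, kernel_size, groups, r = 256, 256, 3, 256, 8
lora_A = torch.zeros(r * kernel_size, in_channels * kernel_size)
lora_B = torch.zeros(out_channels // groups * kernel_size, r * kernel_size)
# (3, 24) @ (24, 768) -> (3, 768): 2304 elements, matching the grouped weight
delta_w = (lora_B @ lora_A).view(out_channels, in_channels // groups,
                                 kernel_size, kernel_size)
print(delta_w.shape)  # torch.Size([256, 1, 3, 3])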
Good job! I really like LoRA. After a quick glimpse at the code, I found some config options related to lora_moe in model.py, but I did not see any arguments related to lora_moe in gpt2_ft.py. Could you give more details about lora_moe? Is it designed for models trained with MoE, or is it just a deprecated feature of LoRA?
Thanks for the nice repo! Currently, the readme states:
This repo contains the source code of the Python package loralib and several examples of how to integrate it with PyTorch models, such as those in HuggingFace.
However, from the examples, it seems that in order to use loralib
with a HuggingFace model, we need to actually change each model's implementation, replacing every nn.Linear with its LoRA equivalent. If that's the case, I think it's a bit confusing to say the examples show integration with HuggingFace, because as far as I can tell, the examples use a re-implementation of GPT-2. I was hoping there might be some mechanism to do this automatically, e.g.
import transformers, loralib
model = transformers.AutoModel.from_pretrained("gpt2")
lora_model = loralib.wrap(model) # wrap all nn.Linear modules
lora_params = loralib.get_lora_only_params(lora_model)
Is this possible? Thanks a lot!
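One could approximate such a helper today; below is a rough sketch (hypothetical -- loralib does not ship wrap() or get_lora_only_params()) that recursively swaps nn.Linear modules for lora.Linear and copies the pretrained weights over. Note that GPT-2 in HuggingFace uses Conv1D rather than nn.Linear for its projections, so a real helper would need to handle that case as well.
import torch.nn as nn
import loralib as lora

def wrap_linears_with_lora(module: nn.Module, r: int = 8) -> nn.Module:
    # hypothetical helper: replace every nn.Linear child with lora.Linear,
    # keeping the pretrained weight and bias
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            new = lora.Linear(child.in_features, child.out_features,
                              r=r, bias=child.bias is not None)
            new.weight.data.copy_(child.weight.data)
            if child.bias is not None:
                new.bias.data.copy_(child.bias.data)
            setattr(module, name, new)
        else:
            wrap_linears_with_lora(child, r=r)
    return module
After wrapping, loralib's own utilities cover the rest: lora.mark_only_lora_as_trainable(model) freezes everything that is not a LoRA matrix, and lora.lora_state_dict(model) returns only the LoRA parameters.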
My steps:
git clone https://github.com/microsoft/LoRA.git
cd LoRA
pip install -e .
cd examples/NLU
pip install -e .
Change export num_gpus=8 to export num_gpus=1 in roberta_large_cola.sh
Then CUDA_VISIBLE_DEVICES=0 bash roberta_large_cola.sh
Running on a single A100
Using:
During training, the eval_matthews_correlation
is stuck at 0 at all epochs. I actually had the same issue on the current transformers version, and decreasing the learning rate plus removing warmup helped to regain OK-ish numbers during training, but nothing as shiny as 0.68.
Do you have an idea of what I could be doing wrong?
Update: using the following (changes from the original script: per_device_train_batch_size 4 -> 8, learning_rate 3e-4 -> 2e-5, warmup_ratio 0.06 -> 0.0, weight_decay 0.1 -> 0.0):
export num_gpus=1
export CUBLAS_WORKSPACE_CONFIG=":16:8" # https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility
export PYTHONHASHSEED=0
export output_dir="./roberta_cola_custom_sh"
python -m torch.distributed.launch --nproc_per_node=$num_gpus \
examples/text-classification/run_glue.py \
--model_name_or_path roberta-large \
--task_name cola \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 20 \
--output_dir $output_dir/model \
--logging_steps 10 \
--logging_dir $output_dir/log \
--evaluation_strategy epoch \
--save_strategy epoch \
--warmup_ratio 0.0 \
--apply_lora \
--lora_r 8 \
--lora_alpha 16 \
--seed 0 \
--weight_decay 0.0
it trains just fine; I no longer see eval_matthews_correlation = 0 during training.
I don't know.
The reason is: model.eval() does not recursively call module.eval() on submodules but module.train(False),
so the LoRA layers' eval() is never called unless you call it on each of them individually.
cf: https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/module.py#L2306
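Under that older loralib behavior, one workaround (a sketch of my own, not an official API) is to trigger the merge explicitly by calling eval() on every LoRA layer after putting the model in eval mode:
import loralib as lora

def eval_with_lora_merge(model):
    model.eval()  # propagates train(False) to every submodule
    for module in model.modules():
        # explicitly invoke the overridden eval() so the W + BA merge runs
        if isinstance(module, (lora.Linear, lora.MergedLinear, lora.Embedding)):
            module.eval()
    return model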
Are there any TF implementations available that you are aware of?
Also, do you see any specific limitations in converting this repo to TF?
Hi, I use the from_pretrained function to load the pretrained model, but I found that the linear parameters get re-initialized when I simply replace nn.Linear with lora.Linear.
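A possible workaround, sketched under the assumption that the layers are swapped after from_pretrained has already loaded the weights: since lora.Linear subclasses nn.Linear, you can restore the pretrained weight and bias with load_state_dict(strict=False) (strict=False because the original layer has no lora_A/lora_B entries).
import loralib as lora

def to_lora_linear(linear, r=8):
    new = lora.Linear(linear.in_features, linear.out_features,
                      r=r, bias=linear.bias is not None)
    # restores weight and bias; lora_A / lora_B keep their fresh initialization
    new.load_state_dict(linear.state_dict(), strict=False)
    return new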
layers.py, line 269,
File "/storage_fast/jzzhang/loralib/layers.py", line 308, in __init__
super(Conv2d, self).__init__(nn.Conv2d, *args, **kwargs)
File "/storage_fast/jzzhang/loralib/layers.py", line 269, in __init__
self.conv.weight.new_zeros((out_channels//self.groups*kernel_size, r*kernel_size))
File "/storage/jzzhang/miniconda3/envs/lora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Conv2d' object has no attribute 'groups'
In the latest commit, the change at line 257 makes lora.Conv2d no longer inherit from nn.Conv2d but from nn.Module, which does not have an attribute called groups.
change line 269 to
self.conv.weight.new_zeros((out_channels//self.conv.groups*kernel_size, r*kernel_size))
Hi, I ran into an issue:
after_B = F.conv1d(
RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [50, 14, 16, 14]
result = F.linear(x, T(self.weight), bias=self.bias)
if self.r > 0:
    after_A = F.linear(self.lora_dropout(x), self.lora_A)
    after_B = F.conv1d(
        after_A.transpose(-2, -1),
        self.lora_B.unsqueeze(-1),
        groups=sum(self.enable_lora)
    ).transpose(-2, -1)
    result += self.zero_pad(after_B) * self.scaling
x.shape is [50, 14, 14, 768]
lora_A.shape is [16, 768]
after_A.shape is [50, 14, 14, 16]
How to fix it?
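One possible workaround (a sketch, not a change to loralib itself): F.conv1d only accepts 2-D or 3-D inputs, so flatten the extra leading dimensions into the batch dimension before the MergedLinear layer and restore them afterwards. The layer below is a hypothetical stand-in matching the shapes in the report.
import torch
import loralib as lora

# hypothetical layer matching the reported shapes: lora_A is [16, 768]
merged_linear = lora.MergedLinear(768, 3 * 768, r=8,
                                  enable_lora=[True, False, True])

x = torch.randn(50, 14, 14, 768)                     # the failing 4-D input
x_flat = x.reshape(-1, x.shape[-2], x.shape[-1])     # -> [700, 14, 768], 3-D
out = merged_linear(x_flat).reshape(50, 14, 14, -1)  # restore leading dims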
I'm trying to replace all my embedding and Linear layers with LoRA layers.
Although the GPU memory needed is reduced, the training time remains the same, even with fewer trainable weights. Is that expected?
Hi,
Thank you for this really nice paper.
This is not an issue but a general question, why is there a Linear and MergedLinear class?
Thank you,
Maxime.
Hi~
Thanks for your excellent work.
I have a question about Table 7, where you calculate the Frobenius norm. In my view, setting the rank to 4 or 64 only affects
Is there anything I misunderstood?
I tried to run the code using
python -m torch.distributed.launch --nproc_per_node=1 src/gpt2_ft.py \
--train_data ./data/e2e/train.jsonl \
--valid_data ./data/e2e/valid.jsonl \
--train_batch_size 8 \
--grad_acc 1 \
--valid_batch_size 4 \
--seq_len 512 \
--model_card gpt2.md \
--init_checkpoint ./pretrained_checkpoints/gpt2-medium-pytorch_model.bin \
--platform local \
--clip 0.0 \
--lr 0.0002 \
--weight_decay 0.01 \
--correct_bias \
--adam_beta2 0.999 \
--scheduler linear \
--warmup_step 500 \
--max_epoch 5 \
--save_interval 1000 \
--lora_dim 4 \
--lora_alpha 32 \
--lora_dropout 0.1 \
--label_smooth 0.1 \
--work_dir ./trained_models/GPT2_M/e2e \
--random_seed 110
But it didn't work, and I noticed that in the data folder the files are e2e/train.txt, e2e/test.txt, ... Did you make any changes to these files?
I understand why we need MergedLinear but is there a simple example of how the forward pass works for a MergedLinear? Specifically this line -> https://github.com/microsoft/LoRA/blob/main/loralib/layers.py#L248. I'm struggling to understand what the 1d conv is doing here.
I would also appreciate a mathematical explanation. For the Linear case, I understand the simple matrix multiplication of deltaW * x = B * A * x. But for MergedLinear, what would be the equation for deltaW?
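Here is how I read it (my own interpretation of the code, not an explanation from the authors). MergedLinear handles a fused projection (e.g. a qkv matrix) where only some output groups are adapted, say enable_lora = [True, False, True]. The weight delta is then block-structured along the output dimension:
\Delta W = \begin{pmatrix} B_q A_q \\ 0 \\ B_v A_v \end{pmatrix} \cdot \frac{\alpha}{r},
\qquad B_q, B_v \in \mathbb{R}^{(d_{out}/3) \times r}, \quad A_q, A_v \in \mathbb{R}^{r \times d_{in}}.
The 1-D convolution with kernel size 1 and groups = sum(enable_lora) is just a batched way of applying each B_i to its own slice of the stacked activations [A_q x; A_v x], and zero_pad scatters the result back into the rows of the enabled groups, so the disabled group (k here) receives no update.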
Hi,
Just wanted to point out that the name "LoRA" is already in use elsewhere and could possibly cause confusion.
Hi, I'm curious if LoRA can provide any benefits in reducing model size or latency during inference? Could this or related techniques help make deploying LLMs to edge devices more feasible?
My current understanding is that this mostly benefits training rather than inference, because the model is reconstructed to full size after the low-rank adaptation.
It also seems to provide benefit when it's desired to have multiple models for different tasks. However if the desire is to have a single LLM deployed for inference more efficiently on an edge device, are there any benefits to be had in this case?
I appreciate any clarification you can provide, thanks!
Upon checking modeling_roberta.py, it seems that only the query and value matrices use LoRA?
Does that mean only the query and value matrices are fine-tuned in RoBERTa for the GLUE tasks?
Are the other RoBERTa parameters frozen, or do they just not use LoRA?
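For what it's worth, the freezing itself comes from loralib's mark_only_lora_as_trainable, which sets requires_grad = False on every parameter whose name does not contain "lora_". A toy illustration (my own sketch; the NLU example applies the same call to the adapted RoBERTa model):
import torch.nn as nn
import loralib as lora

model = nn.Sequential(
    lora.Linear(768, 768, r=8),  # e.g. a query/value projection with LoRA
    nn.Linear(768, 768),         # a layer without LoRA -> fully frozen
)
lora.mark_only_lora_as_trainable(model, bias='none')
print([n for n, p in model.named_parameters() if p.requires_grad])
# ['0.lora_A', '0.lora_B']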
Hi,
Thank you for sharing the source code. I really enjoy the work you propose.
While reading the paper and reproducing the results I got a couple of questions:
evaluate library and got the same (very close) results. So, how did you obtain such a METEOR score?
Can LoRA be run on an M1 Pro MacBook (14 GB GPU)?
There are important files that Microsoft projects should all have that are not present in this repository. A pull request has been opened to add the missing file(s). When the PR is merged, this issue will be closed automatically.
Microsoft teams can learn more about this effort and share feedback within the open source guidance available internally.
Hi, thanks for sharing the source code.
Is this correct, or should lora_B be zero?
Lines 58 to 60 in 375704a
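As I understand those lines (a hedged reading of the code, not a statement from the authors), lora_A gets a Kaiming-uniform init while lora_B is zero-initialized, so the product B A is exactly zero at the start and the adapted model begins identical to the pretrained one. A quick illustration:
import math
import torch
import torch.nn as nn

r, d_in, d_out = 8, 768, 768
lora_A = torch.empty(r, d_in)
lora_B = torch.zeros(d_out, r)
nn.init.kaiming_uniform_(lora_A, a=math.sqrt(5))

delta_w = lora_B @ lora_A
print(delta_w.abs().max())  # tensor(0.) -- no change to W at initialization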
Dear Edward,
Thanks for your contribution to the community.
But I couldn't reproduce your experiments using the scripts you provide in LoRA/examples/NLG.
I feel down and don't know what to do.
......
:(
Hi,
I've been trying to apply LoRA to the VITS model (hence the pull request for the conv1d). It turns out that just using LoRA for the text encoder transformer isn't enough, and I'm not sure whether I should replace all the layers I can with LoRA layers. Could you guide me on how to do it?
The repo is here
https://github.com/nivibilla/efficient-vits-finetuning
Thanks!
How can I convert Megatron-DeepSpeed's ColumnParallelLinear and RowParallelLinear to LoRA linear layers?
ColumnParallelLinear defined in: https://github.com/microsoft/Megatron-DeepSpeed/blob/main/megatron/mpu/layers.py#L206
It's straightforward for the regular LoRA Linear layer, but I'm struggling to understand how the merged layer is reconstructed into the weight delta.
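On the second point, here is a toy numerical check of how I read the MergedLinear merge path (my own sketch): the grouped 1x1 conv over lora_A with lora_B as kernels is the same as computing B_i @ A_i for each enabled group and stacking the blocks along the output dimension; loralib then scales the result and scatters it with zero_pad into the rows of the enabled groups.
import torch
import torch.nn.functional as F

r, d_in = 4, 16
block_out = 12   # out_features // len(enable_lora)
g = 2            # number of enabled groups

lora_A = torch.randn(r * g, d_in)       # [A_1 ; A_2] stacked along rows
lora_B = torch.randn(block_out * g, r)  # [B_1 ; B_2] stacked along rows

# the grouped 1x1 conv used when merging the weights
delta_conv = F.conv1d(lora_A.unsqueeze(0),    # (1, r*g, d_in)
                      lora_B.unsqueeze(-1),   # (block_out*g, r, 1)
                      groups=g).squeeze(0)    # -> (block_out*g, d_in)

# the same thing written as explicit per-group matmuls
delta_blocks = torch.cat(
    [lora_B[i * block_out:(i + 1) * block_out] @ lora_A[i * r:(i + 1) * r]
     for i in range(g)], dim=0)

print(torch.allclose(delta_conv, delta_blocks, atol=1e-6))  # True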
Thanks for releasing the code! I noticed that the reported number of LoRA parameters is 0.3M for roberta-base. After some experiments, I found that there are 0.5M tunable parameters in the sequence classification head (but it's the same for all baselines, so I am not arguing about fairness). I wonder, am I correct about this setting? Were the results in the paper obtained while also tuning a classification head for the classification tasks?
For now, most open LLMs have a context size of 2048 tokens. I need to expand it to 4096 or more.
Is it possible to expand the context size by fine-tuning with texts longer than 2048 tokens using LoRA?
Or does it require re-training from scratch?
I searched for this but was unable to find any information.
LoRA freezes the original model weights and adds trainable layers, so I feel it might be difficult.
(gh_LoRA) ub2004@ub2004-B85M-A0:~/llm_dev/LoRA/examples/NLG$ python3 -m torch.distributed.launch --nproc_per_node=1 src/gpt2_ft.py --train_data ./data/e2e/train.jsonl --valid_data ./data/e2e/valid.jsonl --train_batch_size 8 --grad_acc 1 --valid_batch_size 4 --seq_len 512 --model_card gpt2.md --init_checkpoint ./pretrained_checkpoints/gpt2-medium-pytorch_model.bin --platform local --clip 0.0 --lr 0.0002 --weight_decay 0.01 --correct_bias --adam_beta2 0.999 --scheduler linear --warmup_step 500 --max_epoch 5 --save_interval 1000 --lora_dim 4 --lora_alpha 32 --lora_dropout 0.1 --label_smooth 0.1 --work_dir ./trained_models/GPT2_M/e2e --random_seed 110
/home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank
argument to be set, please
change it to read from os.environ['LOCAL_RANK']
instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
(gh_LoRA) ub2004@ub2004-B85M-A0:~/llm_dev/LoRA/examples/NLG$
Excuse me, has the LoRA paper been accepted yet? Thank you
Hello, thank you for sharing the source code. While trying to reproduce the SST-2 result with the RoBERTa-base model, I've encountered some questions regarding the hyper-parameters lora_alpha and the global batch size, since the paper's hyper-parameter settings and the reproduction script that does both training and evaluation (examples/NLU/roberta_base_sst2.sh) conflict in places.
First of all, is the reproduction script the actual script you used to produce the numbers in the paper?
In examples/NLU/roberta_base_sst2.sh, lora_alpha is 16. When I tried evaluation, lora_alpha 16 gave the better result. Maybe you used lora_alpha 8 in training but 16 in evaluation, or something else; it's a little bit confusing.
In examples/NLU/roberta_base_sst2.sh, per_device_train_batch_size is 16 and the number of GPUs is 8 (so the global batch size should be 128). However, the explanation in https://github.com/microsoft/LoRA/tree/main/examples/NLU#adapting-to-the-glue-benchmark says that 4 GPUs were used (so the global batch size should be 64). When the global batch size was 128, my reproduction result was lower than in the paper (94.5 accuracy). Thanks.
Some settings appear in examples/NLU/roberta_base_sst2.sh but were not present in the paper (for the GLUE tasks). I wrote down your hyper-parameter settings like this, and I'd appreciate the exact specification.
Hi,
In loralib's layer modules,
Line 138 in 33b9536
It seems like the eval() function, which merges W + BA, is never called.
This is because when the model is switched to evaluation mode by calling model.eval(), the submodules of the model have module.train(mode=False) called on them, not module.eval():
https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module
I think this may be a bug, and the weights are never merged in evaluation mode.
Is it possible to move the logic of the current eval() function into train(mode=False)?
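A sketch of the kind of change this suggests (my own minimal re-implementation, not loralib's code; newer loralib versions do handle merging in train()): putting the merge/un-merge logic in train(mode) guarantees it runs, because model.eval() reaches every submodule through train(False). Dropout and other options are omitted for brevity.
import math
import torch
import torch.nn as nn

class LoRALinearSketch(nn.Linear):
    """Minimal LoRA linear that merges/un-merges weights in train(), not eval()."""
    def __init__(self, in_features, out_features, r=8, lora_alpha=16, **kwargs):
        super().__init__(in_features, out_features, **kwargs)
        self.lora_A = nn.Parameter(torch.zeros(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.scaling = lora_alpha / r
        self.merged = False
        self.weight.requires_grad = False

    def train(self, mode: bool = True):
        # model.eval() calls train(False) on every submodule, so the merge
        # logic here is guaranteed to run when switching modes
        super().train(mode)
        delta_w = (self.lora_B @ self.lora_A) * self.scaling
        if mode and self.merged:
            self.weight.data -= delta_w   # un-merge before training again
            self.merged = False
        elif not mode and not self.merged:
            self.weight.data += delta_w   # merge W + BA for inference
            self.merged = True
        return self

    def forward(self, x):
        if self.merged:
            return super().forward(x)
        return super().forward(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling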
In the scripts implementing LoRA on the GLUE benchmark, for instance "roberta_base_mrpc.sh" and "roberta_base_rte.sh", you include the argument "--lora_path roberta_base_lora_mnli.bin", and the final result is pretty high. But without initializing the LoRA layers from "roberta_base_lora_mnli.bin", the result goes down. I wonder why we need to initialize the LoRA layers with it.
Hi,
I really enjoy the work you propose! While studying the paper and the code, I have a question regarding the implementation of GPT2LMModel's forward function (
LoRA/examples/NLG/src/model.py
Line 396 in aa68d8a
shift_logits = lm_logits[..., :-1, :].contiguous(); shift_labels = labels[..., 1:].contiguous()
May I ask whether this shift is necessary in your code, and in which part you have implemented the shift?
Besides, I also failed to obtain the expected result of LoRA on WebNLG (Table 14 in the paper, LoRA 0.35M) with the checkpoint provided in this repo. The script and the hyperparameters I used are:
python3 -m torch.distributed.launch --nproc_per_node=1 src/gpt2_beam.py \
--data ./data/webnlg_challenge_2017/test.jsonl \
--batch_size 1 \
--seq_len 512 \
--eval_len 64 \
--model_card gpt2.md \
--init_checkpoint ./trained_models/GPT2_M/webnlg/gpt2_md_lora_webnlg.pt \
--platform local \
--lora_dim 4 \
--lora_alpha 32 \
--beam 10 \
--length_penalty 0.8 \
--no_repeat_ngram_size 4 \
--repetition_penalty 1.0 \
--eos_token_id 628 \
--work_dir ./trained_models/GPT2_M/webnlg \
--output_file predict.lora.md.jsonl

python3 src/gpt2_decode.py \
--vocab ./vocab \
--sample_file ./trained_models/GPT2_M/webnlg/predict.lora.md.jsonl \
--input_file ./data/webnlg_challenge_2017/test_formatted.jsonl \
--ref_type webnlg \
--ref_num 6 \
--output_ref_file eval/GenerationEval/data/references_webnlg \
--output_pred_file eval/GenerationEval/data/hypothesis_webnlg \
--tokenize --lower
Do the hyperparameters I use seem right? The final metrics I got are:
BLEU Seen: 59.66
BLEU Unseen: 45.47
BLEU All: 53.27
METEOR Seen: 0.43
METEOR Unseen: 0.38
METEOR All: 0.41
TER Seen: 0.40
TER Unseen: 0.52
TER All: 0.45
(I modified gpt2_beam.py a little (see below) to first load the parameters from "./pretrained_checkpoints/gpt2-medium-pytorch_model.bin" and then from "gpt2_md_lora_webnlg.pt", the checkpoint provided. Is this modification sensible, or how would you recommend loading the model?
original:
LoRA/examples/NLG/src/gpt2_beam.py
Line 381 in aa68d8a
new:
lm_net = GPT2LMModel(config)
cp = torch.load("./pretrained_checkpoints/gpt2-medium-pytorch_model.bin", map_location=torch.device('cpu'))
lm_net.load_weight(cp)
if args.init_checkpoint is not None:
    print('loading model pretrained weight.')
    cp = torch.load(args.init_checkpoint, map_location=torch.device('cpu'))
    lm_net.load_weight(cp)
lm_net = lm_net.cuda()
)
Is it possible to use LoRA to fine tune GPT NeoX 20B?
I thought this might be interesting as an alternate implementation of LoRA leveraging tensor subclasses and reparametrization.
https://gist.github.com/Chillee/a8d2070b1b7b3f97d8c87bac3c366f8e
The main idea here is that we can leverage parametrization in order to transform our parameter in a manner that's composable with existing modules (i.e. we don't need to use a totally new layer).
Then, since LoRA also requires us to leverage special matrix structure for efficiency, we return a tensor subclass that has special handling when we encounter F.linear(x: Tensor, weight: LoraTensor, bias: Tensor)
. This tensor subclass composes with things like autograd and such, so we can still differentiate through our tensor.
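For readers skimming this thread, here is a minimal sketch of the parametrization half of that idea (my own paraphrase of the gist, assuming torch >= 1.9's torch.nn.utils.parametrize): the frozen weight is transformed to W + (alpha/r) * B A every time it is accessed, so an ordinary nn.Linear can be reused unchanged. Note this naive version materializes the full W + BA, which is exactly the inefficiency the tensor-subclass part of the gist is meant to avoid.
import math
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class LoRAParametrization(nn.Module):
    def __init__(self, out_features, in_features, r=8, alpha=16):
        super().__init__()
        self.lora_A = nn.Parameter(torch.zeros(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.scaling = alpha / r

    def forward(self, weight):
        # called whenever layer.weight is accessed
        return weight + (self.lora_B @ self.lora_A) * self.scaling

layer = nn.Linear(768, 768)
layer.weight.requires_grad = False  # train only the LoRA factors
parametrize.register_parametrization(layer, "weight",
                                     LoRAParametrization(768, 768))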
Thanks for your nice work.
I am trying to replicate the result on WebNLG, but the final checkpoint is at step 11270 instead of 20000. This results in a significant difference in accuracy compared to your results.
Here is my command:
python -m torch.distributed.launch --nproc_per_node=1 src/gpt2_ft.py \
--train_data ./data/webnlg_challenge_2017/train.jsonl \
--valid_data ./data/webnlg_challenge_2017/valid.jsonl \
--train_batch_size 8 \
--grad_acc 1 \
--valid_batch_size 4 \
--seq_len 512 \
--model_card gpt2.md \
--init_checkpoint ./pretrained_checkpoints/gpt2-medium-pytorch_model.bin \
--platform local \
--clip 0.0 \
--lr 0.0002 \
--weight_decay 0.01 \
--correct_bias \
--adam_beta2 0.999 \
--scheduler linear \
--warmup_step 500 \
--max_epoch 5 \
--save_interval 1000 \
--lora_dim 4 \
--lora_alpha 32 \
--lora_dropout 0.1 \
--label_smooth 0.1 \
--work_dir ./trained_models/GPT2_M/webnlgv9 \
--random_seed 110
Is there any implementation of LoRA on a TPU device?
The paper says that it only needs 350GB of VRAM to train 175B GPT-3 with rank = 4. Can you elaborate on how this is done? For example, do you use Megatron-DeepSpeed?
In my experiment with bloom-3b, fine-tuning all parameters needs 29GB. Using LoRA with different settings, the number of trainable parameters ranges from 10M down to 0.8M, but they all need around 20GB of VRAM. I find this a little bit weird.
Where in the paper does it say how the merge is done?
Hi,
For LoRA/loralib/layers.py lines 151-155: why does the forward implementation of the Linear layer first go through the original layer and then add the LoRA result, whereas the Conv layer adds the LoRA update to the weight before the forward pass?
Did I make a mistake, or is there no difference between the two implementations for the Linear layer?
Thank you!
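For anyone else wondering, the two formulations should be mathematically equivalent, since (W + s * B A) x = W x + s * (B A) x; the output-addition path just avoids materializing the full-rank delta. A quick numerical check with toy sizes (my own sketch):
import torch
import torch.nn.functional as F

d_in, d_out, r, scaling = 16, 32, 4, 2.0
x = torch.randn(5, d_in)
W = torch.randn(d_out, d_in)
A = torch.randn(r, d_in)
B = torch.randn(d_out, r)

out_add = F.linear(x, W) + (x @ A.T @ B.T) * scaling  # Linear-style: add to output
out_merge = F.linear(x, W + (B @ A) * scaling)        # Conv-style: merge into weight
print(torch.allclose(out_add, out_merge, atol=1e-5))  # True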
This repository is currently missing a LICENSE.MD file outlining its license. A license helps users understand how to use your project in a compliant manner. You can find the standard MIT license text at the Microsoft repo templates LICENSE file: https://github.com/microsoft/repo-templates/blob/main/shared/LICENSE.
If you would like to learn more about open source licenses, please visit the document at https://aka.ms/license.