Comments (6)
I totally agree. What we have done for Llama 2 is QLoRA (quantization + LoRA), and it is only applicable to Llama 2. With this trend of public LLMs being released so often, and all of them needing LoRA for fine-tuning, we need a unified class that can handle LoRA (and QLoRA) for different models. In HF, sft_trainer.py can handle this efficient fine-tuning. We also need an easy interface in SB for efficient fine-tuning. It should also work well with DDP.
from speechbrain.
I think it is very important too. We can support at least LoRA and adapters. I'm not sure what the best way to support it elegantly would be. One idea could be to implement a wrapper (e.g., Adapter) to apply to our models, which plugs in the necessary new modules. However, it seems quite hard to create something that is easy to use and flexible at the same time. Any ideas from the community?
I have used PEFT to apply LoRA to an existing model, and it was pretty straightforward to use. You just pass the model, and it automatically replaces all relevant layers with LoRA layers. We could do something similar.
We could even import and use PEFT if its dependencies are light.
Hi @TParcollet, I agree with your strategy.
> I have used PEFT to apply LoRA to an existing model, and it was pretty straightforward to use. You just pass the model, and it automatically replaces all relevant layers with LoRA layers. We could do something similar.
I don't think the PEFT library is necessary, since the theory/implementation is quite "easy" to reproduce. For instance, ESPnet has its own implementation of LoRA etc. (https://github.com/espnet/espnet/blob/47de29c21f5a7db22089717e92add5e9604fcd48/espnet2/layers/create_adapter_fn.py#L224). We should follow the same strategy and provide our own adapters, because many researchers may want to develop their own designs or modify the code, which may be harder if we mix SpeechBrain and PEFT.
Note: we could have our own implementation and also provide a PEFT-compatible wrapper. But I don't know if this makes sense or is necessary.
Alright, I think that there are two problems here.
- Building a wrapper that naively replaces Linear layers with LoRA / Houlsby adapters and others is simple; I did it, and it's not that hard. However, this is only valid for basic research purposes, and making it slightly more evolved, e.g. only targeting specific linear layers, may be a bit harder.
- Building something that is usable in practice, with quantisation, fast implementations, etc., may be MUCH more complicated. I mean, the literature on adapters for LLMs is absurdly massive, and supporting all of it is impossible. Here, I'd say that supporting an extra lib could make sense, but it would need to be something very stable and reliable AND that covers more than what we can do in 1. Otherwise, there is no point in losing control over the code.
I am happy to do a PR for 1., but it will remain shallow IMHO.
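For point 1., the naive replacement could be sketched as below in pure PyTorch. `LoRALinear`, `add_lora`, and the `target_names` selection rule are all hypothetical names for illustration, not an existing SpeechBrain API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freezes a base nn.Linear and adds a trainable low-rank update:
    out = base(x) + scale * x @ A^T @ B^T. B is zero-initialised, so the
    wrapped layer starts out numerically identical to the original."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the LoRA factors are trained
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

def add_lora(module: nn.Module, rank: int = 4, target_names=("linear",)):
    """Recursively swap nn.Linear children whose attribute name is in
    target_names for LoRALinear wrappers. `target_names` stands in for
    whatever selection rule a real interface would expose."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear) and name in target_names:
            setattr(module, name, LoRALinear(child, rank=rank))
        else:
            add_lora(child, rank=rank, target_names=target_names)
    return module
```

The "slightly more evolved" part mentioned above is exactly the `target_names` argument: matching by attribute name is brittle, and a usable interface would need a cleaner way to select which layers to adapt.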