Comments (8)
Update: I downgraded peft to 0.10.0 and Transformers to 4.39.0, and it is working fine now.
from transformers.
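For reference, pinning both packages to that combination can be done in one command. This is a sketch, not an official recommendation; it assumes the "10.0" in the comment above refers to the peft 0.10.0 release on PyPI:

```shell
# Pin the combination reported to work in the comment above.
# Version numbers are assumptions taken from that report.
pip install "peft==0.10.0" "transformers==4.39.0"
```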
And when I set:
device_map["model.embed_tokens"] = 0
device_map["model.norm.weight"] = 0
it does not error at startup, but it still errors later.
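A common workaround for device-side asserts with hand-written device maps is to keep the tied input embeddings, the final norm, and the LM head on the same device. The sketch below is not the poster's exact script; the module names assume a Llama-style layout matching the keys used above:

```python
def pin_shared_modules(device_map, device=0):
    """Return a copy of device_map with the modules that feed the LM head
    pinned to one device. Splitting `model.embed_tokens`, `model.norm`,
    and `lm_head` across devices is a frequent source of CUDA
    device-side assert errors during training."""
    pinned = dict(device_map)
    for name in ("model.embed_tokens", "model.norm", "lm_head"):
        pinned[name] = device
    return pinned

# Hypothetical hand-written map for a 2-GPU split.
device_map = {
    "model.embed_tokens": "cpu",
    "model.layers.0": 0,
    "model.layers.1": 1,
    "model.norm": 1,
    "lm_head": 1,
}
device_map = pin_shared_modules(device_map, device=0)
print(device_map["model.embed_tokens"], device_map["lm_head"])  # 0 0
```

The patched map can then be passed as `device_map=device_map` to `from_pretrained`; whether this resolves the error here depends on the actual trace.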
Hi @Hongjie1Chu!
In principle the device order shouldn't affect the training behaviour. Can you let us know what happens when you run the training script with CUDA_LAUNCH_BLOCKING=1? Also, do you run your training script with accelerate launch xxx or python xxx.py?
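Concretely, the suggestion amounts to prefixing whichever launcher is in use with the environment variable (train.py is a placeholder name for the poster's script):

```shell
# Force synchronous CUDA kernel launches so the stack trace points at
# the kernel that actually failed, at the cost of training speed.
CUDA_LAUNCH_BLOCKING=1 python train.py

# The same works through the accelerate launcher:
CUDA_LAUNCH_BLOCKING=1 accelerate launch train.py
```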
I am facing a similar issue.
I haven't made any changes to my code, but all of a sudden it gives this error after training for about 30 steps.
Thanks for your answer!
Has there been a solution for this yet? I tried using the latest version of transformers and it still gave this issue. I want to use some of the new quantization methods.
@ArthurZucker @younesbelkada @muellerzr
Hi!
It is hard for us to debug without a proper error trace. Can you re-run the training script with CUDA_LAUNCH_BLOCKING=1 and paste the error trace here?
Related Issues (20)
- Problem with the masked language modeling tutorial
- When running `ruff format src/transformers`, some files need to be reformatted
- Something wrong with `StoppingCriteria`
- Index out of range when generating using optimum
- Fail to load model without .safetensors file
- GGUFTokenizerSkeleton AttributeError during conversion
- Fixing tensor shape/dimension mismatch errors in TimeSeries Transformer for stock price prediction
- You can't train a model that has been loaded with `device_map='auto'` in any distributed mode.
- NotImplementedError: Cannot copy out of meta tensor; no data when embedding to meta
- Add argument to set number of eval steps in Trainer
- First token optimization in beam search
- Transformers master version breaks compatibility with `torch<2.3`
- Missing upper bound in numpy requirements breaks transformers
- Trainer: keep unused columns for `compute_metrics`
- RuntimeError: slow_conv2d_forward_mps: input (device='cpu') and weight (device='mps:0')
- OOM when loading 300B models with `AutoModelForCausalLM.from_pretrained` and `BitsAndBytesConfig` quantization
- A question about the implementation of SinkCache
- Multi-GPU inference affects LLM's (Llama2-7b-chat-hf) generation
- `pip install accelerate` (and similar) error messages should specify min version
- Incorrect docstring of `get_anyres_image_grid_shape`