Comments (6)
Hi,
Have you found a fix for this? I am having a similar issue. It was working on PyTorch 0.4.1: the compacted-weights warning was displayed only once at the beginning, and training continued normally until the end.
However, I updated to PyTorch 1.2 and I am facing the same issue as you. The warning is displayed on every call to forward, and training stops with an OOM error after around 100 epochs. I tried calling flatten_parameters() in the forward function of the WeightDrop class, but I still get the warning.
Thanks a lot.
from awd-lstm-lm.
Also having this issue. I don't think it's related to the flatten_parameters() warnings. It seems to be correlated with the optimizer: specifically, memory usage only starts to increase after training switches to ASGD.
I have found a solution. If it works for others as well, this issue can be closed.
I have modified the ASGD optimizer using @mourga's port of AWD-LSTM for PyTorch 1.2.0, from: https://github.com/mourga/awd-lstm-lm
In particular, in main.py, you have to replace:
- lines 243-245 with:

    for prm in model.parameters():
        if prm in optimizer.state.keys():
            tmp[prm] = prm.data.detach()
            prm.data = optimizer.state[prm]['ax'].detach()
- lines 259-260 with:

    for prm in model.parameters():
        if prm in tmp.keys():
            prm.data = tmp[prm].detach()
            prm.requires_grad = True
    del tmp
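The intent of the two replacements above can be sketched end to end. This is a minimal plain-Python mock-up (Param, swap_in_averages and restore_weights are hypothetical stand-ins, not part of awd-lstm-lm or PyTorch): before evaluation, each parameter's live weights are stashed in tmp and replaced by the ASGD running average stored under 'ax'; after evaluation, the originals are restored.

```python
# Plain-Python sketch of the swap/restore pattern used around evaluation.
# `Param` is a stand-in for torch.nn.Parameter; `state` mimics
# optimizer.state, which maps each parameter to {'ax': averaged_weights}.

class Param:
    def __init__(self, data):
        self.data = data
        self.requires_grad = True

def swap_in_averages(params, state):
    """Stash the live weights and load the ASGD running averages ('ax')."""
    tmp = {}
    for prm in params:
        if prm in state:              # same membership test as the fix above
            tmp[prm] = prm.data       # keep the trained weights
            prm.data = state[prm]['ax']
    return tmp

def restore_weights(params, tmp):
    """Put the stashed weights back and re-enable gradients."""
    for prm in params:
        if prm in tmp:
            prm.data = tmp[prm]
            prm.requires_grad = True

p = Param([1.0, 2.0])
state = {p: {'ax': [0.5, 1.5]}}

tmp = swap_in_averages([p], state)
assert p.data == [0.5, 1.5]   # evaluation would run on the averaged weights

restore_weights([p], tmp)
assert p.data == [1.0, 2.0]   # training resumes on the original weights
```

In the real code, the extra .detach() calls matter: they break any autograd history attached to the stashed and swapped tensors, so the graph cannot keep old tensors alive across epochs.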
I guess it may be related to the RNN compacted weights, but I don't know how to fix it.
@rewicks good call, the memory usage increases only with the ASGD optimizer. I think I have found the problem with it, but I am not sure how to solve it.
I printed the tensors living in memory using the GPU memory profiling code mentioned at https://discuss.pytorch.org/t/how-to-debug-causes-of-gpu-memory-leaks/6741/3, and used the PyCharm debugger to inspect the variables during training.
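The counting trick from that thread can be sketched without a GPU. This is a simplified stand-in (count_instances and FakeTensor are hypothetical names; the real snippet filters gc.get_objects() for torch.Tensor and inspects .is_cuda and .size()):

```python
import gc

def count_instances(cls):
    # Simplified version of the GPU-memory profiling trick: walk every
    # object tracked by Python's garbage collector and count instances of
    # the given class. The real snippet filters on torch.Tensor instead.
    return sum(1 for obj in gc.get_objects() if isinstance(obj, cls))

class FakeTensor:          # stand-in for torch.Tensor
    pass

baseline = count_instances(FakeTensor)
leaked = [FakeTensor() for _ in range(3)]   # simulate tensors kept alive
assert count_instances(FakeTensor) - baseline == 3
```

Running such a count once per epoch makes a leak like the one described below visible as a steadily growing number.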
The ASGD optimizer is an object that contains:
- defaults: its default settings
- param_groups: a list containing 1 dictionary, with the hyperparameters 'lr', 'alpha', 'lambd', 't0', 'weight_decay', and 'params' (a list of 14 Parameters with Tensors)
- state: a defaultdict with 20 entries {Parameter containing Tensor, Parameter containing Tensor, etc.}
As the epochs go on, optimizer.state will contain 20, 23, 26, 29, ... (un-named) Tensors.
My hypothesis:
- either ASGD averages over all the previous epochs, and thus eventually exhausts memory
- or, more likely, the Tensors with the past gradients are never de-allocated from memory, and we always allocate new ones
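The second hypothesis can be illustrated with a purely hypothetical sketch (the class and keys are stand-ins, not the actual awd-lstm-lm code): if fresh key objects are inserted into a state dict each epoch while old entries are never removed, the number of stored tensors grows without bound, analogous to the 20, 23, 26, ... growth observed above.

```python
from collections import defaultdict

class T:                      # stand-in for a torch Tensor / Parameter
    pass

state = defaultdict(dict)     # mimics optimizer.state
params = [T() for _ in range(3)]
for p in params:
    state[p]['ax'] = T()      # the legitimate per-parameter entries

sizes = []
for epoch in range(3):
    stale_key = T()                # a key object nothing else references
    state[stale_key]['ax'] = T()   # entry accumulates, never cleaned up
    sizes.append(len(state))

assert sizes == [4, 5, 6]     # state keeps growing, epoch after epoch
```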
Should we change the t0 parameter, increasing it by 1 each epoch? Or should we manually delete tensors from optimizer.state?
I would like to hear your opinions, and possibly from the authors as well - although maybe they didn't have this problem because they did not have resource constraints (I run out of memory on a GPU with 10GB).
Hi @AndreaLK3, it works for me as well. However, I don't achieve the perplexities that the instructions declare:
    The instruction below trains a PTB model that without finetuning achieves perplexities of approximately 61.2 / 58.8 (validation / testing).
    python main.py --batch_size 20 --data data/penn --dropouti 0.4 --dropouth 0.25 --seed 141 --epoch 500 --save PTB.pt
I only achieve perplexities of 64.74 / 62.23 (validation / testing) with the same command.
My torch version is 1.5.0 and my CUDA version is 10.1.
I'd like to know your experiment results and your advice.