Comments (7)
@tb2-sy Hi, have you solved this issue? : )
from multi-task-transformer.
Traceback (most recent call last):
  File "main.py", line 172, in <module>
    main()
  File "main.py", line 148, in main
    end_signal, iter_count = train_phase(p, args, train_dataloader, test_dataloader, model, criterion, optimizer, scheduler, epoch, tb_writer_train, tb_writer_test, iter_count)
  File "/public/home/ws/vpt/InvPT/utils/train_utils.py", line 41, in train_phase
    optimizer.step()
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/adam.py", line 144, in step
    eps=group['eps'])
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/functional.py", line 98, in adam
    param.addcdiv(exp_avg, denom, value=-step_size)
RuntimeError: value cannot be converted to type float without overflow: (1.37209e-09,-4.45819e-10)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 182046) of binary: /public/home/ws/Anacondas/anaconda3/envs/invpt/bin/python
Traceback (most recent call last):
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
    )(*cmd_args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
    failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
main.py FAILED
Failures:
  [1]:
    time       : 2022-09-21_15:42:37
    host       : ai_gpu02
    rank       : 1 (local_rank: 1)
    exitcode   : 1 (pid: 182053)
    error_file : <N/A>
    traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Root Cause (first observed failure):
  [0]:
    time       : 2022-09-21_15:42:37
    host       : ai_gpu02
    rank       : 0 (local_rank: 0)
    exitcode   : 1 (pid: 182046)
    error_file : <N/A>
    traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Excuse me, I hit this error after training for 39000+ iterations. How can I solve it? Is it related to the number of GPUs used? I trained with two cards.
I seem to have solved this problem. The iteration count had reached 40k; given how the poly schedule works, the learning rate is close to 0 at that point, but training had not yet reached the max epoch.
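For reference, here is a minimal sketch of a poly schedule, assuming the standard formula lr = base_lr * (1 - cur_iter / max_iter) ** power (the exact InvPT scheduler code and base LR may differ); it shows how tiny the learning rate gets near 40k iterations:

def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    # Standard "poly" decay: reaches 0 exactly at max_iter.
    return base_lr * (1.0 - cur_iter / max_iter) ** power

base_lr = 2e-5  # hypothetical base LR, not taken from the repo config
print(poly_lr(base_lr, 39000, 40000))  # ~7.2e-07
print(poly_lr(base_lr, 39999, 40000))  # ~1.4e-09, the same order as the value in the RuntimeError above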
from multi-task-transformer.
@tb2-sy I am not sure what the real reason is; the direct cause of this error is that the lr is too small. Did you change the scheduler, learning rate, or another setting? Did you use a different batch size or retrain the model?
from multi-task-transformer.
Oh, the batch size on each GPU is 2. I used two cards, so the total batch size is 4. Could this be the reason?
from multi-task-transformer.
I don't think so. The LR should be adjusted based on the iteration number.
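For anyone who wants to guard against the iteration counter running past max_iter (e.g. after a resume), here is a defensive sketch of the same poly formula (this is not the repository's code, just an illustration):

def poly_lr_clamped(base_lr, cur_iter, max_iter, power=0.9, min_lr=0.0):
    # Clamp progress to [0, 1] so the base of the fractional power
    # can never go negative once cur_iter exceeds max_iter.
    progress = min(max(cur_iter / max_iter, 0.0), 1.0)
    return max(base_lr * (1.0 - progress) ** power, min_lr)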
from multi-task-transformer.
@tb2-sy Hi, have you solved this issue? : )
I have not encountered this problem again. My guess is that the error was caused by changing the configured number of iterations when resuming training.
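That would also explain the strange value in the RuntimeError: if the resumed iteration counter is already past the configured max_iter, the poly term's base is negative, and in Python 3 a negative float raised to a fractional power evaluates to a complex number, which Adam's addcdiv then cannot convert to a float (again assuming the standard poly formula rather than the exact repository code):

# Resuming with an iteration counter beyond the configured max_iter:
max_iter, cur_iter, base_lr, power = 40000, 40050, 2e-5, 0.9
lr = base_lr * (1.0 - cur_iter / max_iter) ** power
print(lr)  # a tiny complex number, roughly (-4.6e-08+1.5e-08j), so value=-step_size in addcdiv fails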
from multi-task-transformer.
@tb2-sy Great. I will close this issue for now. Thanks.
from multi-task-transformer.