
RuntimeError: value cannot be converted to type float without overflow: (1.37209e-09,-4.45819e-10) · multi-task-transformer (closed, 7 comments)

prismformore avatar prismformore commented on May 28, 2024
RuntimeError: value cannot be converted to type float without overflow: (1.37209e-09,-4.45819e-10)

from multi-task-transformer.

Comments (7)

prismformore avatar prismformore commented on May 28, 2024 2

@tb2-sy Hi, have you solved this issue? : )


tb2-sy avatar tb2-sy commented on May 28, 2024

  5%|████▊     | 9/198 [00:08<03:04, 1.02it/s]
Traceback (most recent call last):
  File "main.py", line 172, in <module>
    main()
  File "main.py", line 148, in main
    end_signal, iter_count = train_phase(p, args, train_dataloader, test_dataloader, model, criterion, optimizer, scheduler, epoch, tb_writer_train, tb_writer_test, iter_count)
  File "/public/home/ws/vpt/InvPT/utils/train_utils.py", line 41, in train_phase
    optimizer.step()
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/adam.py", line 144, in step
    eps=group['eps'])
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/functional.py", line 98, in adam
    param.addcdiv_(exp_avg, denom, value=-step_size)
RuntimeError: value cannot be converted to type float without overflow: (1.37209e-09,-4.45819e-10)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 182046) of binary: /public/home/ws/Anacondas/anaconda3/envs/invpt/bin/python
Traceback (most recent call last):
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
    )(*cmd_args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
    failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

main.py FAILED

Failures:
[1]:
  time : 2022-09-21_15:42:37
  host : ai_gpu02
  rank : 1 (local_rank: 1)
  exitcode : 1 (pid: 182053)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
  time : 2022-09-21_15:42:37
  host : ai_gpu02
  rank : 0 (local_rank: 0)
  exitcode : 1 (pid: 182046)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Excuse me, I hit this error after training for 39000+ iterations. How can I solve it? Is this problem related to the number of GPUs used? I trained with two cards.

I seem to have figured this out. The iteration count had reached 40k; given the shape of the poly schedule, the learning rate is close to 0 at that point, but the program had not yet reached the max epoch.
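For context, a poly schedule decays the learning rate as base_lr * (1 - iter/max_iter)**power. The sketch below is an assumption, not InvPT's actual scheduler code (the function name, base_lr=2e-5, power=0.9, and max_iterations=40000 are all hypothetical); it shows that once the iteration count passes max_iterations, the base of the power becomes negative, and in Python a negative float raised to a fractional exponent yields a complex number, which would explain the pair of values in the RuntimeError.

```python
def poly_lr(base_lr, iteration, max_iterations, power=0.9):
    """Hypothetical poly learning-rate decay; names and defaults are assumptions."""
    return base_lr * (1 - iteration / max_iterations) ** power

near_end = poly_lr(2e-5, 39000, 40000)   # tiny but still a valid float near 0
overshoot = poly_lr(2e-5, 40001, 40000)  # negative base ** 0.9 -> complex number
print(type(near_end), type(overshoot))
```

A complex step size cannot be converted to a float inside Adam's update, which matches the "cannot be converted to type float" message above.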


prismformore avatar prismformore commented on May 28, 2024

@tb2-sy I am not sure what the real reason is. The direct cause of this error is that the learning rate becomes too small. Did you change the scheduler, the learning rate, or another setting? Did you use a different batch size or retrain the model?


tb2-sy avatar tb2-sy commented on May 28, 2024

> @tb2-sy I am not sure what the real reason is. The direct cause of this error is that the learning rate becomes too small. Did you change the scheduler, the learning rate, or another setting? Did you use a different batch size or retrain the model?

Oh, the batch size on each GPU is 2. I used two cards, so the total batch size is 4. Is this the reason?


prismformore avatar prismformore commented on May 28, 2024

> @tb2-sy I am not sure what the real reason is. The direct cause of this error is that the learning rate becomes too small. Did you change the scheduler, the learning rate, or another setting? Did you use a different batch size or retrain the model?

> Oh, the batch size on each GPU is 2. I used two cards, so the total batch size is 4. Is this the reason?

I don't think so. The LR should be adjusted based on the iteration number.


tb2-sy avatar tb2-sy commented on May 28, 2024

> @tb2-sy Hi, have you solved this issue? : )

I have never encountered this problem again. I suspect the error was caused by changing the number of iterations in the config when resuming training.
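If the schedule can overshoot its endpoint, e.g. because the config's iteration count changed across a resume, one defensive fix (a sketch under the same hypothetical names as a plain poly schedule, not code from the repo) is to clamp the decay factor at zero so iterations past the end yield a learning rate of 0.0 instead of a complex value:

```python
def poly_lr_clamped(base_lr, iteration, max_iterations, power=0.9):
    # Hypothetical helper: clamping keeps the base of the power
    # non-negative, so overshooting max_iterations returns 0.0
    # rather than a complex learning rate.
    factor = max(0.0, 1.0 - iteration / max_iterations)
    return base_lr * factor ** power

print(poly_lr_clamped(2e-5, 40001, 40000))  # prints 0.0
```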


prismformore avatar prismformore commented on May 28, 2024

@tb2-sy Great. I will close this issue for now. Thanks.

