Comments (5)
I have tested several configurations.
Single GPU:
batch size 32:
epoch[1/30] 1417.54 examples/sec, 0.59 min/epoch. Rest runtime is 0.28 hour
batch size 64:
epoch[1/30] 2640.38 examples/sec, 0.32 min/epoch. Rest runtime is 0.15 hour
4 GPUs:
batch size 128 (32 on each):
epoch[1/30] 845.78 examples/sec, 0.99 min/epoch. Rest runtime is 0.48 hour
batch size 256 (64 on each):
epoch[1/30] 1402.12 examples/sec, 0.60 min/epoch. Rest runtime is 0.29 hour
As shown, running the generator and discriminator on 4 GPUs actually makes things worse.
I have no idea whether we can run the EMA under DataParallel on multiple GPUs as well. A simple:
ema = torch.nn.DataParallel(ema)
will not work, since PyTorch will try to divide the input to ema, which is the current parameters of the network, into 4 batches spread across the 4 GPUs to update the moving parameters. But parameters are not data batches, so I am not sure we can find the correct slice of the parameters for the EMA to update in a distributed way.
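For what it's worth, a parameter EMA is just an element-wise, gradient-free update, so it does not need DataParallel at all. A minimal sketch, assuming a plain PyTorch module (the ParamEMA class name and structure are illustrative, not the repo's actual EMA implementation):

import torch

class ParamEMA:
    # Minimal exponential moving average over a model's parameters.
    def __init__(self, model, decay=0.999):
        self.decay = decay
        # Detached copies of the parameters act as the shadow weights;
        # they stay on the same devices as the originals.
        self.shadow = {n: p.detach().clone() for n, p in model.named_parameters()}

    @torch.no_grad()  # the moving average never needs gradients
    def update(self, model):
        # shadow = decay * shadow + (1 - decay) * current, element-wise
        for n, p in model.named_parameters():
            self.shadow[n].mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)

Calling ema.update(model) once after each optimizer step keeps the shadow weights current, with no scatter/gather involved.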
Best,
Shu
Without EMA, things do not speed up significantly:
4 GPUs with EMA:
batch size 128 (32 on each):
epoch[1/30] 845.78 examples/sec, 0.99 min/epoch. Rest runtime is 0.48 hour
4 GPUs without EMA:
batch size 128 (32 on each):
epoch[1/30] 926.86 examples/sec, 0.90 min/epoch. Rest runtime is 0.43 hour
EMA might not be the problem.
Oops, as the depth increases, the utilization of the GPUs goes up, and I think it should then be much faster than on a single card.
Best,
Shu
Well, that is indeed a lot to process for me 😆 (it was night when you opened the issue and I was sleeping). Still, thanks for doing all the analysis. You cannot wrap ema inside torch.nn.DataParallel because it is not a torch.nn.Module. It is a mechanism for applying a small exponential-moving-average decay over the parameters, and it does not need to calculate gradients (that would really slow things down). About utilisation: you need to adjust the batch size for every depth. You can use the --start_depth argument to check utilisation directly at a higher depth. Hope this helps.
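Concretely, since DataParallel only scatters the inputs while the parameters live on device_ids[0], the EMA copy can stay unwrapped and read them through .module. A hedged sketch (gen, gen_shadow, and the Linear stand-in are illustrative, not the repo's API):

import torch
from torch import nn

decay = 0.999
gen = nn.DataParallel(nn.Linear(512, 512).cuda())  # stand-in for the generator
gen_shadow = nn.Linear(512, 512).cuda()            # EMA copy, deliberately not wrapped
gen_shadow.load_state_dict(gen.module.state_dict())

@torch.no_grad()  # no gradients needed for the moving average
def ema_step():
    # element-wise decay update on the underlying (device 0) parameters
    for p_ema, p in zip(gen_shadow.parameters(), gen.module.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1 - decay)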
Best regards,
@akanimax
@akanimax No worries! I was just sharing what I think about this. You always respond really fast, and many thanks for your concern!