odegeasslbc / progressive-gan-pytorch
A PyTorch implementation of Progressive-GAN that actually works, is readable, and is simple to customize.
Hello, I hope you are doing well. The code is very nicely written and easy to understand. Could you also tell me how to apply performance metrics to this code, such as Inception Score or FID?
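For reference, FID compares the Gaussian statistics (mean and covariance) of Inception-v3 activations for real versus generated images; in practice a package such as pytorch-fid computes it from two image folders. A minimal sketch of the underlying Fréchet distance, using toy statistics rather than real Inception activations:

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians, the statistic behind FID."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    return diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean)

# Toy example: identical distributions give (numerically) zero distance.
mu = np.zeros(4)
sigma = np.eye(4)
print(fid(mu, sigma, mu, sigma))          # ~0.0
print(fid(mu, sigma, mu + 1.0, sigma))    # positive once the means differ
```

In a real evaluation, `mu` and `sigma` would be computed from Inception-v3 pool activations over a few thousand real and generated samples each.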
Hi,
Your code looks very clean, so I tried to test it but got this error:
Using pytorch 1.6.0
python train.py --path data --trial pgan
[ some warnings deleted (about add_ and interpolate) ]
Traceback (most recent call last):
  File "train.py", line 246, in <module>
    train(generator, discriminator, args.init_step, loader, args.total_iter)
  File "train.py", line 124, in train
    real_predict.backward(mone)
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 121, in backward
    grad_tensors = _make_grads(tensors, grad_tensors)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 34, in _make_grads
    + str(out.shape) + ".")
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).
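That error comes from passing a gradient tensor of shape `[1]` (`mone`) into `backward()` on a 0-dim (scalar) output; recent PyTorch versions require the shapes to match exactly. A minimal reproduction and fix using standalone tensors (not the repo's training loop):

```python
import torch

# Reproduce: a 0-dim output with a shape-[1] gradient tensor mismatches.
out = torch.randn((), requires_grad=True)   # scalar, like real_predict.mean()
mone = torch.tensor([-1.0])                 # shape [1] -> RuntimeError
try:
    out.backward(mone)
except RuntimeError as e:
    print("RuntimeError:", e)

# Fix: use a 0-dim gradient tensor (or simply negate the loss and call
# backward() with no argument).
out2 = torch.randn((), requires_grad=True)
out2.backward(torch.tensor(-1.0))           # scalar matches scalar
print(out2.grad)  # tensor(-1.)
```

Equivalently, replacing `real_predict.backward(mone)` with `(-real_predict).backward()` avoids the gradient argument entirely.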
When uncommenting the line that enables multiple GPUs via DataParallel, the accumulate function raises an indexing error (the wrapped model's parameter names gain a "module." prefix). After changing it to:

def accumulate(model1, model2, decay=0.999):
    par1 = dict(model1.named_parameters())
    par2 = dict(model2.named_parameters())
    print(len(par1.keys()))
    print(len(par2.keys()))
    for k in par1.keys():
        k_module = "module." + k
        par1[k].data.mul_(decay).add_(1 - decay, par2[k_module].data)
a new error has cropped up:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [55], in <cell line: 27>()
     24 g_optimizer = optim.Adam(generator.parameters(), lr=args.lr, betas=(0.0, 0.99))
     25 d_optimizer = optim.Adam(discriminator.parameters(), lr=args.lr, betas=(0.0, 0.99))
---> 27 accumulate(g_running, generator, 0)
     29 loader = imagefolder_loader(args.path)
     31 print(loader.__len__)

Input In [54], in accumulate(model1, model2, decay)
      9 for k in par1.keys():
     10     k_module = "module." + k
---> 11     par1[k].data.mul_(decay).add_(1 - decay, par2[k_module].data)

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
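One way to handle both symptoms at once is sketched below: fall back to the "module." prefix only when the plain key is missing (so the same function works with or without DataParallel), and move the source tensor to the target parameter's device before the in-place update. This is an assumption about a reasonable fix, not the repo's official code; it also uses the non-deprecated `add_(..., alpha=...)` signature:

```python
import torch
from torch import nn

def accumulate(model1, model2, decay=0.999):
    """EMA update of model1's parameters toward model2's.

    Robust to two common wrapper issues (assumptions, not the repo's code):
    - nn.DataParallel prefixes parameter names with "module."
    - the two models may live on different devices.
    """
    par1 = dict(model1.named_parameters())
    par2 = dict(model2.named_parameters())
    for k in par1.keys():
        k2 = k if k in par2 else "module." + k       # DataParallel prefix
        src = par2[k2].data.to(par1[k].data.device)  # match devices first
        par1[k].data.mul_(decay).add_(src, alpha=1 - decay)

# Usage sketch: decay=0 copies model2's weights straight into model1,
# which is what train.py does once at startup for g_running.
a = nn.Linear(4, 4)
b = nn.Linear(4, 4)
accumulate(a, b, decay=0.0)
```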
A response would be welcome.
!git clone https://github.com/odegeasslbc/Progressive-GAN-pytorch.git
pth = '/content/drive/My Drive/gan-data/earings'
os.chdir('/content/Progressive-GAN-pytorch')
!python train.py --path '/content/drive/My Drive/gan-data' --trial_name rough1 --lr 0.003 --z_dim 100 --channel 512 --batch_size 32 --n_critic 1 --init_step 1 --total_iter 400000 --pixel_norm --tanh
I used the commands above, and my image folder follows the structure you mentioned, but training throws the following error. Also, as far as -h shows, I have no control over the final resolution, if at all. Help with either issue would be appreciated.
Namespace(batch_size=32, channel=512, gpu_id=0, init_step=1, lr=0.003, n_critic=1, path='/content/drive/My Drive/gan-data', pixel_norm=True, tanh=True, total_iter=400000, trial_name='rough1', z_dim=100)
0% 0/400000 [00:00<?, ?it/s]Traceback (most recent call last):
  File "train.py", line 246, in <module>
    train(generator, discriminator, args.init_step, loader, args.total_iter)
  File "train.py", line 124, in train
    real_predict.backward(mone)
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 93, in backward
    grad_tensors = _make_grads(tensors, grad_tensors)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 29, in _make_grads
    + str(out.shape) + ".")
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).
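On the resolution question: in progressive growing, the output size doubles at each growth step, so the final resolution is determined by the maximum step the training schedule reaches rather than by a dedicated flag. Assuming a 4x4 starting resolution (common in PGGAN implementations; check the repo's model code for the exact base), the relationship is:

```python
# Resolution per growth step, assuming a 4x4 base (an assumption, not
# verified against this repo's model code).
base = 4
resolutions = {step: base * 2 ** step for step in range(1, 7)}
print(resolutions)  # {1: 8, 2: 16, 3: 32, 4: 64, 5: 128, 6: 256}
```

So reaching, say, 256x256 output is a matter of letting the schedule grow to the corresponding step rather than passing a resolution argument.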
Hi there,
Really great code, but it looks like you don't have a license. I would strongly recommend adding one: without a license, the code defaults to all-rights-reserved copyright, so others have no clear legal right to use, modify, or redistribute it. I would hate it if that kept me from using your code.
Cheers,
Charles.
Hi!
Training seems to be working like a charm. But how do we generate an output once training is done? Thanks :)
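A minimal sampling sketch in reply: load the saved (EMA) generator, switch it to eval mode, and feed it random latent codes. `TinyG` below is a stand-in module so the pattern is runnable here; with the real repo you would instead instantiate its Generator with the same arguments used in train.py, load the checkpoint's state_dict, and call it with the `(z, step, alpha)` signature the training loop uses (all names and paths in the comments are assumptions to verify against your checkout):

```python
import torch
from torch import nn

class TinyG(nn.Module):
    """Stand-in for the repo's Generator, for illustration only."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(z_dim, 3 * 8 * 8)

    def forward(self, z, step=1, alpha=1.0):
        # The real generator uses step/alpha to select the grown resolution.
        return torch.tanh(self.fc(z)).view(-1, 3, 8, 8)

netG = TinyG()
# With the real repo (class name, arguments, and path are assumptions):
# netG = Generator(in_channel=512, input_code_dim=100, pixel_norm=True, tanh=True)
# netG.load_state_dict(torch.load('path/to/checkpoint.model', map_location='cpu'))
netG.eval()

with torch.no_grad():
    z = torch.randn(16, 100)              # 16 latent codes, z_dim=100
    images = netG(z, step=6, alpha=1.0)   # tanh output lies in [-1, 1]

print(images.shape)  # torch.Size([16, 3, 8, 8])
# A grid can then be written out with torchvision.utils.save_image(...,
# normalize=True) if torchvision is installed.
```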