xiaoming-yu / dmit
Multi-mapping Image-to-Image Translation via Learning Disentanglement. NeurIPS 2019
License: MIT License
Hi, thanks for sharing your code.
I noticed that you added something like our mode-seeking loss to your code, but you mention it neither in your GitHub project nor in your paper.
Could you please cite our GitHub project or our paper? Thanks!
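For readers unfamiliar with the term, here is a minimal sketch of a mode-seeking regularization term in the MSGAN style (encourage two outputs generated from different latent codes to stay far apart relative to the distance between the codes). The L1 distance, epsilon, and tensor shapes are illustrative assumptions, not necessarily what either repository uses:

    import torch

    def mode_seeking_loss(fake1, fake2, z1, z2, eps=1e-5):
        # Ratio of output distance to latent distance; minimizing the inverse
        # of this ratio pushes different codes toward different images.
        d_img = torch.mean(torch.abs(fake1 - fake2))
        d_z = torch.mean(torch.abs(z1 - z2))
        return 1.0 / (d_img / (d_z + eps) + eps)

    # Example: two generated batches from two latent samples (random placeholders).
    z1, z2 = torch.randn(4, 8), torch.randn(4, 8)
    fake1, fake2 = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
    print(mode_seeking_loss(fake1, fake2, z1, z2))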
Were there any attempts to train on a city-landscape dataset?
Could you give any guidance on the resources needed to train the mountain-landscape model?
Is it 8 V100 GPUs for a week or two?
In Table 1 on page 7, how is LPIPS computed: between a real image and a generated image, or between two generated images?
A higher LPIPS score means the images are further apart (more different); a lower score means they are more similar.
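For concreteness, a minimal sketch of computing a single LPIPS distance with the lpips PyPI package; the random tensors are placeholders, not the paper's evaluation data:

    import lpips
    import torch

    loss_fn = lpips.LPIPS(net='alex')           # AlexNet-backed perceptual metric
    # LPIPS expects (N, 3, H, W) tensors scaled to [-1, 1].
    img0 = torch.rand(1, 3, 256, 256) * 2 - 1
    img1 = torch.rand(1, 3, 256, 256) * 2 - 1
    with torch.no_grad():
        print(loss_fn(img0, img1).item())       # larger value = more different

Diversity-style LPIPS numbers are typically this distance averaged over many sampled pairs of generated images.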
I saw in #2 that you said you were busy with another project and the code would come out sometime before November, but it is now a week into November. Do you know when it might be released?
Hello, I implemented the method described in the paper, but my results differ from those shown in the paper. Could you please share your implementation so I can find out what I misunderstood? Thanks!
On page 16 of the DMIT paper, an LPIPS score for real data is listed in a table, but as far as I can tell there are no pairs in real data, so how is this value obtained?
Thank you, and I hope for your reply.
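One plausible way to get an LPIPS value for unpaired real data, which may or may not match the paper's protocol, is to average the distance over randomly sampled pairs of real images:

    import random
    import lpips
    import torch

    loss_fn = lpips.LPIPS(net='alex')
    # Placeholder "real" images; in practice, load them from the test set.
    real = [torch.rand(1, 3, 256, 256) * 2 - 1 for _ in range(50)]
    with torch.no_grad():
        dists = [loss_fn(*random.sample(real, 2)).item() for _ in range(100)]
    print(sum(dists) / len(dists))              # reference diversity of real data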
When training with a slightly modified version of the season_transfer code (see my fork), everything quickly becomes NaN on my dataset:
(epoch: 1, iters: 20, time: 1.415) , D_rand_prior: 3.942, G_rand_prior: 0.426, D_rand_enc: 3.968, G_rand_enc: 0.356, D_content: 1.079, G_content: 0.930, errRec: 2.804, errKl: 0.005, errStyle: 0.600, errContent: 1.191, errCyc: 3.077, errDiv: 0.298
(epoch: 1, iters: 40, time: 1.413) , D_rand_prior: 3.905, G_rand_prior: 0.371, D_rand_enc: 3.860, G_rand_enc: 0.659, D_content: 1.675, G_content: 2.137, errRec: 4.170, errKl: 0.003, errStyle: 0.836, errContent: 0.912, errCyc: 4.638, errDiv: 0.621
(epoch: 1, iters: 60, time: 1.409) , D_rand_prior: 3.911, G_rand_prior: 0.255, D_rand_enc: 3.873, G_rand_enc: 0.415, D_content: 0.725, G_content: 0.183, errRec: 3.414, errKl: 0.033, errStyle: 0.749, errContent: 0.760, errCyc: 3.771, errDiv: 0.406
(epoch: 1, iters: 80, time: 1.414) , D_rand_prior: 3.803, G_rand_prior: 0.625, D_rand_enc: 3.784, G_rand_enc: 0.628, D_content: 0.579, G_content: 0.180, errRec: 5.343, errKl: 0.018, errStyle: 0.949, errContent: 0.653, errCyc: 5.572, errDiv: 0.617
(epoch: 1, iters: 100, time: 1.416) , D_rand_prior: 3.807, G_rand_prior: 0.434, D_rand_enc: 3.769, G_rand_enc: 0.575, D_content: 0.616, G_content: 0.819, errRec: 3.402, errKl: 0.030, errStyle: 0.615, errContent: 0.664, errCyc: 4.447, errDiv: 0.444
(epoch: 1, iters: 120, time: 1.409) , D_rand_prior: nan, G_rand_prior: nan, D_rand_enc: nan, G_rand_enc: nan, D_content: nan, G_content: nan, errRec: nan, errKl: nan, errStyle: nan, errContent: nan, errCyc: nan, errDiv: nan
(epoch: 1, iters: 140, time: 1.408) , D_rand_prior: nan, G_rand_prior: nan, D_rand_enc: nan, G_rand_enc: nan, D_content: nan, G_content: nan, errRec: nan, errKl: nan, errStyle: nan, errContent: nan, errCyc: nan, errDiv: nan
(epoch: 1, iters: 160, time: 1.400) , D_rand_prior: nan, G_rand_prior: nan, D_rand_enc: nan, G_rand_enc: nan, D_content: nan, G_content: nan, errRec: nan, errKl: nan, errStyle: nan, errContent: nan, errCyc: nan, errDiv: nan
(epoch: 1, iters: 180, time: 1.409) , D_rand_prior: nan, G_rand_prior: nan, D_rand_enc: nan, G_rand_enc: nan, D_content: nan, G_content: nan, errRec: nan, errKl: nan, errStyle: nan, errContent: nan, errCyc: nan, errDiv: nan
Are there any hyperparameters you recommend tweaking to fix this? (If it matters, my dataset has one domain of vector drawings and one of images.)
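Not a fix from this repo, but one generic mitigation worth trying when GAN losses collapse to NaN is to clip gradients before each optimizer step (alongside lowering the learning rate). A minimal self-contained sketch with a placeholder model and loss:

    import torch
    import torch.nn as nn

    net = nn.Linear(16, 1)                                 # stand-in for the generator
    optimizer = torch.optim.Adam(net.parameters(), lr=2e-4)

    x = torch.randn(8, 16)
    loss = net(x).pow(2).mean()                            # stand-in for the total loss

    optimizer.zero_grad()
    loss.backward()
    # Cap the global gradient norm before stepping; max_norm=5.0 is an
    # arbitrary starting point to tune per dataset.
    torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=5.0)
    optimizer.step()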
Hey there, there is a bug when one wants to continue training a model: simply adding the --continue_train flag on the command line does not work.
To continue training, one has to (1) fix the net_name handling when the checkpoints are restored, and (2) move the loaded optimizer state tensors onto the GPU:

    for state in net_optimizer.state.values():
        for k, v in state.items():
            if isinstance(v, torch.Tensor):
                state[k] = v.cuda()

The second step is my temporary workaround. Without it, one encounters an error like: RuntimeError: expected device cpu but got device cuda:0. Though this resolves the error, I think a more elegant way should exist.
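One possibly more elegant alternative, assuming the checkpoints are restored with torch.load: pass map_location so the optimizer state tensors are deserialized directly onto the GPU. The state-dict key and the save/load round trip below are illustrative, not taken from the DMIT code:

    import torch
    import torch.nn as nn

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    net = nn.Linear(16, 1).to(device)                   # stand-in for the real network
    net_optimizer = torch.optim.Adam(net.parameters())

    # Save-then-load round trip just to keep the sketch self-contained.
    torch.save({'optimizer': net_optimizer.state_dict()}, '/tmp/latest.pth')
    # map_location places every tensor in the checkpoint on `device` during
    # deserialization, so the manual .cuda() loop becomes unnecessary.
    checkpoint = torch.load('/tmp/latest.pth', map_location=device)
    net_optimizer.load_state_dict(checkpoint['optimizer'])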
In the semantic image synthesis task, how do you select the data used to calculate the FID, PSNR, and SSIM metrics?
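For what it's worth, PSNR and SSIM require paired ground-truth and generated images, which is part of why the selection question matters. A hedged sketch of computing both with scikit-image; the random arrays stand in for a real pair:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    gt = np.random.rand(256, 256, 3)    # placeholder ground-truth image in [0, 1]
    gen = np.random.rand(256, 256, 3)   # placeholder generated image in [0, 1]

    psnr = peak_signal_noise_ratio(gt, gen, data_range=1.0)
    ssim = structural_similarity(gt, gen, channel_axis=-1, data_range=1.0)
    print(f'PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}')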
/DMIT/models/base_model.py", line 246, in update_model
errDec_total.backward(retain_graph=True)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8, 256]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Has anybody else run into this issue?
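A minimal sketch of the anomaly detection the error message suggests: wrap the failing step so autograd reports which forward-pass tensor was later modified in place. Everything below is a toy reproduction, not DMIT code; the in-place += on the ReLU output triggers the same class of error:

    import torch

    torch.autograd.set_detect_anomaly(True)  # record forward traces (slow; debug only)

    x = torch.randn(8, 256, requires_grad=True)
    y = torch.relu(x)
    y += 1                                   # in-place edit of a tensor backward needs
    try:
        y.sum().backward()
    except RuntimeError as e:
        print(e)                             # now includes the offending forward op

The usual fix is to replace the in-place operation with an out-of-place one (e.g. y = y + 1) in the reported location.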
Nice paper!
Could you please release the code as soon as possible? Thanks!