
xiaoming-yu / dmit

111 stars, 14 forks, 15 watchers, 11.02 MB

Multi-mapping Image-to-Image Translation via Learning Disentanglement. NeurIPS 2019.

License: MIT License

Python 95.51% Shell 4.49%
image-to-image-translation season-transfer semantic-image-synthesis text-to-image

dmit's Issues

The mode-seeking loss in your code

Hi, thanks for sharing your code.
I noticed that you added something resembling our mode-seeking loss to your code, but it is mentioned neither in your GitHub project nor in your paper.
Could you please cite our GitHub project or our paper? Thanks!

Not an issue - question on training set

Were there any attempts to train on a city-landscape dataset?
Could you give any guidance on the resources needed to reproduce the mountain-landscape results?
Is it 8x V100 GPUs for a week or two?

The way to get LPIPS?

Regarding Table 1 on page 7: is LPIPS computed between a real image and a generated image, or between two generated images?
Higher means further apart / more different; lower means more similar.
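
For reference, a minimal sketch of the pairwise (diversity) reading of LPIPS, assuming the lpips PyPI package; the random tensors below are only stand-ins for several outputs generated from the same input, not the paper's actual evaluation code:

    import itertools
    import torch
    import lpips  # pip install lpips

    # Perceptual distance network; inputs are expected in [-1, 1], shape N x 3 x H x W.
    loss_fn = lpips.LPIPS(net='alex')

    # Stand-ins for several outputs generated from the same input image.
    fakes = [torch.rand(1, 3, 256, 256) * 2 - 1 for _ in range(5)]

    # Average LPIPS over all unordered pairs of generated images:
    # higher = outputs are more different from each other = more diverse.
    dists = [loss_fn(a, b).item() for a, b in itertools.combinations(fakes, 2)]
    print(sum(dists) / len(dists))

If the table instead reports a real-vs-generated comparison, the pairs would simply be (real image, generated image) rather than pairs of generated images.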

Updated code release date?

I saw in #2 that you said you were busy with another project and that the code would come out sometime before November. It is now a week into November; do you know when it might be released?

Help with the code

Hello, I implemented the method described in the paper, but my results differ from those shown in the paper. Could you please share your implementation so I can find out where I went wrong? Thanks!

About a metric in your paper.

On page 16 of your DMIT paper, an LPIPS score for real data is listed in a table, but in my opinion there are no pairs in real data, so how do you obtain this value?
Thank you, and I hope for your reply.
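
One common way such a reference value is obtained (an assumption about the table, not something stated in this thread) is to average LPIPS over randomly sampled pairs of real test images; a brief sketch with stand-in tensors:

    import random
    import torch
    import lpips  # pip install lpips

    loss_fn = lpips.LPIPS(net='alex')

    # Stand-ins for real test images, scaled to [-1, 1].
    reals = [torch.rand(1, 3, 256, 256) * 2 - 1 for _ in range(20)]

    # Average LPIPS over randomly sampled pairs of distinct real images,
    # serving as an upper reference for generated-image diversity.
    pairs = [random.sample(reals, 2) for _ in range(100)]
    dists = [loss_fn(a, b).item() for a, b in pairs]
    print(sum(dists) / len(dists))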

Loss quickly becomes NaN

When training with a slightly modified version of the season_transfer code (see my fork), everything quickly becomes NaN on my dataset:

(epoch: 1, iters: 20, time: 1.415) , D_rand_prior: 3.942, G_rand_prior: 0.426, D_rand_enc: 3.968, G_rand_enc: 0.356, D_content: 1.079, G_content: 0.930, errRec: 2.804, errKl: 0.005, errStyle: 0.600, errContent: 1.191, errCyc: 3.077, errDiv: 0.298
(epoch: 1, iters: 40, time: 1.413) , D_rand_prior: 3.905, G_rand_prior: 0.371, D_rand_enc: 3.860, G_rand_enc: 0.659, D_content: 1.675, G_content: 2.137, errRec: 4.170, errKl: 0.003, errStyle: 0.836, errContent: 0.912, errCyc: 4.638, errDiv: 0.621
(epoch: 1, iters: 60, time: 1.409) , D_rand_prior: 3.911, G_rand_prior: 0.255, D_rand_enc: 3.873, G_rand_enc: 0.415, D_content: 0.725, G_content: 0.183, errRec: 3.414, errKl: 0.033, errStyle: 0.749, errContent: 0.760, errCyc: 3.771, errDiv: 0.406
(epoch: 1, iters: 80, time: 1.414) , D_rand_prior: 3.803, G_rand_prior: 0.625, D_rand_enc: 3.784, G_rand_enc: 0.628, D_content: 0.579, G_content: 0.180, errRec: 5.343, errKl: 0.018, errStyle: 0.949, errContent: 0.653, errCyc: 5.572, errDiv: 0.617
(epoch: 1, iters: 100, time: 1.416) , D_rand_prior: 3.807, G_rand_prior: 0.434, D_rand_enc: 3.769, G_rand_enc: 0.575, D_content: 0.616, G_content: 0.819, errRec: 3.402, errKl: 0.030, errStyle: 0.615, errContent: 0.664, errCyc: 4.447, errDiv: 0.444
(epoch: 1, iters: 120, time: 1.409) , D_rand_prior: nan, G_rand_prior: nan, D_rand_enc: nan, G_rand_enc: nan, D_content: nan, G_content: nan, errRec: nan, errKl: nan, errStyle: nan, errContent: nan, errCyc: nan, errDiv: nan
(epoch: 1, iters: 140, time: 1.408) , D_rand_prior: nan, G_rand_prior: nan, D_rand_enc: nan, G_rand_enc: nan, D_content: nan, G_content: nan, errRec: nan, errKl: nan, errStyle: nan, errContent: nan, errCyc: nan, errDiv: nan
(epoch: 1, iters: 160, time: 1.400) , D_rand_prior: nan, G_rand_prior: nan, D_rand_enc: nan, G_rand_enc: nan, D_content: nan, G_content: nan, errRec: nan, errKl: nan, errStyle: nan, errContent: nan, errCyc: nan, errDiv: nan
(epoch: 1, iters: 180, time: 1.409) , D_rand_prior: nan, G_rand_prior: nan, D_rand_enc: nan, G_rand_enc: nan, D_content: nan, G_content: nan, errRec: nan, errKl: nan, errStyle: nan, errContent: nan, errCyc: nan, errDiv: nan

Are there any hyperparameters you recommend tweaking to fix this? (My dataset has one domain of vector drawings and one of images, if that matters.)
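
One generic stabilization that sometimes helps with losses exploding to NaN (not the repository's own fix; the model and optimizer below are placeholders, not DMIT's actual training loop) is to clip gradient norms before each optimizer step and, if needed, lower the learning rate:

    import torch
    import torch.nn as nn

    model = nn.Linear(256, 256)                                  # placeholder network
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)    # a lower lr can also help
    x = torch.randn(8, 256)

    loss = model(x).pow(2).mean()
    loss.backward()
    # Clip the gradient norm so a single bad batch cannot blow up the weights.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()
    optimizer.zero_grad()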

Bug with --continue_train and its solution

Hey there, there is a bug when one wants to continue training a model: simply adding the --continue_train flag on the command line does not work.

To continue training, one has to:

  1. Go to /models/base_model.py and modify lines 64-65: remove the quotation marks around net_name.
  2. Add the following code snippet immediately after line 65:

         # Move the restored optimizer state tensors onto the GPU.
         for state in net_optimizer.state.values():
             for k, v in state.items():
                 if isinstance(v, torch.Tensor):
                     state[k] = v.cuda()

The second step is my temporary workaround. Without it, one might encounter an error like: RuntimeError: expected device cpu but got device cuda:0. Although this resolves the error, I think there should be a more elegant way.
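
A possibly cleaner alternative (a sketch under assumptions; the checkpoint path and variable names below are hypothetical, not the repository's actual loading code) is to pass map_location to torch.load, so that every tensor in the checkpoint, including the nested optimizer state, is placed on the target device as it is loaded:

    import torch
    import torch.nn as nn

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

    net = nn.Linear(256, 256).to(device)                 # placeholder network
    net_optimizer = torch.optim.Adam(net.parameters())   # placeholder optimizer

    # map_location moves all checkpoint tensors onto the target device while
    # loading, so no manual .cuda() loop over the state is needed afterwards.
    checkpoint = torch.load('checkpoints/optimizer.pth', map_location=device)  # hypothetical path
    net_optimizer.load_state_dict(checkpoint)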

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

/DMIT/models/base_model.py", line 246, in update_model
errDec_total.backward(retain_graph=True)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8, 256]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Has anybody else encountered this issue?
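
As the error hint suggests, enabling anomaly detection makes backward() report the forward operation whose saved tensor was later modified. A minimal, self-contained reproduction of this class of error (placeholder tensors, not the repository's code):

    import torch

    torch.autograd.set_detect_anomaly(True)  # makes backward() point at the offending forward op

    x = torch.randn(8, 256, requires_grad=True)
    y = x * 2
    z = y ** 2            # pow saves y (at its current version) for the backward pass
    y += 1                # in-place update bumps y's version counter
    z.sum().backward()    # raises the same kind of "modified by an inplace operation" RuntimeError

The usual fix is to replace the offending in-place update with its out-of-place form (e.g. y = y + 1) or to .clone() the tensor before modifying it.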
