Comments (7)
Hello. During training the model size is bigger because our method uses a model re-parameterization trick, as described in Section 3.3 of the paper. Once model training is finished, you need to fuse the weights in order to decrease the model size for inference. For fusion, you can use the scripts/export_inference_model.py script. You can find the details of how to use that script at the end of the README's Training section.
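For intuition, here is a minimal PyTorch sketch of the re-parameterization idea (parallel training-time branches folded into a single convolution for inference). This is illustrative only: MI-GAN's actual fusion is done by scripts/export_inference_model.py, and the class and method names below are hypothetical.

```python
# Illustrative only: a generic RepVGG-style fusion of parallel conv branches into
# one convolution for inference. MI-GAN's actual fusion lives in
# scripts/export_inference_model.py; the class below is hypothetical.
import torch
import torch.nn as nn


class RepBlock(nn.Module):
    """Training-time block: a 3x3 conv plus a parallel 1x1 conv branch."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.conv3(x) + self.conv1(x)

    def fuse(self) -> nn.Conv2d:
        """Fold both branches into a single 3x3 conv that produces identical outputs."""
        fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels, 3, padding=1)
        # A 1x1 kernel is a 3x3 kernel that is zero everywhere except the center,
        # so pad it and add it to the 3x3 weights; the biases simply add up.
        fused.weight.data = self.conv3.weight.data + nn.functional.pad(self.conv1.weight.data, [1, 1, 1, 1])
        fused.bias.data = self.conv3.bias.data + self.conv1.bias.data
        return fused


block = RepBlock(8).eval()
x = torch.randn(1, 8, 16, 16)
assert torch.allclose(block(x), block.fuse()(x), atol=1e-5)  # fused conv matches the two-branch block
```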
Thank you so much for the clarification. I wanted to fine-tune the model instead of training the whole model again. The layers don't match exactly, so I was wondering what the best way to do that is?
Training checkpoints are provided in this Google Drive folder. You can download the corresponding (unfused) training checkpoint from there and use it for fine-tuning.
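A minimal sketch of picking up an unfused training checkpoint for fine-tuning, assuming a StyleGAN2-ADA-style snapshot pickle holding 'G', 'D' and 'G_ema' entries; the filename and keys here are assumptions, so check the repo's training code (and its resume option, if present) for the exact format.

```python
# A minimal sketch of loading an unfused training checkpoint for fine-tuning.
# The filename and dictionary keys are assumptions, not the repo's guaranteed format.
import pickle

import torch

# Run this inside the repo so the pickled network classes can be resolved.
with open('network-snapshot-training.pkl', 'rb') as f:  # hypothetical filename
    snapshot = pickle.load(f)

G, D, G_ema = snapshot['G'], snapshot['D'], snapshot['G_ema']

# Fine-tune with a reduced learning rate so the pretrained weights are not destroyed.
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.99))
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.0, 0.99))
```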
Thank you @AndranikSargsyan! One more question: in the code, can you tell me what the G_ema variable signifies? I thought the G variable was the MI-GAN generator model and D was the discriminator.
G_ema keeps track of an exponential moving average (EMA) of the generator weights during training. Exponential moving averaging of the generator weights is a popular strategy for improving generator performance.
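For intuition, a minimal PyTorch sketch of this averaging (not the repo's exact implementation): after every optimizer step the EMA copy is nudged toward the current generator weights, and the EMA copy is the one used for evaluation and export.

```python
# Minimal sketch of maintaining an EMA copy of the generator (illustrative only).
import copy
import torch
import torch.nn as nn

def update_ema(G: nn.Module, G_ema: nn.Module, beta: float = 0.999):
    """Nudge the EMA copy toward the current generator weights."""
    with torch.no_grad():
        for p_ema, p in zip(G_ema.parameters(), G.parameters()):
            p_ema.copy_(p.lerp(p_ema, beta))   # beta * p_ema + (1 - beta) * p
        for b_ema, b in zip(G_ema.buffers(), G.buffers()):
            b_ema.copy_(b)                     # buffers (e.g. norm statistics) are copied directly

G = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))  # stand-in for the generator
G_ema = copy.deepcopy(G).eval().requires_grad_(False)     # frozen, averaged copy

# ... after each optimizer step on G:
update_ema(G, G_ema)
```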
Thank you so much for your quick response @AndranikSargsyan. One final thing I wanted to ask you about is the training phase. In the code, I see that there are two instances of G (Gmain and GReg) and D (Dmain and DReg), since in the default setting the regularization intervals are set to 4 and 16 respectively. I am unable to understand why we have two instances of the generator and discriminator models in this case. Really appreciate your help!
There are actually not two instances of the generator and discriminator (apart from G_ema for the generator). Gmain/Greg and Dmain/Dreg are just strings that signify the current phase of training: Gmain/Dmain mean that no regularization should be included in the loss computation of the current iteration, while Greg/Dreg mean that the loss computation should include a regularization component, such as the R1 penalty used in training the MI-GAN discriminator.
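For intuition, a minimal sketch of how such phase-based lazy regularization typically looks in StyleGAN2-style training loops; the intervals and the R1 penalty below are illustrative, not copied from the repo.

```python
# Illustrative lazy regularization with named phases, in the spirit of
# StyleGAN2-style training loops (not the repo's exact code).
# 'Gmain'/'Dmain' run the plain adversarial losses; 'Greg'/'Dreg' run only the
# regularization terms, executed every G_reg_interval / D_reg_interval steps.
import torch

G_reg_interval, D_reg_interval = 4, 16

def phases_for_step(step: int):
    """Return the loss phases to run at this training step."""
    phases = ['Gmain', 'Dmain']
    if step % G_reg_interval == 0:
        phases.append('Greg')
    if step % D_reg_interval == 0:
        phases.append('Dreg')
    return phases

def r1_penalty(D, real_img, gamma: float = 10.0):
    """R1 regularization: penalize the discriminator's gradient on real images."""
    real_img = real_img.detach().requires_grad_(True)
    logits = D(real_img)
    grads, = torch.autograd.grad(logits.sum(), real_img, create_graph=True)
    return (gamma / 2) * grads.square().sum(dim=[1, 2, 3]).mean()
```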
Related Issues (11)
- How do I train myself with labeled data sets? HOT 1
- problem when loading model HOT 1
- problem when export inference model HOT 2
- How to train on custom dataset If I didn't have Co-Mod-GAN pretrain model on this dataset ? HOT 3
- IOPaint add MI-GAN model support HOT 1
- Some questions about MI-GAN
- License HOT 1
- Bad results HOT 1
- Differences in indicators between pt model and onnx model on the same data set HOT 1
- An Error in loading migan_512_places2.pt