
yzd-v / mgd

204 stars · 1 watcher · 23 forks · 1.77 MB

Masked Generative Distillation (ECCV 2022)

License: Apache License 2.0

Python 100.00%
image-classification instance-segmentation knowledge-distillation object-detection pytorch semantic-segmentation

mgd's People

Contributors

yzd-v


mgd's Issues

Discussion on the effect of mask area

Hi Zhendong,
MGD is an amazing work on distillation. I also noticed your new work, ViTKD.
In ViTKD, knowledge is distilled only from the unmasked area, while MGD distills the full area.
My questions are:
1) Why does ViTKD distill knowledge only from the unmasked area?
2) What are the difference and the relationship between the unmasked and masked areas in distillation?
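For context, the MGD objective these questions refer to can be sketched as follows. This is a minimal reading of the paper, not the repository's implementation: the channel count, mask granularity (per-pixel), and loss reduction are assumptions to verify against the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MGDLoss(nn.Module):
    """Sketch of Masked Generative Distillation: randomly mask the
    student feature, then ask a simple generation block to reconstruct
    the full (unmasked) teacher feature."""

    def __init__(self, channels=256, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Generation block as described in the paper: 3x3 Conv -> ReLU -> 3x3 Conv.
        self.generation = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feat_s, feat_t):
        n, _, h, w = feat_s.shape
        # Random per-pixel spatial mask: 1 keeps a location, 0 drops it.
        mask = (torch.rand(n, 1, h, w, device=feat_s.device)
                > self.mask_ratio).float()
        # Reconstruct the teacher feature from the masked student feature.
        rec = self.generation(feat_s * mask)
        # Per-sample sum-of-squares distance to the full teacher feature.
        return F.mse_loss(rec, feat_t, reduction='sum') / n
```

Note that even with mask_ratio=0 (no masking), the generation block still transforms the student feature before matching, which is relevant to the mask-ratio discussions below.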

About Distillation Loss

I have another question, so I'm opening a separate issue.

[Screenshot from 2022-09-13: loss_mgd_fpn_* curves]

The upper graph shows the distillation loss loss_mgd_fpn_* (blue is with the random mask, red is without it).
I notice that the distillation loss with the random mask is higher than without it, yet the mAP with the random mask is also higher (41.4 with vs. 41.3 without).
If the proposed loss is working well, shouldn't it be lower with the random mask than without?

Can you explain this? I'd appreciate it if you could recommend some reference material.

Sincerely,

Question about generative block

Thanks for your great work! I have some questions about the generative block (Sec. 5.4). Did you keep the generative block simple on purpose? What do you think about using more complex modules as the generative block (such as the GcBlock in FGD)? Thanks!

About the student network

Hi, I still have two questions: (1) I can't find the detailed design of the student network, including the masking operation and the generation block, since I do not use MMClassification. (2) As I understand it, at test time only the student network, including the masked-feature module and the generation module, is used, right?

About the generative block

Thanks for your work.

I'm curious why the generative block is just a simple stack of 3x3 convolutions.

In addition, a convolution is usually followed by a BatchNorm layer, but the generative block in your paper does not use any BatchNorm. What is the reason for this?

Looking forward to your reply!
Thank you~
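For reference, the plain block being discussed, and the BatchNorm variant the question proposes, would look roughly like the following. This is a sketch: the channel count is an illustrative FPN-style value, and the BN variant is the asker's proposal, not what the paper uses.

```python
import torch
import torch.nn as nn

channels = 256  # illustrative FPN channel count

# The generative block as described in the paper: 3x3 Conv -> ReLU -> 3x3 Conv.
plain_block = nn.Sequential(
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
)

# The Conv-BN-ReLU variant the question proposes (not from the paper).
bn_block = nn.Sequential(
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    nn.BatchNorm2d(channels),
    nn.ReLU(inplace=True),
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
)
```

One consideration with the BN variant: BatchNorm would re-standardize the reconstructed feature per mini-batch, which changes its scale relative to the teacher feature it must match.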

About Experiment Details

First of all, thank you for your awesome work!

I was very impressed by it, so I reproduced it with config/mgd/retina_rx101_64x4d_distill_retina_r50_fpn_2x_coco.py.
I got 41.4 mAP with batch size 16 (samples_per_gpu=8 on 2× A5000) but a lower mAP than the paper with batch size 32 (samples_per_gpu=16 on 2× A6000).
It's strange that the results differ so much depending on the batch size.
Can you share your experiment details?

Sincerely,

Possible error in api/train.py?

Hi @yzd-v,
I am trying to train with your released MGD code.
I am familiar with your FGD code, so I thought it would be easy to run MGD too.

When I train RetinaNet, an error occurs. I use mmcv-full==1.4.0 and mmdet==2.17.0.
[Screenshot: error traceback]
It seems that mmdet does not support the runner_type argument.

I am not sure whether this is a real error, so I am asking for your help.
After commenting out this line, the code trains fine. (The default mode is EpochBasedRunner.)

Thanks for your nice work, and I look forward to your reply!
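The workaround described above amounts to falling back on mmdet's default runner. In mmdet 2.x, the runner can also be selected explicitly in the config file itself; a hypothetical config fragment (not taken from this repo) would look like:

```python
# Hypothetical mmdet 2.x config fragment: rather than passing a
# `runner_type` argument (unsupported in mmdet==2.17.0), select the
# runner in the config. EpochBasedRunner is the default for 2x schedules.
runner = dict(type='EpochBasedRunner', max_epochs=24)
```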

KD loss does not decrease

Hi, have you tried training the student from scratch?

Also, does your KD loss converge during training?

When I try it on a segmentation task (ResNet50 → ResNet18), the KD loss does not converge if I distill the (N, 512, 36, 48) feature, so it does not help the student; it only changes slightly if I distill the first, larger layer.

Since my feature map is smaller, could that be the direct reason?

Could you kindly give me some ideas on this?

With best regards

How to choose the distillation loss weight

In the paper, 2×10⁻⁵ is adopted as the loss weight for detection. Before applying the weight, what should the raw MGD loss value be? For me, the MGD loss without the weight is around 5 at the start of training; after multiplying by the weight it becomes a very small number...
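One possible source of the magnitude mismatch above is the MSE reduction mode; whether MGD sums or averages over feature elements should be checked in the released code. The snippet below only illustrates why a sum-reduced feature MSE naturally pairs with a very small weight such as 2×10⁻⁵, while a mean-reduced one does not.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Illustrative FPN-like feature shapes: batch 2, 256 channels, 8x8 map.
s = torch.randn(2, 256, 8, 8)
t = torch.randn(2, 256, 8, 8)

# Mean reduction: averages over every element, stays O(1) in magnitude.
mean_loss = F.mse_loss(s, t, reduction='mean')

# Sum reduction divided by batch size: sums over C*H*W per sample.
sum_loss = F.mse_loss(s, t, reduction='sum') / s.size(0)

# The two differ exactly by the number of elements per sample.
ratio = (sum_loss / mean_loss).item()  # C*H*W = 256*8*8 = 16384
```

So a raw loss of "around 5" suggests a mean-style reduction, in which case the paper's tiny weight would indeed shrink it to near zero; with a sum-style reduction the unweighted loss would be thousands of times larger.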

Mask ratio and the generative block

Hello, I tried setting mask=0 and distilling with a 3x3 Conv-ReLU-3x3 Conv generative block. The final accuracy does not seem to drop; it is comparable to mask=0.5.
So I am a bit confused: what role does the mask play in distillation? Can we conclude that the accuracy gain comes mainly from the generative block?
Also, in Fig. 5 you report 71.41% accuracy with mask=0. What is the setting there? Is the generative block used?
Looking forward to your reply! Thank you!

can not find dist_train.sh

Hi, it seems that the dist_train.sh script for the classification task has not been uploaded. Could you please provide it? I want to reproduce the results, thanks!

The reproduced results for semantic segmentation

Hi, Thanks for your work!

I reproduced the results with psp_r101_distill_psp_r18_40k_512x512_city.py. I got 74.95 mIoU without any modification to the config file. When I set use_logit = False, I obtained 73.46 mIoU.

Two questions are below:

  1. Are the reported results of 74.10 and 73.63 in the paper on val set or test set?
  2. In Table 4, the input size of the student is 512x512, while in Section 4.3, the input size is 512x1024. Is this a typo?

I would appreciate your help! Thank you!

About the student loss

Hi, thank you for your nice work.
I am not familiar with knowledge distillation, so I cannot understand what "the original loss" in the paper refers to. For example, does it mean the classification loss of the student model in a classification task? I hope you can give me some guidance.
[Screenshot: loss equation from the paper]
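The composition being asked about can be sketched as follows, assuming "the original loss" denotes the student's standard task loss (here cross-entropy for classification, as an example); the distillation value and weight are placeholders, not numbers from the repo.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)            # student predictions (toy)
labels = torch.randint(0, 10, (4,))    # ground-truth classes (toy)

task_loss = F.cross_entropy(logits, labels)  # the "original" task loss
mgd_loss = torch.tensor(5.0)           # placeholder MGD distillation value
alpha = 2e-5                           # task-dependent loss weight

# Total training objective: original loss plus weighted distillation term.
total = task_loss + alpha * mgd_loss
```

The analogue for detection or segmentation would swap cross-entropy for the detector's or segmenter's usual losses; the distillation term is simply added on top during training.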

Some confusion about the semantic segmentation in MGD

Hi, Thank you for your interesting work!
I have encountered some confusion and seek your help.

  1. In your paper, the teacher PspNet-Res101 is trained for 80k iterations, while the code uses a 40k pre-trained model. Is this a typo in the paper?

  2. I referred to your training log "PspR18_TPspR101_73.73.log". It doesn't seem to match the config file "psp_r101_distill_psp_r18_40k_512x512_city.py", and there is no module like "neck_fpn_convs" in PSPNet. So, in the semantic segmentation task, which features are actually distilled?
    [Screenshot: training log excerpt]

Looking forward to your reply.
Thank you again!
