yzd-v / mgd
Masked Generative Distillation (ECCV 2022)
License: Apache License 2.0
Hi Zhendong,
MGD is an amazing work on distillation. I also noticed your new work, ViTKD.
In ViTKD, knowledge is distilled only from the unmasked area, while MGD uses the full area.
My questions are:
1) Why does ViTKD distill knowledge only from the unmasked area?
2) What are the difference and the relationship between the unmasked and masked areas in distillation?
I have another question, so I'm opening an issue like this.
The graph above shows the distillation loss loss_mgd_fpn_* (the blue curve is with the random mask, the red one is without it).
I notice that the distillation loss with the random mask is higher than without it, yet the mAP with the random mask is also higher (41.4 w/ vs. 41.3 w/o).
If the proposed loss is working well, shouldn't it be lower than the loss without the random mask?
Can you explain this? I'd appreciate it if you could recommend some reference materials.
Sincerely,
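One intuition for the higher curve: with the random mask the generator must recover teacher features from a partially zeroed student map, which is a strictly harder target than matching them from the full map, so a larger raw loss value does not imply worse distillation. A toy numpy sketch of this effect (mask ratio 0.5 as in the paper; the generative block is omitted for brevity, so this only illustrates why the masked objective starts out numerically larger):

```python
import numpy as np

rng = np.random.default_rng(0)
teacher = rng.standard_normal((4, 8, 8))                  # toy teacher feature map
student = teacher + 0.1 * rng.standard_normal((4, 8, 8))  # student already close to teacher

# Without the random mask: the full student feature is compared to the
# teacher, so the loss is small.
loss_no_mask = np.mean((student - teacher) ** 2)

# With a random mask (ratio 0.5): half the pixels are zeroed before
# reconstruction; recovering them is harder, so the logged loss is larger
# even though the student is being trained on a more useful task.
mask = rng.random((1, 8, 8)) < 0.5
masked_student = np.where(mask, 0.0, student)
loss_with_mask = np.mean((masked_student - teacher) ** 2)

assert loss_with_mask > loss_no_mask
```

The comparison only says the two runs optimize different objectives, so their loss magnitudes are not directly comparable; the mAP is the fair metric.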
Thanks for your great work! I have some questions about the generative block (Sec. 5.4). Did you keep the generative block simple on purpose? What do you think about using more complex modules as generative blocks (such as the GcBlock in FGD)? Thanks!
Hi, I still have two questions: (1) I can't find the detailed design of the student network, including the masked operation and the generation module, since I do not use MMClassification. (2) As I understand it, at test time only the student network, including the masked-feature module and the generation module, is used, right?
How do you guarantee that the heights and widths of the student and teacher feature maps are equal?
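For context, the spatial sizes usually match by construction: teacher and student see the same input resolution and use FPN levels with the same strides, so only the channel dimension can differ, and that is typically aligned with a 1x1 convolution. A minimal sketch, assuming a hypothetical 256-channel student level and 512-channel teacher level (the weights here are random placeholders, not the real learned align layer):

```python
import numpy as np

rng = np.random.default_rng(0)
student = rng.standard_normal((256, 32, 32))       # one student FPN level (C, H, W)
w_align = rng.standard_normal((512, 256)) * 0.05   # hypothetical 1x1-conv weights

# A 1x1 convolution is just a per-pixel linear map over channels:
# it changes C (256 -> 512) and leaves H and W untouched, so after
# alignment the student map has exactly the teacher's shape.
aligned = np.einsum('oc,chw->ohw', w_align, student)

assert aligned.shape == (512, 32, 32)
```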
Thanks for your work.
I'm curious why the generative block is just a simple stack of 3x3 convolutions.
In addition, a convolution is usually followed by a BatchNorm module, but the generative block in your paper does not use any BatchNorm. What is the reason for this?
Looking forward to your reply!
Thank you~
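For reference, the generator under discussion is just Conv3x3 → ReLU → Conv3x3 with no normalization layers. A minimal numpy sketch of that structure (weights are random placeholders; the real implementation uses learned PyTorch layers):

```python
import numpy as np

def conv3x3(x, w):
    """Plain 3x3 'same' convolution, stride 1, no BatchNorm.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3)."""
    c_out = w.shape[0]
    h, wd = x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))   # zero-pad H and W by 1
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(x.shape[0]):
            for di in range(3):
                for dj in range(3):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + h, dj:dj + wd]
    return out

def generative_block(x, w1, w2):
    """Conv3x3 -> ReLU -> Conv3x3, mirroring the simple generator
    described in the paper (no normalization layers)."""
    return conv3x3(np.maximum(conv3x3(x, w1), 0.0), w2)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))        # masked student feature
w1 = rng.standard_normal((8, 8, 3, 3)) * 0.1
w2 = rng.standard_normal((8, 8, 3, 3)) * 0.1
out = generative_block(feat, w1, w2)
assert out.shape == feat.shape                 # spatial size and channels preserved
```

Note the block keeps the feature shape unchanged, so its output can be compared directly to the teacher feature.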
First of all, thank you for your awesome work!
I was very impressed by it, so I reproduced it with config/mgd/retina_rx101_64x4d_distill_retina_r50_fpn_2x_coco.py.
I got 41.4 mAP with batch size 16 (samples_per_gpu=8 on 2× A5000), but a lower mAP than the paper with batch size 32 (samples_per_gpu=16 on 2× A6000).
It's strange that the results differ so much depending on the batch size.
Can you tell me more about your experimental details?
Sincerely,
Hi, @yzd-v
I am trying to train your released MGD code.
I am familiar with your FGD code, so I thought it would be easy to run MGD too.
When I train RetinaNet, an error happens. I use mmcv-full==1.4.0 and mmdet==2.17.0.
It seems that mmdet does not support runner_type.
I am not sure whether this is a real bug, so I am asking for your help.
After commenting out this line, the code trains fine. (The default mode is EpochBasedRunner.)
Thanks for your nice work, and I look forward to your reply!
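For anyone hitting the same mismatch: in mmdet 2.x configs the runner is normally selected via a `runner` field rather than a separate `runner_type` argument. A hedged sketch of the standard config fragment (max_epochs=24 corresponds to the 2x schedule; check your installed mmdet/mmcv versions before relying on it):

```python
# Standard mmdet 2.x schedule fragment: an epoch-based runner
# training for 24 epochs (the "2x" COCO schedule).
runner = dict(type='EpochBasedRunner', max_epochs=24)

assert runner['type'] == 'EpochBasedRunner'
assert runner['max_epochs'] == 24
```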
Hi, have you tried training the student from scratch?
Also, does your KD loss converge during training?
When I try it on a segmentation task (ResNet50 → ResNet18), the KD loss does not converge if I distill the (N, 512, 36, 48) feature, so it does not help the student; the result only changes slightly if I distill the first, larger layer.
Since my feature map is smaller, is this the direct reason?
Would you kindly give me some ideas on this?
With best regards
In the paper, you adopt 2×10⁻⁵ as the loss weight for detection, so before applying the weight, what is the correct MGD loss value? For me, the MGD loss without the loss weight is around 5 at the beginning; if I multiply it by the weight, it becomes a very small number...
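The scale can be checked directly: with the reported initial raw loss of about 5 and the paper's detection weight of 2×10⁻⁵, the weighted term is on the order of 1e-4, so being small relative to the task loss is expected rather than a sign of a bug:

```python
raw_mgd_loss = 5.0   # approximate unweighted MGD loss early in training (from the question)
alpha = 2e-5         # loss weight used for detection in the paper
weighted = raw_mgd_loss * alpha
print(weighted)      # ≈ 1e-4
```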
Hello, I tried setting mask=0 and distilling with a 3x3Conv-ReLU-3x3Conv generative block. The final accuracy does not seem to drop and is comparable to mask=0.5.
So I am a bit confused: what role does the mask play in distillation? Can we say the accuracy gain mainly comes from the generative block?
Also, in Fig. 5 you report 71.41% accuracy with mask=0. What is the setting there? Is the generative block used?
Looking forward to your reply! Thanks!
Hi, it seems that the script dist_train.sh for the classification task is not uploaded. Could you please provide it? I want to reproduce the results, thanks!
Hi, thanks for your work!
I reproduced the results with psp_r101_distill_psp_r18_40k_512x512_city.py and got 74.95 mIoU without any modification to the config file. I also set use_logit = False and obtained 73.46 mIoU.
Two questions are below:
I would appreciate it if you could help me! Thank you!
Hi, thank you for your interesting work!
I have run into some confusion and would like your help.
In your paper, the teacher PspNet-Res101 is trained for 80k iterations, while in the code a 40k pre-trained model is used. Is this a typo in the paper?
I looked at your training log "PspR18_TPspR101_73.73.log". It does not seem to match the config file "psp_r101_distill_psp_r18_40k_512x512_city.py", and there is no module like "neck_fpn_convs" in PSPNet. So, in the semantic segmentation task, which features are actually distilled?
Looking forward to your reply.
Thank you again!