
Comments (8)

VelsLiu commented on July 18, 2024

Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!

from weighted-soft-label-distillation.

woshichase commented on July 18, 2024

Thanks for your attention. To keep consistency with the ImageNet experiments, the CIFAR-100 experiments are also run on the Overhaul repo (https://github.com/clovaai/overhaul-distillation). As described in our paper, the training settings are the same as CRD's. The loss implementation is the same as on ImageNet. Set alpha to 2.25 and T to 4, as described in Sec. 5. The pretrained teachers are re-trained on Overhaul using the same training settings as CRD. Note that results are averaged over 5 runs for CIFAR-100.
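For context, those settings plug into the standard KD objective: hard-label cross-entropy plus a temperature-scaled KL term weighted by alpha = 2.25 with T = 4. Below is a minimal NumPy sketch of that baseline loss; the per-sample weighting that gives WSL its name is deliberately omitted, and the function names are illustrative, not taken from the released code.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, alpha=2.25, T=4.0):
    """Hard-label cross-entropy plus alpha * T^2 * KL(teacher || student),
    both distributions softened by temperature T. The adaptive per-sample
    weighting from the WSL paper is intentionally not shown here."""
    n = len(labels)
    p_student = softmax(student_logits)
    ce = -np.log(p_student[np.arange(n), labels]).mean()
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return ce + alpha * T ** 2 * kl
```

When teacher and student logits coincide, the KL term vanishes and only the cross-entropy remains, which is a quick sanity check on any KD implementation.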


summertaiyuan commented on July 18, 2024

> Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!

Have you reproduced the results?


VelsLiu commented on July 18, 2024

> > Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!
>
> Have you reproduced the results?

No, I have not. How about you? I did not find much performance difference from the original KD.


summertaiyuan commented on July 18, 2024

Same here.

> > > Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!
> >
> > Have you reproduced the results?
>
> No, I have not. How about you? I did not find much performance difference from the original KD.

Me too, no difference from the original KD. It feels like bullshit.

This kind of paper is heavily packaged. The essence is to attenuate the teacher's KD term when the teacher is not very accurate. This idea is too simple. It's not likely to work either experimentally or theoretically. So it's not worth our time to study.
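Read literally, "attenuate the teacher's KD term when the teacher is not very accurate" amounts to a per-sample weight on the KL loss that shrinks on samples the teacher gets wrong. The sketch below illustrates that reading only; it is a hypothetical weighting, not the paper's actual formula.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def teacher_attenuation(teacher_logits, labels):
    """One way to attenuate the teacher's KD term: weight each sample's
    KL loss by the teacher's probability on the true class, so samples
    the teacher misclassifies contribute little. Illustrative only;
    the WSL paper's weighting scheme differs in detail."""
    p_teacher = softmax(teacher_logits)
    return p_teacher[np.arange(len(labels)), labels]
```

A confidently correct teacher prediction yields a weight near 1, while a confidently wrong one yields a weight near 0, which is the attenuation behavior the comment describes.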


VelsLiu commented on July 18, 2024

> Same here.
>
> > > > Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!
> > >
> > > Have you reproduced the results?
> >
> > No, I have not. How about you? I did not find much performance difference from the original KD.
>
> Me too, no difference from the original KD. It feels like bullshit.
>
> This kind of paper is heavily packaged. The essence is to attenuate the teacher's KD term when the teacher is not very accurate. This idea is too simple. It's not likely to work either experimentally or theoretically. So it's not worth our time to study.

Yeah, the main idea of the method is the weight. Previously I was just curious how a CE+KL loss with an adaptive weight could achieve such good performance. The author said they retrained the teacher; probably the results can only be reproduced with their pretrained teachers. So just move on.


woshichase commented on July 18, 2024

@summertaiyuan @VelsLiu
1. We have already responded on how to reproduce the results on CIFAR-100. It's more convincing to validate the idea on a large-scale dataset such as ImageNet, so to keep consistency with the ImageNet repo, we also run CIFAR-100 on the Overhaul repo and retrain all the models (including the teacher) using exactly the same settings as CRD. We are currently on a tight project schedule; you can refer to the attached files, which are the training logs downloaded from our training cluster.
log_cifar.zip

2. I totally disagree with the point 'This idea is too simple. It's not likely to work either experimentally or theoretically.'
A method's effectiveness should not be tied to its complexity. The work 'Focal Loss' [1] designs a concise, uncomplicated loss that effectively focuses on hard samples and prevents the easy samples from overwhelming training. The idea of our work came up two years ago during one of our projects. Simple though it might be, one can see its effectiveness by running our released code on ImageNet, which is a more convincing dataset for validation.
[1] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[C]// Proceedings of the IEEE International Conference on Computer Vision. 2017: 2980-2988.
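For reference, the focal loss from [1] scales each sample's cross-entropy by (1 − p_t)^γ, where p_t is the predicted probability of the true class, so easy high-confidence samples are down-weighted. A minimal sketch:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    """Focal loss from [1]: -(1 - p_t)^gamma * log(p_t). The
    (1 - p_t)^gamma factor down-weights easy, well-classified samples
    so hard samples dominate the gradient."""
    p_t = probs[np.arange(len(labels)), labels]
    return -((1.0 - p_t) ** gamma) * np.log(p_t)
```

Like the adaptive weight discussed in this thread, it is a simple multiplicative re-weighting of a standard loss, which is the analogy the author is drawing.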


summertaiyuan commented on July 18, 2024

> @summertaiyuan @VelsLiu
> 1. We have already responded on how to reproduce the results on CIFAR-100. It's more convincing to validate the idea on a large-scale dataset such as ImageNet, so to keep consistency with the ImageNet repo, we also run CIFAR-100 on the Overhaul repo and retrain all the models (including the teacher) using exactly the same settings as CRD. We are currently on a tight project schedule; you can refer to the attached files, which are the training logs downloaded from our training cluster.
> log_cifar.zip
>
> 2. I totally disagree with the point 'This idea is too simple. It's not likely to work either experimentally or theoretically.'
> A method's effectiveness should not be tied to its complexity. The work 'Focal Loss' [1] designs a concise, uncomplicated loss that effectively focuses on hard samples and prevents the easy samples from overwhelming training. The idea of our work came up two years ago during one of our projects. Simple though it might be, one can see its effectiveness by running our released code on ImageNet, which is a more convincing dataset for validation.
> [1] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[C]// Proceedings of the IEEE International Conference on Computer Vision. 2017: 2980-2988.

I sincerely apologize to you; I reproduced your results tonight.

I withdraw the apology.

