Comments (8)

MendelXu commented on August 16, 2024

@duany049 First of all, I think you should update the code. There is a bug in the current code that makes the weight of the unsupervised loss always 4, while it should be controlled by semi_wrapper.train_cfg.unsup_weight.

Back to your case, the trend of the accuracy is determined mainly by the weight of the unsupervised loss. When the weight is too large, the accuracy will go down first and rise later. So instead of adjusting the learning rate, I think you should adjust the weight first. (Append --cfg-options semi_wrapper.train_cfg.unsup_weight=<YOUR_WEIGHT> to the training command and change the value.)
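
For reference, here is a minimal sketch of where this setting typically lives in a SoftTeacher-style Python config. Apart from semi_wrapper.train_cfg.unsup_weight, which is quoted from this thread, the field names and values are illustrative assumptions and may differ from the actual config in the repository:

```python
# Hedged sketch of the relevant config fragment; only semi_wrapper.train_cfg.unsup_weight
# comes from this thread, the other fields are illustrative placeholders.
semi_wrapper = dict(
    type="SoftTeacher",
    train_cfg=dict(
        # Weight on the unsupervised (pseudo-label) loss. This is the value overridden
        # on the command line with:
        #   --cfg-options semi_wrapper.train_cfg.unsup_weight=<YOUR_WEIGHT>
        unsup_weight=2.0,
    ),
)
```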

duany049 commented on August 16, 2024

OK, I'll try what you propose. I have two questions; could you help me with them?
Q1: My final result is unsup_acc: 94.8121, sup_acc: 94.1457. What should I do: increase unsup_weight or lower it?
Q2: How can I confirm that the results are normal? Both the unsup_acc and sup_acc indicators should exceed 99, right?

MendelXu commented on August 16, 2024

You can try to decrease it. But the training accuracy may not reflect the real performance of the detector. So instead of tuning for better training accuracy, it is better to prepare a validation set and check the true performance on it. It is not necessary to tune the model for accuracy over 99.

duany049 commented on August 16, 2024

Thank you.
This is the first time I have used semi-supervised learning, so I am not sure whether I understand it correctly.
Here is what I think: unlike supervised learning, where low training-set accuracy means the model's fitting ability is insufficient, low training-set accuracy is normal in semi-supervised learning.

Besides, I have two questions for you:
Q1: If I have a large amount of unlabeled data, do I need to increase the sampling ratio of unlabeled data to labeled data?
Q2: Does train.sample_ratio=[1, 1] represent the numbers of labeled and unlabeled samples respectively?

MendelXu commented on August 16, 2024

Q1: I think a larger sampling ratio is more efficient when there is far more unlabeled data than labeled data.
Q2: Yes. For example, if you change it to [1, 2], each batch will contain 1/3 labeled data and 2/3 unlabeled data.
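
As a quick sanity check of that arithmetic, here is a small illustrative snippet (not the project's actual sampler code) showing how a [1, 2] ratio would split a hypothetical batch:

```python
# Illustrative arithmetic only; the real sampler may be implemented differently.
sample_ratio = [1, 2]   # labeled : unlabeled
batch_size = 6          # hypothetical per-GPU batch size

total = sum(sample_ratio)
labeled = batch_size * sample_ratio[0] // total
unlabeled = batch_size * sample_ratio[1] // total

print(labeled, unlabeled)  # 2 4 -> 1/3 labeled and 2/3 unlabeled per batch
```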

duany049 commented on August 16, 2024

Happy Mid-Autumn Festival~
Besides, unsup_weight seems to have little effect on the final performance; it essentially adjusts the learning rate of the model across the different data sets, so I don't have much need to adjust it, do I?

MendelXu commented on August 16, 2024

It is hard to say whether unsup_weight has a small or large effect on the final performance. I think you should still try to adjust it once the other factors are settled (such as the learning rate, the pseudo-label threshold, and the sampling ratio).
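
For orientation, the knobs mentioned here usually sit in the config roughly as sketched below. Apart from unsup_weight and sample_ratio, which appear earlier in this thread, the field names, paths, and values are assumptions and should be checked against your actual config:

```python
# Hedged sketch grouping the tunable factors mentioned above. Field names other than
# unsup_weight and sample_ratio are assumptions; verify them against your config.
optimizer = dict(type="SGD", lr=0.01, momentum=0.9, weight_decay=0.0001)  # learning rate

semi_wrapper = dict(
    train_cfg=dict(
        unsup_weight=2.0,      # weight on the unsupervised loss
        pseudo_label_thr=0.9,  # pseudo-label score threshold (assumed name)
    ),
)

data = dict(
    sampler=dict(
        train=dict(sample_ratio=[1, 2]),  # labeled : unlabeled per batch (assumed path)
    ),
)
```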

duany049 commented on August 16, 2024

Thank you very much!
