
About OHEM loss · torchsemiseg · 7 comments · CLOSED

charlescxk commented on August 23, 2024

About OHEM loss

from torchsemiseg.

Comments (7)

charlesCXK commented on August 23, 2024

Hi, some researchers in the semi-supervised segmentation area may like to use CE loss for Cityscapes (as you mentioned). However, OHEM is a common setting for supervised training on Cityscapes. Since it brings no extra computational cost or parameters during inference, why should we deliberately use a lower baseline (e.g. with CE loss)?

I also want to clarify two points:

  1. When compared with SOTA on Cityscapes, we use OHEM loss (on the labeled set) for all the methods, so the comparison is fair (their supervised baseline is exactly the same).
  2. If the baseline for semi-supervised learning is very low, the gain may be large, and it seems that the semi-supervised method has a very large impact on performance. However, is that true? I think we study semi-supervised learning in order to use unlabeled data to improve the performance of the model, not to show large gains over a low baseline.
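To make the discussion concrete: OHEM changes only how the training loss is averaged, keeping just the "hard" pixels where the network's confidence in the true class is low, so it adds nothing at inference time. The function below is a minimal NumPy sketch of this idea, not the TorchSemiSeg implementation; the `thresh` and `min_kept` values are illustrative assumptions.

```python
import numpy as np

def ohem_ce_loss(logits, labels, thresh=0.7, min_kept=256, ignore_index=255):
    """Online Hard Example Mining over per-pixel cross-entropy (sketch).

    logits: (N, C) per-pixel class scores, labels: (N,) integer targets.
    Pixels whose predicted probability for the true class is below
    `thresh` count as hard; at least `min_kept` pixels are always kept.
    """
    valid = labels != ignore_index
    logits, labels = logits[valid], labels[valid]
    # numerically stable softmax probabilities
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    true_prob = probs[np.arange(len(labels)), labels]
    # sort pixels from hardest (lowest true-class confidence) to easiest
    order = np.argsort(true_prob)
    n_keep = max(min_kept, int((true_prob < thresh).sum()))
    n_keep = min(n_keep, len(labels))
    kept = order[:n_keep]
    # mean CE over the kept (hard) pixels only
    return float(-np.log(true_prob[kept] + 1e-12).mean())
```

Easy pixels contribute nothing to the loss, so the gradient concentrates on ambiguous regions; at test time the network is unchanged, which is the "no computational cost during inference" point above.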


charlesCXK commented on August 23, 2024

Hi, for supervised training, we use OHEM loss on Cityscapes and CE loss on the VOC dataset, which is a common setting in semantic segmentation. We haven't tried CE loss on Cityscapes for supervised training.

For the CPS loss, we use CE loss on both datasets.
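For reference, cross pseudo supervision (CPS) trains two differently initialized networks, each supervised with plain CE against the other's hard pseudo label, which is why CE suffices there. A minimal NumPy sketch under assumed names (in practice gradients flow only into the network being supervised, which the training framework handles):

```python
import numpy as np

def softmax(x):
    z = x - x.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cps_loss(logits_a, logits_b):
    """Cross pseudo supervision between two networks (sketch).

    logits_a, logits_b: (N, C) per-pixel scores from networks A and B.
    Each network's soft prediction is penalized with CE against the
    other network's hard (argmax) pseudo label.
    """
    pa, pb = softmax(logits_a), softmax(logits_b)
    pseudo_a = pa.argmax(axis=1)   # hard pseudo labels from network A
    pseudo_b = pb.argmax(axis=1)   # hard pseudo labels from network B
    n = np.arange(len(pseudo_a))
    ce_a = -np.log(pa[n, pseudo_b] + 1e-12).mean()  # A supervised by B
    ce_b = -np.log(pb[n, pseudo_a] + 1e-12).mean()  # B supervised by A
    return float(ce_a + ce_b)
```

When the two networks agree confidently the loss is near zero; disagreement produces a large penalty that pushes their predictions toward consistency on unlabeled pixels.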


jinhuan-hit commented on August 23, 2024

In my opinion, for semi-supervised training, all methods use CE loss on both datasets. The baseline of DeepLab v3+ with ResNet-101 in the 1/8 Cityscapes setting is 72-73 in your paper. However, in A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation, the result is only 68.9.


charlesCXK commented on August 23, 2024

I think that if the supervised baseline is not trained well enough, we cannot tell where the gain brought by semi-supervised learning actually comes from.


jinhuan-hit commented on August 23, 2024

Yeah, I agree with you that semi-supervised learning should be studied on a stronger baseline. I'm sorry that I hadn't noticed you reproduced all the SOTA methods yourself. Maybe you could point this out on the benchmark, https://paperswithcode.com/task/semi-supervised-semantic-segmentation. Otherwise other people may be confused by the big margin. That's only my own opinion; please forgive me if I am bothering you.


charlesCXK commented on August 23, 2024

Hi, I know what you mean. However, the benchmark website you provided is just a reference. The comparisons in it are not fair at all; for example, they didn't even use the same data partition (i.e. the same 1/8 subset of PASCAL VOC).


jinhuan-hit commented on August 23, 2024

Thanks for your kind and quick reply.

