Comments (15)

niberger commented on June 21, 2024

Hello, I will give it a try this week, inspired by the COCO implementation: https://github.com/cocodataset/panopticapi/blob/master/panopticapi/evaluation.py

Any preliminary comments are most welcome, especially on the signature that the methods should have.
In any case I will submit a draft PR soon.
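
For discussion, here is a minimal sketch of one possible signature, following the usual update/compute pattern of torchmetrics; the class name, the things/stuffs arguments, and the (category_id, instance_id) input encoding are assumptions on my part, not a settled API:

```python
import torch
from torchmetrics import Metric


class PanopticQuality(Metric):
    """Hypothetical skeleton for discussion; names and shapes are assumptions."""

    def __init__(self, things: set, stuffs: set):
        super().__init__()
        self.things, self.stuffs = things, stuffs
        # Running sums that PQ needs: matched IoU plus TP/FP/FN counts.
        self.add_state("iou_sum", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("true_positives", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("false_positives", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("false_negatives", default=torch.tensor(0), dist_reduce_fx="sum")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # preds/target: (H, W, 2) maps holding (category_id, instance_id) per pixel.
        ...  # match same-category segments with IoU > 0.5 and accumulate the states

    def compute(self) -> torch.Tensor:
        denom = self.true_positives + 0.5 * self.false_positives + 0.5 * self.false_negatives
        return self.iou_sum / torch.clamp(denom, min=1.0)
```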

ddrevicky commented on June 21, 2024

Hi @ananyahjha93, are you working on this? If you’re busy with other things, I could take a look at this metric :)

stale commented on June 21, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

justusschock commented on June 21, 2024

@ddrevicky I think you can give it a shot, thanks!

cc @teddykoker

ddrevicky commented on June 21, 2024

I will most likely not have time to look at this now; if anyone else would like to take a look, feel free to do so :)

github-actions commented on June 21, 2024

Hi! Thanks for your contribution, and great first issue!

stale commented on June 21, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

InCogNiTo124 commented on June 21, 2024

A polite request to reopen this issue: PQ is an important metric and it would be great to have it.

niberger commented on June 21, 2024

Regarding the implementation approach to adopt, I do have a few questions, since this is my first contribution to PL.

  • Should the metric return a single float (the actual panoptic quality), or should it return a dict of detailed intermediate results, as in the reference implementation in the COCO API?
  • If I see small bugs/differences between the reference implementation and the reference paper, which one should I follow?

Reference paper
Reference implementation
My work so far

niberger commented on June 21, 2024

Answer from @justusschock on Discord, transcribed here for visibility:
Regarding your questions:

  • Metrics (after the full computation, i.e. after compute has been called) usually return a single float/scalar tensor so that these values can easily be logged to a logger of your choice. Sometimes (as for a PR curve) this is not feasible because you can’t reduce it to a single scalar, but where possible we should aim for that. Note that if reduction is None, we should get a scalar per sample of the current batch.
  • That’s a very good question. Not sure how much this potential difference impacts the overall value. Usually I’d go with the paper, but in your specific case I’d opt for the reference implementation, since COCO is the established de facto standard, and for comparability and consistency I feel we should match it.
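
For context, the single scalar defined in the reference paper is

    PQ = Σ_{(p, g) ∈ TP} IoU(p, g) / ( |TP| + ½|FP| + ½|FN| )

and it factors as PQ = SQ × RQ (segmentation quality times recognition quality), so returning one scalar from compute() is consistent with the paper, while the per-class breakdown from the COCO implementation could still be exposed as an optional extra.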

bhack commented on June 21, 2024

For Boundary PQ (Panoptic Quality) see:
#1500 (comment)
https://bowenc0221.github.io/boundary-iou/
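
For anyone curious, here is a hedged sketch of the core idea behind Boundary IoU (not the authors’ implementation): restrict the IoU to a thin band around each mask’s contour, obtained by eroding the mask; erosion of a binary mask can be done in PyTorch by max-pooling the complement. The fixed dilation default below is an assumption; the paper scales the band width with the image diagonal.

```python
import torch
import torch.nn.functional as F


def boundary_region(mask: torch.Tensor, dilation: int = 3) -> torch.Tensor:
    """Boundary band of a binary (H, W) mask: the pixels removed by eroding
    the mask with a (2 * dilation + 1) square structuring element."""
    m = mask.float()[None, None]  # (1, 1, H, W) for pooling
    kernel = 2 * dilation + 1
    # Erosion of a binary mask == complement of max-pooling the complement.
    eroded = 1 - F.max_pool2d(1 - m, kernel, stride=1, padding=dilation)
    return (m - eroded).squeeze(0).squeeze(0).bool()


def boundary_iou(pred: torch.Tensor, target: torch.Tensor, dilation: int = 3) -> float:
    """IoU computed only on the boundary bands of two binary (H, W) masks."""
    bp = boundary_region(pred, dilation)
    bt = boundary_region(target, dilation)
    union = (bp | bt).sum().item()
    return (bp & bt).sum().item() / union if union > 0 else 0.0
```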

justusschock commented on June 21, 2024

Reopening for missing things:

  • better test coverage
  • batched support
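
A usage sketch of batched updates, under the assumption that the merged metric lives at torchmetrics.detection.PanopticQuality, takes things/stuffs category-id sets, and accepts (B, H, W, 2) tensors of (category_id, instance_id) pairs; please check these details against your installed torchmetrics version:

```python
import torch
from torchmetrics.detection import PanopticQuality  # import path may differ across versions

# Two 4x4 images; each pixel holds a (category_id, instance_id) pair.
# Category 0 is a "thing" (instances matter); category 1 is "stuff".
preds = torch.zeros(2, 4, 4, 2, dtype=torch.long)
preds[..., 0] = 1                # every pixel starts as stuff (category 1)
preds[:, :2, :2, 0] = 0          # a thing segment in the top-left corner...
preds[:, :2, :2, 1] = 1          # ...with instance id 1
target = preds.clone()           # identical target, so every match is perfect

pq = PanopticQuality(things={0}, stuffs={1})
pq.update(preds, target)         # one batched update covering both images
print(pq.compute())              # expect tensor(1.) for a perfect prediction
```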

marcocaccin commented on June 21, 2024

Ready to close this issue now? 😉

tommiekerssies commented on June 21, 2024

So this does not include the matching of the predicted and ground truth thing segments?
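
It does include the matching: the PQ paper notes that requiring IoU > 0.5 between same-category segments yields a unique matching (segments within one panoptic map are non-overlapping), so no assignment solver such as Hungarian matching is needed. A minimal sketch of that rule, with illustrative names that are not the torchmetrics internals:

```python
def match_segments(pred_masks, target_masks, iou_threshold=0.5):
    """PQ matching rule: a (pred, target) pair of same-category boolean (H, W)
    masks is a true positive iff their IoU exceeds 0.5; this makes each
    segment match at most one partner."""
    matches = []
    for i, p in enumerate(pred_masks):
        for j, t in enumerate(target_masks):
            union = (p | t).sum().item()
            iou = (p & t).sum().item() / union if union else 0.0
            if iou > iou_threshold:
                matches.append((i, j, iou))
    return matches  # unmatched preds become FPs, unmatched targets become FNs
```

Matched pairs contribute their IoU to the PQ numerator; unmatched predictions count as false positives and unmatched ground-truth segments as false negatives.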

aymuos15 commented on June 21, 2024

https://github.com/BrainLesion/panoptica
