Comments (15)
Hello, I will give it a try this week, inspired by the COCO implementation: https://github.com/cocodataset/panopticapi/blob/master/panopticapi/evaluation.py
Any preliminary comments are most welcome, especially on the signature that the methods should have.
In any case I will submit a draft PR soon.
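To anchor the signature discussion, here is one possible functional interface mirroring the panopticapi encoding linked above. This is purely a strawman: the function name, argument names, and shapes are assumptions, not a settled API.

```python
from torch import Tensor

def panoptic_quality(preds: Tensor, target: Tensor) -> Tensor:
    """Strawman signature only, not an implementation.

    Assumes ``preds`` and ``target`` are ``(H, W, 2)`` integer tensors holding
    a ``(category_id, instance_id)`` pair per pixel, the per-pixel encoding
    used by the COCO panopticapi reference.
    """
    raise NotImplementedError
```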
Hi @ananyahjha93, are you working on this? If you're busy with other things, I could take a look at this metric :)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@ddrevicky I think you can give it a shot, thanks!
cc @teddykoker
I will most likely not have time to look at this now; if anyone else would like to take it on, feel free to do so :)
Hi! Thanks for your contribution, great first issue!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
A polite request to reopen the issue: PQ is an important metric and it would be great to have it.
Regarding the spirit of the implementation to adopt, I do have a few questions, since this is my first contribution to PL.
- Should the metric return a single float (the actual panoptic quality), or should it return a dict of detailed intermediate results as in the reference implementation in the COCO API? (See the sketch after this comment.)
- If I see small bugs/differences between the reference implementation and the reference paper, which one should I follow?
Reference paper
Reference implementation
My work so far
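For context on that first question: PQ (from the reference paper) factors into segmentation quality times recognition quality, so a dict-style return as in the COCO reference exposes the parts, while a scalar return keeps just the product. A minimal sketch of both quantities, with all names assumed:

```python
import torch

def pq_from_counts(iou_sum: torch.Tensor, tp: torch.Tensor, fp: torch.Tensor, fn: torch.Tensor) -> dict:
    """PQ = SQ * RQ = (sum of matched IoUs / TP) * (TP / (TP + 0.5*FP + 0.5*FN))."""
    sq = iou_sum / tp.clamp(min=1)                     # segmentation quality: mean IoU of matched pairs
    rq = tp / (tp + 0.5 * fp + 0.5 * fn).clamp(min=1)  # recognition quality: F1-style detection term
    return {"pq": sq * rq, "sq": sq, "rq": rq}         # dict style, as in panopticapi
```

Since `sq * rq` equals the single-scalar PQ, the scalar and dict conventions carry the same headline number.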
Answer from @justusschock on Discord, transcribed here for visibility:
Regarding your questions:
- Metrics (after the full computation, i.e. after compute has been called) usually return a single float/scalar tensor so that these values can easily be logged to a logger of your choice. Sometimes (like for a PR curve) this is not feasible because you can't reduce it to a single scalar, but if possible we should try to get it like that. Note that if reduction is None, we should get a scalar per sample of the current batch.
- That's a very good question. Not sure how much this potential difference impacts the overall value. Usually I'd go with the paper, but in your specific case I'd opt for the reference implementation, since COCO is the established de-facto standard, and for comparability and consistency I feel like we should match them.
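To make the scalar-return convention concrete, here is a minimal torchmetrics `Metric` skeleton. The state names and the (stubbed) update logic are placeholders, not the eventual implementation:

```python
import torch
from torchmetrics import Metric

class PanopticQualitySketch(Metric):
    """Accumulates the four PQ counts across batches; compute() returns a scalar."""

    def __init__(self):
        super().__init__()
        # dist_reduce_fx="sum" lets the states be summed across batches/processes.
        self.add_state("iou_sum", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("tp", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("fp", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("fn", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Placeholder: the real update would match segments and add to the counts.
        ...

    def compute(self) -> torch.Tensor:
        # A single scalar tensor, so the value logs cleanly, per the convention above.
        return self.iou_sum / (self.tp + 0.5 * self.fp + 0.5 * self.fn).clamp(min=1)
```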
For Boundary PQ (Panoptic Quality) see:
#1500 (comment)
https://bowenc0221.github.io/boundary-iou/
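Roughly, Boundary IoU replaces each mask with a thin band around its contour before computing IoU; Boundary PQ then plugs that boundary-aware IoU into the PQ matching in place of (or combined with, per the paper) plain mask IoU. A sketch of the band extraction, where the helper name is made up and the fixed `dilation` stands in for the paper's distance parameter (which is scaled relative to the image diagonal):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_region(mask: np.ndarray, dilation: int = 2) -> np.ndarray:
    """Pixels of `mask` within `dilation` pixels of its boundary."""
    mask = mask.astype(bool)
    # Eroding with border_value=0 also counts image-border pixels as boundary.
    eroded = binary_erosion(mask, iterations=dilation, border_value=0)
    return mask & ~eroded
```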
Reopening for missing items:
- better test coverage
- batched support (usage sketched below)
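For the batched-support item, usage might look like the following. The import path, the `things`/`stuffs` arguments, and the `(B, H, W, 2)` input layout follow the merged implementation as I understand it; treat the exact signature as an assumption and check the released docs:

```python
import torch
from torchmetrics.detection import PanopticQuality

# Assumed API: `things`/`stuffs` are sets of category ids; the last dim holds
# (category_id, instance_id) per pixel. Shapes/arguments may differ by release.
pq = PanopticQuality(things={0, 1}, stuffs={6, 7})
preds = torch.randint(0, 2, (2, 32, 32, 2))  # (batch, H, W, 2)
target = preds.clone()                       # perfect prediction -> PQ == 1.0
print(pq(preds, target))
```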
Ready to close this issue now?
So this does not include the matching of the predicted and ground truth thing segments?
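For what it's worth, segment matching is part of the PQ definition itself: a predicted and a ground-truth segment match iff their IoU exceeds 0.5, which the paper shows guarantees at most one match per segment. A brute-force sketch of that step (names assumed; the real metric additionally restricts matches to same-category segments and handles void regions, both omitted here):

```python
import torch

def match_segments(pred_masks: torch.Tensor, gt_masks: torch.Tensor) -> list:
    """pred_masks: (P, H, W) bool, gt_masks: (G, H, W) bool -> list of (p, g, iou)."""
    matches = []
    for p in range(pred_masks.shape[0]):
        for g in range(gt_masks.shape[0]):
            inter = (pred_masks[p] & gt_masks[g]).sum().item()
            union = (pred_masks[p] | gt_masks[g]).sum().item()
            iou = inter / union if union else 0.0
            if iou > 0.5:  # the PQ threshold; makes the matching unique
                matches.append((p, g, iou))
    return matches
```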
https://github.com/BrainLesion/panoptica