
Comments (8)

tchaton commented on September 22, 2024

cc @teddykoker


ananyahjha93 commented on September 22, 2024

@SkafteNicki


rohitgr7 commented on September 22, 2024

I think the docstring of each metric already explains whether it can be logged or not. Only scalars can be logged, so one can just check the docstring and see whether the metric returns a scalar. Also, if you are using a metric, you should know what it is computing and what it returns, right?
Regarding `.log`, the docs already mention that it only logs scalar values. Non-scalar values can be logged manually, as described in the docs.
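To make the distinction concrete, here is a minimal sketch of the two paths, assuming the PyTorch Lightning / torchmetrics APIs of that era (Accuracy and ConfusionMatrix without a task argument, validation_epoch_end, a TensorBoard-style self.logger.experiment); the module itself is illustrative only:

import torch
import torchmetrics
from pytorch_lightning import LightningModule

class LitModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 3)
        # Accuracy.compute() returns a 0-dim tensor, so it works with self.log
        self.accuracy = torchmetrics.Accuracy()
        # ConfusionMatrix.compute() returns a [num_classes, num_classes] tensor,
        # which self.log cannot convert to a scalar
        self.confmat = torchmetrics.ConfusionMatrix(num_classes=3)

    def forward(self, x):
        return self.layer(x)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        probs = torch.softmax(self(x), dim=-1)
        self.log("val_acc", self.accuracy(probs, y))  # scalar: fine for self.log
        self.confmat.update(probs, y)                 # only accumulate state here

    def validation_epoch_end(self, outputs):
        # log the non-scalar result manually via the raw logger object
        cm = self.confmat.compute()
        self.logger.experiment.add_text("confusion_matrix", str(cm.tolist()))
        self.confmat.reset()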


SkafteNicki commented on September 22, 2024

After giving this some thought, I think @rohitgr7 is right: it is up to the user to deal with non-scalar tensors and to be aware of what they are trying to log.
However, the error message when trying to log a non-scalar tensor with `self.log` could be better:

/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/logging.py in metrics_to_scalars(self, metrics)
     46         for k, v in metrics.items():
     47             if isinstance(v, torch.Tensor):
---> 48                 v = v.item()
     49 
     50             if isinstance(v, dict):

ValueError: only one element tensors can be converted to Python scalars

Instead, we could have something like:

try:
    v = v.item()
except ValueError as e:
    raise ValueError(
        'You are trying to log a non-scalar tensor with `self.log`, but only scalar tensors are supported. '
        'Please consult the documentation on how to manually log other types of data.'
    ) from e


NumesSanguis commented on September 22, 2024

With this issue I'm not asking `self.log()` to take care of the non-loggable case. What I'm asking for is a property to check against. Instead of adding a `self.log()` line for every metric that returns a scalar, I just want to loop over all registered Metrics. However, if one of these Metrics does not return a scalar, the whole thing fails.

When you are a single user writing all the code yourself, adding explicit exceptions is fine. But when sharing a codebase among a team (in a .py file), where less ML-technical people just want to say "add this metric" in a Jupyter Notebook, it isn't ideal if they need to change things in the .py file and possibly break them for others.
A property that can be checked would solve this.

Also, if you are using a metric, you should know what it is computing and what it returns, right?

True. I still think it's nicer to have it as a property, though. If I have stored somewhere that I have 3 output classes, and can therefore deduce that I get a 3x3 matrix from AUROC, it still relies on where we've stored this "3". Maybe for some reason I want to change that variable name, and then this part of the code would break.
A property, on the other hand, would keep working (also when sharing on e.g. a forum) and would allow checking whether the result can be logged or not.


Also, I cannot think of a specific metric right now, but say, for example, that there is a metric that returns a scalar when there are only 2 classes, but a non-scalar when there are more. Then you can't simply check this on the Metric instance.
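To illustrate the ask, here is a hypothetical sketch of how a generic logging loop could use such a property; returns_scalar does not exist in torchmetrics (it is precisely what this issue requests), and the helper below is made up for illustration, using the older no-task-argument metric constructors:

import torchmetrics

# the dict stands in for whatever registry of metrics the team shares
metrics = {
    "accuracy": torchmetrics.Accuracy(),                     # scalar result
    "confmat": torchmetrics.ConfusionMatrix(num_classes=3),  # num_classes x num_classes result
}

def log_registered_metrics(pl_module, metrics):
    for name, metric in metrics.items():
        # hypothetical property: True when compute() yields a 0-dim tensor
        if getattr(metric, "returns_scalar", False):
            pl_module.log(name, metric.compute())  # safe to pass to self.log
        else:
            # non-scalar metrics would need manual handling instead of self.log
            print(f"skipping non-scalar metric {name!r}")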


SkafteNicki commented on September 22, 2024

@NumesSanguis Metrics like Precision, Recall, etc. will at some point be updated so that their average argument can be None (similar to sklearn), which corresponds to returning the score for each class. These will therefore be metrics that can only be logged using self.log for some parameter combinations.
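For reference, this is the sklearn behaviour being pointed to: with average=None each class gets its own score, so the result stops being a scalar:

from sklearn.metrics import precision_score

target = [0, 2, 2, 2, 1, 0]
preds = [0, 1, 2, 2, 1, 0]

# one precision value per class: a non-scalar result that self.log could not handle
print(precision_score(target, preds, average=None))    # e.g. [1.  0.5 1. ]

# averaged down to a single number: fine for self.log
print(precision_score(target, preds, average="macro"))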

I completely understand the problem at hand. My only concern is that it would require going over each metric and specifying this property. The only automatic solution I can think of would be to always do an internal compute on the first batch, so that we can extract the shape before the user does anything else.
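A rough sketch of that idea, just to make it concrete; nothing below exists in torchmetrics, and probing a deep copy of the metric is only one possible way to implement it:

import copy
import torch

def produces_scalar(metric, example_preds, example_target):
    """Probe a metric with one example batch and report whether compute() is scalar.

    Illustrative only: works on a deep copy so the real metric's state is untouched.
    """
    probe = copy.deepcopy(metric)
    probe.update(example_preds, example_target)
    result = probe.compute()
    return isinstance(result, torch.Tensor) and result.numel() == 1

A logging helper could call this once on the first batch and cache the answer per metric.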


github-actions commented on September 22, 2024

Hi! Thanks for your contribution, great first issue!


SkafteNicki commented on September 22, 2024

With the separation of metrics from Lightning, this feature no longer really fits this package, as it is Lightning-specific.
Closing it.
