pytorch-goodies's People

Contributors

cnheider, kevinzakka, mfmezger, ranamihir

pytorch-goodies's Issues

Implementing Max Norm Correctly

Hi! Thanks for providing this resource! I think I found a slight error in one of your "goodies."

In your implementation of the max norm constraint, you take the norm across dimension 0 of the tensor, i.e., you take the norm of each column.

In the original paper that introduces the max norm constraint, the authors describe it as "constraining the norm of the incoming weight vector at each hidden unit to be upper bounded by a fixed constant c" (Srivastava et al., 2014).

Therefore, if layer $L$ has $n$ hidden units, each with $k$ inputs, we want to take $n$ norms of $k$-dimensional weight vectors. The weight parameter of a linear layer is stored as a two-dimensional tensor (out_features x in_features). In terms of the above variables, this is an $n \times k$ tensor; therefore, we want to take the norm of each row. To do this in PyTorch, we need to take the norm across dimension 1.
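
A minimal sketch of the corrected constraint (max_norm_ is a hypothetical helper, assuming a 2-D weight of shape (out_features, in_features) as in nn.Linear):

    import torch

    def max_norm_(weight: torch.Tensor, max_val: float = 3.0, eps: float = 1e-8) -> None:
        # One norm per hidden unit: each row of an (out_features, in_features)
        # weight is the incoming weight vector of one unit, so we reduce
        # across dim=1, not dim=0.
        with torch.no_grad():
            norm = weight.norm(2, dim=1, keepdim=True)
            desired = norm.clamp(max=max_val)
            weight.mul_(desired / (norm + eps))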

Orthogonal regularization is wrong

According to the arXiv paper (https://arxiv.org/pdf/1609.07093.pdf), the orthogonal regularization should be:
orth_loss = orth_loss + (reg * sym.abs().sum())
instead of:
orth_loss = orth_loss + (reg * sym.sum())
since the paper takes the absolute value of each entry. Without the abs, negative entries of sym cancel positive ones, and the regularizer can even become negative.
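
A minimal sketch of the corrected regularizer, assuming it loops over the model's 2-D weight matrices (the loop itself is an assumption, not the repo's exact code):

    import torch
    import torch.nn as nn

    def orth_regularization(model: nn.Module, reg: float = 1e-6) -> torch.Tensor:
        orth_loss = torch.zeros(1)
        for name, param in model.named_parameters():
            if 'bias' not in name and param.dim() == 2:
                # sym = W W^T - I measures deviation from row-orthogonality.
                sym = param @ param.t() - torch.eye(param.shape[0])
                # abs() keeps negative entries from cancelling positive ones.
                orth_loss = orth_loss + reg * sym.abs().sum()
        return orth_loss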

One hot encoding

I had a question about how one hot encoding is done.

Why is it done like this:

true_1_hot = torch.eye(num_classes + 1)[true.squeeze(1)]

when it could be implemented like this:

true_1_hot = F.one_hot(true, num_classes + 1)

Am I missing something, or was F.one_hot added later?
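
For reference, F.one_hot was indeed added later (in PyTorch 1.1), and the two forms agree. A small check, assuming integer labels of shape (N, 1):

    import torch
    import torch.nn.functional as F

    num_classes = 3
    true = torch.tensor([[0], [2], [1]])  # hypothetical labels, shape (N, 1)

    # Identity-matrix indexing, as in the repo:
    a = torch.eye(num_classes + 1)[true.squeeze(1)]
    # F.one_hot produces the same rows (cast to float for comparison):
    b = F.one_hot(true.squeeze(1), num_classes + 1).float()
    assert torch.equal(a, b)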

Beware of torch.FloatTensor(1); use torch.zeros(1) instead

Again regarding the orthogonal regularization code:
orth_loss = Variable(torch.FloatTensor(1), requires_grad=True)
Using torch.FloatTensor here is dangerous, as noted by soumith (pytorch/tutorials#41).

Basically, torch.FloatTensor(1) creates a tensor with uninitialized memory instead of zeros. The memory can contain any garbage; this is intentional. If you want a zeroed tensor, use torch.zeros(1).
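
A minimal illustration of the difference (the garbage value will vary from run to run):

    import torch

    uninit = torch.FloatTensor(1)  # uninitialized memory: could hold any value
    print(uninit)                  # e.g. tensor([5.6052e-45]) -- garbage, not 0

    # Safe zero-initialized accumulator (Variable is no longer needed):
    orth_loss = torch.zeros(1, requires_grad=True)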

Typo in dice_loss method

Hi,
Thanks for the awesome repository.

I happened to notice the following typo in the dice_loss method in losses.py

line 84: probas = F.softmax(probas, dim=1)

It should be probas = F.softmax(logits, dim=1)

Kindly make the change.
Thanks
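
For context, a self-contained sketch of the fixed logic (probas_from_logits is a hypothetical helper, and the num_classes branch is assumed from the issue discussion):

    import torch
    import torch.nn.functional as F

    def probas_from_logits(logits: torch.Tensor) -> torch.Tensor:
        # Mirrors the fix at losses.py line 84: the softmax must consume
        # the raw logits, since probas is not yet defined at that point.
        num_classes = logits.shape[1]
        if num_classes == 1:
            return torch.sigmoid(logits)
        return F.softmax(logits, dim=1)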

error with dice loss

When I use dice loss with PyTorch 1.3, I get the following error when I try to call loss.backward():

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Any idea why?
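
For context: this RuntimeError is raised whenever backward() is called on a tensor that has no grad_fn, i.e. nothing in its computation required gradients. A minimal reproduction, independent of the repo's code:

    import torch

    logits = torch.randn(2, 3, 4, 4)  # requires_grad defaults to False
    loss = logits.sum()               # so loss has no grad_fn
    loss.backward()                   # RuntimeError: element 0 of tensors does
                                      # not require grad and does not have a grad_fn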

error when using jaccard_loss

Many thanks for your code.
When I use the "jaccard_loss" function, the following error occurs at line 110 (num_classes is 1):
too many indices for tensor of dimension 2

The size of my true input is 3177, and the size of logits is also 3177.
Can you help me solve this problem?
Thanks
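
For context, a sketch of the input shapes the segmentation losses appear to expect, judging from the identity-matrix one-hot indexing quoted in the issue above (the exact shapes are an assumption, not confirmed against the repo docs):

    import torch

    # Assumed shapes: logits as (B, C, H, W) raw scores,
    # true as (B, 1, H, W) integer class labels.
    B, C, H, W = 2, 2, 8, 8
    logits = torch.randn(B, C, H, W, requires_grad=True)
    true = torch.randint(0, C, (B, 1, H, W))
    # Flat 1-D tensors of length 3177 do not match these shapes and
    # would break the internal indexing, triggering the reported error.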

dice loss for multi-class

Hi

Thanks for sharing your work. Your dice loss will also work for multiple labels, correct?

Thanks
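
A hedged usage sketch for the multi-class case (the (true, logits) argument order and the losses import path are assumptions):

    import torch
    from losses import dice_loss  # repo-local module; import path assumed

    B, C, H, W = 2, 4, 8, 8                    # four classes
    logits = torch.randn(B, C, H, W, requires_grad=True)
    true = torch.randint(0, C, (B, 1, H, W))   # integer labels
    loss = dice_loss(true, logits)             # softmax over dim=1 covers C > 1
    loss.backward()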
