kevinzakka / pytorch-goodies
PyTorch Boilerplate For Research
Hi! Thanks for providing this resource! I think I found a slight error in one of your "goodies."
In your implementation of the max norm constraint, you take the norm across dimension 0 of the weight tensor, i.e., the norm of each column.
In the original paper that introduces the max norm constraint, the authors describe it as "constraining the norm of the incoming weight vector at each hidden unit to be upper bounded by a fixed constant c" (Srivastava et al., 2014).
Therefore, if the layer is an nn.Linear with weight shape (out_features, in_features), each hidden unit's incoming weight vector is a row, and the norm should be taken along dimension 1 instead.
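For reference, here is a minimal sketch of an in-place max-norm update that takes the norm along dim=1 instead; the helper name max_norm_, the no_grad wrapper, and the param.dim() > 1 filter are my additions, not the repo's code:

import torch

def max_norm_(model, max_val=3.0, eps=1e-8):
    # In-place max-norm constraint: clamp the L2 norm of each hidden unit's
    # incoming weight vector (a row of an nn.Linear weight) to max_val.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if 'bias' not in name and param.dim() > 1:
                norm = param.norm(2, dim=1, keepdim=True)  # one norm per row, i.e. per hidden unit
                desired = torch.clamp(norm, max=max_val)
                param.mul_(desired / (eps + norm))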
According to the arXiv paper https://arxiv.org/pdf/1609.07093.pdf, the orthogonal regularization should be:
orth_loss = orth_loss + (reg * sym.abs().sum())
instead of:
orth_loss = orth_loss + (reg * sym.sum())
since the absolute value is taken per the paper.
Without abs, the loss can also take negative values.
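A hedged sketch of the whole penalty with the abs in place (flattening each weight to 2-D and the reg value are my assumptions, not the repo's code):

import torch

def orth_regularization(model, reg=1e-6):
    # Soft orthogonality penalty reg * sum(|W W^T - I|) from arXiv:1609.07093;
    # note the .abs() before the sum.
    device = next(model.parameters()).device
    orth_loss = torch.zeros(1, device=device)
    for name, param in model.named_parameters():
        if 'bias' not in name and param.dim() > 1:
            w = param.view(param.size(0), -1)                  # flatten to 2-D
            sym = w @ w.t() - torch.eye(w.size(0), device=device)
            orth_loss = orth_loss + reg * sym.abs().sum()
    return orth_loss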
Is there a simple way to incorporate ignore_label into the tversky_loss function?
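One possible approach is to mask out the ignored pixels before the reductions, so they contribute to neither the numerator nor the denominator. A hedged sketch, not the repo's exact signature (it assumes logits of shape [B, C, H, W] and integer labels of shape [B, H, W]):

import torch
import torch.nn.functional as F

def tversky_loss_ignore(logits, true, alpha=0.5, beta=0.5, ignore_label=255, eps=1e-7):
    # Hypothetical variant: pixels labeled ignore_label are zeroed out of both
    # the predicted probabilities and the one-hot targets.
    num_classes = logits.shape[1]
    valid = (true != ignore_label)                        # [B, H, W] bool mask
    true_clamped = true.clone()
    true_clamped[~valid] = 0                              # dummy in-range class for one-hot
    true_1_hot = F.one_hot(true_clamped, num_classes)     # [B, H, W, C]
    true_1_hot = true_1_hot.permute(0, 3, 1, 2).float()   # [B, C, H, W]
    probas = F.softmax(logits, dim=1)
    mask = valid.unsqueeze(1).float()                     # [B, 1, H, W]
    probas = probas * mask                                # ignored pixels contribute nothing
    true_1_hot = true_1_hot * mask
    dims = (0, 2, 3)
    intersection = torch.sum(probas * true_1_hot, dims)
    fps = torch.sum(probas * (1 - true_1_hot), dims)
    fns = torch.sum((1 - probas) * true_1_hot, dims)
    tversky = (intersection / (intersection + alpha * fps + beta * fns + eps)).mean()
    return 1 - tversky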
I am doing a personal project and need these loss metrics.
May I use them in my personal project?
I had a question about how one hot encoding is done.
Why is it done like this:
true_1_hot = torch.eye(num_classes + 1)[true.squeeze(1)]
when it could be implemented like this:
true_1_hot = F.one_hot(true, num_classes + 1)
Am I missing something or was F.one_hot added later?
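For what it's worth, F.one_hot only appeared around PyTorch 1.1, after most of this code was written, which likely explains the identity-matrix indexing. Up to dtype, the two are interchangeable; a quick check with made-up indices:

import torch
import torch.nn.functional as F

idx = torch.tensor([0, 2, 1])         # example class indices (a long tensor, as F.one_hot requires)
a = torch.eye(3)[idx]                 # float one-hot via identity-matrix indexing
b = F.one_hot(idx, num_classes=3)     # integer one-hot
assert torch.equal(a, b.float())      # same encoding, different dtype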
Still regarding the orthogonal regularization piece of code:
orth_loss = Variable(torch.FloatTensor(1), requires_grad=True)
Using torch.FloatTensor is dangerous, as per soumith (pytorch/tutorials#41).
Basically, torch.FloatTensor(1) creates a tensor with uninitialized memory instead of zeros. The memory can contain any garbage, as it is uninitialized; this is intentional. If you want a zeroed tensor, use torch.zeros(1).
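To make the difference concrete (the uninitialized value is illustrative; it could be anything):

import torch

x = torch.FloatTensor(1)    # uninitialized memory: may contain any garbage value
orth_loss = torch.zeros(1)  # zero-initialized accumulator; no Variable wrapper needed in recent PyTorch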
Hi,
Thanks for the awesome repository.
I happened to notice the following typo in the dice_loss method in losses.py:
line 84: probas = F.softmax(probas, dim=1)
It should be probas = F.softmax(logits, dim=1)
Kindly make the change.
Thanks
When I use the dice loss with PyTorch 1.3, I get the following error when I try to call loss.backward():
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Any idea why?
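The loss only composes differentiable ops, so this error usually means the graph was cut before the loss, e.g. logits produced under torch.no_grad(), detached, or round-tripped through NumPy. A quick diagnostic, assuming logits is the tensor you pass to dice_loss:

print(logits.requires_grad, logits.grad_fn)  # expect True and a Backward object; False/None means there is no graph to backpropagate through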
Many thanks for your code.
When I use the "jaccard_loss" function, an error occurs:
Too many indices for tensor of dimension 2
at line 110 (num_classes is 1).
My true input has size 3177 and my logits input also has size 3177.
Can you help me solve this problem?
Thanks
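For what it's worth, the losses in this file index and permute 4-D [B, C, H, W] tensors, so flat 1-D inputs of length 3177 will trip the permute at line 110. A hedged workaround is to give the tensors explicit batch/channel/spatial dimensions first; the particular view below is just one possible reading of your data:

logits_4d = logits.view(1, 1, 1, -1)      # [B=1, C=1, H=1, W=3177] raw scores
true_4d = true.view(1, 1, 1, -1).long()   # [B=1, 1, H=1, W=3177] integer labels
# then call jaccard_loss on these 4-D tensors instead of the flat ones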
Hi
Thanks for sharing your work. Your dice loss will also work for multiple labels, correct?
Thanks