bhanml / co-teaching
NeurIPS'18: Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
Hello, I noticed that in your function "loss_coteaching", the arguments you pass in, y_1 and y_2, are already log_softmax outputs. But you then feed them to cross_entropy, which itself combines log_softmax and nll_loss, so log_softmax ends up being applied twice. I am not sure whether I am mistaken, or whether this problem does not occur in your PyTorch version.
By the way, even with log_softmax applied twice, your code still gives the right result~
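The reason the double application is harmless can be checked directly: log_softmax is idempotent, because the exponentials of log-probabilities already sum to 1, so the second normalization subtracts log(1) = 0. A minimal sketch:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))

# F.cross_entropy expects raw logits and internally applies
# log_softmax followed by nll_loss.
log_probs = F.log_softmax(logits, dim=1)

# Idempotence: exp(log_probs) sums to 1 per row, so
# logsumexp(log_probs) = 0 and a second log_softmax changes nothing.
twice = F.log_softmax(log_probs, dim=1)
assert torch.allclose(log_probs, twice, atol=1e-6)

# Hence the loss is the same whether logits or log-probs are passed in.
assert torch.allclose(F.cross_entropy(logits, targets),
                      F.cross_entropy(log_probs, targets), atol=1e-6)
```

This is why the small-loss selection in Co-teaching is unaffected: every per-sample loss value is numerically identical either way.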
How should I estimate the 'noise rate' when applying the method to my own datasets?
Is {0.2, 0.45, 0.5} recommended? And how can this parameter be chosen using a validation set?
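One hedged approach (not prescribed by the paper): treat the noise/forget rate as an ordinary hyper-parameter and pick whichever candidate maximizes accuracy on a held-out validation set. `train_fn` and `eval_fn` below are placeholders for the user's own training and evaluation code:

```python
# Candidate rates taken from this thread; adjust to your dataset.
candidate_rates = [0.2, 0.45, 0.5]

def pick_forget_rate(train_fn, eval_fn, rates):
    """Grid-search the forget rate.

    train_fn(rate) -> trained model (hypothetical user-supplied callable)
    eval_fn(model) -> validation accuracy (hypothetical user-supplied callable)
    Returns the rate whose model scores highest on validation accuracy.
    """
    scored = [(eval_fn(train_fn(r)), r) for r in rates]
    return max(scored)[1]  # (accuracy, rate) tuples compare by accuracy first
```

Note that if the validation labels are themselves noisy, this selection is only approximate.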
In line 29 of loss.py, there is no need to compute torch.sum(loss_1_update)/num_remember, because F.cross_entropy with its default parameters (reduce=True and size_average=True) already returns the average loss.
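This can be verified with a small sketch (in current PyTorch the old size_average/reduce flags are replaced by reduction='mean', which is the default):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(6, 10)
targets = torch.randint(0, 10, (6,))

# Default reduction ('mean') already averages over the batch,
# so dividing the result by the batch size again would be wrong.
mean_loss = F.cross_entropy(logits, targets)

# Equivalent manual computation from per-sample losses:
per_sample = F.cross_entropy(logits, targets, reduction='none')
assert torch.allclose(mean_loss, per_sample.sum() / len(targets), atol=1e-6)
```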
Hi,
Thanks for sharing your implementation. I have some questions about it:
Thanks!
Oddly, if I add the most common standard data augmentation to CIFAR-10 training, namely:
transform=transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]),
then test accuracy drops dramatically. Can you explain why?
How should inference be performed given the two trained networks (net1 & net2)? By using net1 only?
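The paper does not prescribe an inference procedure, so one reasonable sketch (an assumption, not the authors' method) is to either evaluate a single peer network or average the two networks' logits as a simple ensemble:

```python
import torch

@torch.no_grad()
def predict(net1, net2, x, ensemble=True):
    """Predict class labels from the two peer networks.

    net1, net2: the two trained models (placeholders for your checkpoints).
    If ensemble is True, average the logits of both networks;
    otherwise use net1 alone.
    """
    net1.eval()
    net2.eval()
    logits = net1(x)
    if ensemble:
        logits = (logits + net2(x)) / 2  # simple logit averaging
    return logits.argmax(dim=1)
```

Since the two networks are trained symmetrically, their individual accuracies are usually close, and averaging typically gives a small additional boost.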
Hey guys, thanks for your nice work.
I am now trying to reproduce the results presented in your paper. However, I could not reach your reported numbers on CIFAR-10 and CIFAR-100: CIFAR-10 is just a bit lower (71.8% with 45% pair-flip noise), while CIFAR-100 is much lower than your reported result (31% with 45% pair-flip noise).
In the paper, you mention parameters such as batch size and learning rate. I am wondering whether you changed these settings for different datasets? If so, could you please share them?
Cheers
Candice
In order to compare with Co-teaching, I want to cite Figures 3, 4, 5, and 6 from your paper. Could you share your detailed experimental results with me? They would be quite useful. Thank you very much.
Have you tried an experiment using only the Q1 strategy (without the Q2 strategy) to train on the noisy data?
How much does Q2 contribute to the final accuracy?