
co-teaching's People

Contributors

bhanml, quanmingyao, xingruiyu


co-teaching's Issues

A question about your loss_coteaching function

Hello, I noticed that in your function "loss_coteaching" the arguments y_1 and y_2 you pass in are already log_softmax outputs, but you then apply cross_entropy, which itself combines log_softmax and nll_loss. So log_softmax ends up being applied twice. Am I mistaken, or does this problem simply not occur in your PyTorch version?
By the way, even though log_softmax is applied twice, your code still gives the right result: log_softmax subtracts a per-row logsumexp, which is just a shift, and softmax is shift-invariant, so log_softmax is idempotent.
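
A minimal sketch (separate from your repository's code) demonstrating this:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 10)            # hypothetical raw network outputs
    targets = torch.randint(0, 10, (4,))

    once = F.log_softmax(logits, dim=1)
    twice = F.log_softmax(once, dim=1)
    print(torch.allclose(once, twice))     # True: the second application is a no-op

    # hence cross_entropy on log-probabilities equals cross_entropy on logits
    print(torch.allclose(F.cross_entropy(once, targets),
                         F.cross_entropy(logits, targets)))  # True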

tabular data / noisy instances / new datasets

Hi,
thanks for sharing your implementation. I have some questions about it:

  1. Does it also work on tabular data?
  2. Is the code tailored to the datasets used in the paper or can one apply it to any data?
  3. Is it possible to identify the noisy instances (return the noisy IDs or the clean set)? (See the sketch below this list for what I mean.)
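
Regarding question 3, here is a hypothetical sketch of what I have in mind (select_small_loss is my own naming, not your API): since co-teaching keeps the small-loss samples in each mini-batch, the selected indices could double as an approximate clean set.

    import torch
    import torch.nn.functional as F

    # Hypothetical: treat the samples with the smallest per-sample loss
    # as likely-clean and return their indices.
    def select_small_loss(logits, targets, remember_rate):
        losses = F.cross_entropy(logits, targets, reduction='none')
        num_remember = int(remember_rate * losses.numel())
        return torch.argsort(losses)[:num_remember]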

Thanks!

data augmentation makes performance worse

Oddly, if I add the most common standard data augmentation to CIFAR-10 training, namely:

        # random crop + horizontal flip, then the usual per-channel
        # CIFAR-10 mean/std normalization
        transform=transforms.Compose([
            transforms.RandomCrop(32, padding=4),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
        ]),

test accuracy dramatically drops. Can you explain why?

cifar10 and cifar100 parameters

Hey guys, thanks for your nice work.
I am trying to reproduce the results presented in the paper, but I cannot match them on CIFAR-10 or CIFAR-100: my CIFAR-10 accuracy is just a bit lower (71.8% with 45% pairflip noise), while my CIFAR-100 accuracy is much lower than the reported result (31% with 45% pairflip noise).
In the paper you mention parameters such as batch size and learning rate. I am wondering whether you changed the settings for different datasets; if so, could you please share them?
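
For reference, this is the schedule I implemented from my reading of the paper (Adam, lr 0.001, batch size 128, 200 epochs, with lr decayed linearly to zero and beta1 dropped from 0.9 to 0.1 after epoch 80); these values are my assumptions, so please correct me if they are wrong:

    import torch

    # assumed schedule: constant until epoch 80, then linear lr decay
    # to zero and beta1 dropped from 0.9 to 0.1
    n_epoch, decay_start, base_lr = 200, 80, 0.001

    def adjust_adam(optimizer, epoch):
        if epoch < decay_start:
            lr, beta1 = base_lr, 0.9
        else:
            lr = base_lr * (n_epoch - epoch) / (n_epoch - decay_start)
            beta1 = 0.1
        for group in optimizer.param_groups:
            group['lr'] = lr
            group['betas'] = (beta1, 0.999)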

Cheers
Candice

Detailed experiment results sharing

In order to compare with co-teaching, I want to cite Figures 3, 4, 5, and 6 from your paper. Could you share your detailed experimental results with me? They would be quite useful. Thank you very much.

Only strategy Q1

Have you tried an experiment using only the Q1 strategy (without the Q2 strategy) to train on the noisy data?
How much does Q2 contribute to the final accuracy?
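
To make the question concrete, here is how I understand the two variants (a hypothetical sketch, not code from this repository):

    import torch
    import torch.nn.functional as F

    def small_loss_idx(logits, targets, remember_rate):
        # indices of the remember_rate fraction with the smallest loss
        losses = F.cross_entropy(logits, targets, reduction='none')
        return torch.argsort(losses)[:int(remember_rate * losses.numel())]

    def q1_only(logits1, logits2, targets, remember_rate):
        # Q1 only: each network updates on its OWN small-loss samples
        idx1 = small_loss_idx(logits1, targets, remember_rate)
        idx2 = small_loss_idx(logits2, targets, remember_rate)
        return (F.cross_entropy(logits1[idx1], targets[idx1]),
                F.cross_entropy(logits2[idx2], targets[idx2]))

    def q1_and_q2(logits1, logits2, targets, remember_rate):
        # Q1 + Q2: each network updates on the samples its PEER selected
        idx1 = small_loss_idx(logits1, targets, remember_rate)
        idx2 = small_loss_idx(logits2, targets, remember_rate)
        return (F.cross_entropy(logits1[idx2], targets[idx2]),
                F.cross_entropy(logits2[idx1], targets[idx1]))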

Weird test accuracy curve

I ran your original code and found that the test accuracy drops sharply around epoch 40 and then fluctuates drastically until the final 20 epochs, which does not match the curve in your paper. Can you explain why this happens?
