
This project is a fork of alphaxia/papi.


Code for "Towards Effective Visual Representations for Partial-Label Learning" in CVPR 2023.

License: Apache License 2.0


Towards Effective Visual Representations for Partial-Label Learning

[Framework overview figure]

This is a PyTorch implementation of the CVPR 2023 paper PaPi.

Title: Towards Effective Visual Representations for Partial-Label Learning

Authors: Shiyu Xia, Jiaqi Lv, Ning Xu, Gang Niu, Xin Geng

Affiliations: Southeast University, RIKEN Center for Advanced Intelligence Project

If you use this code for a paper, please cite:

@inproceedings{xia2023papi,
  title={Towards Effective Visual Representations for Partial-Label Learning},
  author={Xia, Shiyu and Lv, Jiaqi and Xu, Ning and Niu, Gang and Geng, Xin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}

Running PaPi

We provide the following shell commands for running PaPi.

Getting started

  • Create the directory ./data (if it does not already exist)
  • Download and unpack the data in data1 to ./data
  • Create the directory ./pmodel (if it does not already exist)
  • Download the models in pmodel1 to ./pmodel (a directory-setup sketch follows this list)
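For convenience, a minimal Python sketch of the directory setup above (the data and pretrained models themselves must still be fetched manually via the data1 and pmodel1 links):

import os

# Create the directory layout PaPi expects; the datasets and pretrained
# models are downloaded separately via the links above.
os.makedirs("./data", exist_ok=True)
os.makedirs("./pmodel", exist_ok=True)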

Start Running

Run Fashion-MNIST with q=0.1, 0.3, 0.5, 0.7 and instance-dependent partial labels
python -u main.py --exp-type 'rand' --exp-dir './experiment/fmnist_rand_0.1' --dataset fmnist --data-dir '../data' --num-class 10 --tau_proto 1.0 --alpha_mixup 8.0 --dist-url 'tcp://localhost:12318' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.1


python -u main.py --exp-type 'rand' --exp-dir './experiment/fmnist_rand_0.3' --dataset fmnist --data-dir '../data' --num-class 10 --tau_proto 1.0 --alpha_mixup 0.05 --dist-url 'tcp://localhost:12319' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-4 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.3


python -u main.py --exp-type 'rand' --exp-dir './experiment/fmnist_rand_0.5' --dataset fmnist --data-dir '../data' --num-class 10 --tau_proto 1.0 --alpha_mixup 5.0 --dist-url 'tcp://localhost:12320' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.5


python -u main.py --exp-type 'rand' --exp-dir './experiment/fmnist_rand_0.7' --dataset fmnist --data-dir '../data' --num-class 10 --tau_proto 1.0 --alpha_mixup 5.0 --dist-url 'tcp://localhost:12321' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.7


python -u main.py --exp-type 'ins' --exp-dir './experiment/fmnist_ins' --dataset fmnist --data-dir '../data' --num-class 10 --tau_proto 1.0 --alpha_mixup 5.0 --dist-url 'tcp://localhost:12409' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.0
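Here q (passed as --partial_rate) is the label-flipping probability: under the uniform protocol standard in the partial-label learning literature, each incorrect label independently joins the candidate set with probability q, and the true label is always included; --exp-type 'ins' instead uses instance-dependent candidate sets, so --partial_rate is set to 0.0. A minimal sketch of the uniform protocol (an illustration only; the repository's own generation code may differ in details):

import numpy as np

def uniform_candidate_labels(true_labels, num_classes, q, seed=123):
    # Flip each incorrect label into the candidate set with probability q.
    rng = np.random.default_rng(seed)
    n = len(true_labels)
    candidates = rng.random((n, num_classes)) < q
    # The ground-truth label is always a candidate.
    candidates[np.arange(n), true_labels] = True
    return candidates.astype(np.float32)

# e.g., three Fashion-MNIST instances (10 classes) with q = 0.1
Y = uniform_candidate_labels(np.array([3, 7, 1]), num_classes=10, q=0.1)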
Run SVHN with q=0.1, 0.3, 0.5, 0.7 and instance-dependent partial labels
python -u main.py --exp-type 'rand' --exp-dir './experiment/SVHN_rand_0.1' --dataset SVHN --data-dir '../data' --num-class 10 --alpha_mixup 0.05 --dist-url 'tcp://localhost:12528' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.1


python -u main.py --exp-type 'rand' --exp-dir './experiment/SVHN_rand_0.3' --dataset SVHN --data-dir '../data' --num-class 10 --alpha_mixup 0.05 --dist-url 'tcp://localhost:12533' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.07 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.3


python -u main.py --exp-type 'rand' --exp-dir './experiment/SVHN_rand_0.5' --dataset SVHN --data-dir '../data' --num-class 10 --alpha_mixup 0.05 --dist-url 'tcp://localhost:12538' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.07 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.5


python -u main.py --exp-type 'rand' --exp-dir './experiment/SVHN_rand_0.7' --dataset SVHN --data-dir '../data' --num-class 10 --alpha_mixup 0.05 --dist-url 'tcp://localhost:12543' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.07 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.7


python -u main.py --exp-type 'ins' --exp-dir './experiment/SVHN_ins' --dataset SVHN --data-dir '../data' --num-class 10 --alpha_mixup 0.05 --dist-url 'tcp://localhost:12613' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.07 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.0
Run CIFAR-10 with q=0.1, 0.3, 0.5, 0.7 and instance-dependent partial labels
python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar10_rand_0.1' --dataset cifar10 --data-dir '../data' --num-class 10 --dist-url 'tcp://localhost:12348' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.1


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar10_rand_0.3' --dataset cifar10 --data-dir '../data' --num-class 10 --dist-url 'tcp://localhost:12353' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.3


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar10_rand_0.5' --dataset cifar10 --data-dir '../data' --num-class 10 --dist-url 'tcp://localhost:12358' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.5


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar10_rand_0.7' --dataset cifar10 --data-dir '../data' --num-class 10 --dist-url 'tcp://localhost:12363' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.7


python -u main.py --exp-type 'ins' --exp-dir './experiment/cifar10_ins' --dataset cifar10 --data-dir '../data' --num-class 10 --dist-url 'tcp://localhost:12418' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.05 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.0
Run CIFAR-100 with q=0.01, 0.05, 0.1, 0.2 and instance-dependent partial labels
python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar100_rand_0.01' --dataset cifar100 --data-dir '../data' --num-class 100 --dist-url 'tcp://localhost:12368' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.01


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar100_rand_0.05' --dataset cifar100 --data-dir '../data' --num-class 100 --dist-url 'tcp://localhost:12373' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.05


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar100_rand_0.1' --dataset cifar100 --data-dir '../data' --num-class 100 --dist-url 'tcp://localhost:12378' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.1


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar100_rand_0.2' --dataset cifar100 --data-dir '../data' --num-class 100 --dist-url 'tcp://localhost:12383' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.2


python -u main.py --exp-type 'ins' --exp-dir './experiment/cifar100_ins' --dataset cifar100 --data-dir '../data' --num-class 100 --tau_proto 1.0 --alpha_mixup 5.0 --dist-url 'tcp://localhost:12493' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 200 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.0
Run CIFAR-100H with q=0.1, 0.3, 0.5, 0.7
python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar100H_rand_0.1' --dataset cifar100 --data-dir '../data' --num-class 100 --dist-url 'tcp://localhost:12368' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.1 --hierarchical


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar100H_rand_0.3' --dataset cifar100 --data-dir '../data' --num-class 100 --dist-url 'tcp://localhost:12373' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.3 --hierarchical


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar100H_rand_0.5' --dataset cifar100 --data-dir '../data' --num-class 100 --dist-url 'tcp://localhost:12378' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.5 --hierarchical


python -u main.py --exp-type 'rand' --exp-dir './experiment/cifar100H_rand_0.7' --dataset cifar100 --data-dir '../data' --num-class 100 --dist-url 'tcp://localhost:12383' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.1 --wd 1e-3 --cosine --epochs 500 --batch-size 256 --alpha_weight 1.0 --partial_rate 0.7 --hierarchical
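The --hierarchical flag selects the CIFAR-100H setting, in which an incorrect label can enter the candidate set only if it shares the true label's superclass (CIFAR-100 groups its 100 fine classes into 20 superclasses of 5). A hypothetical sketch of that restriction, where coarse_of is an assumed array mapping each fine label to its superclass:

import numpy as np

def hierarchical_candidate_labels(true_labels, coarse_of, q, seed=123):
    # Restrict flipping to fine labels within the true label's superclass.
    rng = np.random.default_rng(seed)
    n, num_classes = len(true_labels), len(coarse_of)
    same_super = coarse_of[None, :] == coarse_of[true_labels][:, None]
    candidates = (rng.random((n, num_classes)) < q) & same_super
    # The ground-truth label is always a candidate.
    candidates[np.arange(n), true_labels] = True
    return candidates.astype(np.float32)

# e.g., a toy setting: 4 fine classes in 2 superclasses, q = 0.5
Y = hierarchical_candidate_labels(np.array([0, 3]), coarse_of=np.array([0, 0, 1, 1]), q=0.5)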
Run Mini-ImageNet with q=0.01, 0.05, 0.1, 0.2
python -u main.py --exp-type 'rand' --exp-dir './experiment/miniImagenet_rand_0.01' --dataset miniImagenet --data-dir '../data/mini-imagenet/images' --num-class 100 --tau_proto 0.1 --alpha_mixup 5.0 --dist-url 'tcp://localhost:12322' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.03 --wd 1e-4 --cosine --epochs 500 --batch-size 64 --alpha_weight 1.0 --partial_rate 0.01


python -u main.py --exp-type 'rand' --exp-dir './experiment/miniImagenet_rand_0.05' --dataset miniImagenet --data-dir '../data/mini-imagenet/images' --num-class 100 --tau_proto 0.1 --alpha_mixup 5.0 --dist-url 'tcp://localhost:12332' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.03 --wd 1e-4 --cosine --epochs 500 --batch-size 64 --alpha_weight 1.0 --partial_rate 0.05


python -u main.py --exp-type 'rand' --exp-dir './experiment/miniImagenet_rand_0.1' --dataset miniImagenet --data-dir '../data/mini-imagenet/images' --num-class 100 --tau_proto 0.1 --alpha_mixup 5.0 --dist-url 'tcp://localhost:12342' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.03 --wd 1e-4 --cosine --epochs 500 --batch-size 64 --alpha_weight 1.0 --partial_rate 0.1


python -u main.py --exp-type 'rand' --exp-dir './experiment/miniImagenet_rand_0.2' --dataset miniImagenet --data-dir '../data/mini-imagenet/images' --num-class 100 --tau_proto 0.1 --alpha_mixup 5.0 --dist-url 'tcp://localhost:12352' --multiprocessing-distributed --cuda_VISIBLE_DEVICES '0' --world-size 1 --rank 0 --seed 123 --arch resnet18 --workers 0 --lr 0.03 --wd 1e-4 --cosine --epochs 500 --batch-size 64 --alpha_weight 1.0 --partial_rate 0.2

All experiments are run on a single NVIDIA Tesla V100 GPU. If you would like to use multiple GPUs, please check the code carefully.
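Two hyper-parameters recur in the commands above: --tau_proto, the temperature applied to prototype similarities, and --alpha_mixup, which we read as the parameter of the Beta(alpha, alpha) distribution that mixup samples its interpolation coefficient from. Under that assumption, a minimal mixup sketch (not the repository's exact implementation):

import torch

def mixup(x, y_soft, alpha=5.0):
    # Sample the interpolation coefficient lam ~ Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    # Interpolate both the inputs and the (soft) label targets.
    x_mix = lam * x + (1.0 - lam) * x[idx]
    y_mix = lam * y_soft + (1.0 - lam) * y_soft[idx]
    return x_mix, y_mix

# e.g., a batch of 256 CIFAR-sized images with soft targets over 10 classes
x_mix, y_mix = mixup(torch.randn(256, 3, 32, 32), torch.rand(256, 10), alpha=5.0)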

Acknowledgment

Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, and Masashi Sugiyama. Progressive identification of true labels for partial-label learning. In International Conference on Machine Learning, pages 6500–6510, 2020.

Ning Xu, Congyu Qiao, Xin Geng, and Min-Ling Zhang. Instance-dependent partial label learning. In Advances in Neural Information Processing Systems, 2021.

Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, and Junbo Zhao. PiCO: Contrastive label disambiguation for partial label learning. In International Conference on Learning Representations, 2022.

Dong-Dong Wu, Deng-Bao Wang, and Min-Ling Zhang. Revisiting consistency regularization for deep partial label learning. In International Conference on Machine Learning, 2022.
