johnsk95 / pt4al

Official PyTorch implementation of "PT4AL: Using Self-Supervised Pretext Tasks for Active Learning (ECCV2022)"

Python 100.00%
active-learning eccv2022 pytorch self-supervised-learning

pt4al's Introduction


Update Note

  • We solved all reported problems. The issue was that the rotation prediction task was supposed to run for only 15 epochs, but the epoch count was incorrectly written as 120. Sorry for the inconvenience. [2023.01.02]
  • Added cold start experiments
[solved problem]
We are redoing the CIFAR10 experiment.

The current reproduction reaches an accuracy of 91–93%.

We will re-tune the code for stable performance in the near future.

The rest of the experiments were confirmed to reproduce without problems.

Experiment Setting:

  • CIFAR10 (downloaded and saved in ./DATA)
  • Rotation prediction for pretext task

Prerequisites:

Python >= 3.7

CUDA = 11.0

PyTorch = 1.7.1

numpy >= 1.16.0

Running the Code

To generate train and test dataset:

python make_data.py
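make_data.py prepares CIFAR10 as per-class image folders under ./DATA, which the loaders then glob. A minimal sketch of that layout, using random arrays in a temporary directory instead of the real download (function and folder names here are illustrative, not the script's actual code):

```python
import os
import tempfile

import numpy as np
from PIL import Image

def save_split(root, split, images, labels):
    """Write each (32, 32, 3) uint8 array as a PNG under
    root/<split>/<label>/, mimicking the ./DATA layout the
    repository's loaders expect."""
    for i, (img, lab) in enumerate(zip(images, labels)):
        class_dir = os.path.join(root, split, str(lab))
        os.makedirs(class_dir, exist_ok=True)
        Image.fromarray(img).save(os.path.join(class_dir, f"{i}.png"))

# Dummy data standing in for the CIFAR10 download.
root = tempfile.mkdtemp()
imgs = np.zeros((4, 32, 32, 3), dtype=np.uint8)
save_split(root, "train", imgs, [0, 0, 1, 1])
```

The real script additionally writes a test split and uses the actual CIFAR10 class labels.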

To train the rotation prediction task on the unlabeled set:

python rotation.py
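The pretext task is 4-way rotation prediction: each image is rotated by 0, 90, 180, and 270 degrees, and the network must predict which rotation was applied. A minimal sketch of the labeling scheme (rotation.py itself works on batched tensors; this helper is illustrative):

```python
import numpy as np

def make_rotation_batch(img):
    """Return the four rotated copies of `img` (H, W, C) and their labels.

    Label k means the image was rotated by k * 90 degrees; the pretext
    task is to predict k from the rotated image alone.
    """
    rotations = [np.rot90(img, k, axes=(0, 1)) for k in range(4)]
    labels = np.arange(4)
    return rotations, labels

imgs, labels = make_rotation_batch(np.zeros((32, 32, 3), dtype=np.uint8))
```

In the repository, all four rotated copies of each image are fed through the network in the same step, which is why the training loop unpacks (inputs, inputs1, inputs2, inputs3, targets, ...).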

To extract pretext task losses and create batches:

python make_batches.py
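make_batches.py records the pretext-task loss of every unlabeled sample, sorts the samples by that loss, and deals them into equal-sized batches for the AL cycles. A hedged sketch of the sort-and-split step (descending order follows the paper's description; the variable names are illustrative, not the script's):

```python
def split_into_batches(losses, paths, num_batches=10):
    """Sort samples by pretext-task loss (descending) and split them
    into `num_batches` contiguous, loss-ordered batches.

    `losses` and `paths` are parallel lists: losses[i] is the rotation
    prediction loss of the image at paths[i].
    """
    order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    size = len(order) // num_batches
    return [[paths[i] for i in order[b * size:(b + 1) * size]]
            for b in range(num_batches)]

batches = split_into_batches([0.1, 0.9, 0.5, 0.3], ["a", "b", "c", "d"],
                             num_batches=2)
```

Each resulting batch then serves as the candidate pool for one active learning cycle.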

To evaluate on active learning task:

python main.py
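main.py runs the active learning cycles over the loss-sorted batches: each cycle draws new samples from the next batch, adds them to the labeled pool, and retrains the task model. The sketch below shows only how the pool grows across cycles; the retraining and the within-batch uncertainty sampling that main.py performs are elided, and the function name is illustrative:

```python
def active_learning_schedule(batches, cycles=10):
    """Accumulate one loss-sorted batch per cycle into the labeled pool.

    Returns a snapshot of the pool after each cycle. In the real code the
    classifier is retrained on the pool at every step and only a subset
    of each batch is queried.
    """
    labeled = []
    pools = []
    for cycle in range(min(cycles, len(batches))):
        labeled.extend(batches[cycle])   # query samples from the next batch
        pools.append(list(labeled))      # pool used for this cycle's training
        # ... retrain the classifier on `labeled` here ...
    return pools

pools = active_learning_schedule([[1, 2], [3], [4]], cycles=2)
```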

To run cold start experiments (random):

python main_random.py


To run cold start experiments (PT4AL):

python main_pt4al.py


Citation

If you use our code in your research, or find our work helpful, please consider citing us with the BibTeX below:

@inproceedings{yi2022using,
  title = {Using Self-Supervised Pretext Tasks for Active Learning},
  author = {Yi, John Seon Keun and Seo, Minseok and Park, Jongchan and Choi, Dong-Geol},
  booktitle = {Proc. ECCV},
  year = {2022},
}

pt4al's People

Contributors: kuangliu, seominseok0429, johnsk95, fducau, bearpaw, ypwhs

pt4al's Issues

Result of VAAL on CIFAR10

Thanks so much for sharing your code. In your paper you report that VAAL reaches around 86–87% on CIFAR10; how did you obtain that result?
Many thanks.

Training Sorted batches for AL cycle

Hi, thank you so much for sharing your work.

I wanted to ask one thing: once the pretext task and batch sorting are done, is the model trained from scratch for each active learning cycle?

That is, no information is carried over from the pretext training to the active learning training. Am I understanding this correctly?

Does your code use the train set for the test phase?


In main.py, the testset is constructed as above: is_train=False, path=None. So self.img_path in Loader2 takes its data from glob.glob('./DATA/train/*/*'). Does this mean you are using the train dataset for testing?

'stty' is not recognized as an internal or external command, operable program or batch file.

I got this error when running the rotation.py

full traceback error:
'stty' is not recognized as an internal or external command,
operable program or batch file.
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Omid\UPB\SVM\AL\PT4AL-main\rotation.py", line 159, in
train(epoch)
File "D:\Omid\UPB\SVM\AL\PT4AL-main\rotation.py", line 76, in train
for batch_idx, (inputs, inputs1, inputs2, inputs3, targets, targets1, targets2, targets3) in enumerate(trainloader):
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 355, in iter
return self._get_iterator()
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 914, in init
w.start()
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\popen_spawn_win32.py", line 33, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\CEOSpaceTech\Miniconda3\envs\env_pytorch\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
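The traceback points at a well-known Windows pitfall rather than a bug in the model code: Windows starts DataLoader worker processes with the 'spawn' method, which re-imports the launching script, so any top-level training code runs again in each worker and raises the RuntimeError above. A minimal sketch of the standard fix (train() is a stand-in for the real loop in rotation.py, not the repository's actual code):

```python
def train(epoch):
    # Stand-in for the training loop in rotation.py.
    return epoch

if __name__ == '__main__':
    # Guarding the entry point prevents spawned DataLoader workers from
    # re-executing the training loop when they re-import this module.
    # Setting num_workers=0 on the DataLoader is another workaround.
    for epoch in range(15):
        train(epoch)
```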

Rotation loss of ImageNet-67

Thank you so much for your code! I was wondering if by any chance you could release the rotation losses for the ImageNet data you used? That would be very helpful for reproduction and some follow-up works. Thank you in advance.

Test set

Hi, after every training epoch you test your model on a test set. Why do you load the full 50k training images and not the 10k test images?

Question about main.py testset

Hi, thank you so much for sharing your work.

main.py, line 44:
testset = Loader2(is_train=False, transform=transform_test)

I think this loads the train set, so should I fix it like this?
testset = Loader(is_train=False, transform=transform_test)

Extending current work

@johnsk95 thanks for sharing the code base. I have a query:
can we extend the existing source code to tasks like object detection and segmentation? If so, what changes have to be made?
Thanks in advance.

Can not reproduce

"The current reproduction result is the performance of 91 to 93"
I can not reproduce above accuracy. I tried to reproduce three times. The final results that I got are 89.45, 89.71, 90.05

The Imagenet-67 dataset

Hi,

Thank you for the great work! I'm just wondering if by any chance you could release the mapping of the 67 superclasses of ImageNet you used? E.g., the mapping of each superclass to its corresponding ImageNet classes (WordNet IDs if possible).

About Test Dataset

Hi,
I have a question about test in "main.py".

As I read your code, you use the TRAIN dataset for the test section (in main.py, lines 44-45).
Is there any reason why you've done that?

Thanks a lot in advance.
