
selfpatch's Introduction

Patch-level Representation Learning for Self-supervised Vision Transformers (SelfPatch)

PyTorch implementation of "Patch-level Representation Learning for Self-supervised Vision Transformers" (accepted as an oral presentation at CVPR 2022).


Requirements

  • torch==1.7.0
  • torchvision==0.8.1

Pretraining on ImageNet

python -m torch.distributed.launch --nproc_per_node=8 main_selfpatch.py --arch vit_small --data_path /path/to/imagenet/train --output_dir /path/to/saving_dir --local_crops_number 8 --patch_size 16 --batch_size_per_gpu 128 --out_dim_selfpatch 4096 --k_num 4
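The --k_num flag above sets how many neighboring patches each patch is matched against. As a rough illustration of that idea, the sketch below selects the k most similar neighboring patches by cosine similarity and mean-pools them into a per-patch target. This is a hypothetical, simplified sketch under stated assumptions (cosine similarity, mean aggregation); the function name and details are illustrative and not the repository's actual implementation.

```python
import torch
import torch.nn.functional as F

def selfpatch_targets(patch_repr, k_num=4):
    """Aggregate each patch's k_num most similar neighboring patches.

    Hypothetical sketch of the neighbor-matching idea behind --k_num:
    patch_repr is an (N, D) tensor of patch embeddings from one view.
    """
    z = F.normalize(patch_repr, dim=-1)      # cosine-normalize embeddings
    sim = z @ z.t()                          # (N, N) pairwise similarities
    sim.fill_diagonal_(float('-inf'))        # exclude each patch itself
    _, idx = sim.topk(k_num, dim=-1)         # indices of k nearest patches
    neighbors = patch_repr[idx]              # (N, k_num, D) gathered neighbors
    return neighbors.mean(dim=1)             # aggregated target per patch

# toy usage: 196 patches (14x14 grid for ViT-S/16 at 224x224), embedding dim 384
targets = selfpatch_targets(torch.randn(196, 384), k_num=4)
```

In the actual method the aggregation is learned (an attention-based pooling head) rather than a plain mean, so treat this only as a shape-level sketch.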

Pretrained weights on ImageNet

You can download the weights of the models pretrained on ImageNet. All models use the ViT-S/16 architecture. For the detection and segmentation downstream tasks, see SelfPatch/detection and SelfPatch/segmentation.
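When loading such a checkpoint into a model, DINO-style releases typically wrap the weights under a 'teacher' key with 'module.'/'backbone.' prefixes. The helper below unwraps such a checkpoint; the key names are assumptions based on common DINO-style checkpoints and may differ in the actual released files, so this is a hedged sketch rather than the repository's loading code.

```python
import torch

def extract_backbone_state(ckpt):
    """Unwrap a DINO-style checkpoint into a plain backbone state_dict.

    Assumption (may differ in the released files): the checkpoint stores
    the teacher weights under a 'teacher' key, with 'module.' and
    'backbone.' prefixes on the parameter names.
    """
    state = ckpt.get('teacher', ckpt)
    return {k.replace('module.', '').replace('backbone.', ''): v
            for k, v in state.items()}

# toy checkpoint standing in for torch.load('dino_selfpatch.pth', map_location='cpu')
dummy = {'teacher': {'module.backbone.cls_token': torch.zeros(1, 1, 384)}}
state = extract_backbone_state(dummy)
# The cleaned state_dict could then be loaded with
# model.load_state_dict(state, strict=False) on the repo's vit_small.
```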

backbone        | arch     | checkpoint
DINO            | ViT-S/16 | download (pretrained model from VISSL)
DINO + SelfPatch| ViT-S/16 | download

Evaluating video object segmentation on the DAVIS 2017 dataset

Step 1. Prepare DAVIS 2017 data

cd $HOME
git clone https://github.com/davisvideochallenge/davis-2017
cd davis-2017
./data/get_davis.sh

Step 2. Run video object segmentation

python eval_video_segmentation.py --data_path /path/to/davis-2017/DAVIS/ --output_dir /path/to/saving_dir --pretrained_weights /path/to/model_dir --arch vit_small --patch_size 16

Step 3. Evaluate the obtained segmentation

git clone https://github.com/davisvideochallenge/davis2017-evaluation $HOME/davis2017-evaluation
python /path/to/davis2017-evaluation/evaluation_method.py --task semi-supervised --davis_path /path/to/davis-2017/DAVIS --results_path /path/to/saving_dir
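The official script above reports the standard DAVIS metrics, including region similarity J. For intuition, J is simply the intersection-over-union between the predicted and ground-truth masks; the snippet below is an illustrative reimplementation, not the official evaluation code.

```python
import numpy as np

def jaccard(pred, gt):
    """Region similarity J: IoU between boolean (H, W) masks.

    Illustrative sketch of the J metric reported by the DAVIS
    evaluation script; not the official implementation.
    """
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

# toy masks: prediction covers half of the ground-truth region
pred = np.zeros((4, 4), dtype=bool); pred[:2, :2] = True  # 4 pixels
gt = np.zeros((4, 4), dtype=bool); gt[:2, :4] = True      # 8 pixels
j = jaccard(pred, gt)  # intersection 4 / union 8 = 0.5
```

The official script additionally reports the boundary measure F, which compares mask contours rather than regions.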

Video object segmentation examples on the DAVIS 2017 dataset

Video (left), DINO (middle) and our SelfPatch (right)


Acknowledgement

Our codebase is built in part on the following packages: DINO, mmdetection, mmsegmentation, and XCiT.

Citation

If you use this code for your research, please cite our paper.

@InProceedings{Yun_2022_CVPR,
    author    = {Yun, Sukmin and Lee, Hankook and Kim, Jaehyung and Shin, Jinwoo},
    title     = {Patch-Level Representation Learning for Self-Supervised Vision Transformers},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {8354-8363}
}

selfpatch's People

Contributors

hankook, silky1708, sm3199


selfpatch's Issues

Some Questions

Hello, I would appreciate it if you could respond to some of my questions below:

  1. Could you clarify why the SelfPatch loss is only evaluated when v == iq (i.e., the student and teacher operate on the same view)?
  2. Why is the teacher output also computed on local views? As I understand the original DINO, only the 2 global views pass through the teacher.
  3. Why is loc=False passed when computing student_output?

student_output = [student(torch.cat(images[:2]), head_only=True, loc=False), student(torch.cat(images[2:]), head_only=True, loc=False)]

Thanks for your time and kindness!

Request for pretrained models with patch size 8 × 8

Dear Authors,

I hope this message finds you well. I wanted to express my appreciation for your excellent work and express my interest in using your pretrained models for my own research.

Unfortunately, I have not been able to locate any pretrained models with a patch size of 8 x 8. Would it be possible for you to provide me with these models or suggest any alternative solutions?

Thank you very much for your time and consideration. I look forward to your response.

Best regards, Xin

Evaluation of COCO detection&segmentation

Hi, thanks for your excellent work!

I would like to know whether any ground-truth labels are involved in the DINO-SelfPatch fine-tuning with Mask R-CNN/FPN.

Since I am collecting fully self-supervised segmentation models in which no labels are used at all, I am wondering whether you simply followed MMDetection and thus used ground truth when fine-tuning the pretrained model to adapt it to the COCO detection/segmentation tasks.

Thanks !

[Question about the DINO loss] Is this the correct way to implement the original DINO loss?

As I understand it, in the DINO loss the teacher model uses only the global views, while the student model uses both the global and local views.
However, in your code the teacher model uses both global and local views, just like the student model.

Did you intentionally add local views to the teacher model?
If so, how is this equivalent to the original DINO loss?

Input code:
teacher_output = [teacher(torch.cat(images[:2]), head_only=True, loc=True), teacher(torch.cat(images[2:]), head_only=True, loc=True)]
Loss code:
teacher_cls = teacher_output[0][0].chunk(2) + teacher_output[1][0].chunk(self.ncrops-2)
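For reference, the cross-view pairing in the original DINO loss can be sketched as below: the teacher processes only the global views, and pairs where the student and teacher see the same view (v == iq) are skipped. This is a hedged, simplified sketch (no centering, illustrative names), not the repository's code.

```python
import torch
import torch.nn.functional as F

def dino_style_loss(student_out, teacher_out, temp_s=0.1, temp_t=0.04):
    """Cross-view DINO-style loss over lists of per-crop logits.

    student_out: list of (B, D) logits, one per crop (global + local).
    teacher_out: list of (B, D) logits, global crops only.
    Same-view pairs (v == iq) are skipped, as in the original DINO.
    Simplified sketch: teacher centering is omitted.
    """
    total, n_terms = 0.0, 0
    for iq, t in enumerate(teacher_out):
        p_t = F.softmax(t / temp_t, dim=-1).detach()  # sharpened teacher targets
        for v, s in enumerate(student_out):
            if v == iq:
                continue                              # skip same-view pairs
            log_p_s = F.log_softmax(s / temp_s, dim=-1)
            total += -(p_t * log_p_s).sum(dim=-1).mean()
            n_terms += 1
    return total / n_terms

# toy usage: 2 teacher (global) views, 4 student views, batch 2, dim 8
t_out = [torch.randn(2, 8) for _ in range(2)]
s_out = [torch.randn(2, 8) for _ in range(4)]
loss = dino_style_loss(s_out, t_out)
# loss is strictly positive: cross-entropy against a softmax target
```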

Support for Patch size 8

Hello! Very interesting work.

I would like to try it out with a patch size of 8. Would you be able to add support for it, as it is not currently compatible?

Thanks!

Should I add '--epochs 200' in the provided pre-training command?

Thanks for your exciting work!
The provided pre-training command seems to train for 300 epochs on ImageNet. I wonder whether the provided checkpoint file 'dino_selfpatch.pth' was pre-trained for 200 or 300 epochs. As mentioned in the paper, you pre-train SelfPatch on ImageNet for 200 epochs. If I want to reproduce your results, should I add '--epochs 200' to the provided pre-training command?
