
UniCL's Introduction

This is the official PyTorch implementation of UniCL:

"Unifiled Contrastive Learning in Image-Text-Label Space. CVPR 2022" by Jianwei Yang*, Chunyuan Li*, Pengchuan Zhang*, Bin Xiao*, Ce Liu, Lu Yuan and Jianfeng Gao.

Introduction

In this paper, we introduce a new perspective on commonly used image-label and image-text data by residing them in a shared image-text-label space. In this space, a new learning paradigm, called Unified Contrastive Learning (UniCL), with a single learning objective is proposed to seamlessly prompt the synergy of the two data types. We demonstrate that UniCL is an effective way of learning semantically rich yet discriminative representations, universally for image recognition in zero-shot, linear-probe, full fine-tuning and transfer learning scenarios. When scaled up to billions of data, UniCL can exclusively learn a powerful visual-semantic representation supporting dozens of downstream tasks, as shown in Florence.
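To make the single objective concrete, below is a minimal, illustrative PyTorch sketch of a label-aware bidirectional contrastive loss in the spirit of the paper's formulation. The function name and exact normalization are ours, not the repo's implementation; image-text pairs are assumed to carry unique label ids so they only match themselves, while image-label samples sharing a class are additional positives.

import torch
import torch.nn.functional as F

def unicl_style_loss(image_feats, text_feats, labels, temperature=0.07):
    # image_feats, text_feats: (B, D) L2-normalized embeddings; labels: (B,) integer ids.
    logits = image_feats @ text_feats.t() / temperature         # (B, B) similarity matrix
    pos = (labels[None, :] == labels[:, None]).float()          # positives = pairs sharing a label
    targets = pos / pos.sum(dim=1, keepdim=True)                # average over each row's positives
    loss_i2t = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()    # image-to-text
    loss_t2i = -(targets * F.log_softmax(logits.t(), dim=1)).sum(dim=1).mean()  # text-to-image
    return loss_i2t + loss_t2i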

We compare UniCL with conventional learning methods below:

💥 All previous links are broken. Please find all checkpoints here: https://github.com/microsoft/UniCL/releases/tag/v1.0

Updates

  • [11/24/2022] KLITE, the knowledge-augmented version of UniCL, is publicly released on GitHub.
  • 💥 [10/05/2022] How do we use the pretrained UniCL checkpoints? Beyond the zero-shot classification shown in our paper, we can use them for object detection. RegionCLIP now supports using pretrained UniCL transformer models, such as Swin and ViT, for open-vocabulary object detection without any finetuning. Check it out!
  • [08/19/2022] Organizing the ECCV Workshop on Computer Vision in the Wild (CVinW), where two challenges are hosted to evaluate the zero-shot, few-shot and full-shot performance of pre-trained vision models on downstream tasks:

[Workshop]    [IC Challenge]    [OD Challenge]

  • [06/19/2022] Released the evaluation benchmark used in UniCL, ELEVATER, which contains 20 downstream image classification tasks. More info: [Benchmark] [Toolkit] [Paper]
  • [06/04/2022] Check out our Hugging Face Gradio demo.
  • [05/21/2022] Released pretrained model and zero-shot evaluation on ImageNet-1k.

Benchmarking

Image-label training augmented by image-text pairs

| Model | Training Set | Top-1 on IN-1K | ZS on 14 datasets | Download |
| :---- | :----------- | :------------- | :---------------- | :------- |
| Swin-T | IN-1K | 79.9 | 30.2 | ckpt/config |
| Swin-T | IN-1K + GCC-3M | 80.2 | 39.0 | ckpt/config |
| Swin-T | IN-1K + GYFCC-14M | 81.1 | 40.0 | ckpt/config |
| Swin-T | IN-1K + GCC-15M | 81.8 | 45.1 | ckpt/config |

Note that all the above models are trained without strong data augmentations like mixup and cutmix.

Image-text learning augmented by image-label data

| Model | Training Set | ZS on IN-1K | ZS on 14 datasets | ZS on 20 datasets | Download |
| :---- | :----------- | :---------- | :---------------- | :---------------- | :------- |
| Swin-T | YFCC-14M | 30.1 | 36.3 | - | ckpt/config |
| Swin-T | IN-21K | 28.5 | 37.8 | - | ckpt/config |
| Swin-T | IN-22K | 66.8 | 38.9 | - | ckpt/config |
| Swin-T | IN-21K (half) + YFCC-14M (half) | 36.4 | 45.5 | - | ckpt/config |
| Swin-T | IN-21K + YFCC-14M | 40.5 | 49.1 | - | ckpt/config |
| Swin-B | IN-21K | 29.9 | 42.4 | - | ckpt/config |
| Swin-B | IN-21K (half) + YFCC-14M (half) | 41.1 | 48.5 | - | ckpt/config |
| Swin-B | IN-21K + YFCC-14M | 44.3 | 52.2 | - | ckpt/config |
| Swin-B | IN-21K + GCC-15M + YFCC-14M | 52.2 | - | 43.2 | ckpt/config |
| Focal-B | IN-21K + GCC-15M + YFCC-14M | 54.2 | - | 44.0 | ckpt/config |

NOTE: The "ZS on 20 datasets" setting is the one used in the ICinW benchmark.

Getting Started

Installation

Please follow INSTALL.md for installation.

Data preparation

Please follow DATA.md for data preparation.

Evaluation

To evaluate a pre-trained UniCL on ImageNet val, run:

python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> --master_port 12345 main.py --eval \
--cfg <config-file> --resume <checkpoint> --data-path <imagenet-path> 

For example, to evaluate the UniCL-Swin-Tiny trained on YFCC-14M with a single GPU:

python -m torch.distributed.launch --nproc_per_node 1 --master_port 12345 main.py --eval \
--cfg configs/unicl_swin_tiny.yaml --resume yfcc14m.pth --data-path <imagenet-path>

The Image Classification in the Wild Benchmark

Interested in evaluating UniCL on downstream image classification tasks and comparing performance on the same task suite? We release the ELEVATER benchmark, which contains 20 downstream image classification tasks. The software toolkit is also released to ease the process of onboarding new models. It will be hosted as a challenge at the CV in the Wild Workshop @ ECCV 2022. We hope our benchmark and toolkit can encourage the community to tackle the challenge of image classification in the wild!

Please see more instructions: [Benchmark] [Toolkit] [Paper]

Citation

If you find this repo useful for your project, please consider citing it with the following BibTeX entry:

@misc{yang2022unified,
    title={Unified Contrastive Learning in Image-Text-Label Space}, 
    author={Jianwei Yang and Chunyuan Li and Pengchuan Zhang and Bin Xiao and Ce Liu and Lu Yuan and Jianfeng Gao},
    year={2022},
    eprint={2204.03610},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgement

Our codebase is built on Swin Transformer, Focal Transformer and FocalNet.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.


UniCL's Issues

YFCC 15M Download

Hello

I have been following the work of this repo. However, I wasn't able to successfully download the YFCC 15M subset used by CLIP.

Could you please help me out and let me know how to download it? It would be of great help.

I tried looking up the different ways other people have downloaded it, but it takes almost 200 hours, which is too long.

Stochastic behavior with image embedding

Hey, thanks for all the work.

I'm trying to run a simple demo (adapted from Spaces), and the image embedding isn't consistent.

    def encode_image(self, image, norm=True):
        x = self.image_encoder.forward_features(image)      # <--- non-deterministic
        x = x @ self.image_projection

        if norm:
            x = x / x.norm(dim=-1, keepdim=True)

        return x

Sample Output:

=> merge config from ./configs/unicl_swin_base.yaml
Creating model: swin
{'bag': 0.9939755797386169, 'book': 0.0060244896449148655}

{'bag': 0.743205189704895, 'book': 0.25679486989974976}

{'bag': 0.984933078289032, 'book': 0.015066967345774174}

{'bag': 0.974284291267395, 'book': 0.025715718045830727}

{'bag': 0.9412586092948914, 'book': 0.05874132364988327}

We observe that the similarity score varies.

Resolved: my snippet was missing a call to model.eval().
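
For anyone hitting the same thing, a minimal sketch of the fix (assuming model is the loaded UniCL model and image is a preprocessed batch tensor):

model.eval()                      # disables dropout / stochastic depth so repeated calls agree
with torch.no_grad():             # inference only, no gradients needed
    image_feats = model.encode_image(image)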

About the target_transform.

Hi there:

Great work. However, I cannot find any code for the target_transform that transforms the targets. I guess it has not been released with the training code, so when will the code be released?

Best regards

Questions about loss calculation during training

Hi, sorry to bother you. I have some questions about the loss calculation during training.
In the paper, it seems that the label target should have size [Batch_size x Batch_size], so it can be compared against the output, which also has size [Batch_size x Batch_size].
But in the code, if MODEL.NUM_CLASS > 2, the target size after the mixup_fn function becomes [Batch_size x NUM_CLASS], which will fail in the loss calculation.
Maybe I missed something; this is really confusing me.
I'd really appreciate it if you could answer my questions.
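
For illustration, a small example of the shape mismatch described above (the tensors are made up, not from the repo):

import torch

labels = torch.tensor([3, 7, 3, 9])                       # (B,) integer class ids, B = 4
pairwise = (labels[None, :] == labels[:, None]).float()   # (B, B) targets for the contrastive loss
print(pairwise.shape)                                      # torch.Size([4, 4])

# Mixup/cutmix instead produce soft targets of shape (B, NUM_CLASS), which cannot be
# compared against the (B, B) similarity matrix directly; note the README states the
# released models were trained without mixup and cutmix.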

All samples from each GPU combined before applying contrastive loss?

Hi, thank you for the great work Jianwei! I was wondering, for distributed training, do you:

  1. combine the mini-batches across GPUs before applying contrastive loss (therefore the actual batchsize = n_GPUs x batchsize per GPU)
    OR
  2. simply compute the contrastive loss separately for each GPU? (the batch size is just the batch size on each GPU)

I've seen implementations of contrastive pretraining methods such as this one (SimCLR) do the 1st option:
https://github.com/Spijkervet/SimCLR/blob/cd85c4366d2e6ac1b0a16798b76ac0a2c8a94e58/simclr/modules/gather.py#L5

I ask because in your code you have a comment that says "# gather features from all gpus", but, if I'm not mistaken, I don't actually see where the features are gathered across all GPUs:

UniCL/main.py

Line 177 in 4f680ff

# gather features from all gpus

Thanks!
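
For context, the first option is usually implemented with torch.distributed.all_gather; a generic sketch of that pattern (not necessarily what this repo does, and the helper name is ours) looks like:

import torch
import torch.distributed as dist

def gather_features(feats):
    # feats: (B, D) local features. all_gather collects copies from every rank, but those
    # copies carry no gradients, so the local tensor is put back at its own rank to keep
    # the autograd graph intact before concatenating into the global batch.
    gathered = [torch.zeros_like(feats) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, feats)
    gathered[dist.get_rank()] = feats
    return torch.cat(gathered, dim=0)   # (world_size * B, D)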

Any update on releasing the training code?

Hi sir!

Thank you for your work, it's really interesting to see how combining image and text with label supervision basically matches the results of pure image classification training! This has inspired me a lot, and now I am trying to replicate your results from this table:

[image: results table from the paper]

I am trying to train on my custom dataset of 22k classes and 1M images, but I'm still seeing a clear gap between training with your loss and a classification head. Did you try to run the experiment from this table on a bigger dataset such as ImageNet-22K?

Releasing the training code for getting those results would be very helpful for everyone interested in training a classification problem with your framework!

Checkpoints on CIFAR-10 and CIFAR-100

Could you please release the pre-trained weights on the CIFAR-10 and CIFAR-100 datasets for the standard classification task (Table 2 of the paper)? Thanks.

add web demo/model to Huggingface

Hi, would you be interested in adding UniCL to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models, datasets, and Spaces (web demos) can be added to a user account or organization, similar to GitHub.

Example from other organizations:
Keras: https://huggingface.co/keras-io
Microsoft: https://huggingface.co/microsoft
Facebook: https://huggingface.co/facebook

Example spaces with repos:
github: https://github.com/salesforce/BLIP
Spaces: https://huggingface.co/spaces/salesforce/BLIP

github: https://github.com/facebookresearch/omnivore
Spaces: https://huggingface.co/spaces/akhaliq/omnivore

and here are guides for adding spaces/models/datasets to your org

How to add a Space: https://huggingface.co/blog/gradio-spaces
how to add models: https://huggingface.co/docs/hub/adding-a-model
uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.
