
s-clip's Introduction

S-CLIP: Semi-supervised CLIP

Implementation of the S-CLIP algorithm described in "S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions."

S-CLIP improves the training of CLIP in scenarios where only a few image-text pairs are available by incorporating unpaired images alongside the image-text pairs.

Motivation

S-CLIP addresses the issue of naive pseudo-labeling in semi-supervised CLIP.

Method overview

S-CLIP introduces caption-level and keyword-level pseudo-labeling approaches.
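The actual training loss lives in custom/loss.py (see the Installation section below). The snippet below is only a rough, hypothetical sketch of the caption-level idea: each unpaired image receives a soft pseudo-label over the captions of the labeled batch based on image-image similarity. Function and variable names are made up, and a plain softmax stands in for the assignment the paper derives more carefully.

```python
import torch
import torch.nn.functional as F


def caption_level_pseudo_label_loss(img_l, txt_l, img_u, temperature=0.07):
    """Toy caption-level pseudo-labeling loss (illustration only).

    img_l: (N, D) L2-normalized features of labeled (paired) images
    txt_l: (N, D) L2-normalized features of their captions
    img_u: (M, D) L2-normalized features of unpaired images
    """
    # Standard CLIP contrastive loss on the labeled pairs.
    logits = img_l @ txt_l.t() / temperature
    targets = torch.arange(len(img_l), device=img_l.device)
    loss_clip = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

    # Soft pseudo-labels for the unpaired images: spread each unpaired image's
    # probability mass over the labeled captions according to image-image similarity.
    with torch.no_grad():
        pseudo = F.softmax(img_u @ img_l.t() / temperature, dim=-1)  # (M, N)

    # Train the unpaired images against the soft targets (KL between distributions).
    logits_u = img_u @ txt_l.t() / temperature
    loss_pseudo = F.kl_div(F.log_softmax(logits_u, dim=-1), pseudo, reduction="batchmean")

    return loss_clip + loss_pseudo
```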

Installation

Our code is based on the open_clip library. The original CLIP training code can be found in the training directory, while our newly developed code is located in the custom directory. The main logic of the proposed training loss is implemented in custom/loss.py.

Install the requirements.

pip install -r requirements.txt

Training

Run ./scripts/train_RS.sh.

Evaluation

Run ./scripts/eval_RS.sh [CKPT-NAME].

s-clip's People

Contributors

sangwoomo


s-clip's Issues

Experimental data: Image-text retrieval R@1 is too low

Hello, thank you for sharing such great code. I have a question: the paper reports an image-text retrieval R@1 of 4.2 on the RSICD dataset, but when I run the program my image-text retrieval R@1 is only about 0.04. The following is my parameter configuration.

METHOD=(
    "ours"
    "base"
)
SEED=("0")
RATIO=(
    "0.1"
)
MODEL=(
    "RN50"
    #"ViT-B-32"
    #"ViT-B-16"
)
ImagenetVal=(
    #"RSICD-CLS"
    #"UCM-CLS"
    #"WHU-RS19"
    #"RSSCN7"
    #"AID"
    "RESISC45"
)

for val in "${ImagenetVal[@]}"; do
  for model in "${MODEL[@]}"; do
    for ratio in "${RATIO[@]}"; do
      for method in "${METHOD[@]}"; do
        for seed in "${SEED[@]}"; do
          torchrun --nproc_per_node 2 -m main \
            --model "${model}" \
            --pretrained openai \
            --train-data "RS-ALL" \
            --label-ratio "${ratio}" \
            --val-data "RS-ALL" \
            --imagenet-val "${val}" \
            --keyword-path "keywords/RS/class-name.txt" \
            --lr 5e-5 \
            --batch-size 64 \
            --warmup 10 \
            --epochs 25 \
            --zeroshot-frequency 3 \
            --precision amp \
            --method "${method}" \
            --seed "${seed}" \
            --report-to wandb \
            --wandb-project-name "S-CLIP"
        done
      done
    done
  done
done

Could you give me some advice?

The method for generating keywords

Hello, thank you for sharing such great code. I have a question about the method for generating keywords. The paper says keywords can be obtained from class names or extracted from captions using algorithms like YAKE.
Could you please share your program for generating keywords?
Thanks.
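Not an official answer, but below is a minimal sketch of the YAKE route mentioned in the paper, using the open-source yake package (pip install yake). The captions, parameters, and output path are made-up examples, and the one-keyword-per-line output format is an assumption based on keywords/RS/class-name.txt.

```python
import yake

# Hypothetical captions; in practice you would load the training captions here.
captions = [
    "many planes are parked next to a long building in an airport",
    "several green trees surround a white storage tank",
]

# n=1 keeps unigram keywords; top limits how many keywords are returned per caption.
extractor = yake.KeywordExtractor(lan="en", n=1, top=5)

keywords = set()
for caption in captions:
    for word, score in extractor.extract_keywords(caption):
        keywords.add(word.lower())

# Write one keyword per line (assumed format for --keyword-path).
with open("keywords/RS/yake-keywords.txt", "w") as f:
    f.write("\n".join(sorted(keywords)) + "\n")
```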

txtclasses_rsicd data

Hello, thank you for sharing such great code. I have a question: what is the txtclasses_rsicd file in your code? I tried to reproduce the code but could not find this file, and it has confused me for several days. Could you give me some advice? The relevant code is:
import os

class RSICD_CLS(RSICD):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.load_class_info(os.path.join(self.root, "txtclasses_rsicd"))  # <-- this is the part I don't understand

    def load_class_info(self, class_dir):
        classes = []
        path2class = {}
        for idx, fn in enumerate(sorted(os.listdir(class_dir))):
            classes.append(fn.split(".txt")[0])
            with open(os.path.join(class_dir, fn)) as f:
                for line in f.readlines():
                    path2class[line.strip()] = idx

        self.classes = classes
        self.path2class = path2class
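Judging from load_class_info, txtclasses_rsicd appears to be a directory with one .txt file per class: the class name is the filename without the .txt extension, and each line is the filename of an RSICD image belonging to that class. The snippet below is a hypothetical sketch of how such a directory could be built; the class names, image filenames, and output location are assumptions, not the authors' data.

```python
import os

# Hypothetical class-to-image mapping; the real labels come from the RSICD annotations.
class_to_images = {
    "airport": ["airport_1.jpg", "airport_2.jpg"],
    "beach": ["beach_1.jpg"],
}

out_dir = os.path.join("RSICD", "txtclasses_rsicd")  # assumed location under the dataset root
os.makedirs(out_dir, exist_ok=True)

for cls, filenames in class_to_images.items():
    # One file per class; each line is an image filename that load_class_info maps to this class index.
    with open(os.path.join(out_dir, f"{cls}.txt"), "w") as f:
        f.write("\n".join(filenames) + "\n")
```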
