xmed-lab / allspark

CVPR 2024: AllSpark: Reborn Labeled Features from Unlabeled in Transformer for Semi-Supervised Semantic Segmentation

License: MIT License

Python 97.68% Shell 2.32%
attention cvpr2024 semi-supervised-segmentation transformer semantic-segmentation

allspark's Introduction

[CVPR-2024] AllSpark: Reborn Labeled Features from Unlabeled in Transformer for Semi-Supervised Semantic Segmentation


This repo is the official implementation of AllSpark: Reborn Labeled Features from Unlabeled in Transformer for Semi-Supervised Semantic Segmentation, accepted at CVPR 2024.

The AllSpark is a powerful Cybertronian artifact in the Transformers film series. It was used to revive Optimus Prime in Transformers: Revenge of the Fallen, which aligns well with our core idea.


💥 Motivation

In this work, we discovered that simply converting existing semi-supervised segmentation methods into a pure-transformer framework is ineffective.

  • The first reason is that transformers inherently possess weaker inductive biases than CNNs, so they rely heavily on a large volume of training data to perform well.

  • The more critical issue lies in the existing semi-supervised segmentation frameworks: they separate the training flows for labeled and unlabeled data, which aggravates the overfitting of transformers on the limited labeled data.

Thus, we propose to intervene and diversify the labeled data flow with unlabeled data in the feature domain, leading to improvements in generalizability.
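
A minimal sketch of this idea is given below, assuming standard token-wise cross-attention in PyTorch: labeled features act as queries and are "reborn" as combinations of unlabeled features. This is only an illustration of the concept, not the repository's actual AllSpark module (which uses channel-wise cross-attention and a feature memory queue); all names and shapes here are illustrative.

import torch
import torch.nn as nn

class RebornLabeledFeatures(nn.Module):
    """Illustrative only: reborn labeled features from unlabeled features."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # queries come from labeled features; keys/values come from unlabeled features
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_labeled, feat_unlabeled):
        # feat_labeled, feat_unlabeled: (batch, tokens, dim)
        reborn, _ = self.attn(query=feat_labeled, key=feat_unlabeled, value=feat_unlabeled)
        # the residual connection keeps the original labeled semantics while
        # diversifying them with information from the unlabeled data flow
        return self.norm(feat_labeled + reborn)

# toy usage
x_l = torch.randn(2, 1024, 256)   # labeled-branch features
x_u = torch.randn(2, 1024, 256)   # unlabeled-branch features
print(RebornLabeledFeatures(256)(x_l, x_u).shape)   # torch.Size([2, 1024, 256])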


🛠️ Usage

‼️ IMPORTANT: This version is not the final version. We made some mistakes when re-organizing the code. We will release the correct version soon. Sorry for any inconvenience this may cause.

1. Environment

First, clone this repo:

git clone https://github.com/xmed-lab/AllSpark.git
cd AllSpark/

Then, create a new environment and install the requirements:

conda create -n allspark python=3.7
conda activate allspark
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
pip install tensorboard
pip install six
pip install pyyaml
pip install -U openmim
mim install mmcv==1.6.2
pip install einops
pip install timm
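
After installation, a quick sanity check (not part of this repo) can confirm that the versions match the ones listed above:

import torch, torchvision, mmcv, einops, timm

print(torch.__version__)          # expected: 1.12.0+cu116
print(torchvision.__version__)    # expected: 0.13.0+cu116
print(mmcv.__version__)           # expected: 1.6.2
print(torch.cuda.is_available())  # should be True on a CUDA 11.6 machine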

2. Data Preparation & Pre-trained Weights

2.1 Pascal VOC 2012 Dataset

Download the dataset with wget:

wget https://hkustconnect-my.sharepoint.com/:u:/g/personal/hwanggr_connect_ust_hk/EcgD_nffqThPvSVXQz6-8T0B3K9BeUiJLkY_J-NvGscBVA\?e\=2b0MdI\&download\=1 -O pascal.zip
unzip pascal.zip

2.2 Cityscapes Dataset

Download the dataset with wget:

wget https://hkustconnect-my.sharepoint.com/:u:/g/personal/hwanggr_connect_ust_hk/EWoa_9YSu6RHlDpRw_eZiPUBjcY0ZU6ZpRCEG0Xp03WFxg\?e\=LtHLyB\&download\=1 -O cityscapes.zip
unzip cityscapes.zip

2.3 COCO Dataset

Download the dataset with wget:

wget https://hkustconnect-my.sharepoint.com/:u:/g/personal/hwanggr_connect_ust_hk/EXCErskA_WFLgGTqOMgHcAABiwH_ncy7IBg7jMYn963BpA\?e\=SQTCWg\&download\=1 -O coco.zip
unzip coco.zip

Your file structure should then look like:

├── VOC2012
    ├── JPEGImages
    └── SegmentationClass
    
├── cityscapes
    ├── leftImg8bit
    └── gtFine
    
├── coco
    ├── train2017
    ├── val2017
    └── masks

Next, download the following pretrained weights.

├── ./pretrained_weights
    ├── mit_b2.pth
    ├── mit_b3.pth
    ├── mit_b4.pth
    └── mit_b5.pth

For example, MiT-B5:

mkdir pretrained_weights
wget https://hkustconnect-my.sharepoint.com/:u:/g/personal/hwanggr_connect_ust_hk/ET0iubvDmcBGnE43-nPQopMBw9oVLsrynjISyFeGwqXQpw?e=9wXgso\&download\=1 -O ./pretrained_weights/mit_b5.pth
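
To verify the download, you can load the checkpoint and inspect its keys (a generic check, not a script from this repository):

import torch

# load the SegFormer (MiT-B5) backbone weights on CPU and peek at the contents
state_dict = torch.load('./pretrained_weights/mit_b5.pth', map_location='cpu')
print(type(state_dict), len(state_dict))   # container type and number of top-level entries
print(list(state_dict.keys())[:5])         # first few keys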

3. Training & Evaluating

# use torch.distributed.launch
sh scripts/train.sh <num_gpu> <port>
# to fully reproduce our results, <num_gpu> should be set to 4 on all three datasets
# otherwise, you need to adjust the learning rate accordingly (see the sketch after this block)

# or use slurm
# sh scripts/slurm_train.sh <num_gpu> <port> <partition>
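
The README does not specify how to adjust the learning rate for a different GPU count; a common convention (an assumption here, not the authors' stated rule) is to scale it linearly with the number of GPUs:

# linear learning-rate scaling (assumed convention, not the authors' stated rule)
def scale_lr(base_lr: float, base_gpus: int, num_gpus: int) -> float:
    """Scale the learning rate proportionally to the number of GPUs."""
    return base_lr * num_gpus / base_gpus

# e.g. a config tuned for 4 GPUs, run on 2 GPUs (0.004 is a placeholder, not the repo's value)
print(scale_lr(base_lr=0.004, base_gpus=4, num_gpus=2))   # 0.002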

To train on other datasets or splits, please modify dataset and split in train.sh.

4. Results

Model weights and training logs will be released soon.

4.1 PASCAL VOC 2012 original

Splits                 1/16          1/8     1/4           1/2           Full
Weights of AllSpark    76.07         78.41   79.77         80.75         82.12
Reproduced             76.06 | log   78.41   79.93 | log   80.70 | log   82.56 | log

4.2 PASCAL VOC 2012 augmented

Splits                 1/16    1/8     1/4     1/2
Weights of AllSpark    78.32   79.98   80.42   81.14

4.3 Cityscapes

Splits                 1/16    1/8     1/4     1/2
Weights of AllSpark    78.33   79.24   80.56   81.39

4.4 COCO

Splits                 1/512         1/256         1/128         1/64
Weights of AllSpark    34.10 | log   41.65 | log   45.48 | log   49.56 | log

Citation

If you find this project useful, please consider citing:

@inproceedings{allspark,
  title={AllSpark: Reborn Labeled Features from Unlabeled in Transformer for Semi-Supervised Semantic Segmentation},
  author={Wang, Haonan and Zhang, Qixiang and Li, Yi and Li, Xiaomeng},
  booktitle={CVPR},
  year={2024}
}

Acknowledgement

AllSpark is built upon UniMatch and SegFormer. We thank their authors for making the source code publicly available.

allspark's People

Contributors

elliot-qxzhang, mcgregorwwww

allspark's Issues

Installing mmcv: python setup.py egg_info error, subprocess exited and metadata generation failed

Thanks for your work, it's really detailed. I followed your README and installed the necessary packages in order, but I encountered an error when installing mmcv:
egg_info

I tried several solutions, such as upgrading pip and setuptools (including uninstalling and reinstalling), and I confirmed that the CUDA paths in my .bashrc (CUDA_HOME and LD_LIBRARY_PATH) point to the folder containing bin, lib and include in my conda env. Since the fault does not appear in the base env, I guessed it is not related to CUDA, but GPT suggested that the base env has no CUDA, so that would not affect the installation of mmcv anyway. Besides, the mmcv version I installed matches the 1.6.2 from your README, obtained from https://pypi.tuna.tsinghua.edu.cn/simple for speed. As for CUDA, since it is bound to torch via the libtorch_cuda.so shared library, I used ctypes.CDLL(library_path) to load it in a test.py in Generic_ssl, and the mmcv setup.py seems quite complicated.
Looking forward to your reply!

When will the final version be released?

When will the final version be released? I am very interested in your research work because of the excellent results on the leaderboard, but unfortunately the code is not ready for use. Do you have any plan for when the final version will be released?

Using it on my own dataset

Thank you very much for your excellent work!!
I replaced mit_b5 with a new backbone (19.2M parameters) and trained on VOC 2012 Full (1464); the mIoU reached 86%. However, when training on my own dataset I noticed a strange phenomenon: with 1/2 of the data, the mIoU first reached 95.3%, and after 64 epochs it gradually dropped to 55%. What could be the cause? When I train my own model with the fully supervised SegFormer, the mIoU can reach 97% and fine details are segmented more completely, but with your model the details are not as complete. I also tried other semi-supervised frameworks such as ST++, and their detail quality is also unsatisfactory. What might be the reason for this? Looking forward to your reply, thank you!

Question about the encoder and decoder

Hello, I would like to try replacing the encoder with ResNet-50/101 and the decoder with DeepLabV3+. Which parts of the code do I need to change?

InstanceNorm and cross attention in AllSpark

Thanks for the outstanding work!
There is an InstanceNorm layer in the AllSpark code:
self.psi = nn.InstanceNorm2d(self.num_heads)
Why is InstanceNorm used here, and why is its argument set to num_heads?

k_l2u = rearrange(k_l2u, 'b n c -> n (b c)')
v_l2u = rearrange(v_l2u, 'b n c -> n (b c)')

k_l2u = repeat(k_l2u, 'n bc -> r n bc', r=batch_size)
v_l2u = repeat(v_l2u, 'n bc -> r n bc', r=batch_size)

Also, what is the purpose of this piece of code in the cross attention?

mismatch keys in checkpoint

Thanks for sharing this nice work.

However, when I try to reproduce the results using your provided checkpoints, it seems that the checkpoints do not match the model architecture.

RuntimeError: Error(s) in loading state_dict for DistributedDataParallel:
        Missing key(s) in state_dict: "module.decoder.allspark.map_in.0.weight", "module.decoder.allspark.map_in.0.bias", "module.decoder.allspark.attn_norm.weight", "module.decoder.allspark.attn_norm.bias", "module.decoder.allspark.attn.queue_ptr", "module.decoder.allspark.attn.kv_queue", "module.decoder.allspark.attn.q_u.weight", "module.decoder.allspark.attn.k_u.weight", "module.decoder.allspark.attn.v_u.weight", "module.decoder.allspark.attn.q_l2u.weight", "module.decoder.allspark.attn.k_l2u.weight", "module.decoder.allspark.attn.v_l2u.weight", "module.decoder.allspark.attn.out_u.weight", "module.decoder.allspark.attn.out_l2u.weight", "module.decoder.allspark.encoder_norm.weight", "module.decoder.allspark.encoder_norm.bias", "module.decoder.allspark.map_out.0.weight", "module.decoder.allspark.map_out.0.bias". 
        Unexpected key(s) in state_dict: "module.decoder.allspark.embeddings.patch_embeddings.weight", "module.decoder.allspark.embeddings.patch_embeddings.bias", "module.decoder.allspark.encoder.layer.0.attn_norm1.weight", "module.decoder.allspark.encoder.layer.0.attn_norm1.bias", "module.decoder.allspark.encoder.layer.0.attn.queue_ptr", "module.decoder.allspark.encoder.layer.0.attn.kv_queue", "module.decoder.allspark.encoder.layer.0.attn.query_u_u.weight", "module.decoder.allspark.encoder.layer.0.attn.key_u_u.weight", "module.decoder.allspark.encoder.layer.0.attn.value_u_u.weight", "module.decoder.allspark.encoder.layer.0.attn.query_l_u.weight", "module.decoder.allspark.encoder.layer.0.attn.key_l_u.weight", "module.decoder.allspark.encoder.layer.0.attn.value_l_u.weight", "module.decoder.allspark.encoder.layer.0.attn.out_u.weight", "module.decoder.allspark.encoder.layer.0.attn.out_l.weight", "module.decoder.allspark.encoder.layer.0.ffn_norm1.weight", "module.decoder.allspark.encoder.layer.0.ffn_norm1.bias", "module.decoder.allspark.encoder.layer.0.ffn1.fc1.weight", "module.decoder.allspark.encoder.layer.0.ffn1.fc1.bias", "module.decoder.allspark.encoder.layer.0.ffn1.fc2.weight", "module.decoder.allspark.encoder.layer.0.ffn1.fc2.bias", "module.decoder.allspark.encoder.layer.1.attn_norm1.weight", "module.decoder.allspark.encoder.layer.1.attn_norm1.bias", "module.decoder.allspark.encoder.layer.1.attn.queue_ptr", "module.decoder.allspark.encoder.layer.1.attn.kv_queue", "module.decoder.allspark.encoder.layer.1.attn.query_u_u.weight", "module.decoder.allspark.encoder.layer.1.attn.key_u_u.weight", "module.decoder.allspark.encoder.layer.1.attn.value_u_u.weight", "module.decoder.allspark.encoder.layer.1.attn.query_l_u.weight", "module.decoder.allspark.encoder.layer.1.attn.key_l_u.weight", "module.decoder.allspark.encoder.layer.1.attn.value_l_u.weight", "module.decoder.allspark.encoder.layer.1.attn.out_u.weight", "module.decoder.allspark.encoder.layer.1.attn.out_l.weight", "module.decoder.allspark.encoder.layer.1.ffn_norm1.weight", "module.decoder.allspark.encoder.layer.1.ffn_norm1.bias", "module.decoder.allspark.encoder.layer.1.ffn1.fc1.weight", "module.decoder.allspark.encoder.layer.1.ffn1.fc1.bias", "module.decoder.allspark.encoder.layer.1.ffn1.fc2.weight", "module.decoder.allspark.encoder.layer.1.ffn1.fc2.bias", "module.decoder.allspark.encoder.encoder_norm1.weight", "module.decoder.allspark.encoder.encoder_norm1.bias", "module.decoder.allspark.recon.conv.weight", "module.decoder.allspark.recon.conv.bias".

Is this problem caused by incorrect AllSpark checkpoints, or by my misuse? If possible, could you provide an evaluation script to reproduce the results?

About filtering strategy for pseudo labels

Hi, thanks for your inspiring work. I notice that there is no quality control on pseudo labels. How is it possible to guarantee training quality? I am also surprised that no CutMix augmentation is used in your work. Can you shed some light on how AllSpark achieves such good performance without label filtering and strong augmentation?
