
xmed-lab / allspark

CVPR 2024: AllSpark: Reborn Labeled Features from Unlabeled in Transformer for Semi-Supervised Semantic Segmentation

License: MIT License

Languages: Python 97.68%, Shell 2.32%
Topics: attention, cvpr2024, semi-supervised-segmentation, transformer, semantic-segmentation

allspark's Issues

Question about swapping the encoder and decoder

Hello! I would like to try replacing the encoder with ResNet-50/101 and the decoder with DeepLabv3+. What do I need to change for this?
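Not from this repo, but for orientation, a minimal sketch of the kind of pairing being asked about, using torchvision's closest off-the-shelf equivalent (DeepLabV3, not V3+, on a ResNet-50 encoder); AllSpark's own attention module would additionally need adapting to the new feature shapes:

import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Illustrative stand-in: ResNet-50 encoder with a DeepLabV3 decoder head.
model = deeplabv3_resnet50(weights=None, num_classes=21)  # 21 = VOC classes

x = torch.randn(1, 3, 513, 513)
with torch.no_grad():
    out = model(x)["out"]  # per-pixel class logits
print(out.shape)           # torch.Size([1, 21, 513, 513])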

Using AllSpark on a custom dataset

Thank you very much for your excellent work!!
I replaced mit_b5 with a new backbone (19.2M parameters) and trained on VOC 2012 Full (1464), reaching 86% mIoU. But when training on my own dataset, I noticed something strange: with 1/2 of the data, mIoU first reached 95.3%, then after epoch 64 it gradually dropped to 55%. What could be causing this? When I train on my own data with fully supervised SegFormer, mIoU reaches 97% and fine details are segmented more completely, but with your model the fine details are not as complete. I also tried other semi-supervised frameworks, such as ST++, and the details were likewise unsatisfactory. What might be the reason? Looking forward to your reply, thank you!

When will the final version be released?

When will the final version be released? I am very interested in your research work because of the excellent results on the leaderboard, but sadly the code is not yet ready for use. Do you have any plan for when the final version will be released?

InstanceNorm and cross-attention in AllSpark

Thanks for the brilliant work!
The AllSpark code contains an InstanceNorm layer:
self.psi = nn.InstanceNorm2d(self.num_heads)
Why is InstanceNorm used here, and why is its argument set to num_heads?
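Not an answer from the authors, just a minimal sketch of how such a layer behaves: attention logits of shape (B, num_heads, N, N) can be treated as a num_heads-channel image, so InstanceNorm2d(num_heads) standardizes each head's N x N similarity map independently per sample before the softmax.

import torch
import torch.nn as nn

B, num_heads, N = 2, 8, 16                # illustrative sizes
scores = torch.randn(B, num_heads, N, N)  # raw q-k attention logits

# Heads act as channels: each head's N x N similarity map is normalized
# to zero mean / unit variance independently per sample.
psi = nn.InstanceNorm2d(num_heads)
attn = torch.softmax(psi(scores), dim=-1)
print(attn.shape)  # torch.Size([2, 8, 16, 16])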

from einops import rearrange, repeat

# Fold the batch into the channel axis: pool all labeled keys/values.
k_l2u = rearrange(k_l2u, 'b n c -> n (b c)')
v_l2u = rearrange(v_l2u, 'b n c -> n (b c)')

# Copy the pooled set per sample so all queries share the labeled keys.
k_l2u = repeat(k_l2u, 'n bc -> r n bc', r=batch_size)
v_l2u = repeat(v_l2u, 'n bc -> r n bc', r=batch_size)

Also, what is the purpose of this snippet in the cross-attention?
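For readers unfamiliar with einops, a self-contained illustration of what those two operations do (the shapes are made up for this example):

import torch
from einops import rearrange, repeat

batch_size, n_tokens, channels = 4, 16, 64
k_l2u = torch.randn(batch_size, n_tokens, channels)

# 'b n c -> n (b c)': fold the batch axis into the channel axis, giving
# one key set of shape (n_tokens, batch_size * channels).
pooled = rearrange(k_l2u, 'b n c -> n (b c)')
print(pooled.shape)  # torch.Size([16, 256])

# 'n bc -> r n bc': copy that shared set once per sample, so every
# sample in the batch attends over the same labeled-feature keys.
shared = repeat(pooled, 'n bc -> r n bc', r=batch_size)
print(shared.shape)  # torch.Size([4, 16, 256])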

Installing mmcv fails: python setup.py egg_info error, subprocess exited, and metadata generation failed

Thanks for your work; it is really detailed. I followed your README and installed the necessary packages in order, but I encountered an error during:
egg_info

I tried several solutions, such as upgrading pip and setuptools (including uninstalling and reinstalling them), and I confirmed the CUDA paths in my .bashrc: CUDA_HOME and LD_LIBRARY_PATH point to the folder in my conda env that contains bin, lib, and include. Since the error does not occur in the base env, I guessed it was unrelated to CUDA, though GPT suggested the base env has no CUDA at all, so that would not affect the mmcv installation either way. The mmcv version I used matches your README (1.6.2), installed from https://pypi.tuna.tsinghua.edu.cn/simple for speed. As for CUDA, it is bound to torch through a shared library, libtorch_cuda.so; in a test.py in Generic_ssl I loaded it with ctypes.CDLL(library_path), but the logic in mmcv's setup.py seems much more complicated.
Looking forward to your reply!
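Not part of the original report, but a small sanity check along the lines the reporter describes, verifying that the PyTorch CUDA build is visible before compiling mmcv from source (the library path is the usual Linux location for a pip-installed torch, so treat it as an assumption):

import ctypes
import os

import torch

# mmcv compiled from source must match the CUDA that torch was built
# with, so check what torch reports first.
print("torch:", torch.__version__, "cuda:", torch.version.cuda,
      "available:", torch.cuda.is_available())
print("CUDA_HOME:", os.environ.get("CUDA_HOME"))

# The reporter's ctypes check: load torch's CUDA shared library directly.
lib = os.path.join(os.path.dirname(torch.__file__), "lib", "libtorch_cuda.so")
ctypes.CDLL(lib)  # raises OSError if the library cannot be loaded
print("loaded", lib)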

Mismatched keys in checkpoint

Thanks for sharing this nice work.

However, when I try to reproduce the results using your provided checkpoints, it seems that the checkpoints do not match the model architecture.

RuntimeError: Error(s) in loading state_dict for DistributedDataParallel:
        Missing key(s) in state_dict: "module.decoder.allspark.map_in.0.weight", "module.decoder.allspark.map_in.0.bias", "module.decoder.allspark.attn_norm.weight", "module.decoder.allspark.attn_norm.bias", "module.decoder.allspark.attn.queue_ptr", "module.decoder.allspark.attn.kv_queue", "module.decoder.allspark.attn.q_u.weight", "module.decoder.allspark.attn.k_u.weight", "module.decoder.allspark.attn.v_u.weight", "module.decoder.allspark.attn.q_l2u.weight", "module.decoder.allspark.attn.k_l2u.weight", "module.decoder.allspark.attn.v_l2u.weight", "module.decoder.allspark.attn.out_u.weight", "module.decoder.allspark.attn.out_l2u.weight", "module.decoder.allspark.encoder_norm.weight", "module.decoder.allspark.encoder_norm.bias", "module.decoder.allspark.map_out.0.weight", "module.decoder.allspark.map_out.0.bias". 
        Unexpected key(s) in state_dict: "module.decoder.allspark.embeddings.patch_embeddings.weight", "module.decoder.allspark.embeddings.patch_embeddings.bias", "module.decoder.allspark.encoder.layer.0.attn_norm1.weight", "module.decoder.allspark.encoder.layer.0.attn_norm1.bias", "module.decoder.allspark.encoder.layer.0.attn.queue_ptr", "module.decoder.allspark.encoder.layer.0.attn.kv_queue", "module.decoder.allspark.encoder.layer.0.attn.query_u_u.weight", "module.decoder.allspark.encoder.layer.0.attn.key_u_u.weight", "module.decoder.allspark.encoder.layer.0.attn.value_u_u.weight", "module.decoder.allspark.encoder.layer.0.attn.query_l_u.weight", "module.decoder.allspark.encoder.layer.0.attn.key_l_u.weight", "module.decoder.allspark.encoder.layer.0.attn.value_l_u.weight", "module.decoder.allspark.encoder.layer.0.attn.out_u.weight", "module.decoder.allspark.encoder.layer.0.attn.out_l.weight", "module.decoder.allspark.encoder.layer.0.ffn_norm1.weight", "module.decoder.allspark.encoder.layer.0.ffn_norm1.bias", "module.decoder.allspark.encoder.layer.0.ffn1.fc1.weight", "module.decoder.allspark.encoder.layer.0.ffn1.fc1.bias", "module.decoder.allspark.encoder.layer.0.ffn1.fc2.weight", "module.decoder.allspark.encoder.layer.0.ffn1.fc2.bias", "module.decoder.allspark.encoder.layer.1.attn_norm1.weight", "module.decoder.allspark.encoder.layer.1.attn_norm1.bias", "module.decoder.allspark.encoder.layer.1.attn.queue_ptr", "module.decoder.allspark.encoder.layer.1.attn.kv_queue", "module.decoder.allspark.encoder.layer.1.attn.query_u_u.weight", "module.decoder.allspark.encoder.layer.1.attn.key_u_u.weight", "module.decoder.allspark.encoder.layer.1.attn.value_u_u.weight", "module.decoder.allspark.encoder.layer.1.attn.query_l_u.weight", "module.decoder.allspark.encoder.layer.1.attn.key_l_u.weight", "module.decoder.allspark.encoder.layer.1.attn.value_l_u.weight", "module.decoder.allspark.encoder.layer.1.attn.out_u.weight", "module.decoder.allspark.encoder.layer.1.attn.out_l.weight", "module.decoder.allspark.encoder.layer.1.ffn_norm1.weight", "module.decoder.allspark.encoder.layer.1.ffn_norm1.bias", "module.decoder.allspark.encoder.layer.1.ffn1.fc1.weight", "module.decoder.allspark.encoder.layer.1.ffn1.fc1.bias", "module.decoder.allspark.encoder.layer.1.ffn1.fc2.weight", "module.decoder.allspark.encoder.layer.1.ffn1.fc2.bias", "module.decoder.allspark.encoder.encoder_norm1.weight", "module.decoder.allspark.encoder.encoder_norm1.bias", "module.decoder.allspark.recon.conv.weight", "module.decoder.allspark.recon.conv.bias".

Is this problem caused by incorrect allspark checkpoints, or by my misuse? If possible, could you provide an evaluation script to reproduce the results?
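A quick way to diagnose this kind of mismatch (a generic sketch, not a script from this repo; the checkpoint filename is a placeholder):

import torch

ckpt = torch.load("allspark_checkpoint.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights

for k in list(state)[:10]:  # peek at the first few key names
    print(k, tuple(state[k].shape))

# A "module." prefix comes from DistributedDataParallel; stripping it is
# enough when key names are the only difference. Loading with
# model.load_state_dict(state, strict=False) then reports any leftover
# missing/unexpected keys without raising.
state = {k.removeprefix("module."): v for k, v in state.items()}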

About the filtering strategy for pseudo labels

Hi, thanks for your inspiring work. I notice that there is no quality control on the pseudo labels. How is it possible to guarantee training quality? I am also surprised that no CutMix augmentation is used in your work. Can you shed some light on how AllSpark achieves such good performance without label filtering and strong augmentation?
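For context on what such quality control usually looks like, a minimal sketch of the confidence thresholding used in many other semi-supervised methods (which, per the question, AllSpark does not do; the 0.95 threshold is an arbitrary example):

import torch

# Stand-in predictions on unlabeled images: (batch, classes, H, W).
logits = torch.randn(2, 21, 64, 64)
probs = torch.softmax(logits, dim=1)
conf, pseudo = probs.max(dim=1)     # per-pixel confidence and hard label

ignore_index = 255                  # value the segmentation loss will skip
pseudo[conf < 0.95] = ignore_index  # drop low-confidence pixels

kept = (pseudo != ignore_index).float().mean().item()
print(f"kept {kept:.1%} of pixels as pseudo labels")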
