dazhangyu123 / acmil

WSI classification

License: MIT License

bracs camelyon16 computational-pathology digital-pathology multiple-instance-learning pathology-image whole-slide-image acmil weakly-supervised-learning histopathology

acmil's Introduction

Attention-Challenging Multiple Instance Learning for Whole Slide Image Classification

This is the PyTorch implementation of our paper "Attention-Challenging Multiple Instance Learning for Whole Slide Image Classification". The code is based on CLAM.

Dataset Preparation

We provide some of the extracted features needed to reproduce our results.

Extracted patch features using ImageNet supervised ResNet18 on Camelyon16 at 20× magnification: https://pan.quark.cn/s/dd77e6a476a0

Extracted patch features using SSL ViT-S/16 on Camelyon16 at 20× magnification: https://pan.quark.cn/s/6ea54bfa0e72

Extracted patch features using ImageNet supervised ResNet18 on Bracs at 10× magnification: https://pan.quark.cn/s/7cf21bbe46a7

Extracted patch features using SSL ViT-S/16 on Bracs at 10× magnification: https://pan.quark.cn/s/f2f9c93cd5e1

Extracted patch features using ImageNet supervised ResNet18 on Bracs at 20× magnification: https://pan.quark.cn/s/cbe4e1d0e68c

Extracted patch features using SSL ViT-S/16 on Bracs at 20× magnification: https://pan.quark.cn/s/3c8c1ffce517

For your own dataset, you can modify and run Step1_create_patches_fp.py and Step2_feature_extract.py. For more details about these scripts, refer to CLAM. Note that we recommend extracting features with an SSL-pretrained model; our code uses the checkpoints provided by Benchmarking Self-Supervised Learning on Diverse Pathology Datasets. An example invocation is sketched below.
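The exact arguments depend on your data layout and on how these two scripts were adapted from CLAM, so the following is only a sketch: it assumes CLAM-style flags and placeholder directories DATA_DIR and FEAT_DIR, which may not match this repository exactly. Check the argument parsers of both scripts before running.

python Step1_create_patches_fp.py --source DATA_DIR/slides --save_dir DATA_DIR/patches --patch_size 256 --seg --patch

CUDA_VISIBLE_DEVICES=0 python Step2_feature_extract.py --data_h5_dir DATA_DIR/patches --csv_path DATA_DIR/patches/process_list_autogen.csv --feat_dir FEAT_DIR --batch_size 256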

Training

For the ABMIL baseline, run Step3_WSI_classification_ACMIL.py with n_token=1, n_masked_patch=0, and mask_drop=0:

CUDA_VISIBLE_DEVICES=2 python Step3_WSI_classification_ACMIL.py --seed 4 --wandb_mode online --arch ga --n_token 1 --n_masked_patch 0 --mask_drop 0 --config config/bracs_natural_supervised_config.yml

For our ACMIL, run Step3_WSI_classification_ACMIL.py with n_token=5, n_masked_patch=10, and mask_drop=0.6:

CUDA_VISIBLE_DEVICES=2 python Step3_WSI_classification_ACMIL.py --seed 4 --wandb_mode online --arch ga --n_token 5 --n_masked_patch 10 --mask_drop 0.6 --config config/bracs_natural_supervised_config.yml
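To build intuition for what n_token, n_masked_patch, and mask_drop control, the following minimal PyTorch sketch implements a stochastic top-k attention-masking step: each of the n_token attention branches has its highest-scoring patches suppressed with probability mask_drop during training, which pushes the branches to attend beyond the most salient instances. The function name and exact mechanics are illustrative and do not mirror the repository's implementation.

import torch

def stochastic_topk_mask(attn_logits, n_masked_patch=10, mask_drop=0.6):
    # attn_logits: (n_token, n_patches) pre-softmax attention scores for one WSI.
    # Per branch, each of the top-n_masked_patch patches is set to -inf with
    # probability mask_drop, so the subsequent softmax gives it zero weight.
    n_token, n_patches = attn_logits.shape
    k = min(n_masked_patch, n_patches)
    if k == 0 or mask_drop == 0:
        return attn_logits
    masked = attn_logits.clone()
    topk_idx = attn_logits.topk(k, dim=-1).indices                         # (n_token, k)
    drop = torch.rand(n_token, k, device=attn_logits.device) < mask_drop   # Bernoulli(mask_drop)
    topk_vals = attn_logits.gather(-1, topk_idx).masked_fill(drop, float("-inf"))
    masked.scatter_(-1, topk_idx, topk_vals)
    return masked

# Example: 5 attention branches over 2000 patches, matching the ACMIL command above.
attn = torch.randn(5, 2000)
weights = torch.softmax(stochastic_topk_mask(attn, n_masked_patch=10, mask_drop=0.6), dim=-1)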

For CLAM, DSMIL, and TransMIL, run Step3_WSI_classification.py and set --arch to one of clam_sb, clam_mb, transmil, or dsmil:

CUDA_VISIBLE_DEVICES=2 python Step3_WSI_classification.py --seed 4 --wandb_mode online --arch clam_sb/clam_mb/transmil/dsmil --config config/bracs_natural_supervised_config.yml

For DTFD-MIL, run Step3_WSI_classification_DTFD.py:

CUDA_VISIBLE_DEVICES=2 python Step3_WSI_classification_DTFD.py --seed 4 --wandb_mode online --config config/bracs_natural_supervised_config.yml

BibTeX

If you find our work useful for your project, please consider citing the following paper.

@misc{zhang2023attentionchallenging,
      title={Attention-Challenging Multiple Instance Learning for Whole Slide Image Classification}, 
      author={Yunlong Zhang and Honglin Li and Yuxuan Sun and Sunyi Zheng and Chenglu Zhu and Lin Yang},
      year={2023},
      eprint={2311.07125},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}


acmil's Issues

Extracted patch features using SSL ViT-S/16 on Camelyon16

Hi,
Thank you for your work, it's amazing!
Would it be possible for you to provide the extracted patch features using SSL ViT-S/16 on Camelyon16? I would really appreciate it.
I am looking forward to your reply!

questions about comparison experiments

Thank you for your excellent work! You have provided the code and run commands for ABMIL, CLAM, DSMIL, TransMIL, and DTFD-MIL, but I noticed that the comparison table in your paper also reports results for IBMIL and MHIM-MIL. Could you please provide the run commands for IBMIL and MHIM-MIL as well?

Question on the results of TransMIL and ABMIL

Hi, it's nice work, thanks for your contribution. But I have a question about the performance of TransMIL and ABMIL in Table 1: in your work, ABMIL performs better with both backbones, whereas in the original TransMIL paper the result is the opposite. Have you thought about the reason? Could the difference come from the backbone, since TransMIL uses ResNet-50 to extract features?

Question about the normalization parameters used for feature extraction

Thanks for your nice work. I would like to know which image normalization dino_vit_small_patch16_ep200.torch was trained with, because in practice test images are usually normalized the same way as the training set, e.g.:

import torchvision.transforms as transforms

# mean/std of 0.5 (e.g. BEiT-3-style pipelines)
transforms_beit3 = transforms.Compose([
    transforms.Resize((224, 224), interpolation=3),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

# ImageNet statistics (e.g. CTransPath / iBOT ViT pipelines)
trnsfrms_CTransPath_Brow_iBOTViT = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
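(For reference, the standard DINO recipe normalizes inputs with ImageNet statistics, so the pipeline below is a plausible but unverified guess for dino_vit_small_patch16_ep200.torch; the authoritative answer is whatever Step2_feature_extract.py actually applies.)

import torchvision.transforms as transforms

# Hypothetical preprocessing for the DINO ViT-S/16 checkpoint, assuming ImageNet statistics.
transforms_dino_vits16 = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])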

Attention Map

Hello! Thanks for your great work!
I ran ACMIL on my own dataset and it achieves good results. I would like to draw an attention map to verify this; could you provide the code for generating attention maps? Thanks.
