caiyuanhao1998 / mst

A toolbox for spectral compressive imaging reconstruction including MST (CVPR 2022), CST (ECCV 2022), DAUHST (NeurIPS 2022), BiSCI (NeurIPS 2023), HDNet (CVPR 2022), MST++ (CVPRW 2022), etc.

License: MIT License

Python 82.53% MATLAB 17.47%
image-restoration hyperspectral-images snapshot-compressive-imaging spectral-reconstruction binarized-neural-networks bnn qnn transformer ntire

mst's Introduction


A Toolbox for Spectral Compressive Imaging


Authors

Yuanhao Cai*, Jing Lin*, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool

Papers

Awards

Introduction

This is a baseline and toolbox for spectral compressive imaging reconstruction. This repo supports over 15 algorithms. Our method MST++ won the NTIRE 2022 Challenge on spectral recovery from RGB images. If you find this repo useful, please give it a star ⭐ and consider citing our paper in your research. Thank you.

News

  • 2024.04.09 : We release the results of three traditional model-based methods, i.e., TwIST, GAP-TV, and DeSCI, for your convenience in conducting research. Feel free to use them. 😄
  • 2024.03.21 : Our methods Retinexformer and MST++ (NTIRE 2022 Spectral Reconstruction Challenge Winner) ranked top-2 in the NTIRE 2024 Challenge on Low Light Enhancement. Code, pre-trained models, training logs, and enhancement results will be released in the repo of Retinexformer. Stay tuned! 🚀
  • 2024.02.15 : NTIRE 2024 Challenge on Low Light Enhancement begins. Welcome to use our Retinexformer or MST++ (NTIRE 2022 Spectral Reconstruction Challenge Winner) to participate in this challenge! 🏆
  • 2023.12.02 : Codes for real experiments have been updated. Welcome to check and use them. 🥳
  • 2023.11.24 : Code, models, and results of BiSRNet (NeurIPS 2023) are released at this repo. We also develop a toolbox BiSCI for binarized SCI reconstruction. Feel free to check and use them. 🌟
  • 2023.11.02 : MST, MST++, CST, and DAUHST are added to the Awesome-Transformer-Attention collection. 💫
  • 2023.09.21 : Our new work BiSRNet is accepted by NeurIPS 23. Code will be released at this repo and BiSCI.
  • 2023.02.26 : We release the RGB images of five real scenes and ten simulation scenes. Please feel free to check and use them. 🌟
  • 2022.11.02 : We have provided more visual results of state-of-the-art methods and the function to evaluate the parameters and computational complexity of models. Please feel free to check and use them. 🔆
  • 2022.10.23 : Code, models, and reconstructed HSI results of DAUHST have been released. 🔥
  • 2022.09.15 : Our DAUHST has been accepted by NeurIPS 2022, code and models are coming soon. 🚀
  • 2022.07.20 : Code, models, and reconstructed HSI results of CST have been released. 🔥
  • 2022.07.04 : Our paper CST has been accepted by ECCV 2022, code and models are coming soon. 🚀
  • 2022.06.14 : Code and models of MST and MST++ have been released. This repo supports 12 learning-based methods and serves as a toolbox for spectral compressive imaging. The model zoo will be enlarged. 🔥
  • 2022.05.20 : Our work DAUHST is on arxiv. 💫
  • 2022.04.02 : Further work MST++ has won the NTIRE 2022 Spectral Reconstruction Challenge. 🏆
  • 2022.03.09 : Our work CST is on arxiv. 💫
  • 2022.03.02 : Our paper MST has been accepted by CVPR 2022, code and models are coming soon. 🚀
[Reconstruction previews: Scene 2, Scene 3, Scene 4, Scene 7]

 

1. Comparison with State-of-the-art Methods

12 learning-based algorithms and 3 model-based methods are supported.

Supported algorithms:

We are going to enlarge our model zoo in the future.

[Comparison figures: MST vs. SOTA, CST vs. MST, MST++ vs. SOTA, DAUHST vs. SOTA, BiSRNet vs. SOTA BNNs]

Quantitative Comparison on Simulation Dataset

Method Params (M) FLOPS (G) PSNR SSIM Model Zoo Simulation Result Real Result
TwIST - - 23.12 0.669 - Google Drive / Baidu Disk Google Drive / Baidu Disk
GAP-TV - - 24.36 0.669 - Google Drive / Baidu Disk Google Drive / Baidu Disk
DeSCI - - 25.27 0.721 - Google Drive / Baidu Disk Google Drive / Baidu Disk
λ-Net 62.64 117.98 28.53 0.841 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
TSA-Net 44.25 110.06 31.46 0.894 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
DGSMP 3.76 646.65 32.63 0.917 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
GAP-Net 4.27 78.58 33.26 0.917 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
ADMM-Net 4.27 78.58 33.58 0.918 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
BIRNAT 4.40 2122.66 37.58 0.960 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
HDNet 2.37 154.76 34.97 0.943 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
MST-S 0.93 12.96 34.26 0.935 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
MST-M 1.50 18.07 34.94 0.943 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
MST-L 2.03 28.15 35.18 0.948 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
MST++ 1.33 19.42 35.99 0.951 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
CST-S 1.20 11.67 34.71 0.940 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
CST-M 1.36 16.91 35.31 0.947 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
CST-L 3.00 27.81 35.85 0.954 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
CST-L-Plus 3.00 40.10 36.12 0.957 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
DAUHST-2stg 1.40 18.44 36.34 0.952 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
DAUHST-3stg 2.08 27.17 37.21 0.959 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
DAUHST-5stg 3.44 44.61 37.75 0.962 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
DAUHST-9stg 6.15 79.50 38.36 0.967 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk
BiSRNet 0.036 1.18 29.76 0.837 Google Drive / Baidu Disk Google Drive / Baidu Disk Google Drive / Baidu Disk

Performance is reported on the 10 scenes of the KAIST dataset. FLOPS are measured at a test size of 256 × 256.

We also provide the RGB images of the five real scenes and ten simulation scenes for your convenience in drawing figures.

Note: access code for Baidu Disk is mst1
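For reference, the PSNR column can be reproduced from a reconstruction and its ground truth. Below is a minimal pure-Python sketch, assuming images normalized to [0, 1]; the repo itself computes the metrics with the MATLAB script cal_quality_assessment.m, so this is only illustrative:

```python
import math

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio between two same-sized images (nested lists)."""
    flat_ref = [v for row in ref for v in row]
    flat_rec = [v for row in rec for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_rec)) / len(flat_ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

The table's PSNR values are averages of this per-scene quantity over the 10 KAIST test scenes.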

 

2. Create Environment:

  • Python 3 (Anaconda is recommended)

  • NVIDIA GPU + CUDA

  • Python packages:

  pip install -r requirements.txt

 

3. Prepare Dataset:

Download cave_1024_28 (Baidu Disk, code: fo0q | One Drive), CAVE_512_28 (Baidu Disk, code: ixoe | One Drive), KAIST_CVPR2021 (Baidu Disk, code: 5mmn | One Drive), TSA_simu_data (Baidu Disk, code: efu8 | One Drive), TSA_real_data (Baidu Disk, code: eaqe | One Drive), and then put them into the corresponding folders of datasets/ and organize them in the following structure:

|--MST
    |--real
    	|-- test_code
    	|-- train_code
    |--simulation
    	|-- test_code
    	|-- train_code
    |--visualization
    |--datasets
        |--cave_1024_28
            |--scene1.mat
            |--scene2.mat
            :  
            |--scene205.mat
        |--CAVE_512_28
            |--scene1.mat
            |--scene2.mat
            :  
            |--scene30.mat
        |--KAIST_CVPR2021  
            |--1.mat
            |--2.mat
            : 
            |--30.mat
        |--TSA_simu_data  
            |--mask.mat   
            |--Truth
                |--scene01.mat
                |--scene02.mat
                : 
                |--scene10.mat
        |--TSA_real_data  
            |--mask.mat   
            |--Measurements
                |--scene1.mat
                |--scene2.mat
                : 
                |--scene5.mat

Following TSA-Net and DGSMP, we use the CAVE dataset (cave_1024_28) as the simulation training set. Both the CAVE (CAVE_512_28) and KAIST (KAIST_CVPR2021) datasets are used as the real training set.
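As background for the simulation setting, a CASSI measurement is formed by modulating each spectral band with the coded-aperture mask, shifting it along one spatial axis according to the dispersion, and summing the bands. A toy pure-Python sketch — the 2-pixel dispersion step and list-of-lists layout are illustrative assumptions, not the repo's exact code:

```python
def cassi_measurement(cube, mask, step=2):
    """Toy CASSI forward model.
    cube: list of n_bands bands, each an H x W nested list.
    mask: an H x W nested list (the coded aperture).
    Each band is masked, shifted right by step * band_index pixels,
    and accumulated into one 2-D measurement of width W + step * (n_bands - 1)."""
    n_bands, h, w = len(cube), len(cube[0]), len(cube[0][0])
    meas = [[0.0] * (w + step * (n_bands - 1)) for _ in range(h)]
    for b, band in enumerate(cube):
        off = step * b
        for i in range(h):
            for j in range(w):
                meas[i][j + off] += band[i][j] * mask[i][j]
    return meas
```

This shift-then-sum geometry is why the shifted real masks are 660 × 714 × 28 while the original mask is 660 × 660: with a 2-pixel step over 28 bands, 660 + 2 × 27 = 714.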

 

4. Simulation Experiment:

4.1 Training

cd MST/simulation/train_code/

# MST_S
python train.py --template mst_s --outf ./exp/mst_s/ --method mst_s 

# MST_M
python train.py --template mst_m --outf ./exp/mst_m/ --method mst_m  

# MST_L
python train.py --template mst_l --outf ./exp/mst_l/ --method mst_l 

# CST_S
python train.py --template cst_s --outf ./exp/cst_s/ --method cst_s 

# CST_M
python train.py --template cst_m --outf ./exp/cst_m/ --method cst_m  

# CST_L
python train.py --template cst_l --outf ./exp/cst_l/ --method cst_l

# CST_L_Plus
python train.py --template cst_l_plus --outf ./exp/cst_l_plus/ --method cst_l_plus

# GAP-Net
python train.py --template gap_net --outf ./exp/gap_net/ --method gap_net 

# ADMM-Net
python train.py --template admm_net --outf ./exp/admm_net/ --method admm_net 

# TSA-Net
python train.py --template tsa_net --outf ./exp/tsa_net/ --method tsa_net 

# HDNet
python train.py --template hdnet --outf ./exp/hdnet/ --method hdnet 

# DGSMP
python train.py --template dgsmp --outf ./exp/dgsmp/ --method dgsmp 

# BIRNAT
python train.py --template birnat --outf ./exp/birnat/ --method birnat 

# MST_Plus_Plus
python train.py --template mst_plus_plus --outf ./exp/mst_plus_plus/ --method mst_plus_plus 

# λ-Net
python train.py --template lambda_net --outf ./exp/lambda_net/ --method lambda_net

# DAUHST-2stg
python train.py --template dauhst_2stg --outf ./exp/dauhst_2stg/ --method dauhst_2stg

# DAUHST-3stg
python train.py --template dauhst_3stg --outf ./exp/dauhst_3stg/ --method dauhst_3stg

# DAUHST-5stg
python train.py --template dauhst_5stg --outf ./exp/dauhst_5stg/ --method dauhst_5stg

# DAUHST-9stg
python train.py --template dauhst_9stg --outf ./exp/dauhst_9stg/ --method dauhst_9stg

# BiSRNet
python train.py --template bisrnet --outf ./exp/bisrnet/ --method bisrnet
  • The training log, trained model, and reconstructed HSIs will be available in MST/simulation/train_code/exp/

4.2 Testing

Download the pretrained model zoo from (Google Drive / Baidu Disk, code: mst1) and place it in MST/simulation/test_code/model_zoo/

Run the following command to test the model on the simulation dataset.

cd MST/simulation/test_code/

# MST_S
python test.py --template mst_s --outf ./exp/mst_s/ --method mst_s --pretrained_model_path ./model_zoo/mst/mst_s.pth

# MST_M
python test.py --template mst_m --outf ./exp/mst_m/ --method mst_m --pretrained_model_path ./model_zoo/mst/mst_m.pth

# MST_L
python test.py --template mst_l --outf ./exp/mst_l/ --method mst_l --pretrained_model_path ./model_zoo/mst/mst_l.pth

# CST_S
python test.py --template cst_s --outf ./exp/cst_s/ --method cst_s --pretrained_model_path ./model_zoo/cst/cst_s.pth

# CST_M
python test.py --template cst_m --outf ./exp/cst_m/ --method cst_m --pretrained_model_path ./model_zoo/cst/cst_m.pth

# CST_L
python test.py --template cst_l --outf ./exp/cst_l/ --method cst_l --pretrained_model_path ./model_zoo/cst/cst_l.pth

# CST_L_Plus
python test.py --template cst_l_plus --outf ./exp/cst_l_plus/ --method cst_l_plus --pretrained_model_path ./model_zoo/cst/cst_l_plus.pth

# GAP_Net
python test.py --template gap_net --outf ./exp/gap_net/ --method gap_net --pretrained_model_path ./model_zoo/gap_net/gap_net.pth

# ADMM_Net
python test.py --template admm_net --outf ./exp/admm_net/ --method admm_net --pretrained_model_path ./model_zoo/admm_net/admm_net.pth

# TSA_Net
python test.py --template tsa_net --outf ./exp/tsa_net/ --method tsa_net --pretrained_model_path ./model_zoo/tsa_net/tsa_net.pth

# HDNet
python test.py --template hdnet --outf ./exp/hdnet/ --method hdnet --pretrained_model_path ./model_zoo/hdnet/hdnet.pth

# DGSMP
python test.py --template dgsmp --outf ./exp/dgsmp/ --method dgsmp --pretrained_model_path ./model_zoo/dgsmp/dgsmp.pth

# BIRNAT
python test.py --template birnat --outf ./exp/birnat/ --method birnat --pretrained_model_path ./model_zoo/birnat/birnat.pth

# MST_Plus_Plus
python test.py --template mst_plus_plus --outf ./exp/mst_plus_plus/ --method mst_plus_plus --pretrained_model_path ./model_zoo/mst_plus_plus/mst_plus_plus.pth

# λ-Net
python test.py --template lambda_net --outf ./exp/lambda_net/ --method lambda_net --pretrained_model_path ./model_zoo/lambda_net/lambda_net.pth

# DAUHST-2stg
python test.py --template dauhst_2stg --outf ./exp/dauhst_2stg/ --method dauhst_2stg --pretrained_model_path ./model_zoo/dauhst_2stg/dauhst_2stg.pth

# DAUHST-3stg
python test.py --template dauhst_3stg --outf ./exp/dauhst_3stg/ --method dauhst_3stg --pretrained_model_path ./model_zoo/dauhst_3stg/dauhst_3stg.pth

# DAUHST-5stg
python test.py --template dauhst_5stg --outf ./exp/dauhst_5stg/ --method dauhst_5stg --pretrained_model_path ./model_zoo/dauhst_5stg/dauhst_5stg.pth

# DAUHST-9stg
python test.py --template dauhst_9stg --outf ./exp/dauhst_9stg/ --method dauhst_9stg --pretrained_model_path ./model_zoo/dauhst_9stg/dauhst_9stg.pth

# BiSRNet
python test.py --template bisrnet --outf ./exp/bisrnet/ --method bisrnet --pretrained_model_path ./model_zoo/bisrnet/bisrnet.pth
  • The reconstructed HSIs will be output into MST/simulation/test_code/exp/. Then place the reconstructed results into MST/simulation/test_code/Quality_Metrics/results and run the following MATLAB command to calculate the PSNR and SSIM of the reconstructed HSIs.
Run cal_quality_assessment.m
  • Evaluating the Params and FLOPS of models

    We provide two functions my_summary() and my_summary_bnn() in simulation/test_code/utils.py. Use them to evaluate the parameters and FLOPS of full-precision and binarized models.

from utils import my_summary, my_summary_bnn
my_summary(MST(), 256, 256, 28, 1)
my_summary_bnn(BiSRNet(), 256, 256, 28, 1)
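As a rough sanity check on the FLOPS column, the cost of a single stride-1, same-padded convolution can be estimated by hand. This is only an order-of-magnitude sketch (counting one multiply-accumulate as 2 FLOPs; profiling tools differ in their conventions):

```python
def conv2d_flops(h, w, c_in, c_out, k):
    """Approximate FLOPs of one stride-1, same-padded k x k convolution
    on an h x w feature map: one k*k*c_in multiply-accumulate per output element."""
    return 2 * h * w * c_in * c_out * k * k
```

For a 256 × 256 input with 28 bands, a single 3 × 3 conv to 64 channels already costs about 2.1 GFLOPs, which helps put the table's per-model totals in context.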

4.3 Visualization

  • Put the reconstructed HSI in MST/visualization/simulation_results/results and rename it as method.mat, e.g., mst_s.mat.

  • Generate the RGB images of the reconstructed HSIs

 cd MST/visualization/
 Run show_simulation.m 
  • Draw the spectral density lines
cd MST/visualization/
Run show_line.m

 

5. Real Experiment:

5.1 Training

cd MST/real/train_code/

# MST_S
python train.py --template mst_s --outf ./exp/mst_s/ --method mst_s 

# MST_M
python train.py --template mst_m --outf ./exp/mst_m/ --method mst_m  

# MST_L
python train.py --template mst_l --outf ./exp/mst_l/ --method mst_l 

# CST_S
python train.py --template cst_s --outf ./exp/cst_s/ --method cst_s 

# CST_M
python train.py --template cst_m --outf ./exp/cst_m/ --method cst_m  

# CST_L
python train.py --template cst_l --outf ./exp/cst_l/ --method cst_l

# CST_L_Plus
python train.py --template cst_l_plus --outf ./exp/cst_l_plus/ --method cst_l_plus

# GAP-Net
python train.py --template gap_net --outf ./exp/gap_net/ --method gap_net 

# ADMM-Net
python train.py --template admm_net --outf ./exp/admm_net/ --method admm_net 

# TSA-Net
python train.py --template tsa_net --outf ./exp/tsa_net/ --method tsa_net 

# HDNet
python train.py --template hdnet --outf ./exp/hdnet/ --method hdnet 

# DGSMP
python train.py --template dgsmp --outf ./exp/dgsmp/ --method dgsmp 

# BIRNAT
python train.py --template birnat --outf ./exp/birnat/ --method birnat 

# MST_Plus_Plus
python train.py --template mst_plus_plus --outf ./exp/mst_plus_plus/ --method mst_plus_plus 

# λ-Net
python train.py --template lambda_net --outf ./exp/lambda_net/ --method lambda_net

# DAUHST-2stg
python train.py --template dauhst_2stg --outf ./exp/dauhst_2stg/ --method dauhst_2stg

# DAUHST-3stg
python train.py --template dauhst_3stg --outf ./exp/dauhst_3stg/ --method dauhst_3stg

# DAUHST-5stg
python train.py --template dauhst_5stg --outf ./exp/dauhst_5stg/ --method dauhst_5stg

# DAUHST-9stg
python train.py --template dauhst_9stg --outf ./exp/dauhst_9stg/ --method dauhst_9stg

# BiSRNet
python train_s.py --outf ./exp/bisrnet/ --method bisrnet
  • If you do not have a large-memory GPU, add --size 128 to use a smaller patch size.

  • The training log, trained model, and reconstructed HSIs will be available in MST/real/train_code/exp/

  • Note: you can use train_s.py for methods other than BiSRNet if you cannot access the mask data or have limited GPU resources. In this case, replace the --method parameter in the above commands and make some modifications.

5.2 Testing

The pretrained model of BiSRNet can be downloaded from (Google Drive / Baidu Disk, code: mst1); place it in MST/real/test_code/model_zoo/

cd MST/real/test_code/

# MST_S
python test.py --outf ./exp/mst_s/ --pretrained_model_path ./model_zoo/mst/mst_s.pth

# MST_M
python test.py --outf ./exp/mst_m/ --pretrained_model_path ./model_zoo/mst/mst_m.pth

# MST_L
python test.py  --outf ./exp/mst_l/ --pretrained_model_path ./model_zoo/mst/mst_l.pth

# CST_S
python test.py --outf ./exp/cst_s/ --pretrained_model_path ./model_zoo/cst/cst_s.pth

# CST_M
python test.py --outf ./exp/cst_m/ --pretrained_model_path ./model_zoo/cst/cst_m.pth

# CST_L
python test.py --outf ./exp/cst_l/ --pretrained_model_path ./model_zoo/cst/cst_l.pth

# CST_L_Plus
python test.py --outf ./exp/cst_l_plus/ --pretrained_model_path ./model_zoo/cst/cst_l_plus.pth

# GAP_Net
python test.py --outf ./exp/gap_net/ --pretrained_model_path ./model_zoo/gap_net/gap_net.pth

# ADMM_Net
python test.py --outf ./exp/admm_net/ --pretrained_model_path ./model_zoo/admm_net/admm_net.pth

# TSA_Net
python test.py --outf ./exp/tsa_net/ --pretrained_model_path ./model_zoo/tsa_net/tsa_net.pth

# HDNet
python test.py --template hdnet --outf ./exp/hdnet/ --method hdnet --pretrained_model_path ./model_zoo/hdnet/hdnet.pth

# DGSMP
python test.py --outf ./exp/dgsmp/ --pretrained_model_path ./model_zoo/dgsmp/dgsmp.pth

# BIRNAT
python test.py --outf ./exp/birnat/ --pretrained_model_path ./model_zoo/birnat/birnat.pth

# MST_Plus_Plus
python test.py --outf ./exp/mst_plus_plus/ --pretrained_model_path ./model_zoo/mst_plus_plus/mst_plus_plus.pth

# λ-Net
python test.py --outf ./exp/lambda_net/ --pretrained_model_path ./model_zoo/lambda_net/lambda_net.pth

# DAUHST_2stg
python test.py --outf ./exp/dauhst_2stg/ --pretrained_model_path ./model_zoo/dauhst/dauhst_2stg.pth

# DAUHST_3stg
python test.py --outf ./exp/dauhst_3stg/ --pretrained_model_path ./model_zoo/dauhst/dauhst_3stg.pth

# DAUHST_5stg
python test.py --outf ./exp/dauhst_5stg/ --pretrained_model_path ./model_zoo/dauhst/dauhst_5stg.pth

# DAUHST_9stg
python test.py --outf ./exp/dauhst_9stg/ --pretrained_model_path ./model_zoo/dauhst/dauhst_9stg.pth

# BiSRNet
python test.py --outf ./exp/bisrnet  --pretrained_model_path ./model_zoo/bisrnet/bisrnet.pth --method bisrnet
  • The reconstructed HSIs will be output into MST/real/test_code/exp/

5.3 Visualization

  • Put the reconstructed HSI in MST/visualization/real_results/results and rename it as method.mat, e.g., mst_plus_plus.mat.

  • Generate the RGB images of the reconstructed HSI

cd MST/visualization/
Run show_real.m

 

6. Citation

If this repo helps you, please consider citing our works:

# MST
@inproceedings{mst,
  title={Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction},
  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},
  booktitle={CVPR},
  year={2022}
}


# CST
@inproceedings{cst,
  title={Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction},
  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},
  booktitle={ECCV},
  year={2022}
}


# DAUHST
@inproceedings{dauhst,
  title={Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging},
  author={Yuanhao Cai and Jing Lin and Haoqian Wang and Xin Yuan and Henghui Ding and Yulun Zhang and Radu Timofte and Luc Van Gool},
  booktitle={NeurIPS}, 
  year={2022}
}


# BiSCI
@inproceedings{bisci,
  title={Binarized Spectral Compressive Imaging},
  author={Yuanhao Cai and Yuxin Zheng and Jing Lin and Xin Yuan and Yulun Zhang and Haoqian Wang},
  booktitle={NeurIPS},
  year={2023}
}


# MST++
@inproceedings{mst_pp,
  title={MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction},
  author={Yuanhao Cai and Jing Lin and Zudi Lin and Haoqian Wang and Yulun Zhang and Hanspeter Pfister and Radu Timofte and Luc Van Gool},
  booktitle={CVPRW},
  year={2022}
}


# HDNet
@inproceedings{hdnet,
  title={HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging},
  author={Xiaowan Hu and Yuanhao Cai and Jing Lin and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},
  booktitle={CVPR},
  year={2022}
}

mst's People

Contributors

caiyuanhao1998, linjing7

mst's Issues

Normalization question

Does the raw data need to be normalized to [0, 1] during preprocessing? The code does not seem to do this. Also, what does input = meas / 28 * 2 * 1.2 mean?

Relationship between mask_3d_shift and Phi

Hello, I'd like to ask: in the papers, most formulations are vectorized, with Phi denoting the system matrix, which actually includes the dispersion-then-summation process. In the DAUHST implementation, however, this becomes an image-level implementation: Phi becomes mask_3d_shift, and the channel summation is done by extra code.
Is this solution equivalent to the vectorized one? Why?

self.HSI

The dataset code says that in test mode hsi = self.HSI[:, :, :, index1]. Which dataset does this HSI refer to? The code does not seem to explain it.

Hello, a question

For building an RGB-based spectral reconstruction dataset, could one use a specific RGB camera with different narrow-band filters to capture per-band images of the same object and combine them into a spectral cube for training? Would training under a specific camera then improve the reconstruction of RGB images captured by that camera? In other words, besides the RGB image itself, can the reconstruction network learn the camera response function?

Minor questions about the README

[image]
Hello! Sorry to bother you again. As shown, section 5.2 Testing does not actually provide pretrained models; the models have to be trained by ourselves rather than taken from model_zoo, so should this part be revised? Commands such as python test.py --template mst_s --outf ./exp/mst_s/ --method mst_s --pretrained_model_path ./model_zoo/mst/mst_s.pth look more like the simulation test code.

Why do the test results include truth?

[image]
As shown, I understand that pred is 4-D: the results are 10 images, each containing 28 hyperspectral bands. What then is the meaning of truth?

RGB images in the papers' real experiments

Hi, I noticed that the real-experiment comparison figures in MST and other papers include corresponding RGB images. Is the original data available? Thanks.

RGB images in the dataset

Hello, do you have the RGB image corresponding to every hyperspectral image? The repo only contains the RGB images of the 10 simulation test scenes and the 5 real test scenes. Do you have the RGB image for each hyperspectral image in the training set? Thank you very much!

ADMM-Net real result issue

Hello, the link given for the ADMM-Net real result seems to point to the simulation task. Could you share the real results, including the generated .mat files?
Sorry for the trouble, and many thanks!

Test code issue

Is the test code for the real dataset wrong? It does not run correctly: test.py: error: unrecognized arguments: --template mst_l --outf ./exp/mst_l/ --method mst_l

Real-architecture

Recently, while testing CST, I found this error: these three parameters are unresolved, although they work in simulation.
[image]

Real Test Data and Masks?

README.md mentions that

Both the CAVE (CAVE_512_28) and KAIST (KAIST_CVPR2021) datasets are used as the real training set.

But there is no clue about which datasets should be put at these two paths:
https://github.com/caiyuanhao1998/MST/blob/main/real/test_code/test.py

parser.add_argument('--data_path', default='./Data/Testing_data/', type=str,help='path of data')
parser.add_argument('--mask_path', default='./Data/mask.mat', type=str,help='path of mask')

Thanks!

Code

When will the code be open-sourced?

Question about the parameters

[image]
Hello, your script has two parameters, input_setting and mask, whose values are H, HM or Y and Phi, Phi_PhiPhiT, Mask or None. I don't quite understand what these two parameters mean. Could you explain? Thank you very much!

Baidu Disk links do not open

Hello, the Baidu Disk links are not opening in my country. Can you consider putting those files on OneDrive or Google Drive?

real-test.py

When running test.py, an error occurred at out = model(input, mask_3d_shift, mask_3d_shift_s); the model used in the test was trained by me beforehand.
[image]
The system has been switched to Ubuntu. Could you give me some advice? Thanks.

Question about the test metrics

The paper's simulation test details say that 10 samples are selected from the KAIST dataset and cropped to 256×256 before being fed to the network to compute SSIM and PSNR. Since KAIST's spatial resolution is over 2000×3000, are the reported metrics the best obtained after random cropping? With random crops, the metrics would vary a lot.

Excellent work

The work is excellent, with rich results over the past two years. But now that the metrics have been pushed so high while the datasets are still the same set from 3–4 years ago, have you considered updating the datasets?

Confusion about the code

Hello, in the handling of the mask for the simulation set, I found the Phi_PhiPhiT type shown below. Could you tell me in what situation this kind of mask is used? Looking forward to your reply.

def generate_shift_masks(mask_path, batch_size):
    mask = sio.loadmat(mask_path + '/mask_3d_shift.mat')
    mask_3d_shift = mask['mask_3d_shift']
    mask_3d_shift = np.transpose(mask_3d_shift, [2, 0, 1])
    mask_3d_shift = torch.from_numpy(mask_3d_shift)
    [nC, H, W] = mask_3d_shift.shape
    Phi_batch = mask_3d_shift.expand([batch_size, nC, H, W])
    Phi_s_batch = torch.sum(Phi_batch**2, 1)
    Phi_s_batch[Phi_s_batch == 0] = 1
    return Phi_batch, Phi_s_batch

Noise injection for real data

Your paper says: "To simulate real imaging situations, we inject 11-bit shot noise into the measurements during training."
I could not find the corresponding operation in the code. Could you explain how it is implemented? Thanks!

Test code issue

[image]
Hello, following my previous issue: after changing num_worker from 8 to 0 I could train the model successfully, but testing the trained model produced the bug below, and I could not find a solution on CSDN, so I am asking you. It says a parameter is wrong, but I barely modified your code.

Question about training

[image]

Hello, when training on the real dataset, I found the datasets being loaded twice, as shown: loading CAVE 0 through loading CAVE 29, then loading KAIST, and after the KAIST dataset finishes loading, CAVE is loaded again. Do you know why this happens?
Also, after both datasets are fully loaded, this error appears: ran out of input.
[image]

Could you provide the training logs of the DAUHST model?

I am very interested in your DAUHST work and am reproducing your experiments.
Could you provide the training logs of DAUHST? It would be helpful to have them to check the correctness of the training process.

A small question

Hi, are the Measurements in TSA_real_data obtained after noise injection?

How is Figure 6 of the MST paper drawn?

Dear authors, you have produced a series of excellent works in the HSI reconstruction field and open-sourced a great toolbox; my respects. I am very interested in MST, published at CVPR 2022. Figure 6 of the paper shows a visualization illustrating the effectiveness of the proposed S-MSA mechanism, and I would like to know how this visualization is produced. Hoping for your reply, and thanks again!

About the real experiments

Hello, in the real reconstruction experiments the paper gives no quantitative comparison, only visual comparison. Is that because there is no ground truth?
[image]
Also, in the real experiments the images are cropped to 384×384 for training, and the trained model is tested on 660×714 measurements. Does this mean that the image size used for training is independent of the image size used for testing?

ValueError: invalid literal for int() with base 10: ''

MST/simulation/train_code# python3 train.py --template mst_s --outf ./exp/mst_s/ --method mst_s
training sences: 502
Traceback (most recent call last):
  File "train.py", line 25, in <module>
    train_set = LoadTraining(opt.data_path)
  File "/share/home/brcao/Repos/MST/simulation/train_code/utils.py", line 39, in LoadTraining
    scene_num = int(scene_list[i].split('.')[0][5:])
ValueError: invalid literal for int() with base 10: ''

GPU in train.py but No GPUs in Test.py

My environment:

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html

In MST/simulation/train_code:
The training script can detect my GPU, and I can run it without errors:

python train.py --template mst_s --outf ./exp/mst_s/ --method mst_s
MST/simulation/train_code$ nvidia-smi
Sat Aug 19 11:48:46 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03   Driver Version: 470.161.03   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:04:00.0 Off |                  N/A |
| 30%   44C    P8    12W / 350W |   1731MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |

However, in the same environment, when I run the test script, I got:

MST/simulation/test_code# python test.py --template mst_s --outf ./exp/mst_s/ --method mst_s --pretrained_model_path ./model_zoo/mst/mst_s.pth
Traceback (most recent call last):
  File "test.py", line 26, in <module>
    raise Exception('NO GPU!')
**Exception: NO GPU!**

Note that I have downloaded the ./model_zoo/mst/mst_s.pth.
Any help would be appreciated.

Thanks!

mask 3d shift s

While reading the code I came across mask_3d_shift_s. I understand that mask is the initial 2-D 660×660 coded aperture; mask_3d is the mask replicated 28 times into 660×660×28; and mask_3d_shift is the shifted version of size 660×714×28. mask_3d_shift_s then squares every pixel of mask_3d_shift and sums over the band dimension, giving a 660×714 map. I don't understand why mask_3d_shift_s is obtained via a squaring operation, or what its meaning is. I hope you can answer; thank you very much.
[image]

The GAP-Net results provided for the real task differ from the others

The RGB results provided for GAP-Net on the real task differ from the others:
GAP-Net clearly has black borders around the edges, while the other images fill the whole frame.
Regenerating the RGB from the provided .mat gives the same result.
Is this a problem with GAP-Net?

trained model

Hello, thanks for your excellent work. Among the 9 algorithms cited in your papers, the experimental results or source code of TwIST, GAP-TV, HSSP, DNU, and DIP-HSI cannot be found on your homepage. Could you share them? Looking forward to your reply.

About DAUHST

Hello, in the released DAUHST source code, I noticed that in the iterative stages the inputs and outputs of the two functions shift_back_3d and shift_3d appear identical. If I am not mistaken, could you explain what these two functions do?

Dataset cannot be downloaded

Hi, the OneDrive links for TSA_simu_data and TSA_real_data cannot be opened. Could you provide Baidu Cloud links? Without the mask.mat file inside, the algorithms apparently cannot run.

The visualized image differs in size from the one provided on the author's Baidu Disk

Hello, first of all thank you for this great work and for generously open-sourcing it. When running the visualization MATLAB code, the image I generate differs from the author's.
[image]
[image]
The first is the one I generated, the second the one the author provided. I would like to know why this happens; has the author already cropped the image? Hoping for your answer!

Why is the mask also randomly cropped during real training?

During real training, the hyperspectral data are cropped to 384×384×28, and the mask is also randomly cropped to 384×384, so the cropped mask is different every time; yet when we test, the mask is 660×660. Why not train with the 660×660 mask?

real and simulation dataset

Hello, I'd like to ask: the simulation code you provide shows opt.data_path = f"{opt.data_root}/cave_1024_28/", so simulation training seems to use the 1024-size CAVE dataset, while the real setting uses the 512-size CAVE dataset. This differs somewhat from what was said in a previous issue. Does the choice between these two datasets strongly affect the metrics, and how did you choose the dataset in each case? Thanks!

Question about the ADMM-Net

I found some differences between the original ADMM-Net and your implementation:

  1. The original has a sparse inverse operation to realize the 2nd equation of formula (12) in the original paper, while yours doesn't. Was any change applied to the formulas?
  2. Your implementation has multiple stages of U-Net, which do not appear in the original paper. Instead, I found that this paper uses the U-Net similarly, as a denoising network. Did you use the U-Net as an equivalent to part of the original ADMM-Net, or as something else?

HSI visualization issue

I obtained an HSI .mat file with an algorithm from the MST++ repo, then used the show_real script in the MST repo to visualize it. After changing the channels to 1–31 and extending the entries in lam28 accordingly, running it produced the error:

Index in position 3 exceeds array bounds. Index must not exceed 1.
Error in show_real (line 25)
dispCubeAshwin(recon(:,:,img_nb),intensity,lam31(img_nb), [] ,col_num,row_num,0,1,name);

The underlined parts in the image below are what I changed:
[image]

Is this workflow correct, and what else in the code needs to be changed?

How is the TSA_simu_data/mask_3d_shift.mat data obtained?

Hello, we have recently been following your DAUHST work. While reading your code, we found that it loads the mask data from 'TSA_simu_data/mask_3d_shift.mat' (shape 256×310×28).
How is this form of the mask obtained, and what is its relationship to the data in 'TSA_simu_data/mask.mat'?

About Layernorms within MSAB

class MSAB(nn.Module):
    def __init__(
            self,
            dim,
            dim_head=64,
            heads=8,
            num_blocks=2,
    ):
        super().__init__()
        self.blocks = nn.ModuleList([])
        for _ in range(num_blocks):
            self.blocks.append(nn.ModuleList([
                MS_MSA(dim=dim, dim_head=dim_head, heads=heads),
                PreNorm(dim, FeedForward(dim=dim))
            ]))

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)
        for (attn, ff) in self.blocks:
            x = attn(x) + x
            x = ff(x) + x
        out = x.permute(0, 3, 1, 2)
        return out

I wonder if there is a Layernorm missing in MSAB, since the MST paper describes "MSAB is composed of a Feed-Forward Network (FFN), an MS-MSA, and two layer normalization"

Question about the dataset

How was the data in the datasets/cave_1024_28 folder obtained? Was it produced by applying a self-designed mask to CAVE in experiments?

Wrong paper links presented in README

MST/README.md

Lines 113 to 116 in 28478e5

| [DAUHST-2stg](https://arxiv.org/abs/2203.04845) | 1.40 | 18.44 | 36.34 | 0.952 | [Google Drive](https://drive.google.com/drive/folders/1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https://drive.google.com/drive/folders/1qOrnp1crkk1z5ha56UoyOqDMFGfWlLC7?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1_RxqZQpCcYH50nxhSWeb0w?pwd=mst1) | [Google Drive](https://drive.google.com/drive/folders/1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |
| [DAUHST-3stg](https://arxiv.org/abs/2203.04845) | 2.08 | 27.17 | 37.21 | 0.959 | [Google Drive](https://drive.google.com/drive/folders/1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https://drive.google.com/drive/folders/1uwXh5JrD4rnh_xYBpF4K4wI4lcTD1j4p?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1iYtxPuf1rkFWut5UdEYqtg?pwd=mst1) | [Google Drive](https://drive.google.com/drive/folders/1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |
| [DAUHST-5stg](https://arxiv.org/abs/2203.04845) | 3.44 | 44.61 | 37.75 | 0.962 | [Google Drive](https://drive.google.com/drive/folders/1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https://drive.google.com/drive/folders/1snTVZSUsbtzjJ5lxbPbaKhpTJX28Byuh?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1k1q0Y8QPgMZhThBEfzGKzQ?pwd=mst1) | [Google Drive](https://drive.google.com/drive/folders/1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |
| [DAUHST-9stg](https://arxiv.org/abs/2203.04845) | 6.15 | 79.50 | 38.36 | 0.967 | [Google Drive](https://drive.google.com/drive/folders/1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https://drive.google.com/drive/folders/1rzZG1L-s2rYmR-wHXg9KnnGPbOIT5GaP?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/10vGcOirPk2L8sQg6uJoJkg?pwd=mst1) | [Google Drive](https://drive.google.com/drive/folders/1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) / [Baidu Disk](https://pan.baidu.com/s/1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |

Converting the test dataset to RGB images

Hello. How is the RGB Image in the top-left of Figure 4 of the DAUHST paper generated? The show_simulation script under visualization can only generate a per-channel RGB image.
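As background, a spectral cube is usually rendered as a single RGB image by weighting each band with color-matching functions and summing over the spectral axis. A simplified NumPy sketch with crude Gaussian stand-ins for the CMFs (the paper's exact weighting may differ):

```python
import numpy as np

def hsi_to_rgb(cube, wavelengths):
    """Project an (H, W, B) spectral cube to (H, W, 3) RGB using crude
    Gaussian approximations of color-matching functions."""
    centers = np.array([610.0, 550.0, 465.0])  # rough R, G, B peak wavelengths
    width = 40.0
    cmf = np.exp(-((wavelengths[:, None] - centers[None, :]) ** 2)
                 / (2 * width ** 2))           # (B, 3) band-to-channel weights
    cmf /= cmf.sum(axis=0, keepdims=True)      # normalize each channel
    rgb = cube @ cmf                           # sum over the spectral axis
    return np.clip(rgb / rgb.max(), 0.0, 1.0)

wl = np.linspace(400, 700, 31)                 # 31 bands, 400-700 nm
rgb = hsi_to_rgb(np.random.rand(64, 64, 31), wl)
print(rgb.shape)  # (64, 64, 3)
```

A proper rendering would use the CIE 1931 color-matching functions and an sRGB conversion, but the band-weighting-and-sum structure is the same.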

Real data training

When training on the real dataset, the CAVE images are 512x512, and we seem to have run into trouble while reproducing the results. Specifically, the real-experiment model requires 660x660 input images, which the 512-sized CAVE images cannot provide. If I understand correctly, was the 1024 version of CAVE used here? I would greatly appreciate a reply.
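As a sanity check on the sizes: the 1024x1024 CAVE version does admit 660x660 training crops, while the 512x512 version does not. A minimal NumPy sketch of random patch cropping under that assumption (names are illustrative):

```python
import numpy as np

def random_crop(img, size=660, rng=None):
    """Randomly crop a (size, size) spatial patch from an (H, W, C) cube."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    if h < size or w < size:
        raise ValueError(f"image {h}x{w} is smaller than the {size}x{size} crop")
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

patch = random_crop(np.zeros((1024, 1024, 28)))
print(patch.shape)  # (660, 660, 28)
```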
