chenhongruixuan / MambaCD

ChangeMamba: Remote Sensing Change Detection Based on Spatio-Temporal State Space Model

License: Apache License 2.0

Python 89.14% Shell 1.13% C++ 3.01% Cuda 6.52% C 0.20%
change-detection mamba pytorch remote-sensing spatio-temporal-modeling state-space-model

mambacd's People

Contributors

chengxihan, chenhongruixuan, jtrneo


mambacd's Issues

something about "infer_MambaBCD.py"

"Hello, I would like to use the SCD model for prediction, but I only see 'infer_MambaBCD.py' available. Will there be an update to include 'infer_MambaSCD.py'? Thank you very much."

Request for Trained Weights on xBD Dataset from ChangeMamba Project

Hello,

I am highly interested in your work on "ChangeMamba: Remote Sensing Change Detection with Spatio-Temporal State Space Model" posted on GitHub. I noticed that you have successfully trained the MambaBDA model on the xBD dataset and achieved remarkable results. I am currently engaged in related research and would greatly appreciate the opportunity to test my dataset using your trained weights.

Would it be possible for you to share the weights trained on the xBD dataset? It would be immensely beneficial for my research, and I will be sure to acknowledge and thank your work in my subsequent studies.

Thank you very much for your consideration and assistance!

Best regards,

xczhou

A question about the SYSU test set

Thank you very much for the MambaCD work. While reproducing the code, I noticed that it does not use a validation set; instead, it evaluates on the test set every fixed number of iterations. For example, the shell script specifies:
--test_dataset_path '<dataset_path>/SYSU/test'
which is the test-set path rather than a validation-set path.
In the training script:

if (itera + 1) % 10 == 0:
    print(f'iter is {itera + 1}, overall loss is {final_loss}')
    if (itera + 1) % 500 == 0:
        self.deep_model.eval()
        rec, pre, oa, f1_score, iou, kc = self.validation()
        if kc > best_kc:
            torch.save(self.deep_model.state_dict(),
                       os.path.join(self.model_save_path, f'{itera + 1}_model.pth'))
            best_kc = kc
            best_round = [rec, pre, oa, f1_score, iou, kc]
        self.deep_model.train()

print('The accuracy of the best round is ', best_round)

This looks like picking the best performance on the test set. Is the performance reported in the paper obtained in this way?
Thank you very much.
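For readers who want checkpoint selection to happen on data other than the test set, one option is to carve a validation split out of the training list and point --test_dataset_path / --test_data_list_path at it during training. A minimal sketch, assuming the data lists are plain-text files with one sample name per line as in the scripts above:

import random

# Split an existing train.txt into disjoint train/val name lists.
random.seed(0)
with open('train.txt') as f:
    names = [line.strip() for line in f if line.strip()]
random.shuffle(names)
n_val = max(1, len(names) // 10)  # hold out roughly 10% for validation
with open('val.txt', 'w') as f:
    f.write('\n'.join(names[:n_val]) + '\n')
with open('train_split.txt', 'w') as f:
    f.write('\n'.join(names[n_val:]) + '\n')

The real test set would then be evaluated only once, with the checkpoint chosen on the validation split.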

make_data_loader.py

Hello, when I run only the BDA task: in the xBD dataset I downloaded, the train folder contains only the three subfolders images, targets, and labels. I then tried to convert it to the format expected by make_data_loader.py from your paper, but it always fails. How did you change this part of the code? Is there any modified code you could provide? Thank you very much!
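One recurring stumbling block with xBD is producing the name list (train.txt) that the loaders read. A hedged sketch of generating it from the images folder, assuming the loader expects one sample stem per line derived from the pre-disaster image file names (check make_data_loader.py for the exact convention):

import os

image_dir = '/path/to/xBD/train/images'  # hypothetical path
stems = sorted(
    fn[:-len('_pre_disaster.png')]       # e.g. 'hurricane-florence_00000263'
    for fn in os.listdir(image_dir)
    if fn.endswith('_pre_disaster.png')
)
with open('/path/to/xBD/train/train.txt', 'w') as f:
    f.write('\n'.join(stems) + '\n')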

MambaSCD eval is NAN

Training Dataset: SECOND
Environment:

  • Python: 3.12.3
  • torch: 2.2.2+cu118

The first issue was that GT_CD was not correct, but I found that the GT_CD data is not used to calculate the evaluation score. My evaluation result is shown below:
[screenshot of the evaluation output omitted]
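NaN values in evaluation metrics like these usually come from a zero denominator, for example a class that never appears in the evaluated ground truth or is never predicted. A small, self-contained illustration in NumPy (not the project's own metric code):

import numpy as np

# Confusion matrix: rows = ground-truth classes, columns = predicted classes.
cm = np.array([[50, 0, 0],
               [ 5, 0, 0],   # class 1 occurs but is never predicted
               [ 0, 0, 0]])  # class 2 never occurs at all

with np.errstate(invalid='ignore', divide='ignore'):
    precision = np.diag(cm) / cm.sum(axis=0)  # nan where a class is never predicted
    recall    = np.diag(cm) / cm.sum(axis=1)  # nan where a class never occurs

print(precision, recall)  # [0.909 nan nan] [1. 0. nan]

If the evaluated SECOND split leaves some semantic-change class empty, averaging such per-class scores yields NaN; excluding empty classes or applying np.nan_to_num to the per-class values avoids it.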

I have a bug: NameError: name 'selective_scan_cuda_oflex' is not defined. Can you help me solve it?

#!/bin/bash

python /home/ubuntu20/Desktop/MambaCD1/MambaCD/script/train_MambaBCD.py \
    --dataset 'LEVIR-CD+' \
    --batch_size 16 \
    --crop_size 256 \
    --max_iters 320000 \
    --model_type MambaBCD_Small \
    --model_param_path '/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/saved_models' \
    --train_dataset_path '/home/ubuntu20/Desktop/train' \
    --train_data_list_path '/home/ubuntu20/Desktop/train/train.txt' \
    --test_dataset_path '/home/ubuntu20/Desktop/train' \
    --test_data_list_path '/home/ubuntu20/Desktop/train/train.txt' \
    --cfg '/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/configs/vssm1/vssm_small_224.yaml' \
    --pretrained_weight_path '/home/ubuntu20/Desktop/MambaCD1/MambaCD/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth'

This is my main file

(mamba) ubuntu20@ubuntu20-System-Product-Name:/Desktop/MambaCD1/MambaCD/changedetection$ sh main.sh
=> merge config from /home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/configs/vssm1/vssm_small_224.yaml
Successfully load ckpt /home/ubuntu20/Desktop/MambaCD1/MambaCD/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth
_IncompatibleKeys(missing_keys=['outnorm0.weight', 'outnorm0.bias', 'outnorm1.weight', 'outnorm1.bias', 'outnorm2.weight', 'outnorm2.bias', 'outnorm3.weight', 'outnorm3.bias'], unexpected_keys=['classifier.norm.weight', 'classifier.norm.bias', 'classifier.head.weight', 'classifier.head.bias'])
0%| | 0/20000 [00:16<?, ?it/s]
Traceback (most recent call last):
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/script/train_MambaBCD.py", line 207, in
main()
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/script/train_MambaBCD.py", line 203, in main
trainer.training()
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/script/train_MambaBCD.py", line 104, in training
output_1 = self.deep_model(pre_change_imgs, post_change_imgs)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/models/MambaBCD.py", line 67, in forward
pre_features = self.encoder(pre_data)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/models/Mamba_backbone.py", line 50, in forward
o, x = layer_forward(layer, x) # (B, H, W, C)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/models/Mamba_backbone.py", line 43, in layer_forward
x = l.blocks(x)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 1360, in forward
return self._forward(input)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 1348, in _forward
x = input + self.drop_path(self.op(self.norm(input)))
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 1147, in forwardv2
y = self.forward_core(x)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 1124, in forward_corev2
return cross_selective_scan(
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 406, in cross_selective_scan
ys: torch.Tensor = selective_scan(
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 372, in selective_scan
return SelectiveScan.apply(u, delta, A, B, C, D, delta_bias, delta_softplus, nrows, backnrows, ssoflex)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 110, in decorate_fwd
return fwd(*args, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 299, in forward
out, x, *rest = selective_scan_cuda_oflex.fwd(u, delta, A, B, C, D, delta_bias, delta_softplus, 1, oflex)
NameError: name 'selective_scan_cuda_oflex' is not defined
(mamba) ubuntu20@ubuntu20-System-Product-Name:/Desktop/MambaCD1/MambaCD/changedetection$
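This NameError usually means the compiled selective_scan CUDA extension failed to build or to import, so vmamba.py reaches the selective-scan call with the module name undefined. A hedged way to confirm this from the same conda environment (the rebuild path assumes the kernels/selective_scan directory used during installation):

# Run inside the 'mamba' environment.
try:
    import selective_scan_cuda_oflex  # built by `pip install .` in kernels/selective_scan
    print('selective_scan_cuda_oflex imported OK')
except ImportError as err:
    print('extension not available:', err)
    print('rebuild it, e.g.: cd kernels/selective_scan && pip install .')

If the import fails, rebuilding the extension against the same CUDA and PyTorch versions as the environment typically resolves the NameError.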

dataset category

Hello,
I'm very interested in your work, and I have some questions after reading the code. Regarding the dataset categories used in the article: the ADE dataset has 150 categories, and I see that the number of categories in the code is also 150. When using the code on my own data, do I need to change the category type and number anywhere else?
Thanks!

CUDA version requirements

hello!

First of all, I'd like to express my sincere thanks for your great work on the MambaCD project. This project has been of great help to my research and work, for which I am very appreciative and grateful.

While trying to install and use the project, I noticed that the installation process requires a CUDA version of at least 11.6. Due to hardware and other dependency limitations, the CUDA version installed in my environment can only be up to 11.4.

Therefore, I would like to ask a few questions:

  1. Does the project absolutely require CUDA 11.6 and above to run properly?
  2. Do you have a suggested solution or alternative for environments that only support CUDA 11.4? For example, is it possible to modify the configuration or source code to be compatible with CUDA 11.4?
  3. If the code needs to be adjusted to support CUDA 11.4, can you provide some advice or guidance to help me achieve this?

I'd really like to have this work successfully in my environment. Any advice and help on how to resolve this compatibility issue would be extremely valuable and appreciated.

Very much looking forward to your reply!

best wishes!
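For reference, the CUDA version that matters here is the toolkit version the installed PyTorch wheel was built against, plus the local toolkit headers used to compile the selective_scan extension. A quick, generic way to inspect the environment from Python (a hedged check, nothing project-specific):

import torch

print('torch version:          ', torch.__version__)
print('built against CUDA:     ', torch.version.cuda)      # e.g. '11.8' for a cu118 wheel
print('CUDA runtime available: ', torch.cuda.is_available())
if torch.cuda.is_available():
    print('device:             ', torch.cuda.get_device_name(0))

If the driver only supports CUDA 11.4, the usual route is a PyTorch wheel built for a matching toolkit and building selective_scan against that same toolkit; whether 11.4 is new enough for this extension is something the authors would need to confirm.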

Could not build wheels for selective_scan, which is required to install pyproject.toml-based projects

Hello, I encountered the following error while trying to install. Could you help me resolve it?
Processing /home/hhy/下载/VMamba-main/kernels/selective_scan
Preparing metadata (setup.py) ... done
Requirement already satisfied: torch in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from selective_scan==0.0.2) (2.1.1+cu118)
Requirement already satisfied: packaging in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from selective_scan==0.0.2) (23.2)
Requirement already satisfied: ninja in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from selective_scan==0.0.2) (1.11.1.1)
Requirement already satisfied: einops in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from selective_scan==0.0.2) (0.7.0)
Requirement already satisfied: filelock in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (3.9.0)
Requirement already satisfied: typing-extensions in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (4.8.0)
Requirement already satisfied: sympy in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (1.12)
Requirement already satisfied: networkx in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (3.2.1)
Requirement already satisfied: jinja2 in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (3.1.2)
Requirement already satisfied: fsspec in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (2024.3.1)
Requirement already satisfied: triton==2.1.0 in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (2.1.0)
Requirement already satisfied: MarkupSafe>=2.0 in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from jinja2->torch->selective_scan==0.0.2) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from sympy->torch->selective_scan==0.0.2) (1.3.0)
Building wheels for collected packages: selective_scan
Building wheel for selective_scan (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [118 lines of output]

  torch.__version__  = 2.1.1+cu118
  
  
  
  
  CUDA_HOME = /home/hhy/anaconda3/envs/mamba
  
  
  running bdist_wheel
  running build
  running build_ext
  /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py:424: UserWarning: There are no g++ version bounds defined for CUDA version 11.8
    warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
  building 'selective_scan_cuda_core' extension
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus
  Emitting ninja build file /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/3] c++ -MMD -MF '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o'.d -pthread -B /home/hhy/anaconda3/envs/mamba/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/hhy/anaconda3/envs/mamba/include -fPIC -O2 -isystem /home/hhy/anaconda3/envs/mamba/include -fPIC '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan.cpp' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o' -O3 -std=c++17 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  FAILED: /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o
  c++ -MMD -MF '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o'.d -pthread -B /home/hhy/anaconda3/envs/mamba/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/hhy/anaconda3/envs/mamba/include -fPIC -O2 -isystem /home/hhy/anaconda3/envs/mamba/include -fPIC '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan.cpp' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o' -O3 -std=c++17 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  In file included from /home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan.cpp:5:
  /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/ATen/cuda/CUDAContext.h:5:10: fatal error: cuda_runtime_api.h: No such file or directory
      5 | #include <cuda_runtime_api.h>
        |          ^~~~~~~~~~~~~~~~~~~~
  compilation terminated.
  [2/3] /home/hhy/anaconda3/envs/mamba/bin/nvcc  '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan_core_fwd.cu' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_fwd.o' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math --ptxas-options=-v -lineinfo -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  FAILED: /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_fwd.o
  /home/hhy/anaconda3/envs/mamba/bin/nvcc  '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan_core_fwd.cu' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_fwd.o' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math --ptxas-options=-v -lineinfo -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  [3/3] /home/hhy/anaconda3/envs/mamba/bin/nvcc  '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan_core_bwd.cu' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_bwd.o' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math --ptxas-options=-v -lineinfo -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  FAILED: /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_bwd.o
  /home/hhy/anaconda3/envs/mamba/bin/nvcc  '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan_core_bwd.cu' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_bwd.o' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math --ptxas-options=-v -lineinfo -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2100, in _run_ninja_build
      subprocess.run(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/subprocess.py", line 526, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
  
  The above exception was the direct cause of the following exception:
  
  Traceback (most recent call last):
    File "<string>", line 2, in <module>
    File "<pip-setuptools-caller>", line 34, in <module>
    File "/home/hhy/下载/VMamba-main/kernels/selective_scan/setup.py", line 140, in <module>
      setup(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/__init__.py", line 104, in setup
      return distutils.core.setup(**attrs)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 184, in setup
      return run_commands(dist)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 200, in run_commands
      dist.run_commands()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
      self.run_command(cmd)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/dist.py", line 967, in run_command
      super().run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 364, in run
      self.run_command("build")
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
      self.distribution.run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/dist.py", line 967, in run_command
      super().run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 132, in run
      self.run_command(cmd_name)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
      self.distribution.run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/dist.py", line 967, in run_command
      super().run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 91, in run
      _build_ext.run(self)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
      self.build_extensions()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 873, in build_extensions
      build_ext.build_extensions(self)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 479, in build_extensions
      self._build_extensions_serial()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 505, in _build_extensions_serial
      self.build_extension(ext)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 252, in build_extension
      _build_ext.build_extension(self, ext)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 560, in build_extension
      objects = self.compiler.compile(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 686, in unix_wrap_ninja_compile
      _write_ninja_file_and_compile_objects(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1774, in _write_ninja_file_and_compile_objects
      _run_ninja_build(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2116, in _run_ninja_build
      raise RuntimeError(message) from e
  RuntimeError: Error compiling objects for extension
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for selective_scan
Running setup.py clean for selective_scan
Failed to build selective_scan
ERROR: Could not build wheels for selective_scan, which is required to install pyproject.toml-based projects
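The key lines in the log are CUDA_HOME = /home/hhy/anaconda3/envs/mamba together with "fatal error: cuda_runtime.h: No such file or directory": the conda environment provides nvcc but not the CUDA runtime headers, so the extension cannot compile. A hedged way to check whether the toolkit being used actually ships the headers before rebuilding (paths are examples, not prescriptions):

import os

# Check whether the toolkit pointed to by CUDA_HOME contains the runtime headers.
cuda_home = os.environ.get('CUDA_HOME', '/usr/local/cuda')
header = os.path.join(cuda_home, 'include', 'cuda_runtime_api.h')
print('CUDA_HOME =', cuda_home)
print('cuda_runtime_api.h present:', os.path.exists(header))
# If this prints False, point CUDA_HOME at a complete CUDA toolkit installation
# (matching the cu118 PyTorch build) and re-run `pip install .` in kernels/selective_scan.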

I encountered a problem!

Hello, author. When I run the BCD code, I encounter an error: torch.cuda.OutOfMemoryError: CUDA out of memory. So, I wanted to ask how much GPU memory is required to run this part of the code?
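The exact requirement depends on model size, batch size, and crop size. As a rough self-check, the GPU's total memory can be compared against what the run allocates; a small, generic snippet (not part of the repo):

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f'{props.name}: {props.total_memory / 1024**3:.1f} GB total')
    print(f'currently allocated: {torch.cuda.memory_allocated(0) / 1024**3:.1f} GB')
# If memory runs out, lowering --batch_size and/or --crop_size in the training
# script is the usual first remedy.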

FileNotFoundError: No such file: '/media/hhy/Ventoy/xbd/train/images/hurricane-florence_00000263_pre_disaster_pre_disaster.png.png'

Hello, I am trying to run the BDA task with train_MambaBDA.py and the following error occurred. I thought it was a problem with the dataset, but I downloaded the xBD dataset again and still hit the same problem. Can you help me with this? In addition, I noticed that the hold-out data, which is not needed for the BDA task, seems to be used there?
I also encountered the same problem when running train_MambaSCD.py.

parser.add_argument('--pretrained_weight_path', type=str, default='/home/hhy/下载/MambaCD-master/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth')

parser.add_argument('--dataset', type=str, default='xBD')
parser.add_argument('--type', type=str, default='train')
parser.add_argument('--train_dataset_path', type=str, default='/media/hhy/Ventoy/xbd/train')
parser.add_argument('--train_data_list_path', type=str, default='/media/hhy/Ventoy/xbd/train/train.txt')
parser.add_argument('--test_dataset_path', type=str, default='/media/hhy/Ventoy/xbd/test')
parser.add_argument('--test_data_list_path', type=str, default='/media/hhy/Ventoy/xbd/test/test.txt')
parser.add_argument('--shuffle', type=bool, default=True)
parser.add_argument('--batch_size', type=int, default=4)
parser.add_argument('--crop_size', type=int, default=256)
parser.add_argument('--train_data_name_list', type=list)
parser.add_argument('--test_data_name_list', type=list)
parser.add_argument('--start_iter', type=int, default=0)
parser.add_argument('--cuda', type=bool, default=True)
parser.add_argument('--max_iters', type=int, default=800000)
parser.add_argument('--model_type', type=str, default='bda——small')
parser.add_argument('--model_param_path', type=str, default='../saved_models')

parser.add_argument('--resume', type=str)
parser.add_argument('--learning_rate', type=float, default=1e-4)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--weight_decay', type=float, default=5e-3)

False
0%| | 0/200000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/hhy/下载/MambaCD-master/changedetection/script/train_MambaBDA.py", line 235, in
main()
File "/home/hhy/下载/MambaCD-master/changedetection/script/train_MambaBDA.py", line 231, in main
trainer.training()
File "/home/hhy/下载/MambaCD-master/changedetection/script/train_MambaBDA.py", line 97, in training
itera, data = train_enumerator.__next__()
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 652, in __next__
data = self._next_data()
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1347, in _next_data
return self._process_data(data)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1373, in _process_data
data.reraise()
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/_utils.py", line 461, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hhy/下载/MambaCD-master/changedetection/datasets/make_data_loader.py", line 189, in getitem
pre_img = self.loader(pre_path)
File "/home/hhy/下载/MambaCD-master/changedetection/datasets/make_data_loader.py", line 14, in img_loader
img = np.array(imageio.imread(path), np.float32)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/init.py", line 97, in imread
return imread_v2(uri, format=format, **kwargs)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/v2.py", line 359, in imread
with imopen(uri, "ri", **imopen_args) as file:
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/core/imopen.py", line 113, in imopen
request = Request(uri, io_mode, format_hint=format_hint, extension=extension)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/core/request.py", line 247, in init
self._parse_uri(uri)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/core/request.py", line 407, in _parse_uri
raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: '/media/hhy/Ventoy/xbd/train/images/hurricane-florence_00000263_pre_disaster_pre_disaster.png.png'
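The missing path ends in "_pre_disaster_pre_disaster.png.png", i.e. the suffix and extension appear twice, which suggests the entries in train.txt already contain "_pre_disaster.png" while the data loader appends that part again when composing the path. A hedged sanity check (the "_pre_disaster.png" convention is an assumption; compare with how make_data_loader.py builds the pre-image path):

import os

root = '/media/hhy/Ventoy/xbd/train'
with open(os.path.join(root, 'train.txt')) as f:
    names = [line.strip() for line in f if line.strip()]
# Entries should probably be bare stems like 'hurricane-florence_00000263',
# so that loader-side suffixing does not double up.
suffixed = [n for n in names if n.endswith('.png') or '_pre_disaster' in n]
print(f'{len(suffixed)} of {len(names)} entries already carry a suffix, e.g.', suffixed[:3])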

MambaBDA-Tiny on xBD

Hi! Congratulations on your great work! I am wondering what happened to the MambaBDA-Tiny pretrained weights on the xBD dataset?

dataset

A question about the dataset: the original release of the SCD dataset SECOND does not include the GT_CD data. Do we need to generate it ourselves?
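If GT_CD does need to be derived, one common recipe is to mark a pixel as changed wherever the semantic label maps indicate change. A hedged sketch, assuming single-channel label maps in folders named GT_T1/GT_T2 where 0 is the no-change class (the folder names and background index are assumptions; check the repo's data loader for the real convention):

import os
import numpy as np
import imageio.v2 as imageio

root = '/path/to/SECOND/train'  # hypothetical path
os.makedirs(os.path.join(root, 'GT_CD'), exist_ok=True)
for name in sorted(os.listdir(os.path.join(root, 'GT_T1'))):
    t1 = np.array(imageio.imread(os.path.join(root, 'GT_T1', name)))
    t2 = np.array(imageio.imread(os.path.join(root, 'GT_T2', name)))
    changed = ((t1 != 0) | (t2 != 0)).astype(np.uint8) * 255  # binary change mask
    imageio.imwrite(os.path.join(root, 'GT_CD', name), changed)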
