pnlbwh / CNN-Diffusion-MRIBrain-Segmentation
CNN based brain masking
License: Other
The following request has been made:
The percentile of image intensity value to be used as a threshold for normalizing a b0 image to [0,1]
Fixed by faeeca3
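For context, percentile-based normalization to [0,1] typically looks like the sketch below; the percentile value and function name are illustrative, not necessarily what dwi_masking.py uses.
import numpy as np

def normalize_b0(data, percentile=99):
    # clip at the chosen intensity percentile, then scale to [0, 1]
    thresh = np.percentile(data, percentile)
    clipped = np.clip(data, 0, thresh)
    return clipped / thresh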
Are you checking whether the # of gradients in the sizes field equals the number of gradient directions in the header?
import subprocess
import sys

# 4th value of the 'sizes' field (field $5 after the 'sizes:' keyword)
bashCommand1 = ("unu head " + input_file + " | grep -i sizes | awk '{print $5}'")
# number of DWMRI_gradient_ lines in the header
bashCommand2 = ("unu head " + input_file + " | grep -i _gradient_ | wc -l")
output1 = subprocess.check_output(bashCommand1, shell=True)
output2 = subprocess.check_output(bashCommand2, shell=True)
if output1.strip():
    header_gradient = int(output1.decode(sys.stdout.encoding))
    total_gradient = int(output2.decode(sys.stdout.encoding))
Should you not instead be checking whether the # of gradients in the image equals the # of gradients in the sizes field?
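If the goal is to compare the image against the header, a pynrrd-based check could look like the sketch below. It assumes the gradient axis is the last entry of sizes and that gradient directions are stored as DWMRI_gradient_ keys; this is an illustration, not the pipeline's actual code.
import nrrd

header = nrrd.read_header(input_file)
n_volumes = int(header['sizes'][-1])   # volumes along the last axis
n_gradients = sum(1 for key in header if key.startswith('DWMRI_gradient_'))
if n_volumes != n_gradients:
    print(f'Mismatch: {n_volumes} volumes vs {n_gradients} gradient entries')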
@senthilcaesar , I think the training part is not properly streamlined with the prediction part. I trained a model and then tried to predict from that model; however, I ran into the following error:
Saving data to disk...
Pre-Processing Time Taken : 1.7 min
Loading sagittal model from disk...
Traceback (most recent call last):
File "../pipeline/dwi_masking.py", line 713, in <module>
dwi_mask_sagittal = predict_mask(cases_file_s, trained_model_folder, view='sagittal')
File "../pipeline/dwi_masking.py", line 139, in predict_mask
loaded_model.load_weights(trained_folder + '/weights-' + view + '-improvement-' + optimal + '.h5')
File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/keras/engine/network.py", line 1157, in load_weights
with h5py.File(filepath, mode='r') as f:
File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/h5py/_hl/files.py", line 408, in __init__
swmr=swmr)
File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/h5py/_hl/files.py", line 173, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = 'data/model_folder_test/weights-sagittal-improvement-09.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
(dmri_seg) [tb571@pnl-oracle tests]$ ls ../model_folder/
CompNetBasicModel.json weights-axial-improvement-08.h5 weights-sagittal-improvement-09.h5
IITmean_b0_256.nii.gz weights-coronal-improvement-08.h5
(dmri_seg) [tb571@pnl-oracle tests]$ ls data/model_folder_test/
CompNetBasicModel.json IITmean_b0_256.nii.gz sagittal-compnet_final_weight.h5 weights-sagittal-improvement-01.h5
The difference in the number of files above tells us something is incomplete: the newly trained folder lacks the axial and coronal weight files, and the sagittal weight file it does have is not the one prediction looks for.
I streamlined the testing process for this software. Please use my process for testing your commit. Create a new branch like below:
ssh pnl-maxwell or pnl-oracle or grx03
source /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/train_env.sh
# I trust you to know /path/to/
cd /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/
git checkout -b fix-training
cd /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/tests
./pipeline_test.sh
You may look into the /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/tests/data folder for details.
Please fix it.
python pipeline/dwi_masking.py -i /tmp/cnn_dwi_list.txt -f model_folder/ -nproc 16 -ref model_folder/IITmean_b0_256.nii.gz
Using TensorFlow backend.
File exist
Extracting b0 volume...
Performing ants rigid body transformation...
Normalizing input data
Case completed = 0
Merging npy files...
Saving data to disk...
Pre-Processing Time Taken : 0.68 min
Loading sagittal model from disk...
WARNING:tensorflow:From /tmp/miniconda3/envs/tf/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
256/256 [==============================] - 87s 340ms/step
Loading coronal model from disk...
256/256 [==============================] - 88s 342ms/step
Loading axial model from disk...
256/256 [==============================] - 88s 342ms/step
Masking Time Taken : 4.87 min
Splitting files....
Performing Muti View Aggregation...
Converting file format...
Performing ants inverse transform...
Traceback (most recent call last):
File "pipeline/dwi_masking.py", line 717, in <module>
omat=list(omat_list[i].split()))
File "pipeline/dwi_masking.py", line 303, in npy_to_nhdr
process = subprocess.Popen(mask_filter.split(), stdout=subprocess.PIPE)
File "/tmp/miniconda3/envs/tf/lib/python3.6/subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "/tmp/miniconda3/envs/tf/lib/python3.6/subprocess.py", line 1344, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'maskfilter': 'maskfilter'
It probably only occurs with environment_gpu.yml
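maskfilter comes from MRtrix3, so a missing-binary guard before spawning it would make the failure clearer; a minimal sketch (the message text is mine):
from shutil import which
import sys

if which('maskfilter') is None:
    sys.exit('maskfilter not found in PATH; please install MRtrix3 or add it to PATH')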
The following error occurs when launching ipython in the dmri_seg environment:
(dmri_seg) [tb571@grx06 cnn-test]$ ipython
Traceback (most recent call last):
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/application/current.py", line 6, in <module>
from contextvars import ContextVar
ModuleNotFoundError: No module named 'contextvars'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/ipython", line 5, in <module>
from IPython import start_ipython
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/IPython/__init__.py", line 56, in <module>
from .terminal.embed import embed
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/IPython/terminal/embed.py", line 16, in <module>
from IPython.terminal.interactiveshell import TerminalInteractiveShell
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/IPython/terminal/interactiveshell.py", line 19, in <module>
from prompt_toolkit.enums import DEFAULT_BUFFER, EditingMode
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/__init__.py", line 16, in <module>
from .application import Application
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/application/__init__.py", line 1, in <module>
from .application import Application
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/application/application.py", line 38, in <module>
from prompt_toolkit.buffer import Buffer
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/buffer.py", line 28, in <module>
from .application.current import get_app
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/application/current.py", line 8, in <module>
from prompt_toolkit.eventloop.dummy_contextvars import ContextVar # type: ignore
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/eventloop/__init__.py", line 1, in <module>
from .async_generator import generator_to_async_generator
File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/eventloop/async_generator.py", line 5, in <module>
from typing import AsyncGenerator, Callable, Iterable, TypeVar, Union
ImportError: cannot import name 'AsyncGenerator'
pip install contextvars
pip install --upgrade prompt-toolkit==2.0.1
@senthilcaesar , I think an assumption was made here that does not hold, so the following issue was not considered:
This block is not connected to args.cr:
CNN-Diffusion-MRIBrain-Segmentation/pipeline/dwi_masking.py
Lines 627 to 675 in 8caf97f
Hence, it spawns as many processes as the number of cases in caselist.txt. This will blow out the CPU. It was not discovered so far because the test pipeline included in this project has only three cases. Tashrif discovered it while trying to utilize its inherent looping facility over 220 cases. All 220 cases spawned uncontrollably!!
It needs to be fixed using standard multiprocessing code, e.g. https://github.com/pnlbwh/TBSS/blob/2fb1bc776b07013b450c24057db876486fc4bb2b/lib/tbss_single.py#L112-L121 (see the sketch below).
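A minimal sketch of what capping concurrency with args.cr could look like, assuming a per-case worker function process_case and a case_list (both names are illustrative, not the actual API in dwi_masking.py):
from multiprocessing import Pool

pool = Pool(processes=args.cr)          # never more than args.cr workers at a time
for case in case_list:
    pool.apply_async(process_case, (case,))
pool.close()
pool.join()                             # wait for all cases to finish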
Follow up from work group on 1/29/2020:
@tashrifbillah suggested building the packages in requirements.txt against Python 3.
Unused packages such as:
opencv-python>=3.4.1.15
nilearn>=0.5.0
can be removed.
If there are only a few Keras incompatibilities of the current software against Python 3, we can work on removing them. If there are many (unlikely), we shall devise a plan.
The following files are common for all users:
axial-binary-dwi
coronal-binary-dwi
sagittal-binary-dwi
That means user A would have to overwrite user B's content, which would result in a permission issue.
Let's meet and talk about fixing all such problems in the software.
Should it be:
For CPU:
For GPU:
For both:
Hi @senthilcaesar ,
All the scripts have redundant and repeated imports.
For example, see pipeline/dwi_masking.py:
import os.path
from os import path
Not only does the duplication exist, the above block is also repeated. Similar repetition exists for other libraries as well. A deduplicated header is sketched below.
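Something along these lines, keeping each import once (a sketch; the exact set of modules dwi_masking.py needs may differ):
from os import path
import subprocess
import sys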
To fix this, please create a new branch clean-imports from remove-ext-dep, and then make changes in the former. I shall review your changes and merge them with remove-ext-dep. Please don't edit the remove-ext-dep branch.
All the memory leak errors we got in the cluster so far should be due to the following numpy version:
Tashrif's independent CNN design found the above.
More investigations to follow ...
Error from https://github.com/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/blob/solve-memory-leak/environment_gpu.yml :
(dmri_seg_1) [tb571@grx06 envs]$ ipython
Traceback (most recent call last):
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/application/current.py", line 6, in <module>
from contextvars import ContextVar
ModuleNotFoundError: No module named 'contextvars'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/bin/ipython", line 5, in <module>
from IPython import start_ipython
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/IPython/__init__.py", line 56, in <module>
from .terminal.embed import embed
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/IPython/terminal/embed.py", line 16, in <module>
from IPython.terminal.interactiveshell import TerminalInteractiveShell
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/IPython/terminal/interactiveshell.py", line 19, in <module>
from prompt_toolkit.enums import DEFAULT_BUFFER, EditingMode
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/__init__.py", line 16, in <module>
from .application import Application
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/application/__init__.py", line 1, in <module>
from .application import Application
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/application/application.py", line 38, in <module>
from prompt_toolkit.buffer import Buffer
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/buffer.py", line 28, in <module>
from .application.current import get_app
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/application/current.py", line 8, in <module>
from prompt_toolkit.eventloop.dummy_contextvars import ContextVar # type: ignore
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/eventloop/__init__.py", line 1, in <module>
from .async_generator import generator_to_async_generator
File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/eventloop/async_generator.py", line 5, in <module>
from typing import AsyncGenerator, Callable, Iterable, TypeVar, Union
ImportError: cannot import name 'AsyncGenerator'
pip install tensorflow==1.12.0
Could not find a version that satisfies the requirement tensorflow==1.12.0 (from versions: 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 2.0.0a0, 2.0.0b0, 2.0.0b1)
No matching distribution found for tensorflow==1.12.0
More info (TensorFlow 1.12.0 wheels were likely never published for Python 3.7, which this environment uses):
python -V
Python 3.7.1
Follow up from work group on 1/29/2020:
A few external dependencies can be removed readily:
(i) unu (nrrd library) - to check the nhdr/nrrd image header; no longer required given our development with pynrrd
(ii) ResampleImage - to resample the input image
(iii) bse.sh - to extract b0 from an nhdr/nrrd image, and dwiextract - to extract b0 from a NIfTI image; nifti_write.py should be used and then the b0 should be extracted using pnlNipype/scripts/bse.py (see the sketch after this list)
(iv) ConvertBetweenFileFormats - to convert nhdr/nrrd to NIfTI files
(v) FSL FLIRT - to perform image registration
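Regarding (iii), a nibabel-based b0 extraction could replace dwiextract for NIfTI inputs; a sketch under the assumptions that a bval file accompanies the DWI and that b-values below 50 count as b0 (threshold and names are illustrative):
import numpy as np
import nibabel as nib

def extract_b0(dwi_file, bval_file, out_file, b0_thresh=50):
    # average all volumes whose b-value falls below the threshold
    bvals = np.loadtxt(bval_file)
    img = nib.load(dwi_file)
    data = img.get_fdata()
    b0 = data[..., bvals < b0_thresh].mean(axis=-1)
    nib.save(nib.Nifti1Image(b0, img.affine), out_file)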
A gcc-related error occurs on the GPU machines grx03 and pnl-maxwell when psutil is installed as part of the conda environment inside the - pip: block. They both have gcc -V:
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
The reason for this failure is not fully clear, though the log below shows the final link step falling back to the system gcc, which does not recognize the -fno-plt flag that the conda compiler settings inject:
creating build/temp.linux-x86_64-3.6/psutil
/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/x86_64-conda_cos6-linux-gnu-cc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=570 -DPSUTIL_LINUX=1 -I/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/x86_64-conda_cos6-linux-gnu-cc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=570 -DPSUTIL_LINUX=1 -I/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include/python3.6m -c psutil/_psutil_posix.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_posix.o
/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/x86_64-conda_cos6-linux-gnu-cc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=570 -DPSUTIL_LINUX=1 -I/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include/python3.6m -c psutil/_psutil_linux.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_linux.o
psutil/_psutil_linux.c: In function 'PyInit__psutil_linux':
psutil/_psutil_linux.c:612:15: warning: unused variable 'v' [-Wunused-variable]
PyObject *v;
^
gcc -pthread -shared -L/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -Wl,-rpath=/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib,--no-as-needed -L/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -Wl,-rpath=/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib,--no-as-needed -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,-rpath,/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -Wl,-rpath-link,/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -L/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include build/temp.linux-x86_64-3.6/psutil/_psutil_common.o build/temp.linux-x86_64-3.6/psutil/_psutil_posix.o build/temp.linux-x86_64-3.6/psutil/_psutil_linux.o -L/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -lpython3.6m -o build/lib.linux-x86_64-3.6/psutil/_psutil_linux.cpython-36m-x86_64-linux-gnu.so
gcc: error: unrecognized command line option ‘-fno-plt’
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-aykwzsnf/psutil/setup.py'"'"'; __file__='"'"'/tmp/pip-install-aykwzsnf/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-2yo77_bn/install-record.txt --single-version-externally-managed --compile --install-headers /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include/python3.6m/psutil Check the logs for full command output.
CondaEnvException: Pip failed
Updating conda didn't solve this problem.
The following is the environment_gpu.yml file:
name: dmri_seg
channels:
  - conda-forge
  - pnlbwh
dependencies:
  - python==3.6
  - tensorflow-gpu==1.12.0
  - cudatoolkit==9.0
  - cudnn==7.6.0
  - keras==2.2.4
  - numpy==1.16.4
  - nibabel>=2.2.1
  - ants
  - pip
  - pip:
    - gputil
    - scikit-image==0.16.2
    - git+https://github.com/pnlbwh/conversion.git
The conversion package has psutil as its dependency.
The error didn't occur on pnl-z840-2, which is not a GPU machine and has gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39). Notice the -39 here is different from the -36 above.
Installing psutil in the dependencies block first solves the problem:
name: dmri_seg
channels:
  - conda-forge
  - pnlbwh
dependencies:
  - python==3.6
  - tensorflow-gpu==1.12.0
  - cudatoolkit==9.0
  - cudnn==7.6.0
  - keras==2.2.4
  - numpy==1.16.4
  - nibabel>=2.2.1
  - ants
  - psutil
  - pip
  - pip:
    - gputil
    - scikit-image==0.16.2
    - git+https://github.com/pnlbwh/conversion.git
Dependencies are unstable; let's make an independent container so we have a last resort for the future.
Use os.path.basename(str1) + os.path.basename(str2).
Otherwise, this error happens for (probably) long absolute paths:
ERROR: cannot open slicesdir/_data_pnl_U01_HCP_Psychosis_data_processing_BIDS_derivatives_pnlpipe_sub-2001_ses-1_dwi_hcppipe1_Diffusion_topup_hifib0_bse_to__data_pnl_U01_HCP_Psychosis_data_processing_BIDS_derivatives_pnlpipe_sub-2001_ses-1_dwi_hcppipe1_Diffusion_topup_hifib0_bse-multi_BrainMask.pngfor writing
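That is, something along these lines, where str1 and str2 stand for the two image paths (names are illustrative, not the ones in the script):
from os.path import basename

# keep only the file names so the generated slicesdir PNG name stays short
out_name = basename(str1) + basename(str2)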
CNN-Diffusion-MRIBrain-Segmentation/tests/../pipeline/dwi_masking.py:40: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
@RyanZurrin , can we deal with this somehow?
I do not really understand what this block does.
Follow up from work group on 1/29/2020
Hi @suheyla2 , I understand that we do ANTs rigid registration between a given b0 image and the reference IIT_mean_b0.nii.gz:
Can you tell me the motivation for simplifying the registration to rigid instead of SyN? A given b0 image can be quite far from the space of the reference, and hence SyN seems more relevant to me.
Follow up from work group on 1/29/2020
For open-source distribution of the software, all hard-coded paths such as /rfanfs/... must be omitted.
See the following search result:
Hi @senthilcaesar ,
https://github.com/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/blob/remove-ext-dep/pipeline/dwi_masking.py#L516
generates the following error when a previous slicesdir_multi exists:
mv: cannot move ‘connectom/A/slicesdir’ to ‘connectom/A/slicesdir_multi/slicesdir’: File exists
The reason being, mv does not copy the content of slicesdir; rather, it moves the slicesdir folder into slicesdir_multi. If previous output exists, then mv fails. See the following simple example:
[tb571@pnl-z840-2 A]$ mkdir top
[tb571@pnl-z840-2 A]$ mkdir top/top-1
[tb571@pnl-z840-2 A]$ touch top/top-1/abc.txt
[tb571@pnl-z840-2 A]$ tree top
top
└── top-1
└── abc.txt
[tb571@pnl-z840-2 A]$ mkdir top-1
[tb571@pnl-z840-2 A]$ touch top-1/abc.txt
[tb571@pnl-z840-2 A]$ tree top-1
top-1
└── abc.txt
[tb571@pnl-z840-2 A]$ mv top-1 top
mv: cannot move ‘top-1/’ to ‘top/top-1’: File exists
I shall give you a solution for this.
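One possible direction is to clear any stale destination before moving; a sketch, with hypothetical hard-coded paths for brevity:
import os
import shutil

dest = os.path.join('slicesdir_multi', 'slicesdir')
if os.path.exists(dest):
    shutil.rmtree(dest)              # drop the stale output from a previous run
shutil.move('slicesdir', dest)       # now the move cannot hit 'File exists'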
/software/rocky9/CNN-Diffusion-MRIBrain-Segmentation/tests/../pipeline/dwi_masking.py:40: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
Use GPU device # 0
2024-03-18 17:23:12.865942: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
File exist
Can we deal with these warnings?
I think these warnings are the reason why it uses only the CPU and not the GPU. Please test on pnl-axon.
The program runs without having to provide -a, -c, or -s. So what view mask is generated by default?
python pipeline/dwi_masking.py -h
Using TensorFlow backend.
usage: dwi_masking.py [-h] [-i DWI] [-ref REF] [-f MODEL_FOLDER] [-a [AXIAL]]
[-c [CORONAL]] [-s [SAGITTAL]] [-nproc CR]
optional arguments:
-h, --help show this help message and exit
-i DWI input caselist file in txt format
-ref REF reference b0 file for registration
-f MODEL_FOLDER folder which contain the trained model
-a [AXIAL] generate axial Mask (yes/true/y/1)
-c [CORONAL] generate coronal Mask (yes/true/y/1)
-s [SAGITTAL] generate sagittal Mask (yes/true/y/1)
-nproc CR number of processes to use
Traceback (most recent call last):
File "/home/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/pipeline/dwi_masking.py", line 716, in <module>
dwi_mask_sagittal = predict_mask(cases_file_s, trained_model_folder, view='sagittal')
File "/home/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/pipeline/dwi_masking.py", line 134, in predict_mask
loaded_model.load_weights(optimal_model)
File "/home/pnlbwh/miniconda3/envs/pnlpipe3/lib/python3.6/site-packages/keras/engine/network.py", line 1166, in load_weights
f, self.layers, reshape=reshape)
File "/home/pnlbwh/miniconda3/envs/pnlpipe3/lib/python3.6/site-packages/keras/engine/saving.py", line 1004, in load_weights_from_hdf5_group
original_keras_version = f.attrs['keras_version'].decode('utf8')
AttributeError: 'str' object has no attribute 'decode'
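This looks like the known mismatch between Keras 2.2.4 and h5py >= 3.0: newer h5py returns string attributes as str, while this Keras version still calls .decode() on them. A minimal illustration, assuming an h5py 3.x environment and the weights file from above:
import h5py

with h5py.File('weights-sagittal-improvement-09.h5', 'r') as f:
    v = f.attrs['keras_version']     # bytes under h5py 2.x, str under 3.x
    print(v.decode('utf8') if isinstance(v, bytes) else v)
Pinning h5py below 3.0 in the environment is the usual workaround for this particular error.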