
cnn-diffusion-mribrain-segmentation's Issues

Question about check_gradient()

Are you checking whether the number of gradients in the sizes field equals the number of gradient directions in the header?

bashCommand1 = ("unu head " + input_file + " | grep -i sizes | awk '{print $5}'")
bashCommand2 = ("unu head " + input_file + " | grep -i _gradient_ | wc -l")
output1 = subprocess.check_output(bashCommand1, shell=True)
output2 = subprocess.check_output(bashCommand2, shell=True)

if output1.strip():
    header_gradient = int(output1.decode(sys.stdout.encoding))
    total_gradient = int(output2.decode(sys.stdout.encoding))

if header_gradient == total_gradient:

Should you not instead be checking whether the number of gradients in the image equals the number of gradients in the sizes field?
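The comparison suggested above can be sketched in pure Python, assuming a pynrrd-style header dict where the gradient axis is the one tagged 'list' in the kinds field; gradients_consistent is a hypothetical helper, not code from the repo:

```python
def gradients_consistent(header, data_shape):
    """Return True when the number of DWMRI_gradient_* keys in the header
    matches the size of the data's gradient axis.

    header: dict of NHDR fields (as pynrrd returns them)
    data_shape: shape of the image array
    Assumes the gradient axis is marked 'list' in header['kinds'].
    """
    n_grad_keys = sum(1 for k in header if k.startswith('DWMRI_gradient_'))
    grad_axis = header['kinds'].index('list')
    return n_grad_keys == data_shape[grad_axis]
```

This avoids shelling out to unu entirely, which also lines up with the pynrrd direction discussed elsewhere in these issues.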

Trained model does not interface with prediction

@senthilcaesar , I think the training part is not properly streamlined with the prediction part. I trained a model and then tried to predict from it, but ran into the following error:

Saving data to disk...
Pre-Processing Time Taken :  1.7  min
Loading sagittal model from disk...
Traceback (most recent call last):
  File "../pipeline/dwi_masking.py", line 713, in <module>
    dwi_mask_sagittal = predict_mask(cases_file_s, trained_model_folder, view='sagittal')
  File "../pipeline/dwi_masking.py", line 139, in predict_mask
    loaded_model.load_weights(trained_folder + '/weights-' + view + '-improvement-' + optimal + '.h5')
  File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/keras/engine/network.py", line 1157, in load_weights
    with h5py.File(filepath, mode='r') as f:
  File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/h5py/_hl/files.py", line 408, in __init__
    swmr=swmr)
  File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/h5py/_hl/files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = 'data/model_folder_test/weights-sagittal-improvement-09.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
  • Provided models
(dmri_seg) [tb571@pnl-oracle tests]$ ls ../model_folder/
CompNetBasicModel.json  weights-axial-improvement-08.h5    weights-sagittal-improvement-09.h5
IITmean_b0_256.nii.gz   weights-coronal-improvement-08.h5
  • After training
(dmri_seg) [tb571@pnl-oracle tests]$ ls data/model_folder_test/
CompNetBasicModel.json  IITmean_b0_256.nii.gz  sagittal-compnet_final_weight.h5  weights-sagittal-improvement-01.h5

The difference in the number of files above tells us the training output is incomplete.

TODO

I streamlined the testing process for this software. Please use it to test your commits. Create a new branch as below:

ssh pnl-maxwell or pnl-oracle or grx03
source /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/train_env.sh
# I trust you to know /path/to/

cd /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/
git checkout -b fix-training

cd /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/tests
./pipeline_test.sh

You may look into the /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/tests/data folder for details.

Please fix it.

maskfilter not found

python pipeline/dwi_masking.py -i /tmp/cnn_dwi_list.txt -f model_folder/ -nproc 16 -ref model_folder/IITmean_b0_256.nii.gz

Using TensorFlow backend.
File exist
Extracting b0 volume...
Performing ants rigid body transformation...
Normalizing input data
Case completed =  0
Merging npy files...
Saving data to disk...
Pre-Processing Time Taken :  0.68  min
Loading sagittal model from disk...
WARNING:tensorflow:From /tmp/miniconda3/envs/tf/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
256/256 [==============================] - 87s 340ms/step
Loading coronal model from disk...
256/256 [==============================] - 88s 342ms/step
Loading axial model from disk...
256/256 [==============================] - 88s 342ms/step
Masking Time Taken :  4.87  min
Splitting files....
Performing Muti View Aggregation...
Converting file format...
Performing ants inverse transform...
Traceback (most recent call last):
  File "pipeline/dwi_masking.py", line 717, in <module>
    omat=list(omat_list[i].split()))
  File "pipeline/dwi_masking.py", line 303, in npy_to_nhdr
    process = subprocess.Popen(mask_filter.split(), stdout=subprocess.PIPE)
  File "/tmp/miniconda3/envs/tf/lib/python3.6/subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "/tmp/miniconda3/envs/tf/lib/python3.6/subprocess.py", line 1344, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'maskfilter': 'maskfilter'
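maskfilter is an MRtrix3 command, so the immediate fix is installing MRtrix3 and adding it to PATH. To fail fast with a clearer message than a mid-run FileNotFoundError, the pipeline could guard its external tools with a check like this sketch (require_executable is a hypothetical helper, not repo code):

```python
import shutil

def require_executable(name):
    """Fail early with a clear message if an external tool is missing from PATH."""
    path = shutil.which(name)
    if path is None:
        raise EnvironmentError(
            "'%s' not found in PATH; install the package that provides it "
            "(MRtrix3, in the case of maskfilter) before running the pipeline" % name)
    return path
```

Calling this for every external dependency at startup would surface all missing tools before any pre-processing time is spent.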

ipython loading error

Traceback

It probably only occurs with environment_gpu.yml:

(dmri_seg) [tb571@grx06 cnn-test]$ ipython
Traceback (most recent call last):
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/application/current.py", line 6, in <module>
    from contextvars import ContextVar
ModuleNotFoundError: No module named 'contextvars'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/ipython", line 5, in <module>
    from IPython import start_ipython
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/IPython/__init__.py", line 56, in <module>
    from .terminal.embed import embed
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/IPython/terminal/embed.py", line 16, in <module>
    from IPython.terminal.interactiveshell import TerminalInteractiveShell
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/IPython/terminal/interactiveshell.py", line 19, in <module>
    from prompt_toolkit.enums import DEFAULT_BUFFER, EditingMode
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/__init__.py", line 16, in <module>
    from .application import Application
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/application/__init__.py", line 1, in <module>
    from .application import Application
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/application/application.py", line 38, in <module>
    from prompt_toolkit.buffer import Buffer
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/buffer.py", line 28, in <module>
    from .application.current import get_app
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/application/current.py", line 8, in <module>
    from prompt_toolkit.eventloop.dummy_contextvars import ContextVar  # type: ignore
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/eventloop/__init__.py", line 1, in <module>
    from .async_generator import generator_to_async_generator
  File "/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/prompt_toolkit/eventloop/async_generator.py", line 5, in <module>
    from typing import AsyncGenerator, Callable, Iterable, TypeVar, Union
ImportError: cannot import name 'AsyncGenerator'

Solution

pip install contextvars
pip install --upgrade prompt-toolkit==2.0.1

Uncontrolled multiprocessing is in place in this program!!

This block is not connected to args.cr:

with Manager() as manager:
    target_list = manager.list()
    omat_list = []
    jobs = []
    for i in range(0, len(case_arr)):
        p_process = mp.Process(target=pre_process, args=(case_arr[i], target_list))
        p_process.start()
        jobs.append(p_process)
    for process in jobs:
        process.join()
    target_list = list(target_list)

"""
Enable multi-core processing for ANTS registration;
Manager provides a way to create data which can be shared between different processes
"""
with Manager() as manager:
    result = manager.list()
    ants_jobs = []
    for i in range(0, len(target_list)):
        p_ants = mp.Process(target=ANTS_rigid_body_trans, args=(target_list[i], result, reference))
        ants_jobs.append(p_ants)
        p_ants.start()
    for process in ants_jobs:
        process.join()
    result = list(result)
    for subject_ANTS in result:
        transformed_cases.append(subject_ANTS[0])
        omat_list.append(subject_ANTS[1])

with Manager() as manager:
    data_n = manager.list()
    norm_jobs = []
    for i in range(0, len(target_list)):
        p_norm = mp.Process(target=normalize, args=(transformed_cases[i], args.percentile, data_n))
        norm_jobs.append(p_norm)
        p_norm.start()
    for process in norm_jobs:
        process.join()
    data_n = list(data_n)

parser.add_argument('-nproc', type=int, dest='cr', default=8, help='number of processes to use')

Hence, it spawns as many processes as there are cases in caselist.txt, which can exhaust the CPU. It went undiscovered so far because the test pipeline included in this project has only three cases. Tashrif discovered it while using the program's built-in looping facility over 220 cases: all 220 processes spawned uncontrollably!!

It needs to be fixed using standard multiprocessing code, e.g. https://github.com/pnlbwh/TBSS/blob/2fb1bc776b07013b450c24057db876486fc4bb2b/lib/tbss_single.py#L112-L121
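A bounded version wired to args.cr could use multiprocessing.Pool, which caps concurrency at a fixed number of workers regardless of caselist length. A minimal sketch (pre_process here is a trivial stand-in for the pipeline's real per-case function):

```python
import multiprocessing as mp

def pre_process(case):
    # stand-in for the pipeline's real per-case work
    return case.upper()

def run_all(cases, nproc=8):
    """Run pre_process over every case with at most nproc concurrent
    workers, so 220 cases no longer spawn 220 simultaneous processes."""
    with mp.Pool(processes=nproc) as pool:
        return pool.map(pre_process, cases)
```

Passing args.cr as nproc makes the -nproc flag actually govern the process count.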

py2 to py3 conversion

Follow up from work group on 1/29/2020:

@tashrifbillah suggested building the packages in requirements.txt using py3 compiler.

Unused packages such as:

opencv-python>=3.4.1.15
nilearn>=0.5.0

can be removed.

If the current software has only a few Keras incompatibilities with py3, we can work to remove them. If there are many (unlikely), we shall devise a plan.

cc: @yrathi @sbouix

Flawed design for binary file

The following files are common for all users:

axial-binary-dwi
coronal-binary-dwi
sagittal-binary-dwi

That means user A would have to overwrite user B's content, which results in a permission issue.

Let's meet and talk about fixing all such problems in the software.
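One way out, assuming these files are per-run intermediates rather than shared state, is to write them into a user-owned temporary directory instead of a shared fixed path. A sketch (per_user_binary is a hypothetical helper, not from the repo):

```python
import os
import tempfile

def per_user_binary(view):
    """Return a path for a view's intermediate binary inside a fresh,
    user-owned temp directory, so concurrent users never collide."""
    outdir = tempfile.mkdtemp(prefix='dwi_masking_')
    return os.path.join(outdir, view + '-binary-dwi')
```

Each invocation gets its own directory, so two users (or two runs) can never clobber each other's axial/coronal/sagittal binaries.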

Differentiate instruction for CPU vs GPU support

Should it be:

For CPU:

  • pip install tensorflow==1.12.0

For GPU:

  • pip install tensorflow-gpu==1.12.0
  • conda install cudatoolkit=9.0
  • conda install cudnn=7.0.5

For both:

  • conda install -c pnlbwh ants
  • pip install keras==2.2.4
  • pip install nibabel
  • pip install gputil

Clean up imports

Hi @senthilcaesar ,

All the scripts have redundant and repeated imports.

For example, see pipeline/dwi_masking.py:

import os.path
from os import path

Not only does the duplication exist, the block above is itself repeated. Similar repetition exists for other libraries as well.

To fix this, please create a new branch clean-imports from remove-ext-dep, and then make changes in the former. I shall review your changes and merge with remove-ext-dep. Please don't edit on remove-ext-dep branch.

ImportError: cannot import name 'AsyncGenerator'

Error from https://github.com/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/blob/solve-memory-leak/environment_gpu.yml :

(dmri_seg_1) [tb571@grx06 envs]$ ipython
Traceback (most recent call last):
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/application/current.py", line 6, in <module>
    from contextvars import ContextVar
ModuleNotFoundError: No module named 'contextvars'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/bin/ipython", line 5, in <module>
    from IPython import start_ipython
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/IPython/__init__.py", line 56, in <module>
    from .terminal.embed import embed
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/IPython/terminal/embed.py", line 16, in <module>
    from IPython.terminal.interactiveshell import TerminalInteractiveShell
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/IPython/terminal/interactiveshell.py", line 19, in <module>
    from prompt_toolkit.enums import DEFAULT_BUFFER, EditingMode
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/__init__.py", line 16, in <module>
    from .application import Application
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/application/__init__.py", line 1, in <module>
    from .application import Application
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/application/application.py", line 38, in <module>
    from prompt_toolkit.buffer import Buffer
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/buffer.py", line 28, in <module>
    from .application.current import get_app
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/application/current.py", line 8, in <module>
    from prompt_toolkit.eventloop.dummy_contextvars import ContextVar  # type: ignore
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/eventloop/__init__.py", line 1, in <module>
    from .async_generator import generator_to_async_generator
  File "/PHShome/tb571/min3-tf-gpu/envs/dmri_seg_1/lib/python3.6/site-packages/prompt_toolkit/eventloop/async_generator.py", line 5, in <module>
    from typing import AsyncGenerator, Callable, Iterable, TypeVar, Union
ImportError: cannot import name 'AsyncGenerator'

Remove external dependencies

Follow up from work group on 1/29/2020:

A few external dependencies can be removed readily:

https://github.com/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/blob/master/pipeline/pipeline_requirements.txt

(i) nrrd library - to check nhdr/nrrd image headers

  • unu is no longer required given our development with pynrrd

(ii) ResampleImage - to resample the input image

(iii) bse.sh - to extract b0 from nhdr/nrrd images; dwiextract - to extract b0 from NIfTI images

  • DWI should first be converted to NIfTI using nifti_write.py, and then the b0 should be extracted using pnlNipype/scripts/bse.py

(iv) ConvertBetweenFileFormats - to convert nhdr/nrrd to NIfTI files

  • Once the above are satisfied, this becomes irrelevant.

(v) FSL FLIRT - to perform image registration

cc: @yrathi @suheyla2 @sbouix

gcc error from psutil install

A gcc-related error occurs on the GPU machines grx03 and pnl-maxwell when psutil is installed as part of the conda environment's - pip: block. They both report the following gcc version:

gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)

Reason for this failure is unknown:

    creating build/temp.linux-x86_64-3.6/psutil
    /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/x86_64-conda_cos6-linux-gnu-cc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=570 -DPSUTIL_LINUX=1 -I/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
    /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/x86_64-conda_cos6-linux-gnu-cc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=570 -DPSUTIL_LINUX=1 -I/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include/python3.6m -c psutil/_psutil_posix.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_posix.o
    /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/x86_64-conda_cos6-linux-gnu-cc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=570 -DPSUTIL_LINUX=1 -I/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include/python3.6m -c psutil/_psutil_linux.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_linux.o
    psutil/_psutil_linux.c: In function 'PyInit__psutil_linux':
    psutil/_psutil_linux.c:612:15: warning: unused variable 'v' [-Wunused-variable]
         PyObject *v;
                   ^
    gcc -pthread -shared -L/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -Wl,-rpath=/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib,--no-as-needed -L/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -Wl,-rpath=/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib,--no-as-needed -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,-rpath,/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -Wl,-rpath-link,/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -L/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include build/temp.linux-x86_64-3.6/psutil/_psutil_common.o build/temp.linux-x86_64-3.6/psutil/_psutil_posix.o build/temp.linux-x86_64-3.6/psutil/_psutil_linux.o -L/data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/lib -lpython3.6m -o build/lib.linux-x86_64-3.6/psutil/_psutil_linux.cpython-36m-x86_64-linux-gnu.so
    gcc: error: unrecognized command line option ‘-fno-plt’
    error: command 'gcc' failed with exit status 1
    ----------------------------------------
ERROR: Command errored out with exit status 1: /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-aykwzsnf/psutil/setup.py'"'"'; __file__='"'"'/tmp/pip-install-aykwzsnf/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-2yo77_bn/install-record.txt --single-version-externally-managed --compile --install-headers /data/pnl/soft/pnlpipe3/miniconda3/envs/dmri_seg/include/python3.6m/psutil Check the logs for full command output.


CondaEnvException: Pip failed

Updating conda didn't solve this problem.

The following is the environment_gpu.yml file:

name: dmri_seg

channels:
    - conda-forge
    - pnlbwh
    
dependencies:
    - python==3.6
    - tensorflow-gpu==1.12.0
    - cudatoolkit==9.0
    - cudnn==7.6.0
    - keras==2.2.4
    - numpy==1.16.4
    - nibabel>=2.2.1
    - ants
    - pip
    - pip:
        - gputil
        - scikit-image==0.16.2
        - git+https://github.com/pnlbwh/conversion.git

The conversion package has psutil as its dependency.

The error didn't occur on pnl-z840-2, which is not a GPU machine and has gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39). Notice -39 here is different from -36 above.

Installing psutil in the dependencies block first solves the problem:

name: dmri_seg

channels:
    - conda-forge
    - pnlbwh
    
dependencies:
    - python==3.6
    - tensorflow-gpu==1.12.0
    - cudatoolkit==9.0
    - cudnn==7.6.0
    - keras==2.2.4
    - numpy==1.16.4
    - nibabel>=2.2.1
    - ants
    - psutil
    - pip
    - pip:
        - gputil
        - scikit-image==0.16.2
        - git+https://github.com/pnlbwh/conversion.git

Use relative path for slicesdir

https://github.com/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/blob/master/pipeline/dwi_masking.py#L504

Use os.path.basename(str1) + os.path.basename(str2)

Otherwise, this error happens for (probably) long absolute paths:

ERROR: cannot open slicesdir/_data_pnl_U01_HCP_Psychosis_data_processing_BIDS_derivatives_pnlpipe_sub-2001_ses-1_dwi_hcppipe1_Diffusion_topup_hifib0_bse_to__data_pnl_U01_HCP_Psychosis_data_processing_BIDS_derivatives_pnlpipe_sub-2001_ses-1_dwi_hcppipe1_Diffusion_topup_hifib0_bse-multi_BrainMask.png for writing
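The basename approach could look like this minimal sketch; slices_png_name and the '_to_' joiner are assumptions inferred from the error message above, not the repo's actual code:

```python
import os

def slices_png_name(str1, str2):
    """Build the slicesdir PNG name from basenames only, so long absolute
    paths do not blow past filesystem name-length limits."""
    base1 = os.path.basename(str1).split('.')[0]
    base2 = os.path.basename(str2).split('.')[0]
    return base1 + '_to_' + base2 + '.png'
```

Filesystems commonly cap a single file name at 255 bytes, which two embedded absolute paths easily exceed; basenames keep it well under.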

Doc enhancement

Follow up from work group on 1/29/2020

  • remove dead code and outdated comments
  • enhance the docs with installation instructions
  • obtain a Zenodo DOI for citation
  • properly cite others' work, i.e. add a note to all figures copied from https://github.com/raun1
  • distribute the model architecture (just 200 MB) as a binary with the software release rather than through Google Drive. It is better to distribute it directly from the source.

cc: @yrathi @sbouix

File exists error in mv

Hi @senthilcaesar ,
https://github.com/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/blob/remove-ext-dep/pipeline/dwi_masking.py#L516

generates the following error when a previous slicesdir_multi exists:

mv: cannot move ‘connectom/A/slicesdir’ to ‘connectom/A/slicesdir_multi/slicesdir’: File exists

The reason is that mv does not copy the contents of slicesdir; rather, it moves the slicesdir folder into slicesdir_multi. If previous output exists, mv fails. See the following simple example:

[tb571@pnl-z840-2 A]$ mkdir top
[tb571@pnl-z840-2 A]$ mkdir top/top-1
[tb571@pnl-z840-2 A]$ touch top/top-1/abc.txt
[tb571@pnl-z840-2 A]$ tree top
top
└── top-1
    └── abc.txt

[tb571@pnl-z840-2 A]$ mkdir top-1
[tb571@pnl-z840-2 A]$ touch top-1/abc.txt
[tb571@pnl-z840-2 A]$ tree top-1
top-1
└── abc.txt

[tb571@pnl-z840-2 A]$ mv top-1 top
mv: cannot move ‘top-1/’ to ‘top/top-1’: File exists

I shall give you a solution for this.
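Until a proper fix lands, one workaround is to merge the directory's contents rather than move the directory itself. A sketch mirroring the top/top-1 example (the merge step here is an assumption about the intended behavior, not the promised solution):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p top/top-1 top-1
touch top/top-1/abc.txt top-1/new.txt
# Reproduce: mv refuses to replace an existing non-empty directory
mv top-1 top 2>/dev/null || echo "mv failed as in the issue"
# Workaround: move the contents into the existing directory instead
mv -f top-1/* top/top-1/
rmdir top-1
```

mv -f overwrites identically named files but happily merges into an existing directory, which is what the slicesdir → slicesdir_multi step actually wants.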

Issues with latest improvement

/software/rocky9/CNN-Diffusion-MRIBrain-Segmentation/tests/../pipeline/dwi_masking.py:40: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
Use GPU device # 0
2024-03-18 17:23:12.865942: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
File exist

Can we deal with these warnings?

I think these warnings are the reason it uses only the CPU and not the GPU. Please test on pnl-axon.

What view mask is generated by default?

The program runs without having to provide -a, -c, or -s. So what view mask is generated by default?

python pipeline/dwi_masking.py -h

Using TensorFlow backend.
usage: dwi_masking.py [-h] [-i DWI] [-ref REF] [-f MODEL_FOLDER] [-a [AXIAL]]
                      [-c [CORONAL]] [-s [SAGITTAL]] [-nproc CR]

optional arguments:
  -h, --help       show this help message and exit
  -i DWI           input caselist file in txt format
  -ref REF         reference b0 file for registration
  -f MODEL_FOLDER  folder which contain the trained model
  -a [AXIAL]       generate axial Mask (yes/true/y/1)
  -c [CORONAL]     generate coronal Mask (yes/true/y/1)
  -s [SAGITTAL]    generate sagittal Mask (yes/true/y/1)
  -nproc CR        number of processes to use

Error while loading saved model

Traceback (most recent call last):
  File "/home/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/pipeline/dwi_masking.py", line 716, in <module>
    dwi_mask_sagittal = predict_mask(cases_file_s, trained_model_folder, view='sagittal')
  File "/home/pnlbwh/CNN-Diffusion-MRIBrain-Segmentation/pipeline/dwi_masking.py", line 134, in predict_mask
    loaded_model.load_weights(optimal_model)
  File "/home/pnlbwh/miniconda3/envs/pnlpipe3/lib/python3.6/site-packages/keras/engine/network.py", line 1166, in load_weights
    f, self.layers, reshape=reshape)
  File "/home/pnlbwh/miniconda3/envs/pnlpipe3/lib/python3.6/site-packages/keras/engine/saving.py", line 1004, in load_weights_from_hdf5_group
    original_keras_version = f.attrs['keras_version'].decode('utf8')
AttributeError: 'str' object has no attribute 'decode'
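This AttributeError typically appears when h5py >= 3.0 is paired with an older Keras: h5py 3 returns HDF5 string attributes as str, so Keras's unconditional .decode('utf8') fails. Pinning h5py below 3.0 is the common workaround; the defensive pattern can also be sketched as a tiny helper (as_text is illustrative, not repo code):

```python
def as_text(attr):
    """Return a str whether the HDF5 attribute arrived as bytes
    (h5py < 3) or already decoded (h5py >= 3)."""
    return attr.decode('utf8') if isinstance(attr, bytes) else attr
```

With this, code like f.attrs['keras_version'] can be normalized regardless of the installed h5py version.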
