connectomicslab / connectomemapper3

Connectome Mapper 3 is a BIDS App that implements full anatomical, diffusion, resting-state functional MRI, and, more recently, EEG processing pipelines, from raw T1 / DWI / BOLD and preprocessed EEG data to multi-resolution brain parcellation with corresponding connection matrices.

Home Page: https://connectome-mapper-3.readthedocs.io

License: Other

Dockerfile 0.16% Shell 0.04% Python 17.68% TeX 0.42% HTML 81.70%
bids bidsapp dmri fmri connectome bids-apps pipelines chuv brain parcellation

connectomemapper3's People

Contributors

abirba, allcontributors[bot], anilbey, davidrs06, dvenum, emullier, joanrue, jsheunis, jwirsich, katharinski, kuba-fidel, mschoettner, osorensen, sebastientourbier


connectomemapper3's Issues

GUI problem, help

When I input 'cmpbidsappmanager', I could get the complete interface.

[image]

then

[image]

Implementation of EEG pipeline

First thoughts

  • Design/first sketch of the new EEG pipeline
  • Function to load Cartool / EEGlab / Fieldtrip inverse solutions as input
  • Implement the interface to compute single source dipoles per ROI based on singular value decomposition (SVD) [Rubega et al. 2018] using pycartool
  • Compute common functional connectivity metrics (imaginary coherence, ...) using MNE
  • Implement the interface to compute time-varying functional connectivity based on the Granger causality framework and a modified version of the adaptive Kalman filter as proposed in [Pascucci et al. 2019]. This allows time-varying directed connectivity estimates to be recursively refined with structural connectivity.

[Rubega et al. 2018] Rubega M, Carboni M, Seeber M, Vulliemoz S, Michel CM. Estimating EEG source dipoles based on singular-value decomposition for connectivity analysis. 2018.

[Pascucci et al. 2019] Pascucci D, Rubega M, Hagmann P, Plomp G. Adaptive filtering with anatomical priors: Integrating structural and effective connectivity. OHBM 2019 abstract.

What needs to be answered

  • How to visualize sources estimated by Cartool in MNE (translation in log file enough?)
  • Check whether MNE already provides functions to load EEGlab and FieldTrip data
    Yes (the reader functions are listed in the sketch after this list).
  • How Cartool / EEGlab / FieldTrip / Brainstorm / MNE save their outputs (BIDS or in-house organization). This will indicate how to grab the data (cleaned EEG data and lead field matrix) in the most automated way.
  • The determination of a common source space (FreeSurfer space might be preferred as we already provide the annotation files for the Lausanne2008 and Lausanne2018 parcellations)
  • Difference between the SVD method in [Rubega et al. 2018] and the SVD method implemented in MNE (mode: pca_flip)
  • How to adopt BIDS EEG (see spec paper)
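For reference, a short sketch of the MNE readers that cover the EEGlab and FieldTrip cases; the file names are placeholders and availability of the FieldTrip readers depends on the installed MNE version:

```python
import mne

# EEGLAB (.set / .fdt)
raw_eeglab = mne.io.read_raw_eeglab("sub-01_task-rest_eeg.set", preload=True)
epochs_eeglab = mne.read_epochs_eeglab("sub-01_task-rest_epo.set")

# FieldTrip (.mat structures); an Info object can be passed when channel
# locations are not stored in the .mat file
raw_ft = mne.io.read_raw_fieldtrip("sub-01_raw.mat", info=None)
epochs_ft = mne.read_epochs_fieldtrip("sub-01_epoched.mat", info=None)
```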

To be prepared and implemented

Note: check which MNE interfaces are already implemented in Nipype; these could be used as they are or as templates for the development of the MNE-related interfaces (Tasks 5 and 6).

Achievement

  • Task 1
  • Task 2
  • Task 3
  • Task 4
  • Task 5
  • Task 6
  • Task 7
  • Task 8
  • Task 9

Task 1

Preparation of a sample dataset for development (See #6 (comment) for more details)

Task 2

Start implementing a simple Nipype workflow that can grab the cleaned EEG data and the lead field matrix generated by the toolboxes below and convert them to MNE's internal format (see the sketch after the list):

  • Cartool
  • EEGlab
  • Fieldtrip
  • MNE
  • Brainstorm
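A minimal sketch of such a workflow for the EEGlab case, wrapping an MNE-based conversion in a Nipype Function node; this is an illustration only, not the actual CMP3 implementation, and the node and file names are placeholders:

```python
import nipype.pipeline.engine as pe
from nipype.interfaces.utility import Function

def eeglab_to_fif(set_file):
    """Load a cleaned EEGLAB .set file and re-save it in MNE's .fif format."""
    import os
    import mne
    raw = mne.io.read_raw_eeglab(set_file, preload=True)
    out_file = os.path.abspath("eeg_cleaned_raw.fif")
    raw.save(out_file, overwrite=True)
    return out_file

convert = pe.Node(
    Function(input_names=["set_file"], output_names=["fif_file"],
             function=eeglab_to_fif),
    name="eeglab_to_fif",
)

wf = pe.Workflow(name="eeg_conversion")
wf.add_nodes([convert])
```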

Task 3

Implement, for each toolbox, an interface placed after the data grabber that converts the generated outputs (cleaned EEG data and lead field matrix) and stores them in MNE format

Inputs

  • Cleaned EEG in toolbox format
  • Lead field matrix in toolbox format

Outputs

  • Cleaned EEG in MNE format
  • Lead field matrix in MNE format

Task 4

Implement an interface that creates the inverse solutions and transforms the source coordinates into a common source space

Inputs

  • Cleaned EEG in MNE format (channels x time)
  • Lead field matrix in MNE format (#sources x channels)
  • Transformation from toolbox source coordinates to common space

To be performed

  • Compute the inverse solution (sources), see the sketch after this task:
    Source = Lead field matrix x cleaned EEG
  • Transform the source coordinates (sources)

Outputs

  • Source time-series
  • Source coordinates
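A toy numpy illustration of the operation described above, assuming a matrix of shape (#sources x channels) and cleaned EEG of shape (channels x time); the array contents and sizes are random placeholders:

```python
import numpy as np

n_sources, n_channels, n_times = 5000, 64, 1000
lead_field_like = np.random.randn(n_sources, n_channels)  # (#sources x channels)
eeg = np.random.randn(n_channels, n_times)                # channels x time

sources = lead_field_like @ eeg                           # #sources x time
print(sources.shape)  # (5000, 1000)
```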

Task 5

Check that the output sources from the different toolboxes are in the proper common space

Task 6

Implementation of an interface for estimating ROI dipoles based on Maria's SVD and/or MNE SVD methods.

Inputs

  • Inverse solution in MNE format
  • Parcellation annotation files or parcellation scheme

Outputs

  • ROI dipoles in MNE format

To be performed

  • Use the MNE function or an implementation of Maria's method to create ROI dipoles in MNE format
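For the MNE route, a hedged sketch using the SVD-based 'pca_flip' mode; stc, labels and src are assumed to come from the earlier steps (inverse solution, parcellation annotation files, source space):

```python
import mne

# stc: source estimate, labels: parcellation labels, src: source space,
# all produced by the preceding tasks
label_ts = mne.extract_label_time_course(
    stc, labels, src, mode="pca_flip", return_generator=False
)
# label_ts: one representative time course per ROI, shape (n_labels, n_times)
```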

Task 7

Implementation of an interface for estimating multiple dynamic functional connectome maps

Inputs

  • ROI dipoles in MNE format

Outputs

  • Dynamic FC maps in .gpickle format
  • Dynamic FC maps in .mat format

To be performed

  • Compute multiple dynamic functional connectome maps using measures implemented in MNE and pycartool (see the sketch below)
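As an example of the MNE side, a hedged sketch of imaginary coherence between ROI time courses, assuming the mne-connectivity package (older MNE versions expose the equivalent in mne.connectivity); label_epochs is a placeholder array of shape (n_epochs, n_rois, n_times):

```python
from mne_connectivity import spectral_connectivity_epochs

con = spectral_connectivity_epochs(
    label_epochs,      # (n_epochs, n_rois, n_times), from the ROI dipole step
    method="imcoh",    # imaginary part of coherency
    sfreq=250.0,       # placeholder sampling rate
    fmin=8.0, fmax=13.0, faverage=True,
)
imcoh_matrix = con.get_data(output="dense")  # (n_rois, n_rois, n_bands)
```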

Task 8 (Large effort)

Implementation of EEGPipeline:

  • Organisation of the pipeline and stages (for configuration and output inspection with bidsappmanager)

  • Implement the stages (config parameters, create_workflow, inspect_outputs) and modify the existing cmp/pipelines/EEG.py to implement the complete workflow

  • Implement the GUI components (EEGPipelineUI and all the related stages) for integration in the cmpbidsappmanager

  • Review the bidsapp parser and run.py to take the EEG config file as input

Task 9

Documentation:

  • Add functionality in the index.html page
  • Update bidsappmanager.html with EEG in 1) pipeline configuration, 2) running the bidsapp, and 3) checking stage outputs
  • Update outputs.html page with BIDS-EEG derivatives data produced by CMP

MemoryError raised by ParcThal when run on CircleCI

Continuous integration testing with CircleCI failed when parcellating the thalamic nuclei, with a MemoryError (see the execution log on the CircleCI page for more details). It seems 7.5 GB of RAM is not enough to handle ind = np.where(Ispams < Thresh). See the error message below and the memory sketch at the end of this issue:
Error

200317-16:44:09,131 nipype.interface INFO:
	 Correcting the volumes after the interpolation 
Save output image to /output_dir/nipype/sub-01/ses-01/anatomical_pipeline/parcellation_stage/parcThal/T1_class-thalamus_dtissue_after_ants.nii.gz
200317-16:45:11,524 nipype.workflow WARNING:
	 [Node] Error on "anatomical_pipeline.parcellation_stage.parcThal" (/output_dir/nipype/sub-01/ses-01/anatomical_pipeline/parcellation_stage/parcThal)
200317-16:45:11,545 nipype.workflow ERROR:
	 Node parcThal failed to run on host 3be4adbaba2e.
200317-16:45:11,690 nipype.workflow ERROR:
	 Saving crash info to /tmp/crash-20200317-164511-root-parcThal-2682b8cb-8991-437c-977c-d15ba5964b8e.txt
Traceback (most recent call last):
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/plugins/linear.py", line 48, in run
    node.run(updatehash=updatehash)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 369, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/cmtklib/parcellation.py", line 1364, in _run_interface
    ind = np.where(Ispams < Thresh)
MemoryError

System configuration on CircleCI
Machine Linux Medium
2 CPU / 7.5 GB RAM
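A hedged illustration of why this call is memory-hungry and of a lower-memory alternative (the array shape is a placeholder, and this is not necessarily the fix adopted in CMP3): np.where returns one int64 index array per dimension, which for a large 4D SPAM volume can dwarf the boolean mask itself.

```python
import numpy as np

# Placeholder shape roughly standing in for a high-resolution 4D SPAM volume
Ispams = np.random.rand(182, 218, 182, 14).astype(np.float32)
Thresh = 0.5

# Memory-heavy: materializes 4 int64 index arrays, one per dimension
# ind = np.where(Ispams < Thresh)

# Lighter: apply the threshold through a boolean mask directly, in place
Ispams[Ispams < Thresh] = 0.0
```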

Reconstruct with Constant Solid Angle (Q-Ball)

Implement an interface for ODF reconstruction based on the Constant Solid Angle (Q-Ball) model from Aganj et al. [Aganj2010].

Ref:

[Aganj2010] | Aganj I, Lenglet C, Sapiro G, Yacoub E, Ugurbil K, Harel N. “Reconstruction of the orientation distribution function in single- and multiple-shell q-ball imaging within constant solid angle”, Magnetic Resonance in Medicine. 2010 Aug;64(2):554-66. doi: 10.1002/mrm.22365
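A minimal sketch of what the reconstruction step could look like with Dipy's CSA ODF model; the file names are placeholders and the spherical harmonic order is illustrative only:

```python
import nibabel as nib
from dipy.core.gradients import gradient_table
from dipy.io.gradients import read_bvals_bvecs
from dipy.reconst.shm import CsaOdfModel

img = nib.load("dwi.nii.gz")                     # placeholder DWI volume
data = img.get_fdata()
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")
gtab = gradient_table(bvals, bvecs)

csa_model = CsaOdfModel(gtab, sh_order=6)        # Constant Solid Angle Q-Ball
csa_fit = csa_model.fit(data)
shm_coeffs = csa_fit.shm_coeff                   # SH coefficients per voxel
```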

EDDY error

Hi,

When running "eddy" (as opposed to "eddy_correct"), I get the following error:

Node: diffusion_pipeline.preprocessing_stage.eddy
Working directory: /output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/eddy

Node inputs:

acqp = /output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/acqpnode/acqp.txt
args = <undefined>
bvals = /output_dir/cmp/sub-ABCD1767/dwi/sub-ABCD1767_desc-cmp_dwi.bval
bvecs = /output_dir/cmp/sub-ABCD1767/dwi/sub-ABCD1767_desc-cmp_dwi.bvec
environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}
in_file = /output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/mr_convert_b/diffusion_corrected.nii.gz
index = /output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/indexnode/index.txt
mask = /output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/flirt_dwimask/dwi_brain_mask.nii.gz
out_file = eddy_corrected.nii.gz
output_type = NIFTI_GZ
verbose = True

Traceback (most recent call last):
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/pipeline/plugins/linear.py", line 46, in run
    node.run(updatehash=updatehash)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 397, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/cmtklib/interfaces/fsl.py", line 324, in _run_interface
    runtime = super(EddyOpenMP, self)._run_interface(runtime)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 792, in _run_interface
    self.raise_exception(runtime)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 723, in raise_exception
    ).format(**runtime.dictcopy())
RuntimeError: Command:
eddy_openmp --imain=/output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/mr_convert_b/diffusion_corrected.nii.gz --mask=/output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/flirt_dwimask/dwi_brain_mask.nii.gz --index=/output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/indexnode/index.txt --acqp=/output_dir/nipype/sub-ABCD1767/diffusion_pipeline/preprocessing_stage/acqpnode/acqp.txt --bvecs=/output_dir/cmp/sub-ABCD1767/dwi/sub-ABCD1767_desc-cmp_dwi.bvec --bvals=/output_dir/cmp/sub-ABCD1767/dwi/sub-ABCD1767_desc-cmp_dwi.bval --out=eddy_corrected.nii.gz --verbose
Standard output:

Standard error:
eddy_openmp: error while loading shared libraries: libopenblas.so.0: cannot open shared object file: No such file or directory
Return code: 127

I think you may have to install libopenblas in the container. Also, eddy_correct works fine.

Best,
Steven

Warnings when using MAPMRI

Warning message

210114-02:01:46,527 nipype.interface INFO:
	Fitting MAP-MRI model
/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/cvxpy/expressions/expression.py:550: UserWarning: 
This use of ``*`` has resulted in matrix multiplication.
Using ``*`` for matrix multiplication has been deprecated since CVXPY 1.1.
    Use ``*`` for matrix-scalar and vector-scalar multiplication.
    Use ``@`` for matrix-matrix and matrix-vector multiplication.
    Use ``multiply`` for elementwise multiplication.

  warnings.warn(__STAR_MATMUL_WARNING__, UserWarning)
/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/cvxpy/expressions/expression.py:550: UserWarning: 
This use of ``*`` has resulted in matrix multiplication.
Using ``*`` for matrix multiplication has been deprecated since CVXPY 1.1.
    Use ``*`` for matrix-scalar and vector-scalar multiplication.
    Use ``@`` for matrix-matrix and matrix-vector multiplication.
    Use ``multiply`` for elementwise multiplication.

  warnings.warn(__STAR_MATMUL_WARNING__, UserWarning)
/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/cvxpy/expressions/expression.py:550: UserWarning: 
This use of ``*`` has resulted in matrix multiplication.
Using ``*`` for matrix multiplication has been deprecated since CVXPY 1.1.
    Use ``*`` for matrix-scalar and vector-scalar multiplication.
    Use ``@`` for matrix-matrix and matrix-vector multiplication.
    Use ``multiply`` for elementwise multiplication.

How to fix

Update to Dipy 1.3.0 in which this has been fixed (see dipy/dipy@3e07a0a)

More

Initially reported by @weimath in CMTK google group (https://groups.google.com/d/msgid/cmtk-users/73472fd5-b40c-478b-8f98-f96b9e9b8f4dn%40googlegroups.com?utm_medium=email&utm_source=footer)

ENH: Processing summary report for fMRI pipeline

It should include:

  • Acquisition metadata
  • Preprocessing results (including motion correction (mcflirt results), slice timing correction, fieldmap-based distortion correction)
  • Registration T1w<->BOLD overlay
  • Nuisance time-series
  • Pearson's correlation-based functional connectivity matrices
  • (Spatio-temporal connectomes)

Build test and deploy singularity image

It consists of adding jobs to:

  • build the singularity image from the docker image
  • test the singularity image
  • deploy the singularity image when the master branch is updated or a release is tagged with a version

This would prevent errors like #47.

Integer number_of_threads passed to string without any conversion

As reported @ https://neurostars.org/t/connectome-mapper-3/16856, this raises the following error:

User: root
Group: root
> BIDS dataset: /bids_dir
> Subjects to analyze : ['0010']
> Set $FS_LICENSE which points to FreeSurfer license location (BIDS App)
  ... $FS_LICENSE : /bids_dir/code/license.txt
  * Number of subjects to be processed in parallel set to one (sequential run)
  * Number of parallel threads set to one (total of cores: 23)
  * OMP_NUM_THREADS set to 1 (total of cores: 23)
> Sessions to analyze : ['ses-1', 'ses-2']
> Process subject sub-0010 session ses-1
WARNING: rewriting config file /output_dir/cmp/sub-0010/ses-1/sub-0010_ses-1_anatomical_config.ini
... Anatomical config created : /output_dir/cmp/sub-0010/ses-1/sub-0010_ses-1_anatomical_config.ini
WARNING: rewriting config file /output_dir/cmp/sub-0010/ses-1/sub-0010_ses-1_diffusion_config.ini
... Diffusion config created : /output_dir/cmp/sub-0010/ses-1/sub-0010_ses-1_diffusion_config.ini
... Running pipelines : 
        - Anatomical MRI (segmentation and parcellation)
        - Diffusion MRI (structural connectivity matrices)
INFO: functional pipeline not performed
Traceback (most recent call last):
  File "/app/connectomemapper3/run.py", line 497, in <module>
    number_of_threads=number_of_threads)
  File "/app/connectomemapper3/run.py", line 73, in create_cmp_command
    return ' '.join(cmd)
TypeError: sequence item 14: expected str instance, int found
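A minimal reproduction of the failure and a hedged fix (the command elements are placeholders, not the exact run.py contents): ' '.join() requires every element to be a string, so the integer number_of_threads must be cast before joining.

```python
number_of_threads = 4  # int coming from the parsed arguments
cmd = ["connectomemapper3", "participant", "--number_of_threads", number_of_threads]

# " ".join(cmd)  # raises TypeError: sequence item 3: expected str instance, int found
cmd_str = " ".join(str(part) for part in cmd)  # cast each element first
print(cmd_str)
```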

FileNotFoundError in CMP GUI: failed to create DTI derivative

Hi

CMP3 seems to fail to create the DTI derivative.
Steps to reproduce:

  1. Start CMP3 GUI RC2
  2. Click on the first brain, config
  3. Click on the second brain -> a FileNotFoundError appears immediately in the terminal, see below.

RC1 worked before I reinstalled my PC, which makes the whole story more cryptic. RC1 does not function now either.
File permissions are OK. The "code" subdirectory with the config files is also created flawlessly.

How to fix it? :)

See the system info below. I am happy to provide further information upon request.

(py37cmp-gui) dualon@alpha-neuron:/mnt/wdred1/Research/connectomemapper3$ cmpbidsappmanager     
Graphical Backend : qt4

Connectome Mapper v3.0.0-RC2 - BIDS App Manager 
Copyright (C) 2009-2020, Ecole Polytechnique Federale de Lausanne (EPFL)
                the University Hospital Center and University of Lausanne (UNIL-CHUV), Switzerland,
& Contributors

                All rights reserved.

QXcbConnection: XCB error: 3 (BadWindow), sequence: 1084, resource id: 14728032, major code: 40 (TranslateCoords), minor code: 0
BIDS Layout: .../Research/Apraxia-cTBS/BIDS-02 | Subjects: 1 | Sessions: 0 | Runs: 0
sub: 02
Available subjects : 
['sub-02']
sub-02
Sessions: 
[]
    ... Check for available input modalities...
T1w available: ['sub-02_T1w.nii.gz']
DWI available: ['sub-02_dwi.nii.gz']
Created directory /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code
Config file (anat) saved as /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_anatomical_config.ini
Config json file (anat) saved as /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_anatomical_config.json
>> Created reference anatomical config file :  /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_anatomical_config.ini
/mnt/wdred1/Research/Apraxia-cTBS/BIDS-02
{'Preprocessing': <cmp.stages.preprocessing.preprocessing.PreprocessingStage object at 0x7fb03c1c0fb0>, 'Registration': <cmp.stages.registration.registration.RegistrationStage object at 0x7fb03c1c3c50>, 'Diffusion': <cmp.stages.diffusion.diffusion.DiffusionStage object at 0x7fb03c1c3d10>, 'Connectome': <cmp.stages.connectome.connectome.ConnectomeStage object at 0x7fb03c1c6230>}
QXcbConnection: XCB error: 3 (BadWindow), sequence: 1137, resource id: 14728027, major code: 40 (TranslateCoords), minor code: 0
['process_type', 'diffusion_imaging_model', 'subjects', 'subject', 'subject_session', 'modalities', 'dmri_bids_acq']
Config file (dwi) saved as /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_diffusion_config.ini
Config json file (dwi) saved as /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_diffusion_config.json
>> Created reference diffusion config file :  /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_diffusion_config.ini
QXcbConnection: XCB error: 3 (BadWindow), sequence: 1304, resource id: 14728037, major code: 40 (TranslateCoords), minor code: 0
Anatomical config file : /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_anatomical_config.ini
Diffusion config file : /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_diffusion_config.ini
QXcbConnection: XCB error: 3 (BadWindow), sequence: 2563, resource id: 14728047, major code: 40 (TranslateCoords), minor code: 0
QXcbConnection: XCB error: 3 (BadWindow), sequence: 2746, resource id: 14728053, major code: 40 (TranslateCoords), minor code: 0
QXcbConnection: XCB error: 3 (BadWindow), sequence: 3101, resource id: 14728058, major code: 40 (TranslateCoords), minor code: 0
QXcbConnection: XCB error: 3 (BadWindow), sequence: 3266, resource id: 14728063, major code: 40 (TranslateCoords), minor code: 0
QXcbConnection: XCB error: 3 (BadWindow), sequence: 3407, resource id: 14728068, major code: 40 (TranslateCoords), minor code: 0
QXcbConnection: XCB error: 3 (BadWindow), sequence: 3656, resource id: 14728073, major code: 40 (TranslateCoords), minor code: 0
Saving pipeline configuration files...
Config file (anat) saved as /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_anatomical_config.ini
Config json file (anat) saved as /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_anatomical_config.json
Anatomical config saved as  /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_anatomical_config.ini
['process_type', 'diffusion_imaging_model', 'subjects', 'subject', 'subject_session', 'modalities', 'dmri_bids_acq']
Config file (dwi) saved as /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_diffusion_config.ini
Config json file (dwi) saved as /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_diffusion_config.json
Diffusion config saved as  /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/code/ref_diffusion_config.ini
/mnt/wdred1/Research/Apraxia-cTBS/BIDS-02
{'Preprocessing': <cmp.stages.preprocessing.preprocessing.PreprocessingStage object at 0x7fb03fcc4290>, 'Registration': <cmp.stages.registration.registration.RegistrationStage object at 0x7fb03c460b90>, 'Diffusion': <cmp.stages.diffusion.diffusion.DiffusionStage object at 0x7fb03c460230>, 'Connectome': <cmp.stages.connectome.connectome.ConnectomeStage object at 0x7fb03c4ee9b0>}
**** Check Inputs  ****
> Looking for....
WARNING : Diffusion json sidecar not found for subject 02.
... dwi_file : /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/sub-02/dwi/sub-02_dwi.nii.gz
... json_file : NotFound
... bvecs_file : /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/sub-02/dwi/sub-02_dwi.bvec
... bvals_file : /mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/sub-02/dwi/sub-02_dwi.bval
Exception occurred in traits notification handler.
Please check the log file for details.
Exception occurred in traits notification handler for object: <cmp.bidsappmanager.gui.CMP_MainWindow object at 0x7fb03ff657d0>, trait: bidsapp, old value: <undefined>, new value: 0
Traceback (most recent call last):
  File "/home/dualon/miniconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traits/trait_notifiers.py", line 381, in __call__
    self.handler(*args)
  File "/home/dualon/miniconda3/envs/py37cmp-gui/lib/python3.7/site-packages/cmp/bidsappmanager/gui.py", line 2277, in _bidsapp_fired
    fmri_config=fmri_config
  File "/home/dualon/miniconda3/envs/py37cmp-gui/lib/python3.7/site-packages/cmp/bidsappmanager/gui.py", line 919, in __init__
    gui=False)
  File "/home/dualon/miniconda3/envs/py37cmp-gui/lib/python3.7/site-packages/cmp/pipelines/diffusion/diffusion.py", line 516, in check_input
    shutil.copy(src=dwi_file, dst=out_dwi_file)
  File "/home/dualon/miniconda3/envs/py37cmp-gui/lib/python3.7/shutil.py", line 248, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/home/dualon/miniconda3/envs/py37cmp-gui/lib/python3.7/shutil.py", line 121, in copyfile
    with open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: '/mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/derivatives/cmp/sub-02/dwi/sub-02_desc-cmp_dwi.nii.gz'

Kubuntu 20.10.
Docker and CMP were installed as described in the documentation. The home folder sits on a different drive than the connectomemapper3 directory.

(py37cmp-gui) dualon@alpha-neuron:/mnt/wdred1/Research/connectomemapper3$ cat /proc/version
Linux version 5.8.0-40-generic (buildd@lcy01-amd64-013) (gcc (Ubuntu 10.2.0-13ubuntu1) 10.2.0, GNU ld (GNU Binutils for Ubuntu) 2.35.1) #45-Ubuntu SMP Fri Jan 15 11:05:36 UTC 2021

(py37cmp-gui) dualon@alpha-neuron:/mnt/wdred1/Research/connectomemapper3$ git log -1     
commit 10e78d98d9a4a25c8872c7c5be7a873cc55438ab (HEAD -> master, origin/master, origin/HEAD)
Author: Sebastien Tourbier <[email protected]>
Date:   Wed Jan 13 09:09:54 2021 +0100

    FIX: enabled/disabled gray-out button "Run BIDS App" with Qt Style sheet [skip ci]
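A likely trigger is that the derivatives target directory does not exist yet when shutil.copy is called; a hedged guard (paths taken from the traceback above, not the actual CMP3 fix) would be:

```python
import os
import shutil

dwi_file = "/mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/sub-02/dwi/sub-02_dwi.nii.gz"
out_dwi_file = ("/mnt/wdred1/Research/Apraxia-cTBS/BIDS-02/derivatives/cmp/"
                "sub-02/dwi/sub-02_desc-cmp_dwi.nii.gz")

# Create derivatives/cmp/sub-02/dwi before copying
os.makedirs(os.path.dirname(out_dwi_file), exist_ok=True)
shutil.copy(src=dwi_file, dst=out_dwi_file)
```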

CircleCI fails in building docker images

Error

unable to prepare context: path " " not found
/bin/sh: --build-arg: not found

Possible reason

Commands for building docker images might not be properly interpreted

Error running BBregister in fmri pipeline

CMP version
v3.0.0-beta-RC2

Error

**** Check Inputs ****
> Looking for....
WARNING : BOLD json sidecar not found for subject 103818, session ses-01.
WARNING : T2w image not found for subject 103818, session ses-01.
... fmri_file : /bids_dir/sub-103818/ses-01/func/sub-103818_ses-01_task-rest_acq-LR_bold.nii.gz
... json_file : /bids_dir/sub-103818/ses-01/func/sub-103818_ses-01_task-rest_bold.json
... t2_file : /bids_dir/sub-103818/ses-01/anat/sub-103818_ses-01_T2w.nii.gz
Inputs check finished successfully.
fMRI data available.
>> Process fmri pipeline
200603-13:29:47,568 nipype.interface INFO:
	 **** Processing ****
200603-13:29:48,979 nipype.workflow INFO:
	 Generated workflow graph: /output_dir/nipype/sub-103818/ses-01/fMRI_pipeline/graph.svg (graph2use=colored, simple_form=False).
200603-13:29:49,278 nipype.workflow INFO:
	 Workflow fMRI_pipeline settings: ['check', 'execution', 'logging', 'monitoring']
Traceback (most recent call last):
  File "/opt/conda/envs/py27cmp-core/bin/connectomemapper3", line 214, in <module>
    fmri_pipeline.process()
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/cmp/pipelines/functional/fMRI.py", line 346, in process
    flow.run()
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 589, in run
    execgraph = generate_expanded_graph(deepcopy(flatgraph))
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 1001, in generate_expanded_graph
    graph_in = _remove_nonjoin_identity_nodes(graph_in, keep_iterables=True)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 879, in _remove_nonjoin_identity_nodes
    _remove_identity_node(graph, node)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 905, in _remove_identity_node
    portinputs)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 982, in _propagate_internal_output
    value = evaluate_connect_function(src[1], src[2], value)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 734, in evaluate_connect_function
    output_value = func(first_arg, *list(args))
  File "<string>", line 3, in basename
AttributeError: '_Undefined' object has no attribute 'rfind'

fMRI_config.ini

[Global]
imaging_model = 
process_type = fMRI
subject_session = ses-01
subjects = ['sub-103818']
subject = sub-103818

[functional_stage]
detrending = False
csf = False
wm = False
detrending_mode = linear
smoothing = 0.0
motion = False
discard_n_volumes = 5
global_nuisance = False
highpass_filter = -1.0
lowpass_filter = -1.0
scrubbing = False

[preprocessing_stage]
repetition_time = 1.92
discard_n_volumes = 5
slice_timing = none
despiking = False
motion_correction = False

[connectome_stage]
dvars_thr = 4.0
circular_layout = False
output_types = ['gPickle', 'mat']
apply_scrubbing = False
fd_thr = 0.2
log_visualization = True
subject = 

[registration_stage]
ants_convergence_winsize = 10
ants_nonlinear_update_field_variance = 3.0
flirt_args = 
contrast_type = t2
ants_upper_quantile = 0.995
use_float_precision = False
ants_nonlinear_total_field_variance = 0.0
registration_mode = BBregister (FS)
init = header
ants_interpolation = Linear
ants_linear_sampling_perc = 0.25
apply_to_eroded_wm = True
ants_convergence_thresh = 1e-06
ants_multilab_interpolation_parameters = (5.0, 5.0)
ants_gauss_interpolation_parameters = (5.0, 5.0)
ants_nonlinear_cost = CC
ants_bspline_interpolation_parameters = (3,)
apply_to_eroded_brain = False
uses_qform = True
pipeline = fMRI
ants_lower_quantile = 0.005
no_search = True
ants_linear_cost = MI
ants_linear_gradient_step = 0.1
apply_to_eroded_csf = True
ants_perform_syn = True
ants_linear_sampling_strategy = Regular
dof = 6
diffusion_imaging_model = 
fsl_cost = normmi
ants_nonlinear_gradient_step = 0.1

[Multi-processing]
number_of_cores = 1

Reason
The FreeSurfer subjects directory and subject id required by tkregister2 and bbregister are not set properly.
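For illustration, a hedged sketch of the inputs that need to be set explicitly on Nipype's BBRegister interface; the paths and subject id below are placeholders:

```python
from nipype.interfaces.freesurfer import BBRegister

bbreg = BBRegister()
bbreg.inputs.subjects_dir = "/output_dir/freesurfer"   # FreeSurfer SUBJECTS_DIR
bbreg.inputs.subject_id = "sub-103818_ses-01"          # FreeSurfer subject id
bbreg.inputs.source_file = "mean_bold.nii.gz"          # moving (functional) image
bbreg.inputs.contrast_type = "t2"
bbreg.inputs.init = "header"
```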

dipy_CSD executes with error

Description
dipy_CSD fails with an error when trying to import the actor module from dipy.viz to save the results to a PNG.

Full error

	 [Node] Setting-up "diffusion_pipeline.diffusion_stage.reconstruction.dipy_CSD" in "/output_dir/nipype/sub-103818/ses-01/diffusion_pipeline/diffusion_stage/reconstruction/dipy_CSD".
200528-07:46:28,13 nipype.workflow INFO:
	 [Node] Running "dipy_CSD" ("cmtklib.interfaces.dipy.CSD")
200528-07:48:43,262 nipype.interface INFO:
	 response: 
200528-07:48:43,266 nipype.interface INFO:
	 (array([0.00125123, 0.00029411, 0.00029411]), 3523.608)
200528-07:48:43,267 nipype.interface INFO:
	 ratio: 0.235058
200528-07:48:43,267 nipype.interface INFO:
	 nbr_voxel_used: 2833
200528-07:48:43,374 nipype.interface INFO:
	 Fitting CSD model
200528-08:23:39,551 nipype.interface INFO:
	 Save Spherical Harmonics image
200528-08:24:09,593 nipype.workflow WARNING:
	 [Node] Error on "diffusion_pipeline.diffusion_stage.reconstruction.dipy_CSD" (/output_dir/nipype/sub-103818/ses-01/diffusion_pipeline/diffusion_stage/reconstruction/dipy_CSD)
200528-08:24:09,601 nipype.workflow ERROR:
	 Node dipy_CSD failed to run on host f6accec1a5ea.
200528-08:24:09,604 nipype.workflow ERROR:
	 Saving crash info to /tmp/crash-20200528-082409-UID1000-dipy_CSD-8abfd994-ccf7-424c-a1c6-73c35d45ff1e.txt
Traceback (most recent call last):
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/plugins/linear.py", line 48, in run
    node.run(updatehash=updatehash)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 369, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/cmtklib/interfaces/dipy.py", line 317, in _run_interface
    from dipy.viz import actor, window
ImportError: cannot import name actor

Traceback (most recent call last):
  File "/opt/conda/envs/py27cmp-core/bin/connectomemapper3", line 180, in <module>
    dmri_pipeline.process()
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/cmp/pipelines/diffusion/diffusion.py", line 968, in process
    flow.run()
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 599, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/plugins/linear.py", line 48, in run
    node.run(updatehash=updatehash)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 369, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/cmtklib/interfaces/dipy.py", line 317, in _run_interface
    from dipy.viz import actor, window
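dipy.viz relies on an optional visualization dependency (VTK or fury, depending on the Dipy version), so a hedged way to avoid crashing the node is to guard the import and skip the PNG rendering when it fails:

```python
try:
    from dipy.viz import actor, window
    HAVE_DIPY_VIZ = True
except ImportError:
    HAVE_DIPY_VIZ = False

if HAVE_DIPY_VIZ:
    # ... render and save the SH/ODF figure to PNG ...
    pass
else:
    print("dipy.viz not available (VTK/fury missing); skipping PNG rendering")
```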

tckgen: [ERROR] Backtracking not valid for deterministic algorithms

When using the deterministic ACT tractography of MRtrix, the backtrack option should be set to False.

Steps to reproduce the error

From the connectome mapper GUI (cmpbidsappmanager):

  • Load your unprocessed BIDS dataset with T1w and DWI
  • Click on the left button to open the pipeline configurator window
  • Go to diffusion pipeline panel
  • Click on Reconstruction and Tractography stage button

At this step, tractography should be set by default to use the MRtrix ACT probabilistic algorithm with backtracking enabled.

  • Change the type of tractography method from probabilistic to deterministic

The backtracking option has been grayed out. However, its value is still True and causes the error

Solution

Set backtrack to False whenever the type of tractography algorithm is changed to deterministic (see the sketch below).
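A hedged sketch of that behaviour using a Traits static change handler; the attribute names (tracking_mode, backtrack) are illustrative and may differ from the actual CMP3 configuration class:

```python
from traits.api import Bool, Enum, HasTraits

class TractographyConfig(HasTraits):
    tracking_mode = Enum("Probabilistic", "Deterministic")
    backtrack = Bool(True)

    def _tracking_mode_changed(self, new):
        # Backtracking is only valid for probabilistic ACT in MRtrix
        if new == "Deterministic":
            self.backtrack = False
```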

Update documentation

To be updated:

  • Mention of use of anonymized BIDS App run tracking to support funding requests.
  • Edit changes.rst
  • Review part on singularity

Migration to python 3

"Python 2.7 will receive bugfix support until January 1, 2020. After the last release, 2.7 will receive no support." from https://www.python.org/dev/peps/pep-0373/

Tasks:

  • Make code compatible with python 3 using automated tools (2to3 or modernize)
    Or
  • Correct code incompatible with python 3 detected by pycharm code compatibility inspection tool
  • Update python and dependencies in conda environment files
  • Execution tests (local) with GUI
  • Build and execution test (CircleCI) with update of ds-sample.output.txt
  • Update version in info.py
  • Update doc with the version list of changes (Build checked locally)

Continuous integration: regression tests for every pipeline

To make the release of future versions more robust to changes in the source code, regression tests within CircleCI should cover the most typical use cases for each pipeline / diffusion modality.

This would require the following tasks:

Sample dataset for test

  • Acquisition of sample MRI dataset including:
    • MPRAGE (T1W)
    • DSI
    • Resting-state fMRI
  • Conversion to BIDS dataset
  • Publish dataset to open platform (Zenodo)

Implementation of regression test

  • Update .circleci/config.yml:
    • Load new sample dataset
    • Run anatomical pipeline and diffusion pipeline on DSI data (Lausanne2018, SHORE, MRtrix3 Det. tracking)
    • Update expected outputs.txt in repo

Unable to create a miniconda2 environment

Dear Experts,

I tried installing Connectome Mapper 3 on a MacBook after a failed attempt on Linux. However, I'm unable to create a miniconda2 environment as suggested in the documentation.

Please find the log below and advise.

Thank you,

Best Regards,
Amit.

(base) Amitkumars-MacBook-Pro:Applications amitjc$ conda env create -f connectomemapper3/environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • git-annex=7.20190219

(base) Amitkumars-MacBook-Pro:Applications amitjc$

Unable to process datasets from NFS mounted storage

Error

RuntimeError: Command:
5ttgen freesurfer -nocrop -sgm_amyg_hipp /tmp/derivatives/cmp/sub-001/ses-20170714/anat/sub-001_ses-20170714_desc-aparcaseg_dseg.nii.gz mrtrix_5tt.nii.gz
Standard output:

Standard error:
5ttgen: 
5ttgen: Note that this script makes use of commands / algorithms that have relevant articles for citation. Please consult the help page (-help option) for more information.
5ttgen: 
5ttgen: Generated temporary directory: /tmp/derivatives/nipype/sub-001/ses-20170714/diffusion_pipeline/preprocessing_stage/mrtrix_5tt/5ttgen-tmp-EQMGFD/
Command:  mrconvert /tmp/derivatives/cmp/sub-001/ses-20170714/anat/sub-001_ses-20170714_desc-aparcaseg_dseg.nii.gz /tmp/derivatives/nipype/sub-001/ses-20170714/diffusion_pipeline/preprocessing_stage/mrtrix_5tt/5ttgen-tmp-EQMGFD/input.mif
5ttgen: Changing to temporary directory (/tmp/derivatives/nipype/sub-001/ses-20170714/diffusion_pipeline/preprocessing_stage/mrtrix_5tt/5ttgen-tmp-EQMGFD/)
Command:  labelconvert input.mif /opt/freesurfer/FreeSurferColorLUT.txt /opt/mrtrix3/share/mrtrix3/_5ttgen/FreeSurfer2ACT_sgm_amyg_hipp.txt indices.mif
Command:  mrcalc indices.mif 1 -eq cgm.mif
Command:  mrcalc indices.mif 2 -eq sgm.mif
Command:  mrcalc indices.mif 3 -eq  wm.mif
Command:  mrcalc indices.mif 4 -eq csf.mif
Command:  mrcalc indices.mif 5 -eq path.mif
Command:  mrcat cgm.mif sgm.mif wm.mif csf.mif path.mif - -axis 3 | mrconvert - result.mif -datatype float32
Command:  5ttcheck result.mif
Command:  mrconvert result.mif /tmp/derivatives/nipype/sub-001/ses-20170714/diffusion_pipeline/preprocessing_stage/mrtrix_5tt/mrtrix_5tt.nii.gz
5ttgen: Changing back to original directory (/tmp/derivatives/nipype/sub-001/ses-20170714/diffusion_pipeline/preprocessing_stage/mrtrix_5tt)
5ttgen: Deleting temporary directory /tmp/derivatives/nipype/sub-001/ses-20170714/diffusion_pipeline/preprocessing_stage/mrtrix_5tt/5ttgen-tmp-EQMGFD/
Traceback (most recent call last):
  File "/opt/mrtrix3/bin/5ttgen", line 56, in <module>
    app.complete()
  File "/opt/mrtrix3/lib/mrtrix3/app.py", line 268, in complete
    shutil.rmtree(tempDir)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/shutil.py", line 270, in rmtree
    onerror(os.rmdir, path, sys.exc_info())
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/shutil.py", line 268, in rmtree
    os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/tmp/derivatives/nipype/sub-001/ses-20170714/diffusion_pipeline/preprocessing_stage/mrtrix_5tt/5ttgen-tmp-EQMGFD/'
Return code: 1

What causes the error
I have been successfully running this code on datasets stored locally. The error appeared when we switched to processing a dataset from an NFS-mounted storage.
See similar error at https://mail.python.org/pipermail/python-list/2009-July/543694.html

Solution
So far, we have not found a solution for successfully processing the dataset stored on the NFS-mounted drive.

Workaround
Make a local copy of the dataset to be processed (see also the retry sketch below).
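For reference, a hedged retry-based removal that is sometimes used to work around NFS silly-rename (.nfsXXXX) files keeping a directory non-empty; this is a sketch, not a tested fix for 5ttgen itself:

```python
import shutil
import time

def rmtree_with_retries(path, retries=5, delay=2.0):
    """Retry shutil.rmtree to give NFS time to release .nfsXXXX files."""
    for attempt in range(retries):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```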

Point process analysis of brain fMRI dynamics

Creation of the spatio-temporal connectome following the method presented in Griffa 2017, based on Tagliazucchi 2012.

Tasks:

  • Threshold the fMRI ROI time-series to compute binary activity pattern vectors, OR keep the fMRI ROI time-series values above the threshold to compute non-binary activity pattern vectors (see the sketch below)
  • Refinement based on structural connectivity
  • Save the structurally-refined activity pattern vectors

Code to be integrated is available at https://github.com/agriffa/STConn
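A toy numpy sketch of the thresholding task above, assuming z-scored ROI time-series of shape (n_rois, n_timepoints) and an illustrative threshold of 1 SD:

```python
import numpy as np

ts = np.random.randn(83, 200)   # placeholder ROI time-series (n_rois x n_timepoints)
thresh = 1.0

binary_patterns = (ts > thresh).astype(int)           # binary activity pattern vectors
nonbinary_patterns = np.where(ts > thresh, ts, 0.0)   # keep only supra-threshold values
```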

DTI Reconstruction & Tractography settings will not open

Hi dev team,

Really excited to try this tool out! I happened to find this just as I began a bimodal connectivity study. I installed the BIDS-app docker container and GUI python code, and I was met with the following error upon trying to enter the DTI Reconstruction and Tractography Settings:

Exception occurred in traits notification handler for object: <cmp.bidsappmanager.pipelines.diffusion.diffusion.DiffusionPipelineUI object at 0x2b3bdca059b0>, trait: diffusion, old value: <undefined>, new value: 0
Traceback (most recent call last):
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traits/trait_notifiers.py", line 381, in __call__
    self.handler(*args)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/cmp/bidsappmanager/pipelines/diffusion/diffusion.py", line 116, in _diffusion_fired
    self.stages['Diffusion'].configure_traits(view=self.view_mode)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traits/has_traits.py", line 2088, in configure_traits
    args,
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/toolkit.py", line 224, in view_application
    id, scrollable, args)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/view_application.py", line 92, in view_application
    args=args)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/view.py", line 446, in ui
    ui.ui(parent, kind)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/ui.py", line 244, in ui
    self.rebuild(self, parent)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/toolkit.py", line 159, in ui_livemodal
    ui_live.ui_livemodal(ui, parent)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_live.py", line 47, in ui_livemodal
    _ui_dialog(ui, parent, BaseDialog.MODAL)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_live.py", line 63, in _ui_dialog
    BaseDialog.display_ui(ui, parent, style)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_base.py", line 278, in display_ui
    ui.owner.init(ui, parent, style)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_live.py", line 203, in init
    self.add_contents(panel(ui), bbox)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 265, in panel
    panel = _GroupPanel(content[0], ui).control
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 603, in __init__
    layout = self._add_groups(content, inner)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 683, in _add_groups
    panel = _GroupPanel(subgroup, self.ui).control
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 605, in __init__
    layout = self._add_items(content, inner)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 876, in _add_items
    editor.prepare(inner)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/editor.py", line 172, in prepare
    self.update_editor()
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/instance_editor.py", line 303, in update_editor
    self.resynch_editor()
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/instance_editor.py", line 364, in resynch_editor
    self.factory.id)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/view.py", line 446, in ui
    ui.ui(parent, kind)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/ui.py", line 244, in ui
    self.rebuild(self, parent)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/toolkit.py", line 152, in ui_subpanel
    ui_panel.ui_subpanel(ui, parent)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 78, in ui_subpanel
    _ui_panel_for(ui, parent, True)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 84, in _ui_panel_for
    ui.control = control = _Panel(ui, parent, is_subpanel).control
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 148, in __init__
    self.control = panel(ui)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 265, in panel
    panel = _GroupPanel(content[0], ui).control
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 603, in __init__
    layout = self._add_groups(content, inner)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 683, in _add_groups
    panel = _GroupPanel(subgroup, self.ui).control
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 605, in __init__
    layout = self._add_items(content, inner)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 876, in _add_items
    editor.prepare(inner)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/editor.py", line 172, in prepare
    self.update_editor()
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/instance_editor.py", line 303, in update_editor
    self.resynch_editor()
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/instance_editor.py", line 364, in resynch_editor
    self.factory.id)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/view.py", line 446, in ui
    ui.ui(parent, kind)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/ui.py", line 244, in ui
    self.rebuild(self, parent)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/toolkit.py", line 152, in ui_subpanel
    ui_panel.ui_subpanel(ui, parent)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 78, in ui_subpanel
    _ui_panel_for(ui, parent, True)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 84, in _ui_panel_for
    ui.control = control = _Panel(ui, parent, is_subpanel).control
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 148, in __init__
    self.control = panel(ui)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 265, in panel
    panel = _GroupPanel(content[0], ui).control
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 603, in __init__
    layout = self._add_groups(content, inner)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 683, in _add_groups
    panel = _GroupPanel(subgroup, self.ui).control
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 605, in __init__
    layout = self._add_items(content, inner)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/qt4/ui_panel.py", line 868, in _add_items
    ui, object, name, item.tooltip, None
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/editor_factory.py", line 147, in simple_editor
    description=description)
  File "/om2/user/smeisler/anaconda3/envs/py37cmp-gui/lib/python3.7/site-packages/traitsui/editor.py", line 147, in __init__
    self.old_value = getattr(self.object, self.name)
AttributeError: 'Dipy_recon_configUI' object has no attribute 'shore_lambdaN'

I suppose I can edit these settings in the .ini / .json files directly, but it would be more convenient to do so in the GUI.

Thanks in advance for the help!
Steven

FSL BET Error

Hi,

I came across the following error when using FSL for brain extraction:

Node: anatomical_pipeline.segmentation_stage.fsl_bet
Working directory: /output_dir/nipype/sub-ABCD1714/anatomical_pipeline/segmentation_stage/fsl_bet

Node inputs:

args = <undefined>
center = <undefined>
environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}
frac = <undefined>
functional = <undefined>
in_file = /output_dir/nipype/sub-ABCD1714/anatomical_pipeline/segmentation_stage/niigzConvert/nu.nii.gz
mask = True
mesh = <undefined>
no_output = <undefined>
out_file = brain.nii.gz
outline = <undefined>
output_type = NIFTI_GZ
padding = <undefined>
radius = <undefined>
reduce_bias = <undefined>
remove_eyes = <undefined>
robust = True
skull = True
surfaces = <undefined>
t2_guided = <undefined>
threshold = <undefined>
vertical_gradient = <undefined>

Traceback (most recent call last):
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/pipeline/plugins/linear.py", line 46, in run
    node.run(updatehash=updatehash)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 397, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/interfaces/fsl/preprocess.py", line 162, in _run_interface
    runtime = super(BET, self)._run_interface(runtime)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 792, in _run_interface
    self.raise_exception(runtime)
  File "/opt/conda/envs/py37cmp-core/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 723, in raise_exception
    ).format(**runtime.dictcopy())
RuntimeError: Command:
bet /output_dir/nipype/sub-ABCD1714/anatomical_pipeline/segmentation_stage/niigzConvert/nu.nii.gz brain.nii.gz -m -R -s
Standard output:
/opt/fsl/bin/bet failed during operation
Standard error:
/opt/fsl/bin/bet: line 267: dc: command not found
Return code: 1

Freesurfer brain extraction seems to work okay, however.

Best,
Steven

Custom Atlases

Hello,

Does anyone have experience with custom atlases on ConnectomeMapper3? I'm keen to try the Brainnetome atlas and was wondering which files to use as the NIfTI and graphml files.

Aswin

ENH: Implementation of FSL topup for DWI preprocessing

Following the issue #18 to integrate FSL topup correction (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/topup) in the diffusion pipeline

Tasks I see so far:

  • Implementation of a new Nipype interface specific to FSL topup in cmtklib/interfaces/fsl.py, OR reuse of an interface already implemented inside Nipype (not yet checked; see the sketch after this list)

  • Update of the interface to FSL eddy in cmtklib/interfaces/fsl.py which should now take as possible input the output of FSL topup.

  • Update of the main subworkflow dedicated to the preprocessing of DWI data (in cmp/stages/preprocessing/preprocessing.py). Workflow should be updated or inputs/outputs of interfaces should be rearranged appropriately with the new OR modified interface in the function create_workflow of the class PreprocessingStage. Option for enabling/disabling should be added as a Boolean class attribute/trait to the class PreprocessingConfig.

Note: More parameters related to FSL topup itself must also be added there.

  • If necessary, update the main diffusion workflow (in cmp/pipelines/diffusion/diffusion.py). This might involve updating outputs to be sinked to the main CMP3 outputs (<derivatives>/cmp/dwi).

  • New attributes/traits can then be easily integrated and tuned in the graphical user interface by an appropriate update of the corresponding graphical component, implemented in cmp/bidsappmanager/stages/preprocessing/preprocessing.py
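On the first point, Nipype does ship TOPUP and Eddy interfaces (nipype.interfaces.fsl.TOPUP / .Eddy), so the CMP3 interface may only need to wrap or extend them; a hedged sketch with placeholder file names:

```python
from nipype.interfaces import fsl

topup = fsl.TOPUP()
topup.inputs.in_file = "b0_blips.nii.gz"        # e.g. merged AP/PA b0 volumes
topup.inputs.encoding_file = "acqparams.txt"    # phase-encoding / readout table
res = topup.run()

eddy = fsl.Eddy()
eddy.inputs.in_topup_fieldcoef = res.outputs.out_fieldcoef  # feed topup into eddy
eddy.inputs.in_topup_movpar = res.outputs.out_movpar
# ... plus the usual eddy inputs (in_file, in_mask, in_index, in_acqp, bvecs, bvals)
```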

Displaying Results

Hello,

I'm trying to use SurfIce to display some of my results from analyses. Is there a nii version of the default Lausanne 2018 atlas (eg for MNI brain or something) that I can use for this?

My other option is to use one of the subjects in my dataset.

Thanks,
Aswin

Singularity Issues with CMP v3.0.0-RC1

Hello,

I built a singularity image of CMP3 from the latest Docker file. When I try to run the file I am met with the following error:
COMMAND
singularity run -e -B /om4 /om2/user/smeisler/connectomemapper.img --bids_dir /om/project/PARC/BIDS/data --output_dir /om/project/PARC/BIDS/derivatives --participant_label sub-ABCD1702 --anat_pipeline_config /om/project/PARC/BIDS/data/code/ref_anatomical_config.ini --func_pipeline_config /om/project/PARC/BIDS/data/code/ref_fMRI_config.ini --dwi_pipeline /om/project/PARC/BIDS/data/code/ref_diffusion_config.ini --fs_license /om4/group/gablab/dti_proc/license.txt

ERROR
id: '\\': no such user id: '\\': no such user /.singularity.d/actions/run: 29: export: /om/project/PARC/BIDS/data: bad variable name

However, when I open the /.singularity.d/actions/run for the CMP container, I only see 23 lines, so I can't seem to find which script is being executed or how to fix it.

Do you know how I may fix this problem?

Thanks,
Steven

ENH: Compute and plot network statistics of structural and functional connectivity matrices

Goal
Serve as quality assurance

Tasks

  • Computes and plots of connectome-specific summary statistics as in ndmg:
    • Implement a load_graph() which takes the full path of the connectome to load
    • Implement a load_graphs() which takes the BIDS root directory, the output directory, the (sub)list, and the parcellation scheme. It will call load_graph() for each subject / session / parcellation scale.
    • Implement a compute_network_metrics() which:
      1. differentiate structural and functional connectivity matrices and
      2. compute, for each connectivity matrix scale, network measures and statistics (number of non-zero edges, max local statistic sequence, clustering coefficients, degree sequence, edge weight sequence, eigenvalues, betweenness centrality, mean/median connectome); see the sketch after this list
  • Integration of plots in the processing summary report of the diffusion pipeline (#10)
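A hedged sketch of what load_graph() and compute_network_metrics() could look like for the .gpickle connectomes, using networkx (read_gpickle is available in networkx < 3.0); the edge weight attribute name is an assumption:

```python
import networkx as nx

def load_graph(connectome_path):
    """Load one connectome graph from its full path."""
    return nx.read_gpickle(connectome_path)

def compute_network_metrics(graph, weight="number_of_fibers"):
    """Compute a few of the summary statistics listed above."""
    return {
        "n_nonzero_edges": graph.number_of_edges(),
        "degree_sequence": sorted(d for _, d in graph.degree()),
        "clustering_coefficients": nx.clustering(graph, weight=weight),
        "betweenness_centrality": nx.betweenness_centrality(graph, weight=weight),
    }
```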

ENH: Implementation of MSMT-CSD for estimation of fiber orientation distribution

Following #18 to integrate MRtrix3 MSMT-CSD for estimation of fiber orientation distribution

Tasks I see so far:

  • Implementation of a new Nipype interface specific to MRtrix3 MSMT-CSD OR modification of Nipype interface for CSD already provided in CMP3 in cmtklib/interfaces/mrtrix3.py.

Note: MSMT-CSD would require estimating response functions specific to each tissue class using dwi2response dhollander dwi.mif wm_response.txt gm_response.txt csf_response.txt [ options ] (see the MRtrix3 documentation page)

  • Update of the main diffusion subworkflow and its subworkflow dedicated to the estimation of the fiber orientation distributions (in cmp/stages/diffusion/reconstruction.py). The local_model_editor and the local_model attributes of the class MRtrix_recon_config should be revisited to allow selection of Tensor, CSD, and MSMT-CSD. Workflow should be updated or inputs/outputs of interfaces should be rearranged appropriately with the new OR modified interface in the function create_mrtrix_recon_flow.

Note: Other new class attributes/traits (parameters) can be added in MRtrix_recon_config if needed.

  • If necessary, rearrangement/Reconnection OR update of the main diffusion workflow (in cmp/pipelines/diffusion/diffusion.py), this might involve updating outputs to be sinked to the main CMP3 outputs (<derivatives>/cmp/dwi).

  • If needed, new attributes/traits can then be easily integrated and tuned in the graphical user interface by an appropriate update of the corresponding graphical component, implemented in cmp/bidsappmanager/stages/diffusion/reconstruction.py

CI: Improving code coverage

Adding tests in CircleCI for:

  • Diffusion pipeline with FLIRT co-registration (DSI data)
  • Diffusion pipeline with DTI and CSD models (DTI data)
  • fMRI pipeline with FLIRT co-registration

Unable to launch connectomemapper GUI

Hello

I'll be highly obliged if someone could help me with launching the GUI.

please find the log below:

(py27cmp-gui) amitjc@AMiT:/usr/local/connectomemapper/connectomemapper3$ cmpbidsappmanager
Traceback (most recent call last):
File "/usr/local/bin/cmpbidsappmanager", line 15, in
from bids import BIDSLayout
ImportError: No module named bids

Please advise.

Thanks and Regards,
Amit.

BUG: CMTK.org still points to the old and now deprecated version of CMP

This involves:

  • Update CMTK.org to point to the documentation of the new version (Seems only JP Thiran is the admin)
  • Provide a way for users of previous versions to still access the old documentation
  • Put a notification in the GitHub repositories of the previous versions that they are now deprecated, with no more support, and that CMP3 is now the latest maintained version with support.
    PR LTS5/cmp#148 and PR LTS5/cmp_nipype#48 have been created for this purpose, but we do not know who is still admin and able to merge them.

Feature Request / How to Implement: SIFT filtering for dMRI streamlines

Hello,

If I wanted to implement SIFT/SIFT2 filtering for streamlines before making the structural connectome, would the following work/be possible?

  1. Make two sets of config files, one for doing all the preprocessing and tractography (anat, fmri, dmri) that stops before connectome making, and another that only does connectome production.
  2. Run CMP using the first config file set
  3. Run SIFT/SIFT2 filtering on the tractograms (see the sketch after this list)
  4. Run CMP using the second config file set, so the diffusion connectome is filtered.
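
As referenced in step 3, here is a minimal sketch of the filtering step, assuming MRtrix3 is installed and that the (hypothetical) paths below point to a tractogram and FOD image produced by the first CMP run:

# Minimal sketch of step 3, calling MRtrix3 tcksift2 between the two CMP runs.
# All file paths below are hypothetical placeholders.
import subprocess

tractogram = "derivatives/cmp/sub-01/dwi/sub-01_tracks.tck"      # hypothetical
fod_image = "derivatives/cmp/sub-01/dwi/sub-01_fodf.mif"         # hypothetical
sift2_weights = "derivatives/cmp/sub-01/dwi/sub-01_sift2_weights.txt"

# tcksift2 writes per-streamline weights that the connectome stage could reuse.
subprocess.run(["tcksift2", tractogram, fod_image, sift2_weights], check=True)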

I think this feature would make a good addition to the MRtrix3 diffusion pipeline, and it only requires a few additional commands.

Please let me know what you think, thanks!
Steven

ENH: "Custom feature": Importing preprocessed data from BIDS Apps

While adopting BIDS in CMP3, the way CMP works changed dramatically, and this feature, available in previous versions, is not working anymore for the moment.
However, this is an important feature, and restoring it would also require rethinking how the "custom" feature works, for instance by accepting BIDS derivatives directories produced by smriprep, dmriprep, or fmriprep.

tckgen: [ERROR] Cannot use image /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_gmwmi/gmwmi_resampled_warped.nii.gz for rejection sampling - image is empty

How to reproduce the error

The error is raised while using:

  • Lausanne2018 parcellation with no extra structures selected (no thalamic nuclei, no brainstem substructures, no hippocampal subfields)
  • MRtrix anatomically-constrained tractography

For more details, below are the configuration files of the pipelines as well as the complete crash report generated by the tckgen interface:

Anatomical pipeline

[Global]
subject_session = ses-1
subjects = ['sub-01']
process_type = anatomical
subject = sub-01

[parcellation_stage]
parcellation_scheme = Lausanne2018
brain_file = 
segment_hippocampal_subfields = False
segment_brainstem = False
ants_precision_type = double
thalamic_nuclei_maps = app/connectomemapper3/cmtklib/data/segmentation/thalamus2018/Thalamus_Nuclei-HCP-4DSPAMs.nii.gz
include_thalamic_nuclei_parcellation = False
template_thalamus = app/connectomemapper3/cmtklib/data/segmentation/thalamus2018/mni_icbm152_t1_tal_nlin_sym_09b_hires_1.nii.gz
pipeline_mode = Diffusion
atlas_nifti_file = 
parcellation_scheme_editor = ['NativeFreesurfer', 'Lausanne2008', 'Lausanne2018', 'Custom']
number_of_regions = 0
graphml_file = 
atlas_info = {}
pre_custom = Lausanne2008
csf_file = 

[segmentation_stage]
ants_templatefile = 
make_isotropic = True
freesurfer_subject_id = 
isotropic_interpolation = cubic
isotropic_vox_size = 1.0
freesurfer_subjects_dir = 
brain_mask_extraction_tool = Freesurfer
freesurfer_args = 
white_matter_mask = 
ants_regmaskfile = 
seg_tool = Freesurfer
use_fsl_brain_mask = False
brain_mask_path = 
ants_probmaskfile = 
use_existing_freesurfer_data = False

[Multi-processing]
number_of_cores = 1

Diffusion pipeline

[Global]
modalities = []
process_type = diffusion
subject_session = ses-1
diffusion_imaging_model = DSI
dmri_bids_acq = dsiNdir129
subjects = ['sub-01']
subject = sub-01

[diffusion_stage]
tracking_processing_tool_editor = ['Dipy', 'MRtrix', 'Custom']
tracking_processing_tool = MRtrix
dilation_kernel = Box
recon_processing_tool_editor = ['Dipy', 'Custom']
custom_track_file = 
dilation_radius = 1
diffusion_model = Deterministic
dilate_rois = True
recon_processing_tool = Dipy
dipy_tracking_config.fa_thresh = 0.2
dipy_tracking_config.seed_density = 1.0
dipy_tracking_config.number_of_seeds = 1000
dipy_tracking_config.tracking_mode = Deterministic
dipy_tracking_config.use_act = False
dipy_tracking_config.imaging_model = DSI
dipy_tracking_config.sh_order = 8
dipy_tracking_config.curvature = 0.0
dipy_tracking_config.step_size = 0.5
dipy_tracking_config.seed_from_gmwmi = False
dipy_tracking_config.sd = True
dipy_tracking_config.max_angle = 25.0
diffusion_imaging_model_editor = ['DSI', 'DTI', 'HARDI']
mrtrix_tracking_config.min_length = 5.0
mrtrix_tracking_config.angle = 45.0
mrtrix_tracking_config.tracking_mode = Deterministic
mrtrix_tracking_config.use_act = True
mrtrix_tracking_config.cutoff_value = 0.05
mrtrix_tracking_config.backtrack = False
mrtrix_tracking_config.curvature = 0.0
mrtrix_tracking_config.crop_at_gmwmi = True
mrtrix_tracking_config.step_size = 0.5
mrtrix_tracking_config.max_length = 500.0
mrtrix_tracking_config.desired_number_of_tracks = 200000
mrtrix_tracking_config.seed_from_gmwmi = True
mrtrix_tracking_config.sd = True
mrtrix_recon_config.recon_mode = Probabilistic
mrtrix_recon_config.imaging_model = DSI
mrtrix_recon_config.single_fib_thr = 0.7
mrtrix_recon_config.lmax_order = Auto
mrtrix_recon_config.flip_table_axis = []
mrtrix_recon_config.local_model_editor = {True: 'Constrained Spherical Deconvolution'}
mrtrix_recon_config.local_model = True
mrtrix_recon_config.normalize_to_b0 = False
diffusion_model_editor = ['Deterministic', 'Probabilistic']
dipy_recon_config.shore_radial_order = 4
dipy_recon_config.shore_constrain_e0 = False
dipy_recon_config.positivity_constraint = True
dipy_recon_config.tracking_processing_tool = MRtrix
dipy_recon_config.imaging_model = DSI
dipy_recon_config.small_delta = 0.02
dipy_recon_config.big_delta = 0.5
dipy_recon_config.lmax_order = Auto
dipy_recon_config.shore_lambdal = 1e-08
dipy_recon_config.shore_lambdan = 1e-08
dipy_recon_config.local_model_editor = {False: '1:Tensor', True: '2:Constrained Spherical Deconvolution'}
dipy_recon_config.shore_zeta = 700
dipy_recon_config.local_model = True
dipy_recon_config.radial_order = 8
dipy_recon_config.radial_order_values = [2, 4, 6, 8, 10, 12]
dipy_recon_config.single_fib_thr = 0.7
dipy_recon_config.recon_mode = Probabilistic
dipy_recon_config.flip_table_axis = []
dipy_recon_config.shore_tau = 0.0338336
dipy_recon_config.laplacian_regularization = True
dipy_recon_config.mapmri = False
dipy_recon_config.laplacian_weighting = 0.05
dipy_recon_config.shore_positive_constraint = True
diffusion_imaging_model = DSI
processing_tool_editor = ['Dipy', 'MRtrix', 'Custom']

[preprocessing_stage]
eddy_current_and_motion_correction = True
description = description
denoising = False
total_readout = 0.0
eddy_correct_motion_correction = True
dipy_noise_model = Rician
denoising_algo = MRtrix (MP-PCA)
bias_field_algo = ANTS N4
bias_field_correction = False
fast_use_priors = True
resampling = (1, 1, 1)
partial_volume_estimation = True
eddy_correction_algo = FSL eddy_correct
interpolation = interpolate

[connectome_stage]
circular_layout = False
probtrackx = False
output_types = ['gPickle', 'cff']
compute_curvature = False
log_visualization = True
connectivity_metrics = ['Fiber number', 'Fiber length', 'Fiber density', 'Fiber proportion', 'Normalized fiber density', 'ADC', 'gFA']
subject = sub-01

[registration_stage]
ants_convergence_winsize = 10
ants_nonlinear_update_field_variance = 3.0
flirt_args = 
contrast_type = dti
ants_upper_quantile = 0.995
use_float_precision = False
ants_nonlinear_total_field_variance = 0.0
registration_mode = ANTs
init = header
ants_interpolation = BSpline
ants_linear_sampling_perc = 0.25
apply_to_eroded_wm = True
ants_convergence_thresh = 1e-06
ants_multilab_interpolation_parameters = (5.0, 5.0)
ants_gauss_interpolation_parameters = (5.0, 5.0)
ants_nonlinear_cost = CC
ants_bspline_interpolation_parameters = (5,)
apply_to_eroded_brain = False
uses_qform = True
pipeline = Diffusion
ants_lower_quantile = 0.005
no_search = True
ants_linear_cost = MI
ants_linear_gradient_step = 0.1
apply_to_eroded_csf = True
ants_perform_syn = True
ants_linear_sampling_strategy = Regular
dof = 6
diffusion_imaging_model = 
fsl_cost = normmi
ants_nonlinear_gradient_step = 0.1

[Multi-processing]
number_of_cores = 1

Complete crash report

Node: diffusion_pipeline.diffusion_stage.tracking.mrtrix_deterministic_tracking
Working directory: /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/diffusion_stage/tracking/mrtrix_deterministic_tracking

Node inputs:

act_file = /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_5tt/act_5tt_resampled_warped.nii.gz
angle = 45.0
args = <undefined>
backtrack = False
crop_at_gmwmi = True
cutoff_value = 0.05
desired_number_of_tracks = 200000
do_not_precompute = <undefined>
environ = {}
gradient_encoding_file = /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/extract_grad/grad.txt
in_file = /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/diffusion_stage/reconstruction/dipy_SHORE/shore_fodf.nii.gz
initial_cutoff_value = <undefined>
initial_direction = <undefined>
inputmodel = SD_Stream
mask_file = <undefined>
maximum_number_of_seeds = <undefined>
maximum_tract_length = 500.0
minimum_tract_length = 5.0
out_file = <undefined>
rk4 = <undefined>
seed_file = <undefined>
seed_gmwmi = /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_gmwmi/gmwmi_resampled_warped.nii.gz
seed_spec = <undefined>
step_size = 0.5
stop = <undefined>
unidirectional = <undefined>

Traceback (most recent call last):
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/plugins/linear.py", line 48, in run
    node.run(updatehash=updatehash)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 369, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 752, in _run_interface
    self.raise_exception(runtime)
  File "/opt/conda/envs/py27cmp-core/lib/python2.7/site-packages/nipype/interfaces/base/core.py", line 689, in raise_exception
    ).format(**runtime.dictcopy()))
RuntimeError: Command:
tckgen /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/diffusion_stage/reconstruction/dipy_SHORE/shore_fodf.nii.gz -act /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_5tt/act_5tt_resampled_warped.nii.gz -angle 45.0 -crop_at_gmwmi -cutoff 0.05 -select 200000 -grad /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/extract_grad/grad.txt -maxlength 500.0 -minlength 5.0 -seed_gmwmi /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_gmwmi/gmwmi_resampled_warped.nii.gz -step 0.5 -algorithm SD_Stream shore_fodf_tracked.tck
Standard output:

Standard error:

tckgen: [  0%] uncompressing image "/tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_5tt/act_5tt_resampled_warped.nii.gz"...
...
tckgen: [100%] uncompressing image "/tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_5tt/act_5tt_resampled_warped.nii.gz"

tckgen: [  0%] uncompressing image "/tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_gmwmi/gmwmi_resampled_warped.nii.gz"...
...
tckgen: [100%] uncompressing image "/tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_gmwmi/gmwmi_resampled_warped.nii.gz"
tckgen: [ERROR] Cannot use image /tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/registration_stage/apply_warp_gmwmi/gmwmi_resampled_warped.nii.gz for rejection sampling - image is empty
Return code: 1
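
A quick hedged diagnostic for this failure is to check whether the warped GM/WM-interface image passed to -seed_gmwmi actually contains non-zero voxels; the path below is taken from the crash report above, and nibabel is assumed to be available.

# Hedged diagnostic sketch: count non-zero voxels in the image tckgen rejects.
import nibabel as nib
import numpy as np

gmwmi_path = ("/tmp/derivatives/nipype/sub-01/ses-1/diffusion_pipeline/"
              "registration_stage/apply_warp_gmwmi/gmwmi_resampled_warped.nii.gz")

data = nib.load(gmwmi_path).get_fdata()
# A count of 0 reproduces the "image is empty" rejection-sampling error.
print("non-zero voxels:", int(np.count_nonzero(data)))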

Cannot launch the output inspector window of the GUI

The bidsappmanager of CMP v3.0.0-beta-RC1 fails to launch the quality inspector window. Based on the terminal output, it fails to find existing outputs even when they exist (see below).

Terminal output

...
[PIPELINE INIT DONE]
Anatomical output(s) available : False
Diffusion output(s) available : False
fMRI output(s) available : False

Reason
Since merged PR #25, the BIDS App of CMP3 uses the internal /bids_dir and /output_dir instead of /tmp and /tmp/derivatives for the input and output directories, but I forgot to make the corresponding updates in the bidsappmanager.

This will be fixed in PR #33
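
A hedged illustration of the kind of path update involved (the directory layout and subject folder below are assumptions for illustration, not the fix merged in PR #33):

# Sketch only: the output check should look under the BIDS App mount points
# (/bids_dir, /output_dir) rather than under /tmp. Paths are illustrative.
import os

output_dir = "/output_dir"  # previously assumed to be "/tmp/derivatives"
anat_outputs = os.path.join(output_dir, "cmp", "sub-01", "anat")
print("Anatomical output(s) available :", os.path.isdir(anat_outputs))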

Bug in deployment of docker image from master branch (latest tag)

Job "deploy_docker_latest" on CircleCI failed with error:

#!/bin/bash -eo pipefail
# Get version, update files.
THISVERSION=$( python /home/circleci/src/connectomemapper3/get_version.py )
echo "THISVERSION : ${THISVERSION}"
echo "CIRCLE_BRANCH : ${CIRCLE_BRANCH}"

if [[ -n "$DOCKER_PASS" ]]; then
  docker login -u $DOCKER_USER -p $DOCKER_PASS
  docker tag sebastientourbier/connectomemapper3 sebastientourbier/connectomemapper-bidsapp:latest
  docker push sebastientourbier/connectomemapper-bidsapp:latest
fi

python: can't open file '/home/circleci/src/connectomemapper3/get_version.py': [Errno 2] No such file or directory


Exited with code exit status 2

CircleCI received exit code 2

See corresponding CircleCI workflow for more details

A few issues with diffusion pipeline

Hello,

Connectome Mapper is fantastic, thank you. I would like to ask a couple of questions and report one bug.

Q1) Is it possible to use the top-up function for distortion correction with a negative PE sequence?

Q2) I have multi-shell diffusion data, but the output seems relatively smooth across grey matter and CSF - is it possible to use the MSMT-CSD function instead of standard CSD?

Bug1) Some of the viewing options (especially for registration checking) use fslview, which has been discontinued and therefore does not bring up the viewer; can this please be changed to fsleyes?

Thanks,
Aswin

[TASK] Write missing docstrings and documentation

Tasks

  • Write docstrings where missing to describe modules / classes / functions. Add citations to software and/or papers wherever applicable (an illustrative docstring sketch is given after this list)
  • Insert API documentation in the docs
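
A minimal example of what such a docstring could look like (the numpydoc style and the function shown are illustrative assumptions, not requirements stated in this issue):

# Illustrative docstring sketch only; the function and its description are
# placeholders, and the numpydoc style is an assumption.
def load_graph(connectome_file):
    """Load a connectivity graph saved by CMP3.

    Parameters
    ----------
    connectome_file : str
        Full path to the connectome file (e.g. a gPickle produced by the
        connectome stage).

    Returns
    -------
    networkx.Graph
        The loaded connectivity graph.

    References
    ----------
    Citations to the software and/or papers used would go here.
    """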

Resources

ENH: Processing summary report for diffusion pipeline

It should include:

  • Summary of acquisition metadata
  • Preprocessing results (denoising, bias field correction, eddy current distortion correction, distortion correction using fieldmaps)
  • Registration overlay
  • FODs
  • (Tractogram)
  • Connectivity matrices (see the plotting sketch after this list)
  • (Measure from group analysis)
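
For the connectivity matrices item, a minimal plotting sketch that such a report could embed (the gPickle file name and the edge attribute name are assumptions):

# Sketch for rendering a connectivity matrix heatmap for the summary report.
# File name and edge attribute are hypothetical placeholders; a NetworkX
# version providing both read_gpickle and to_numpy_array (2.x) is assumed.
import matplotlib
matplotlib.use("Agg")  # render off-screen, suitable for report generation
import matplotlib.pyplot as plt
import networkx as nx

graph = nx.read_gpickle("sub-01_connectome_scale1.gpickle")    # hypothetical
matrix = nx.to_numpy_array(graph, weight="number_of_fibers")   # assumed attribute

fig, ax = plt.subplots(figsize=(5, 5))
im = ax.imshow(matrix, cmap="viridis")
fig.colorbar(im, ax=ax, label="number of fibers")
ax.set_title("Structural connectivity (scale 1)")
fig.savefig("sub-01_connectome_scale1.png", dpi=150)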

Problems while setting up the conda environment/ cannot find libffi

Hello,

I have been trying to install the Connectome mapper, but I am getting an error message while setting up the conda environment:

(base) Davids-MBP-3:Applications mynotebook$ conda env create -f connectomemapper3/environment.yml
Collecting package metadata (repodata.json): done
Solving environment: done
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Ran pip subprocess with arguments:
[u'/opt/miniconda2/envs/py27cmp-gui/bin/python', '-m', 'pip', 'install', '-U', '-r', '/Applications/connectomemapper3/tmp2JfN7s.requirements.txt']
Pip subprocess output:

Pip subprocess error:
Traceback (most recent call last):
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/__main__.py", line 16, in <module>
    from pip._internal import main as _main  # isort:skip # noqa
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/_internal/__init__.py", line 40, in <module>
    from pip._internal.cli.autocompletion import autocomplete
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/_internal/cli/autocompletion.py", line 8, in <module>
    from pip._internal.cli.main_parser import create_main_parser
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/_internal/cli/main_parser.py", line 12, in <module>
    from pip._internal.commands import (
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/_internal/commands/__init__.py", line 6, in <module>
    from pip._internal.commands.completion import CompletionCommand
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/_internal/commands/completion.py", line 6, in <module>
    from pip._internal.cli.base_command import Command
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 20, in <module>
    from pip._internal.download import PipSession
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/_internal/download.py", line 37, in <module>
    from pip._internal.utils.glibc import libc_ver
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/site-packages/pip/_internal/utils/glibc.py", line 3, in <module>
    import ctypes
  File "/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/ctypes/__init__.py", line 7, in <module>
    from _ctypes import Union, Structure, Array
ImportError: dlopen(/opt/miniconda2/envs/py27cmp-gui/lib/python2.7/lib-dynload/_ctypes.so, 2): Library not loaded: @rpath/libffi.6.dylib
  Referenced from: /opt/miniconda2/envs/py27cmp-gui/lib/python2.7/lib-dynload/_ctypes.so
  Reason: image not found

CondaEnvException: Pip failed

I have already tried reinstalling libffi with Homebrew and renaming the libffi.7.dylib file to libffi.6.dylib in my conda environment and in /usr/local; however, it seems that the program still cannot find the libffi.6.dylib file.

Has anyone any ideas how to fix this issue?

Thanks in advance,
David
