bids-apps / mrtrix3_connectome

Generate subject connectomes from raw BIDS data & perform inter-subject connection density normalisation, using the MRtrix3 software package.

Home Page: http://www.mrtrix.org/

License: Apache License 2.0

Languages: Python 91.77%, Dockerfile 4.26%, Singularity 3.96%
Topics: bids, bidsapp, diffusion-mri, mri

Contributors: chrisgorgo, lestropie

mrtrix3_connectome's Issues

eddy_openmp --help warning (update dockerhub?)

I downloaded from Docker Hub. I now see that the last build on Docker Hub was a year ago, which probably accounts for this issue. The docker run complains about a call to eddy_openmp --help (see the warning below):

dpat@Saci:/Volumes/Main/Working/DockerMRtrix% docker run -i --name mr3 -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /tmp/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 02 --parcellation desikan
mrtrix3_connectome.py:
mrtrix3_connectome.py: Note that this script makes use of commands / algorithms that have relevant articles for citation; INCLUDING FROM EXTERNAL SOFTWARE PACKAGES. Please consult the help page (-help option) for more information.
mrtrix3_connectome.py:
Command:  bids-validator /bids_dataset
mrtrix3_connectome.py: Commencing execution for subject sub-02
mrtrix3_connectome.py: N4BiasFieldCorrection and ROBEX found; will use for bias field correction and brain extraction
mrtrix3_connectome.py: Generated temporary directory: /mrtrix3_connectome.py-tmp-CN4AJK/
mrtrix3_connectome.py: Importing DWI data into temporary directory
Command:  mrconvert /bids_dataset/sub-02/dwi/sub-02_dwi.nii.gz -fslgrad /bids_dataset/sub-02/dwi/sub-02_dwi.bvec /bids_dataset/sub-02/dwi/sub-02_dwi.bval -json_import /bids_dataset/sub-02/dwi/sub-02_dwi.json /mrtrix3_connectome.py-tmp-CN4AJK/dwi1.mif
mrtrix3_connectome.py: Importing fmap data into temporary directory
mrtrix3_connectome.py: Importing T1 image into temporary directory
Command:  mrconvert /bids_dataset/sub-02/anat/sub-02_T1w.nii.gz /mrtrix3_connectome.py-tmp-CN4AJK/T1.mif
mrtrix3_connectome.py: Changing to temporary directory (/mrtrix3_connectome.py-tmp-CN4AJK/)
mrtrix3_connectome.py: Performing MP-PCA denoising of DWI data
Command:  dwidenoise dwi1.mif dwi1_denoised.mif
mrtrix3_connectome.py: Performing Gibbs ringing removal for DWI data
Command:  mrdegibbs dwi1_denoised.mif dwi1_denoised_degibbs.mif -nshifts 50
mrtrix3_connectome.py: Performing various geometric corrections of DWIs

Command:  eddy_openmp --help
mrtrix3_connectome.py: [WARNING] Command failed: eddy_openmp --help

Command:  dwipreproc dwi1_denoised_degibbs.mif dwi_preprocessed.mif -rpe_header

Slice Timing

Hi @Lestropie ,

I got CUDA working with Docker and wanted to test using eddy_cuda9.1. However, I'm running into a problem. At the point of dwipreproc, the slice timing in the header has been lost for dwipreproc_in. That image ends up having a single slice timing value, rather than e.g. 60 values.

I know this was suggested in #44. Thoughts on a solution?
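
For diagnosis, a quick way to check whether the slice timing survives into an intermediate image is to read it back from the header key-values. A minimal sketch, assuming the MRtrix3 scripts library's image.Header exposes a keyval() accessor (consistent with the _keyval field visible in the debug output elsewhere on this page, but the method name is an assumption):

from mrtrix3 import image

# Inspect the slice timing stored in the intermediate image's header;
# a healthy DWI header should carry one value per slice (e.g. 60 values)
header = image.Header('dwipreproc_in.mif')
print(header.keyval().get('SliceTiming'))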

New parcellations

Not in any particular order, just adding to (& subtracting from) the list as I think of them (/ implement them).

  • Lausanne multi-resolution

  • Yeo 7-network and 17-network parcellations (following connected-component analysis)

  • Brainnetome

  • Schaefer multi-resolution

Test data - fmap data

Need a test data set that includes reversed phase-encode EPI data for susceptibility distortion correction. From memory I went through the BIDS example data list and didn't find anything. @chrisfilo Are you able to add something, or is there a way for me to upload a couple of data sets?

Group-level: Test b-values

Originally listed in #19.

Currently, the group-level analysis makes certain assumptions about the consistency of data across subjects. This means that if the pipeline were executed for subjects with different acquisition protocols etc., the script may fail outright. It would be preferable to instead output all relevant information during the participant-level analyses, and then explicitly verify the consistency of these data across subjects at the commencement of group-wise analysis.
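
A minimal sketch of such a cross-subject check, assuming each participant-level analysis writes its shell b-values to a JSON file (the dwi_info.json filename and shell_bvalues key are hypothetical):

import json
from pathlib import Path

def check_shell_consistency(output_dir, tolerance=50.0):
    # Gather the b-value shells recorded by each participant-level analysis
    shells = {path.parent.name: json.loads(path.read_text())['shell_bvalues']
              for path in Path(output_dir).glob('sub-*/dwi_info.json')}
    ref_subject, ref_shells = next(iter(sorted(shells.items())))
    for subject, bvalues in shells.items():
        if len(bvalues) != len(ref_shells) or any(
                abs(b - r) > tolerance
                for b, r in zip(sorted(bvalues), sorted(ref_shells))):
            raise ValueError('%s has shells %s, but %s has %s'
                             % (subject, bvalues, ref_subject, ref_shells))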

Incorporate pybids

Using pybids should enable better detection and handling of e.g. multi-session data.
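
For illustration, a sketch of what the query side might look like with pybids' BIDSLayout API (how the app would wire this in is an assumption):

from bids import BIDSLayout

layout = BIDSLayout('/bids_dataset')

# Enumerate sessions explicitly rather than assuming single-session data
for subject in layout.get_subjects():
    for session in layout.get_sessions(subject=subject) or [None]:
        dwis = layout.get(subject=subject, session=session,
                          suffix='dwi', extension='.nii.gz')
        print(subject, session, [f.path for f in dwis])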

Naming of FreeSurfer parcellations

I have noticed that both the Desikan and Destrieux parcellation files are incorrectly referenced.
The Desikan parcellation file is aparc+aseg.mgz.
The Destrieux parcellation file is aparc.a2009s+aseg.mgz.

Update external software in container

  • Update FSL to 5.0.10

  • Update FreeSurfer to v6

    • If using v6, use parallelisation in the recon-all call. It should use the same value as -nthreads / --n_cpus if provided.

Edit: Get the requisite MRtrix3 changes for this into the MRtrix3 dev branch, so that the latest fixes can be utilised.

eddy error, data not shelled

Hi @Lestropie,

I spoke too soon, eddy is giving me an error now: dwipreproc: Output of failed command:
eddy: msg=ECScanManager::GetShellIndicies: Data not shelled
terminate called after throwing an instance of 'EDDY::EddyException'
what(): eddy: msg=ECScanManager::GetShellIndicies: Data not shelled
I know this issue is with eddy, and I can confirm the data is shelled (bvals attached). I'm wondering if there is a workaround I can use in Docker (I think it's usually solved by " --data_is_shelled"?). Thanks again for your help!

Alex
sub-NDARINV1EECRFPM_ses-baselineYear1Arm1_dwi.bval.txt

Error when running in parallel

Simple question, I hope. Is there a way to pick up where something left off if it fails? For instance, I was running a subject in parallel with a lot of other people (which maxed out our 80-core system, and I'm hoping that that's the issue) and I got a std::bad_alloc error on this step:

mrcalc 1 dwi_meanbzero.mif -div dwi_mask.mif -mult - | mrhistmatch nonlinear - T1_biascorr_brain.mif dwi_pseudoT1.mif -mask_input dwi_mask.mif -mask_target T1_mask.mif

I kept all of the intermediate files (verbosity 3 + debug). Is there a way to pick this person up using the temp folders so that I don't need to run topup/eddy again?

Top-of-list improvements to be made to app

Initially listing all of these together; may split out in the future for the sake of progress tracking / discussion.

  • Have capability to correct EPI distortions using dual-echo gradient echo image data rather than SE EPI data.

    • Edit: Somewhat discussed on MRtrix3 forum here.

Edit Added 08/08/17:

  • For group-level analysis, output scaling factors, not only in per-subject files, but also in group-level files, including all subjects and all factors (for manual QA).

--parcellation option: Allow list

It should be possible to generate parcellation images, and hence connectomes, for more than one parcellation in a single execution of the pipeline. Each requisite step (e.g. FreeSurfer recon-all execution, T1->MNI registration) should only be performed once, even if required for more than one parcellation. The group-level analysis would need to discover for which parcellations connectomes have been generated for all subjects, and operate accordingly.
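
A rough sketch of how prerequisite steps could be de-duplicated across the requested parcellations (the dependency table below is invented for illustration, not the app's actual mapping):

# Hypothetical table: which expensive steps each parcellation requires
PREREQUISITES = {
    'desikan':   {'freesurfer'},
    'destrieux': {'freesurfer'},
    'aal':       {'mni_registration'},
    'hcpmmp1':   {'freesurfer', 'mni_registration'},
}

def required_steps(parcellations):
    steps = set()
    for parc in parcellations:
        steps |= PREREQUISITES[parc]
    return steps

# e.g. desikan + hcpmmp1 -> run recon-all once and T1->MNI registration once
print(required_steps(['desikan', 'hcpmmp1']))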

Updates for release of eddyqc

  • MRtrix3 will be updated such that eddy_quad can be executed inside of dwipreproc and its output captured (many of the eddy files required for eddy_quad are generated and then discarded inside of dwipreproc). MRtrix3_connectome should invoke the relevant dwipreproc command-line option during participant-level processing, and then ensure that the contents of the generated directory are written to the output target directory.

  • In group-level analysis, eddy_squad should be executed if the participant-level directories contain the relevant QUAD output.

Fail in different spot with hcpmmp1 atlas and verbosity level 3

I think the test run got further this time, but is still failing to complete. We must be very close. The error log is attached:
error.txt

The docker command I ran was this: docker run -i --rm -v ${PWD}:/bids_dataset -v ${PWD}/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 102 --parcellation hcpmmp1 --output_verbosity 3 --debug --skip-bids-validator

The directory does not contain the anat, dwi, connectome or tractogram subdirectories.

As always, thanks for continuing to work on this, and I stand ready to provide any additional information you might need.

-Dianne

dwi and fmap images not on same grid

Originally listed in #19.

Currently, the script concatenates any SE-EPI data present in the fmap/ directory with the DWIs stored in the dwi/ directory, so that denoising can be applied to these images (the denoising process requires a large number of volumes; if just the contents of fmap/ were provided to dwidenoise, it would not work); it additionally means that mrdegibbs need only be executed once.

However, this makes a fundamental assumption: the images in the fmap/ directory are defined on the same voxel grid as those in the dwi/ directory. If this is not the case, then mrcat will fail.

To avoid this, it is necessary to explicitly test whether or not these images lie on the same voxel grid. If they do, then processing can be performed as it is currently. If they do not, then (see the sketch after this list):

  • The concatenation of DWI and fmap data should be skipped.
  • dwidenoise should be applied exclusively to the DWIs.
  • mrdegibbs should be applied separately to DWI and fmap images.
  • The mrconvert calls to separate DWI and fmap data from concatenation should be skipped.
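
Something like the following could perform that test; a minimal sketch, assuming the MRtrix3 scripts library's image.Header exposes size(), spacing() and transform() accessors (consistent with the fields visible in the debug output elsewhere on this page, but the exact method names are an assumption):

from mrtrix3 import image

def same_voxel_grid(path_a, path_b, tol=1e-4):
    a, b = image.Header(path_a), image.Header(path_b)
    if a.size()[:3] != b.size()[:3]:
        return False
    if any(abs(sa - sb) > tol
           for sa, sb in zip(a.spacing()[:3], b.spacing()[:3])):
        return False
    # Compare the 4x4 image transforms element-wise
    return all(abs(ta - tb) < tol
               for row_a, row_b in zip(a.transform(), b.transform())
               for ta, tb in zip(row_a, row_b))

# Concatenate DWI and fmap only when this returns True; otherwise run
# dwidenoise on the DWIs alone and mrdegibbs on each series separately
print(same_voxel_grid('dwi1.mif', 'fmap1.mif'))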

Updates for MRtrix3 3.0_RC2

  • Modify Dockerfile to pull the 3.0_RC2 tag.

  • Use mrdegibbs rather than unring.a64.

  • Use mtnormalise, including accounting for the scaling factor embedded in the image header in group-level analysis. Remove dwiintensitynorm step.

  • Consider incorporating upsampling of DWI data (if >= 2.0mm) prior to CSD.

Some typos in parc-hcpmmp1_lookup.txt ?

Hi

I think I found some typos within hcpmmp1_ordered.txt/hcpmmp1_original.txt/parc-hcpmmp1_lookup.txt

All "L" or "l" of right hemisphere regions seem to be converted to "R".
I encountered this mistake when I tried to map the connectome region names to the names in the annotation file from freesurfer.
Comparing with the names in Glasser et al. 2016 I found the following typos.

ID false correct
20 RO1 LO1
21 RO2 LO2
25 PSR PSL
26 SFR SFL
39 5R 5L
42 7AR 7AL
46 7PR 7PL
48 RIPv LIPv
70 8BR 8BL
76 47R 47l
91 11R 11l
92 13R 13l
95 RIPd LIPd
124 PBeRt PBelt
159 RO3 LO3
173 MBeRt MBelt
174 RBeRt LBelt

best
Paul

How to specify -rpe_none for dwipreproc?

I was unable to run the docker container out of the box :(

I executed
docker run -i --rm -v /Users/admin/Anvil/sync/datasets/adolescent-development-study/mris.bids:/bids_dataset -v /Users/admin/Anvil/sync/datasets/adolescent-development-study/derivatives/mritrix-connectome:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --debug --parcellation fs_2005

And received the error

run.py: [DEBUG] run.exeName() (from run.py:53): bids-validator -> bids-validator
run.py: [DEBUG] run.versionMatch() (from run.py:338): Command bids-validator not found in MRtrix3 bin/ directory
run.py: [DEBUG] run.exeName() (from run.py:340): bids-validator -> bids-validator
run.py: [DEBUG] run._shebang() (from run.py:54): File "bids-validator": string "#!/usr/bin/env node": ['/usr/bin/env', 'node']
run.py: [DEBUG] run.command() (from run.py:659): To execute: [['/usr/bin/env', 'node', '/usr/bin/bids-validator', '/bids_dataset']]
Command:  bids-validator /bids_dataset
run.py: Commencing execution for subject sub-001
run.py: [DEBUG] fsl.exeName() (from run.py:26): fsl5.0-flirt
run.py: [DEBUG] fsl.exeName() (from run.py:27): fsl5.0-fsl_anat
run.py: [DEBUG] fsl.suffix() (from run.py:28): NIFTI_GZ -> .nii.gz
run.py: Generated temporary directory: /run.py-tmp-YWW6SY/
run.py: [ERROR] Inadequate data for pre-processing of subject 'sub-001': No phase-encoding contrast in input DWIs or fmap/ directory
run.py: Contents of temporary directory kept, location: /run.py-tmp-YWW6SY/

This project did not collect phase-encoding contrasts (it used an older Siemens Tim Trio), so we cannot perform inhomogeneity field estimation.

Any ideas on how to proceed with the docker execution of this dataset?

Thanks,
S

docker documentation inconsistent: output vs outputs

https://github.com/BIDS-Apps/MRtrix3_connectome

To run the script in participant level mode (for processing one subject only), use e.g.:

$ docker run -i --rm \
  -v /Users/yourname/data:/bids_dataset \
  -v /Users/yourname/outputs:/outputs \
  bids/mrtrix3_connectome \
  /bids_dataset /outputs participant --participant_label 01 --parcellation desikan
Following processing of all participants, the script can be run in group analysis mode using e.g.:

$ docker run -i --rm \
  -v /Users/yourname/data:/bids_dataset \
  -v /Users/yourname/output:/output \
  bids/mrtrix3_connectome \
  /bids_dataset /output group

Specify the version of MRtrix3 when installing

This could be a commit hash or a tag. Specifying it in the Dockerfile makes it possible to trigger an image rebuild on Docker Hub by updating the version in the file. It also helps with figuring out which version was used to build the container without having to pull the image.

bids-validator failure.

Hello MRtrix3_connectome user,

I was trying to test drive this bids app, but ran into some slightly "opaque" errors. When trying to use this syntax:
docker run -i --rm -v ~/Volumes/Hanson/NKI_HealthyBrainNetwork/RU/R1/:/bids_dataset -v /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/RU/R1/derivatives/MRtrix3:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label NDARAG340ERT --parcellation fs_2005 -nocleanup -n 5

I get the following output:
Command: bids-validator /bids_dataset
run.py:
run.py: [ERROR] Command failed: bids-validator /bids_dataset (run.py:659)
run.py: Output of failed command:

    <--- Last few GCs --->

      944598 ms: Scavenge 1398.9 (1457.1) -> 1398.9 (1457.1) MB, 1.7 / 0 ms (+ 1.6 ms in 1 steps since last GC) [allocation failure] [incremental marking delaying mark-sweep].
      945879 ms: Mark-sweep 1398.9 (1457.1) -> 1397.8 (1457.1) MB, 1280.6 / 0 ms (+ 1.6 ms in 1 steps since start of marking, biggest step 1.6 ms) [last resort gc].
      947042 ms: Mark-sweep 1397.8 (1457.1) -> 1398.4 (1457.1) MB, 1163.4 / 0 ms [last resort gc].


    <--- JS stacktrace --->

    ==== JS stack trace =========================================

    Security context: 0x5df37337399 <JS Object>
        2: /* anonymous */ [/usr/lib/node_modules/bids-validator/validators/bids.js:209] [pc=0x19fa99d4d554] (this=0x5df373b8139 <JS Global Object>,file=0x25df25b6cbf1 <an Object with map 0x1d7a42fde8f1>,key=0x2ad7a5531141 <String[7]: 1188769>,cb=0x180a2edc0901 <JS Function (SharedFunctionInfo 0x2ad7a55552d1)>)
        3: iterateeCallback(aka iterateeCallback) [/usr/lib/node_modules/bids-validator/node_...

    FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory

run.py:

My BIDS directory has a fair number of subjects (271), each with multiple types of scans (structural, rest, DTI)... so I wasn't sure if this was causing an issue... or if there was something in my call that hung things up? Has anyone run into this? Any thoughts are greatly appreciated!

Thanks much!
Jamie.

Group analysis: Tabulate scaling factors

Initially listed in #3.

Tabulate calculations that are performed in group-level analysis into a single file, so that potential outliers are easier to find during manual QA.
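
A minimal sketch of such a tabulation, collecting per-subject factors into a single CSV (the column names here are invented for illustration):

import csv

# Hypothetical per-subject scaling factors gathered during group-level analysis
factors = {'sub-01': {'bzero_median': 1234.5, 'connectome_scale': 1.02},
           'sub-02': {'bzero_median': 1198.2, 'connectome_scale': 0.97}}

with open('group_scaling_factors.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['subject', 'bzero_median',
                                           'connectome_scale'])
    writer.writeheader()
    for subject in sorted(factors):
        writer.writerow({'subject': subject, **factors[subject]})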

Known issues / limitations

Creating a list of items that are not "wishlist" additions, but are things that could potentially prevent an analysis from being run on particular data. Items may be added to this list, or existing items raised in priority, based on user feedback.

  • If DWI and fmap images are not defined on the same voxel grid, these cannot be concatenated. In this case it will not be possible to run dwidenoise on the fmap images (due to inadequate samples within the sliding window), but Gibbs ringing removal could still be run. Appropriate handling of such data in dwipreproc is awaiting updates to MRtrix3.

  • Explicit testing of consistent number of b-value shells / actual b-values acquired / lmax per tissue is not performed in the group-level analysis. Mismatches will therefore lead to either outright errors or erroneous behaviour.

Aligning multiple processing streams

Hi @Lestropie,
I have a grand plan to run a multimodal analysis using T1 information (particularly cortical thickness and gray matter density), rsFMRI, and DTI/tractography using the HCP MMP1 atlas. To that end, my analysis plan looks like this:

  1. BIDS-Apps/Freesurfer's recon-all, particularly for quality metrics and any future inquiries I may have.
  2. fMRIPREP, loading in the pre-run Freesurfer.
  3. XCP Engine (https://github.com/PennBBL/xcpEngine) for atlas-based ANTs cortical thickness on the fMRIPREP'd T1 images and atlas-based functional connectivity using the fMRIPREP'd rsFMRI and the XCP post-processed T1s (for coregistration).
  4. DWI preprocessing and connectome construction using MRtrix3_connectome.

My rationale for this ordering is to minimize duplicate (and potentially disparate) processing streams, particularly for the T1s, which will get processed by all of these steps (ANTs cortical thickness in XCP, 5ttgen in MRtrix3).

So here's my question: Is there a way for me to pipe the pre-run Freesurfer (for parcellation and labeling purposes) and the fMRIPREP preprocessed T1s (along with the brain mask) into MRtrix3_connectome as the target for 5ttgen, or will I need to follow some instructions (like the BATMAN tutorial) using these images outside of the Docker environment?

If I have to do that, if I place those images in the appropriate derivatives folders, will MRtrix3_connectome detect the existence of those files and not try to rerun the anatomical processing stream?

I hope my question is clear enough.

Thanks

First working version of app

Adding some details so that @chrisfilo (and anybody else who stumbles across this) can see what is required to get a baseline version of the app running. List may be altered as I test & adjust minimal requirements.

  • Finish MRtrix3 developments that will enable DWI pre-processing to be run without a priori knowledge of the phase-encoding design of either DWI or EPI SE field-mapping protocols, by reading this information from the image headers when available and passing this information to FSL's topup / eddy accordingly.
  • Testing of both participant and group-level analyses (which will involve inter-subject connection density normalisation followed by calculation of the group average connectome).

Fail when using hcpmmp1 atlas and verbosity level 3

Here is the command I ran (I have previously run this command successfully with the desikan atlas and no output verbosity flag):

dpat@Saci:/Volumes/Main/working/MRtrix_connectome2/data% docker run -i --rm -v ${PWD}:/bids_dataset -v ${PWD}/derivatives:/outputs diannepat/mrtrix3-hotfix /bids_dataset /outputs participant --participant_label 102 --parcellation hcpmmp1 --output_verbosity 3 --debug --skip-bids-validator

The error.txt contents:
labelconvert freesurfer/mri/aparc.HCPMMP1+aseg.mgz/aparc.HCPMMP1+aseg.mgz /mrtrix3/share/mrtrix3/labelconvert/hcpmmp1_original.txt /mrtrix3/share/mrtrix3/labelconvert/hcpmmp1_ordered.txt parc_init.mif

labelconvert: [INFO] opening image "freesurfer/mri/aparc.HCPMMP1+aseg.mgz/aparc.HCPMMP1+aseg.mgz"...
labelconvert: [ERROR] Not a directory
labelconvert: [ERROR] error opening image "freesurfer/mri/aparc.HCPMMP1+aseg.mgz/aparc.HCPMMP1+aseg.mgz"

I have saved the entire results directory, so if there is anything else you need, please let me know. I just started rerunning with the desikan atlas to see if it completes (since this looks like the issue from the error.txt).

Thank you,

Dianne

outputs directory not saved outside container unless debug mode is on.

This command runs happily for 12 hours, filling up a tmp directory inside the container and never writing it out to the local disk outside the container:

docker run -i --name mr3 -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /tmp/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 01 --parcellation desikan

If you run with debug, then the output gets written to the local disk outside the container.

docker run -i --name mr3 -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /tmp/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 01 --parcellation desikan --debug

Request to keep all intermediate files

Hi Robert,

Compared to the BATMAN tutorial, your container leaves very little in the way of intermediate files...even with output_verbosity = 3!

Could you add an option to keep even more (maybe all of it)?
I think we'd find that really helpful.

Thanks,

Dianne

Enable --parcellation none

If the user specifies --parcellation none, skip all processing related to connectome generation, but still perform all possible processing outside of tractogram / connectome generation. In this way the script could still be used as a sort of "default pre-processing".

Non-concatenated DWI/EPI issue

Sorry for the flurry of issues, but I figured it might be easier to have them separated out. Across all issues (#59, #60, and now this one), I have a Docker container that pulls the MRtrix3 master branch.

I rewrote the mrtrix_connectome.py file recently (see PR) and have further updated it today to stop concatenating the DWI and EPI files. Prior to this, dwipreproc did not have any issues. However, now it is throwing an error.

The script runs through, loads the DWI, EPI, and T1 images and converts them to .mifs as previously. Where it used to concatenate the DWI/EPI images, it now only concatenates multiple EPIs into fmap_cat.mif.

Steps taken in processing:

  1. DWI images are denoised and unringed, resulting in a file dwipreproc_in.mif
  2. Field maps are concatenated if there are multiples and then unringed and saved as se_epi.mif
  3. These are then fed to dwipreproc with the call:
    dwipreproc dwipreproc_in.mif dwi_preprocessed.mif -rpe_header -se_epi se_epi.mif -align_seepi -eddy_options " --cnr_maps --repo" -eddyqc_all eddyqc/
  4. Inside of dwipreproc, dwipreproc_in.mif and se_epi.mif are copied to the temporary directory and headers are loaded. At the start of dwipreproc all files are present and accounted for.
  5. I get the following output:
dwipreproc: Changing to temporary directory (/outputs/mrtrix3_connectome.py-tmp-9YA6MU/dwipreproc-tmp-6HU0HK/)
dwipreproc: Loading header for image file 'dwi.mif'
dwipreproc: Loading header for image file 'se_epi.mif'
dwipreproc: Command: '/mrtrix3/bin/mrinfo dwi.mif -shell_bvalues' (piping data to local storage)
dwipreproc: Result: 0 1000
Command:  dirstat dwi.mif -output asym
dwipreproc: DWIs and SE-EPI images used for inhomogeneity field estimation are defined on different image grids; the latter will be automatically re-gridded to match the former
Command:  mrtransform se_epi.mif - -interp sinc -template dwi.mif | mrcalc - 0.0 -max se_epi_regrid.mif
 dwipreproc: No phase-encoding contrast present in SE-EPI images; will examine again after combining with DWI b=0 images
Command:  dwiextract dwi.mif - -bzero | mrcat - se_epi.mif se_epi_regrid_dwibzeros.mif -axis 3
mrcat: [ERROR] failed to open key/value file "se_epi.mif": No such file or directory
mrcat: [ERROR] error opening image "se_epi.mif"

Based on lines 453-461 of dwipreproc, it looks like when the regridding is done, the original se_epi.mif file is removed from the temp dwipreproc folder. However, with -rpe_header specified, line 519 requires that file. I'm guessing that if regridding is done, then se_epi.mif should be replaced with se_epi_regrid.mif.

Because both the original se_epi image and the regridded version are assigned to se_epi_path, dwipreproc line 519 could probably be easily updated as:
run.command('dwiextract dwi.mif - -bzero | mrcat - ' + new_se_epi_path + ' -axis 3')

Create Singularity build specification file

This will allow direct generation of a Singularity container, rather than producing a Docker container and then converting. It will also allow hosting on Singularity Hub.

T1-to-template registration

  • Try getting mrregister to work as a viable option.

  • Should run N4 on subject image prior to registration; may not matter so much for ANTs or FSL, since they will estimate a bias field during non-linear registration, but this will be important for mrregister.

Reverse phase-encode fmap file not being handled

9659:feckless:working/trix3> docker run -i --rm -v /Volumes/Main/Working/trix3:/bids_dataset -v /Volumes/Main/Working/trix3/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /output participant --participant_label 330 --parcellation desikan --debug
mrtrix3_connectome.py:
mrtrix3_connectome.py: Note that this script makes use of commands / algorithms that have relevant articles for citation; INCLUDING FROM EXTERNAL SOFTWARE PACKAGES. Please consult the help page (-help option) for more information.
mrtrix3_connectome.py:
mrtrix3_connectome.py: [DEBUG] run.exeName() (from run.py:66): bids-validator -> bids-validator
mrtrix3_connectome.py: [DEBUG] run.versionMatch() (from run.py:375): Command bids-validator not found in MRtrix3 bin/ directory
mrtrix3_connectome.py: [DEBUG] run.exeName() (from run.py:377): bids-validator -> bids-validator
mrtrix3_connectome.py: [DEBUG] run._shebang() (from run.py:67): File "bids-validator": string "#!/usr/bin/env node": ['/usr/bin/env', 'node']
mrtrix3_connectome.py: [DEBUG] run.command() (from mrtrix3_connectome.py:1018): To execute: [['/usr/bin/env', 'node', '/usr/local/bin/bids-validator', '/bids_dataset']]
Command:  bids-validator /bids_dataset
mrtrix3_connectome.py: Commencing execution for subject sub-330
mrtrix3_connectome.py: N4BiasFieldCorrection and ROBEX found; will use for bias field correction and brain extraction
mrtrix3_connectome.py: [DEBUG] (from mrtrix3_connectome.py:128) 'reconall_multithread_options' =  -parallel
mrtrix3_connectome.py: Generated temporary directory: /output
mrtrix3_connectome.py: Importing DWI data into temporary directory
mrtrix3_connectome.py: [DEBUG] path.toTemp() (from mrtrix3_connectome.py:211): dwi1.mif -> /output/dwi1.mif
mrtrix3_connectome.py: [DEBUG] run.exeName() (from run.py:351): mrconvert -> mrconvert
mrtrix3_connectome.py: [DEBUG] run.versionMatch() (from run.py:57): Version-matched executable for mrconvert: /mrtrix3/bin/mrconvert
mrtrix3_connectome.py: [DEBUG] run._shebang() (from run.py:67): File "/mrtrix3/bin/mrconvert": Not a text file
mrtrix3_connectome.py: [DEBUG] run.command() (from mrtrix3_connectome.py:211): To execute: [['/mrtrix3/bin/mrconvert', '/bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.nii.gz', '-fslgrad', '/bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.bvec', '/bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.bval', '-json_import', '/bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.json', '/output/dwi1.mif', '-info']]
Command:  mrconvert /bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.nii.gz -fslgrad /bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.bvec /bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.bval -json_import /bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.json /output/dwi1.mif
          mrconvert: [INFO] opening image "/bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.nii.gz"...
          mrconvert: [INFO] Axes and transform of image "/bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.nii.gz" altered to approximate RAS coordinate system
          mrconvert: [INFO] image "/bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.nii.gz" opened with dimensions 128x128x74x32, voxel spacing 2x2x2x11.5, datatype Int16LE
          mrconvert: [INFO] found 32x4 diffusion gradient table
          mrconvert: [100%] uncompressing image "/bids_dataset/sub-330/dwi/sub-330_acq-AP_dwi.nii.gz"
          mrconvert: [INFO] creating image "/output/dwi1.mif"...
          mrconvert: [INFO] image "/output/dwi1.mif" created with dimensions 128x128x74x32, voxel spacing 2x2x2x11.5, datatype Int16LE
          mrconvert: [100%] copying from "/bids_data...i/sub-330_acq-AP_dwi.nii.gz" to "/output/dwi1.mif"
mrtrix3_connectome.py: Importing fmap data into temporary directory
mrtrix3_connectome.py: [DEBUG] path.newTemporary() (from image.py:11): /output/mrtrix-tmp-Y3VRMU.json
mrtrix3_connectome.py: [DEBUG] run.exeName() (from run.py:351): mrinfo -> mrinfo
mrtrix3_connectome.py: [DEBUG] run.versionMatch() (from image.py:12): Version-matched executable for mrinfo: /mrtrix3/bin/mrinfo
mrtrix3_connectome.py: [DEBUG] run.exeName() (from image.py:12): /mrtrix3/bin/mrinfo -> /mrtrix3/bin/mrinfo
mrtrix3_connectome.py: Loading header for image file '/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz'
mrtrix3_connectome.py: [DEBUG] image.__init__() (from mrtrix3_connectome.py:236): ['/mrtrix3/bin/mrinfo', '/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz', '-json_all', '/output/mrtrix-tmp-Y3VRMU.json']
mrtrix3_connectome.py: [DEBUG] image.__init__() (from mrtrix3_connectome.py:236): {'_intensity_offset': 0.0, '_format': u'NIfTI-1.1 (GZip compressed)', '_transform': [[1.0, 0.0, -0.0, -125.80451965332], [0.0, 1.0, -0.0, -133.989959716797], [-0.0, 0.0, 1.0, -61.0282020568848], [0.0, 0.0, 0.0, 1.0]], '_intensity_scale': 1.0, '_datatype': u'Int16LE', '_name': u'/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz', '_keyval': {u'comments': u'TE=86;Time=143557.320;phase=0;dwell=0.470'}, '_size': [128, 128, 74, 2], '_spacing': [2.0, 2.0, 2.0, 11.5], '_strides': [-1, 2, 3, 4]}
mrtrix3_connectome.py: [DEBUG] path.toTemp() (from mrtrix3_connectome.py:242): fmap1.mif -> /output/fmap1.mif
mrtrix3_connectome.py: [DEBUG] run.exeName() (from run.py:351): mrconvert -> mrconvert
mrtrix3_connectome.py: [DEBUG] run.versionMatch() (from run.py:57): Version-matched executable for mrconvert: /mrtrix3/bin/mrconvert
mrtrix3_connectome.py: [DEBUG] run._shebang() (from run.py:67): File "/mrtrix3/bin/mrconvert": Not a text file
mrtrix3_connectome.py: [DEBUG] run.command() (from mrtrix3_connectome.py:242): To execute: [['/mrtrix3/bin/mrconvert', '/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz', '-json_import', '/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.json', '-set_property', 'dw_scheme', '0,0,1,0\\n0,0,1,0', '/output/fmap1.mif', '-info']]
Command:  mrconvert /bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz -json_import /bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.json -set_property dw_scheme "0,0,1,0\n0,0,1,0" /output/fmap1.mif
          mrconvert: [INFO] opening image "/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz"...
          mrconvert: [INFO] Axes and transform of image "/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz" altered to approximate RAS coordinate system
          mrconvert: [INFO] image "/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz" opened with dimensions 128x128x74x2, voxel spacing 2x2x2x11.5, datatype Int16LE
          mrconvert: [100%] uncompressing image "/bids_dataset/sub-330/fmap/sub-330_dir-PA_epi.nii.gz"
          mrconvert: [INFO] creating image "/output/fmap1.mif"...
          mrconvert: [INFO] image "/output/fmap1.mif" created with dimensions 128x128x74x2, voxel spacing 2x2x2x11.5, datatype Int16LE
          mrconvert: [100%] copying from "/bids_data...p/sub-330_dir-PA_epi.nii.gz" to "/output/fmap1.mif"
mrtrix3_connectome.py: Importing T1 image into temporary directory
mrtrix3_connectome.py: [DEBUG] path.toTemp() (from mrtrix3_connectome.py:256): T1.mif -> /output/T1.mif
mrtrix3_connectome.py: [DEBUG] run.exeName() (from run.py:351): mrconvert -> mrconvert
mrtrix3_connectome.py: [DEBUG] run.versionMatch() (from run.py:57): Version-matched executable for mrconvert: /mrtrix3/bin/mrconvert
mrtrix3_connectome.py: [DEBUG] run._shebang() (from run.py:67): File "/mrtrix3/bin/mrconvert": Not a text file
mrtrix3_connectome.py: [DEBUG] run.command() (from mrtrix3_connectome.py:256): To execute: [['/mrtrix3/bin/mrconvert', '/bids_dataset/sub-330/anat/sub-330_T1w.nii.gz', '/output/T1.mif', '-info']]
Command:  mrconvert /bids_dataset/sub-330/anat/sub-330_T1w.nii.gz /output/T1.mif
          mrconvert: [INFO] opening image "/bids_dataset/sub-330/anat/sub-330_T1w.nii.gz"...
          mrconvert: [INFO] image "/bids_dataset/sub-330/anat/sub-330_T1w.nii.gz" opened with dimensions 176x240x256, voxel spacing 0.99999350309371948x1.0546875x1.0546875, datatype Int16LE
          mrconvert: [100%] uncompressing image "/bids_dataset/sub-330/anat/sub-330_T1w.nii.gz"
          mrconvert: [INFO] creating image "/output/T1.mif"...
          mrconvert: [INFO] image "/output/T1.mif" created with dimensions 176x240x256, voxel spacing 0.99999350309371948x1.0546875x1.0546875, datatype Int16LE
          mrconvert: [100%] copying from "/bids_data...330/anat/sub-330_T1w.nii.gz" to "/output/T1.mif"
mrtrix3_connectome.py: Changing to temporary directory (/output)
mrtrix3_connectome.py: Concatenating DWI and fmap data for combined pre-processing
mrtrix3_connectome.py: [DEBUG] run.exeName() (from run.py:351): mrcat -> mrcat
mrtrix3_connectome.py: [DEBUG] run.versionMatch() (from run.py:57): Version-matched executable for mrcat: /mrtrix3/bin/mrcat
mrtrix3_connectome.py: [DEBUG] run._shebang() (from run.py:67): File "/mrtrix3/bin/mrcat": Not a text file
mrtrix3_connectome.py: [DEBUG] run.command() (from mrtrix3_connectome.py:283): To execute: [['/mrtrix3/bin/mrcat', 'fmap1.mif', 'fmap_cat.mif', '-axis', '3', '-info']]
Command:  mrcat fmap1.mif fmap_cat.mif -axis 3
          mrcat: [ERROR] Expected at least 3 arguments (2 supplied)
mrtrix3_connectome.py:
mrtrix3_connectome.py: [ERROR] Command failed: mrcat fmap1.mif fmap_cat.mif -axis 3 (mrtrix3_connectome.py:283)
mrtrix3_connectome.py: Output of failed command:
                       mrcat: [ERROR] Expected at least 3 arguments (2 supplied)
mrtrix3_connectome.py:
mrtrix3_connectome.py: Changing back to original directory (/)
mrtrix3_connectome.py: Script failed while executing the command: mrcat fmap1.mif fmap_cat.mif -axis 3
mrtrix3_connectome.py: For debugging, inspect contents of temporary directory: /output

mrconvert parcRGB.mif ... Error when using parcellation=perry512

Hi developers
Thanks for the nice container.

I just encountered a minor issue.
When running the container with --parcellation perry512 and --output_verbosity 3, it generates an error during the final steps.

mrtrix3_connectome.py: [ERROR] Command failed: mrconvert parcRGB.mif /mrtrix3_out/sub-QL20120814/anat/sub-QL20120814_parc-perry512_colour.nii.gz -strides +1,+2,+3 (mrtrix3_connectome.py:797)
mrtrix3_connectome.py: Output of failed command:
mrconvert: [INFO] opening image "parcRGB.mif"...
mrconvert: [ERROR] failed to open key/value file "parcRGB.mif": No such file or directory
mrconvert: [ERROR] error opening image "parcRGB.mif"

I assume this is because "parcRGB.mif" is not created on line 650 of mrtrix3_connectome.py, because mrtrix_lut_file is empty for the specified parcellation.
However, on line 797 mrconvert tries to use that .mif file and thus generates the error.

best
Paul

Getting eddy_options

Related to #59, the script is not grabbing the eddy_options correctly. Here's the code (as in the currently posted release):

eddy_help_process = subprocess.Popen([eddy_binary, '--help'], stdin=None,
                                     stdout=subprocess.PIPE,
                                     stderr=subprocess.PIPE, shell=False)
(eddy_stdout, eddy_stderr) = eddy_help_process.communicate()
app.var(eddy_stdout, eddy_stderr)
eddy_options = []
for line in eddy_stderr:
    line = line.lstrip()
    if line.startswith('--repol'):
        eddy_options.append('--repol')
    elif line.startswith('--mporder') and have_slice_timing and eddy_cuda:
        eddy_options.append('--mporder=' + str(mporder))

As it stands, it is grabbing each character as line, so it isn't actually picking up any options. For example, since FSL 6.0.0 is installed in the current version, --repol should always be present but isn't getting propagated into the call. I have this working with an updated Python (3.6.8) inside the container:

for line in eddy_stderr.strip().split():
    line = line.decode('utf-8')
    if '--cnr_maps' in line:
        eddy_options.append('--cnr_maps')
    elif line.startswith('--repol'):
        eddy_options.append('--repol')
    elif line.startswith('--mporder') and have_slice_timing and eddy_cuda:
        eddy_options.append('--mporder=' + str(mporder))

Use dwicat

Awaiting MRtrix3 release and updating of this tool to the new Python API.

The new script dwicat could be used when concatenating across independent DWI series, in order to account for changes in intensity scaling between protocols.
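
Once available, the invocation from the scripts API would presumably be as simple as the following (sketch only; dwicat is not yet in a tagged release):

from mrtrix3 import run

# Concatenate two independently-acquired DWI series, letting dwicat estimate
# and correct any inter-series intensity scaling difference
run.command('dwicat dwi1.mif dwi2.mif dwi_merged.mif')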

dwipreproc fails

I started running a single subject as a test for this app using the newest version (0.4.0). I have a single DWI image in my sub/dwi folder and a single reverse phase-encoded EPI in the sub/fmap folder. The process appears to run fine for the denoising and deringing. Additionally, the program splits the concatenated DWI and RPE image:

mrconvert /bids_dataset/sub-168/dwi/sub-168_run-01_dwi.nii -fslgrad /bids_dataset/sub-168/dwi/sub-168_run-01_dwi.bvec /bids_dataset/sub-168/dwi/sub-168_run-01_dwi.bval -json_import /bids_dataset/sub-168/dwi/sub-168_run-01_dwi.json /output/mrtrix3_connectome.py-tmp-45ONXG/dwi1.mif
mrconvert /bids_dataset/sub-168/fmap/sub-168_run-01_dir-PA_epi.nii -json_import /bids_dataset/sub-168/fmap/sub-168_run-01_dir-PA_epi.json -set_property dw_scheme "0,0,1,0\n0,0,1,0" /output/mrtrix3_connectome.py-tmp-45ONXG/fmap1.mif
mrconvert /bids_dataset/sub-168/anat/sub-168_run-01_T1w.nii /output/mrtrix3_connectome.py-tmp-45ONXG/T1.mif
mrcat fmap1.mif dwi1.mif dwi_fmap_cat.mif -axis 3
dwidenoise dwi_fmap_cat.mif dwi_fmap_cat_denoised.mif
mrdegibbs dwi_fmap_cat_denoised.mif dwi_fmap_cat_denoised_degibbs.mif -nshifts 50
mrconvert dwi_fmap_cat_denoised_degibbs.mif se_epi.mif -coord 3 0:1
mrconvert dwi_fmap_cat_denoised_degibbs.mif dwipreproc_in.mif -coord 3 2:51
os.makedirs('eddyqc')

However, the following error occurs when the process advances to preproc:

dwipreproc: Loading header for image file 'dwi.mif' 
dwipreproc: Loading header for image file 'se_epi.mif'
dwipreproc: [ERROR] No diffusion gradient table found

If I do mrinfo on dwi.mif and se_epi.mif, they both have the same information and it's for the reverse phase-encoded image. I can confirm that the original images have different .json files and are indeed different images.

It seems as though part of the header info from the original DWI image got lost in the process. Is there a way to fix this, or to coerce dwipreproc to load the correct JSON?

Automate use of eddy second-level model

Run dirstat on the DWI prior to dwipreproc. If the mean diffusion gradient vector is strongly non-zero, provide -eddy_options " --slm=linear" to dwipreproc.
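
A minimal sketch of that decision, computing the mean direction directly from an FSL-format gradient table rather than via dirstat (the 0.5 threshold is an illustrative choice, not a validated one):

import numpy as np

bvals = np.loadtxt('dwi.bval')
bvecs = np.loadtxt('dwi.bvec')          # 3 x N, FSL convention

directions = bvecs[:, bvals > 0]        # ignore b=0 volumes
mean_vector = directions.mean(axis=1)

eddy_options = []
if np.linalg.norm(mean_vector) > 0.5:   # strongly asymmetric sampling scheme
    eddy_options.append('--slm=linear')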

Support for data without reversed phase-encode SE-EPI images

Initially listed in #3.

  • Support for dual-echo gradient-echo images to calculate field map. The estimate could then be provided to eddy via the --field option (this would require corresponding updates to MRtrix3).

  • Support for data without either reversed phase-encode spin-echo EPI images or GRE-based field map data. This could be done using something like BBR, or using methods currently under private development.

parcellation none followed by --preprocessed run

Hi Robert,

I was trying the following:
Run MRtrix3_connectome using --parcellation none in order to complete the pre-processing.
The goal was to use the resulting directory in subsequent commands with the --preprocessed flag.

docker run -i --rm -v ${PWD}:/bids_dataset -v ${PWD}/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 102 --parcellation none --output_verbosity 3 --debug

followed by this (run from the derivatives dir)

docker run -i --rm -v ${PWD}:/bids_dataset -v ${PWD}/desikan:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 102 --parcellation desikan --preprocessed --output_verbosity 3 --debug

This almost works. But at the very end of the second docker command it fails:

mrtrix3_connectome.py: Changing back to original directory (/)
mrtrix3_connectome.py: Script failed while executing the command: connectome2tck tractogram.tck assignments.csv exemplars.tck -tck_weights_in weights.csv -exemplars parc.mif -files single

If I go to the temp directory, I can run the step that killed it using -force (just calling MRtrix3 on my Mac):

$ connectome2tck tractogram.tck assignments.csv exemplars.tck -tck_weights_in weights.csv -exemplars parc.mif -files single
connectome2tck: [done] reading streamline assignments file
connectome2tck: [WARNING] Parcellation image "parc.mif" provided via -exemplars option contains more nodes (84) than are present in input assignments file "assignments.csv" (83)
connectome2tck: [100%] generating exemplars for connectome
connectome2tck: [100%] finalizing exemplars
connectome2tck: [ERROR] output file "exemplars.tck" already exists (use -force option to force overwrite)
$ connectome2tck tractogram.tck assignments.csv exemplars.tck -tck_weights_in weights.csv -exemplars parc.mif -files single -force
connectome2tck: [WARNING] existing output files will be overwritten
connectome2tck: [done] reading streamline assignments file
connectome2tck: [WARNING] Parcellation image "parc.mif" provided via -exemplars option contains more nodes (84) than are present in input assignments file "assignments.csv" (83)
connectome2tck: [100%] generating exemplars for connectome
connectome2tck: [100%] finalizing exemplars

Any idea what is happening here? What is --parcellation none for? Should this work?

Thanks,

Dianne

Name of T1w image too restrictive

Hi,
sub-01_T1w.nii is okay, but the run exits prematurely if the image is named sub-01_acq-mprage_T1w.nii (I think I got this right)... the point is, there are other names for this file that are acceptable from the BIDS validator's point of view, but MRtrix3_connectome doesn't seem to handle them. Thanks for working so hard to provide this interesting tool.

Derivative files to JSON

Rather than storing various derivative numerical data as simple text files, use JSONs with meaningful keys.
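
For instance, a minimal sketch (the metric names are invented for illustration):

import json

# Hypothetical derivative metrics, keyed meaningfully rather than stored as
# bare numbers in a text file
metrics = {'dwi_mean_bzero': 812.4,
           'mu_scale_factor': 1.37,
           'streamline_count': 10000000}

with open('sub-01_metrics.json', 'w') as f:
    json.dump(metrics, f, indent=2)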

Key-value restoration following concatenation for dwidenoise

In retrospect, there are multiple potential risks incurred by concatenating the fmap data with the DWIs in order to enable denoising. If any key-value pairs conflict between them, then they could be wiped / corrupted, and using mrconvert to separate the volumes out again may not work.

This could apply to something like slice timing. Even though it's very unlikely that this would vary between those images, it's nevertheless possible.

One option would be to JSON export the headers individually, and then JSON import them after dwidenoise has completed.
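
A minimal sketch of that round trip, using mrinfo's -json_keyval export and mrconvert's -json_import via the scripts run module (whether this faithfully restores every key-value pair is exactly the open question; the volume indices are illustrative):

from mrtrix3 import run

# Save each input's header key-values before concatenation
run.command('mrinfo fmap1.mif -json_keyval fmap1_header.json')
run.command('mrinfo dwi1.mif -json_keyval dwi1_header.json')

run.command('mrcat fmap1.mif dwi1.mif dwi_fmap_cat.mif -axis 3')
run.command('dwidenoise dwi_fmap_cat.mif dwi_fmap_cat_denoised.mif')

# Split the denoised series back out, restoring the original key-values
run.command('mrconvert dwi_fmap_cat_denoised.mif se_epi.mif -coord 3 0:1 '
            '-json_import fmap1_header.json')
run.command('mrconvert dwi_fmap_cat_denoised.mif dwi.mif -coord 3 2:end '
            '-json_import dwi1_header.json')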

Runtime Error

I converted a single-band DTI set and a pair of TOPUP scans into a BIDS data structure using dcm2nii (it passes BIDS validation; see screenshot below).

[screenshot: BIDS validator output]

The same set can be processed successfully by the ndmg DTI app.

During the docker run, I am running into runtime errors (see below).

[screenshot: runtime error messages]

Any idea how this can be avoided? Happy to share the data for testing/debugging if needed. Thanks.

single reverse phase-encode image still causes crash in 0.4.1

Hi,

I am still running into this problem:
I am using the Docker container bids/mrtrix3_connectome, pulled today, May 5, 2018. It appears to be tagged 0.4.1.

Despite this same issue having been closed as of 0.4.0, if I have a single reverse phase-encode b=0 image in the fmap directory, MRtrix3_connectome fails.

error.txt attached.
I have removed the reverse phase-encode image for testing purposes, but how should I proceed?

Thanks much,

Dianne

error.txt

Originally posted by @dkp in #37 (comment)
