
lts5 / cmp_nipype


This repository is meant to host a beta version for the future release of the ConnectomeMapper.

License: Other

Makefile 0.27% Python 86.16% C++ 11.79% C 0.15% CMake 1.64%

cmp_nipype's People

Contributors

abirba, davidrs06, dvenum


cmp_nipype's Issues

Saving connectome as graphml

I think it is not working correctly; I get an error:

140402-14:11:25,454 workflow ERROR:
['Node compute_matrice failed to run on host cloudn.']
140402-14:11:25,454 workflow INFO:
Saving crash info to /home/marc/nipype/crash-20140402-141125-marc-compute_matrice.npz
140402-14:11:25,455 workflow INFO:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/plugins/multiproc.py", line 15, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1128, in run
self._run_interface()
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1226, in _run_interface
self._result = self._run_command(execute)
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1350, in _run_command
result = self._interface.run()
File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 823, in run
runtime = self._run_interface(runtime)
File "/usr/local/lib/python2.7/dist-packages/cmp/stages/connectome/connectome.py", line 77, in _run_interface
additional_maps=additional_maps,output_types=self.inputs.output_types)
File "/usr/local/lib/python2.7/dist-packages/cmtklib/connectome.py", line 485, in cmat
'dn_hemisphere':d_gml['dn_hemisphere'],
KeyError: 'dn_hemisphere'
Interface CMTK_cmat failed to run.
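For what it's worth, the KeyError means the parcellation GraphML nodes lack the dn_hemisphere attribute that cmtklib/connectome.py expects. A quick way to confirm is a check like the following (a sketch only; the node-mapping shape is an assumption, e.g. what you would get from networkx.read_graphml, not the actual cmtklib API):

```python
def nodes_missing_hemisphere(nodes):
    """Return ids of GraphML nodes lacking a 'dn_hemisphere' attribute.

    `nodes` is assumed to be a mapping of node id -> attribute dict,
    e.g. dict(networkx.read_graphml(path).nodes(data=True)).
    """
    return [nid for nid, attrs in nodes.items() if 'dn_hemisphere' not in attrs]
```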

Compilation error when running `make` for DTB binaries on macOS 10.12

Hi,

I am getting the following error after I ran cmake . and make in the src/DTB folder:

[ 25%] Linking CXX executable DTB_gfa
Undefined symbols for architecture x86_64:
  "_Xznzclose", referenced from:
      _nifti_fileexists in libniftiio.a(nifti1_io.o)
      _nifti_write_ascii_image in libniftiio.a(nifti1_io.o)
      _nifti_makeimgname in libniftiio.a(nifti1_io.o)
      _nifti_image_write_hdr_img2 in libniftiio.a(nifti1_io.o)
      _nifti_makehdrname in libniftiio.a(nifti1_io.o)
      _nifti_findimgname in libniftiio.a(nifti1_io.o)
      _nifti_image_load_prep in libniftiio.a(nifti1_io.o)
      ...
  "_znzopen", referenced from:
      _nifti_fileexists in libniftiio.a(nifti1_io.o)
      _nifti_write_ascii_image in libniftiio.a(nifti1_io.o)
      _nifti_makeimgname in libniftiio.a(nifti1_io.o)
      _nifti_image_write_hdr_img2 in libniftiio.a(nifti1_io.o)
      _nifti_makehdrname in libniftiio.a(nifti1_io.o)
      _nifti_findimgname in libniftiio.a(nifti1_io.o)
      _nifti_image_load_prep in libniftiio.a(nifti1_io.o)
      ...
  "_znzputs", referenced from:
      _nifti_write_ascii_image in libniftiio.a(nifti1_io.o)
  "_znzread", referenced from:
      _nifti_read_buffer in libniftiio.a(nifti1_io.o)
      _nifti_read_extensions in libniftiio.a(nifti1_io.o)
      _nifti_read_ascii_image in libniftiio.a(nifti1_io.o)
      _nifti_image_read in libniftiio.a(nifti1_io.o)
      _nifti_read_header in libniftiio.a(nifti1_io.o)
      _is_nifti_file in libniftiio.a(nifti1_io.o)
  "_znzrewind", referenced from:
      _nifti_image_read in libniftiio.a(nifti1_io.o)
      _nifti_read_header in libniftiio.a(nifti1_io.o)
  "_znzseek", referenced from:
      _rci_read_data in libniftiio.a(nifti1_io.o)
      _nifti_read_extensions in libniftiio.a(nifti1_io.o)
      _nifti_image_write_hdr_img2 in libniftiio.a(nifti1_io.o)
      _nifti_image_load_prep in libniftiio.a(nifti1_io.o)
      _nifti_read_subregion_image in libniftiio.a(nifti1_io.o)
      _nifti_read_ascii_image in libniftiio.a(nifti1_io.o)
      _nifti_image_load_bricks in libniftiio.a(nifti1_io.o)
      ...
  "_znztell", referenced from:
      _nifti_read_extensions in libniftiio.a(nifti1_io.o)
      _nifti_read_collapsed_image in libniftiio.a(nifti1_io.o)
      _nifti_read_subregion_image in libniftiio.a(nifti1_io.o)
      _nifti_image_load_bricks in libniftiio.a(nifti1_io.o)
  "_znzwrite", referenced from:
      _nifti_write_buffer in libniftiio.a(nifti1_io.o)
      _nifti_write_extensions in libniftiio.a(nifti1_io.o)
      _nifti_write_all_data in libniftiio.a(nifti1_io.o)
      _nifti_image_write_hdr_img2 in libniftiio.a(nifti1_io.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [Applications/DTB_gfa/DTB_gfa] Error 1
make[1]: *** [Applications/DTB_gfa/CMakeFiles/DTB_gfa.dir/all] Error 2
make: *** [all] Error 2

Here is the info about niftilib:

homebrew/science/niftilib: stable 2.0.0
https://niftilib.sourceforge.io/
/usr/local/Cellar/niftilib/2.0.0 (17 files, 872.9KB)
  Built from source on 2017-03-21 at 15:47:09
From: https://github.com/Homebrew/homebrew-science/blob/master/niftilib.rb

Thank you for your help!

Rawdata - NIFTI

Hi!
Can I process images if I have NIfTI files as the raw data? For example, I have T1 as NIfTI and DTI as DICOM, and it says that the T1 morphological data is missing.
I think the ConnectomeMapper should accept NIfTIs as inputs. What is the difference?

Thanks,
Marc.

Custom mapping

Custom map processing crashes when a processing stage is selected without selecting the prerequisite stages. The custom mapping should add all stages up to the selected one in order to avoid crashes.
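One way to implement this would be to always expand the selection to everything up to the chosen stage, along these lines (a sketch; the stage names below are placeholders, not necessarily CMP's actual stage list):

```python
# Hypothetical stage ordering; the real CMP pipeline defines its own.
STAGE_ORDER = ['Segmentation', 'Parcellation', 'Registration',
               'Diffusion', 'Connectome']

def stages_to_run(selected_stage):
    """Return the selected stage plus all prerequisite stages before it."""
    return STAGE_ORDER[:STAGE_ORDER.index(selected_stage) + 1]
```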

Custom mapping problem

When one selects custom mapping, runs the pipeline, and then selects a new custom mapping option, nothing happens (the first option remains).

Filepicker for projects always sets home folder

No matter what I select in the Filepicker, the dialog always ends up set to my home folder (and it also creates the project there if I select a new project).

It works when I enter the path manually.

New update fails

Hey I am trying now to start a pipeline with custom parcellation but I get this error:

Exception in thread Thread-4:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
self.run()
File "/usr/local/lib/python2.7/dist-packages/cmp/pipelines/common.py", line 61, in run
if self.stages[stage].has_run():
File "/usr/local/lib/python2.7/dist-packages/cmp/stages/parcellation/parcellation.py", line 136, in has_run
return os.path.exists(self.config.nifti_file)
AttributeError: 'ParcellationConfig' object has no attribute 'nifti_file'

But the config file does contain a nifti_file entry in the custom parcellation section...
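A defensive fix on the CMP side might be to tolerate a missing trait in has_run(), roughly like this (a sketch only, not the actual ParcellationConfig code):

```python
import os

def has_run(config):
    """Report whether the parcellation stage already produced output,
    without assuming the config carries a 'nifti_file' trait."""
    nifti_file = getattr(config, 'nifti_file', None)
    return bool(nifti_file) and os.path.exists(nifti_file)
```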

Change of project directory name

The base directory name cannot be changed in order to repeat a process, because when it checks inputs it may not find the outputs afterwards. I think it should be more flexible. And why can't the base directory be changed once you open a project? (right corner of the GUI)
Thank you,
Marc.

error on mac

Hi,

I am getting the following error when typing 'connectomemapper' into the terminal

"Please run with a Framework build of python, and only when you are
logged in on the main display of your Mac."

Thanks for your help

Peter

Changing to new project

Hello,
Something weird happens when you change projects. First I loaded a finished project and checked some outputs; then, if I open a new project, I can still see the previous project's finished outputs. And the parameters have not changed: I am still using the previous gradient table and options. It would be better if everything were erased and one could start a project from scratch, even if it is not the first project one opens.

Greets,
Marc.

fMRI pipeline doesn't work from commandline

When I try to run the fMRI pipeline directly from the command line -- connectomemapper sub_dir sub_dir/fMRI_config.ini -- it doesn't seem to convert the fMRI DICOM images from RAWDATA into NIfTI, and it gives me this error:

Inputs check finished successfully.
Diffusion and morphological data available.
/thayerfs/home/f002b6j/.local/lib/python2.7/site-packages/nipype/interfaces/base.py:359: UserWarning: Input apply_xfm requires inputs: in_matrix_file
warn(msg)
/thayerfs/home/f002b6j/.local/lib/python2.7/site-packages/nipype/interfaces/base.py:359: UserWarning: Input apply_xfm requires inputs: in_matrix_file
warn(msg)
Traceback (most recent call last):
File "/thayerfs/home/f002b6j/.local/bin/connectomemapper", line 87, in
pipeline.process()
File "/thayerfs/home/f002b6j/.local/lib/python2.7/site-packages/cmp/pipelines/diffusion/diffusion.py", line 259, in process
reg_flow = self.create_stage_flow("Registration")
File "/thayerfs/home/f002b6j/.local/lib/python2.7/site-packages/cmp/pipelines/common.py", line 141, in create_stage_flow
stage.create_workflow(flow,inputnode,outputnode)
File "/thayerfs/home/f002b6j/.local/lib/python2.7/site-packages/cmp/stages/registration/registration.py", line 216, in create_workflow
(fsl_applyxfm_eroded_wm, outputnode, [('out_file','eroded_wm_registered')])
File "/thayerfs/home/f002b6j/.local/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 284, in connect
infostr))
Exception: Some connections were not found
Module inputnode has no output called eroded_wm

Module outputnode has no input called eroded_wm_registered

I'm using the latest commit and latest versions of FSL, FreeSurfer. I have the forked version of nipype.

Dependency checks at startup

Check for missing environment variables (FSL, FREESURFER, and DSI_PATH for dtk). Check that the DTB executables are in the path (and working?).
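Such a startup check could look roughly like this (the exact variable and executable names below are assumptions drawn from the issue text, not CMP's actual requirements):

```python
import os
import shutil

# Assumed names; the real list would come from CMP's documentation.
REQUIRED_ENV = ('FSLDIR', 'FREESURFER_HOME', 'DSI_PATH')
REQUIRED_EXES = ('DTB_gfa', 'DTB_dtk2dir')

def missing_dependencies(environ=None):
    """List required environment variables and executables that are absent."""
    environ = os.environ if environ is None else environ
    missing = [v for v in REQUIRED_ENV if v not in environ]
    missing += ['%s (not in PATH)' % e for e in REQUIRED_EXES
                if shutil.which(e) is None]
    return missing
```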

error after installation

Dear experts,

After installation I try and run connectomemapper from the command line (on mac) and get the following error;

Traceback (most recent call last):
File "//anaconda/bin/connectomemapper", line 6, in
import wx
ImportError: No module named wx
Peters-MacBook-Pro-2:cmp_nipype-2.1.0-beta 2 petermccolgan$ connectomemapper
Traceback (most recent call last):
File "//anaconda/bin/connectomemapper", line 6, in
import wx
ImportError: No module named wx

Thanks

Peter

Parcellation error

I have selected the Lausanne2008 parcellation, and when the program looks for this file:
MRISread(/home/marc/TOC/TOC_cmp2//home/marc/TOC/TOC_cmp2/FREESURFER
/surf/rh.smoothwm): could not open file

It says:
No such file or directory
mris_ca_label: could not read surface file /home/marc/TOC/TOC_cmp2//home/marc/TOC/TOC_cmp2/FREESURFER/surf/rh.smoothwm for /home/marc/TOC/TOC_cmp2/FREESURFER

The error seems to be that the path is not being built correctly; it repeats the base directory twice. Do you know why this is happening?
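Doubled prefixes like this typically come from concatenating two absolute paths instead of joining a base directory with a relative one; os.path.join at least avoids the duplication (an illustration of the failure mode only, not the actual CMP code):

```python
import os

base = '/home/marc/TOC/TOC_cmp2'
surf = '/home/marc/TOC/TOC_cmp2/FREESURFER/surf/rh.smoothwm'

# Naive concatenation reproduces the doubled path from the error message:
bad = base + '/' + surf

# os.path.join discards the left side when the right side is absolute,
# so at least the base directory is not repeated:
good = os.path.join(base, surf)
```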

Thank you!
Marc.

Different seed parcellations used in different datasets

I am trying to figure out why, when using the Lausanne parcellation, the scale60 parcellation is used to seed streamlines in one dataset but the scale125 parcellation is used in another. This issue would only apply to the probabilistic tracking pipelines, of course; in my case I am using mrtrix.

In particular, I am worried that at the 125 parcellation, some of the seed regions (which are created at the intersection of the dilated GM regions and the WM) will have 0 voxels. Sometimes, this even happens in the scale60 parcellation (particularly in very young subjects with smaller WM) resulting in tracking errors that crash the CMP.

It would be great if the CMP always relied on a single 'scale' parcellation, like the scale60, so that the same methods can be used across datasets. But, so far I haven't figured out how to enforce this.

cmp deletes existing files

When creating a new project with Connectomemapper (the "New Connectome Data" option), the program creates all the needed folders. Unfortunately, it overwrites all existing folders too. This is rather unpleasant for existing RAWDATA folders.
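A minimal guard on the CMP side would be to create folders only when they are absent (a sketch, assuming plain os calls rather than whatever CMP currently does):

```python
import os

def ensure_project_folder(path):
    """Create a project folder if needed, leaving existing data untouched."""
    if not os.path.isdir(path):
        os.makedirs(path)
```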

fsl.Orient doesn't exist

Hello!
I have installed the new connectomemapper in two computers and I get the same error.
src_conv = fsl.Orient(in_file=src_file, get_orient=True).run().outputs.orient
AttributeError: 'module' object has no attribute 'Orient'
Interface SwapAndReorient failed to run.

I can load the library from python
In [2]: import nipype.interfaces.fsl as fsl
In [3]:
And if I type fsl. and press Tab to list the options, Orient doesn't appear.
Where is the problem?

Thanks!
Marc.

FA weighted network

Dear developers,
I'm using the current master version of cmp_nipype. While the website says FA-weighted matrices are generated with DTI data, I cannot find them anywhere. I guess the function has not been implemented yet?

Thanks,
Min

Streamtrack loop

Hello,
I was computing some DTI data and it got stuck in the tracking (with mrtrix). It keeps repeating the process forever and ever.
Running: streamtrack -number 1500000 -mask /home/marc/TOC/TOC_cmp2/NIPYPE/diffusion_pipeline/diffusion_stage/mask_resample/wm_mask_resampled.nii -maxnum 145155115 -length 500.0 -curvature 1.0 -minlength 20.0 -seed /home/marc/TOC/TOC_cmp2/NIPYPE/diffusion_pipeline/diffusion_stage/tracking/mrtrix_seeds/ROIv_HR_th_scale33_flirt_out_dil_seed_25.nii.gz -step 0.2 SD_PROB /home/marc/TOC/TOC_cmp2/NIPYPE/diffusion_pipeline/diffusion_stage/reconstruction/mrtrix_CSD/diffusion_resampled_CSD.mif diffusion_resampled_CSD_tracked.tck
140213-10:05:11,348 workflow ERROR:
['Node _mrtrix_probabilistic_tracking20 failed to run on host cloudn.']
140213-10:05:11,349 workflow INFO:
Saving crash info to /home/marc/cmp_nipype/crash-20140213-100511-marc-_mrtrix_probabilistic_tracking20.npz
140213-10:05:11,377 workflow INFO:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/plugins/multiproc.py", line 15, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1128, in run
self._run_interface()
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1226, in _run_interface
self._result = self._run_command(execute)
File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1350, in _run_command
result = self._interface.run()
File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 823, in run
runtime = self._run_interface(runtime)
File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1104, in _run_interface
self.raise_exception(runtime)
File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1060, in raise_exception
raise RuntimeError(message)
RuntimeError: Command:
streamtrack -number 1500000 -mask /home/marc/TOC/TOC_cmp2/NIPYPE/diffusion_pipeline/diffusion_stage/mask_resample/wm_mask_resampled.nii -maxnum 145155115 -length 500.0 -curvature 1.0 -minlength 20.0 -seed /home/marc/TOC/TOC_cmp2/NIPYPE/diffusion_pipeline/diffusion_stage/tracking/mrtrix_seeds/ROIv_HR_th_scale33_flirt_out_dil_seed_21.nii.gz -step 0.2 SD_PROB /home/marc/TOC/TOC_cmp2/NIPYPE/diffusion_pipeline/diffusion_stage/reconstruction/mrtrix_CSD/diffusion_resampled_CSD.mif diffusion_resampled_CSD_tracked.tck
Standard output:
Standard error:
7828702 generated, 491468 selected [ 32%]Killed
Return code: 137
Interface StreamlineTrack failed to run.

Yes, I know I put an absurd maximum number of fibers, but I didn't care much; what I wanted was 1.5e6 fibers. Is that causing the problem? Even if I change the maximum number of tracks, it keeps computing the tracks with that parameter... Any help would be appreciated.
Thanks in advance,
Marc.

Pipeline doesn't work if you move the subject directory

If you move the subject directory to another location and try to run the pipeline, you get an error saying:

traits.trait_errors.TraitError: Each element of the 'roi_files_in_structural_space' trait of a ParcellateOutputSpec instance must be an existing file name, but a value of (OLD DIRECTORY PATH) <type 'str'> was specified.

I'm assuming maybe you can delete some things from the existing NIPYPE folder, but then you'd have to run the entire pipeline again which sort of defeats the purpose.
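Short of re-running everything, one could imagine rewriting the cached absolute paths after a move, along these lines (a sketch; a real fix would also have to update nipype's hash files, which this ignores):

```python
def relocate_path(path, old_base, new_base):
    """Rewrite a cached absolute path after the subject directory moved."""
    prefix = old_base.rstrip('/') + '/'
    if path.startswith(prefix):
        return new_base.rstrip('/') + path[len(old_base.rstrip('/')):]
    return path
```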

It fails in motion correction, and keeps going

Hi,
thanks first for all the effort and ongoing working, this will be a great tool.
I am getting errors in the workflow, and trying to understand how it works. I am processing diffusion DSI for this one.

I get errors in several steps. I think it already crashes in the first one but keeps running; I sometimes see errors appearing, but the process doesn't stop. The main error is in motion_correction, and judging by the time stamps it took 3 hours to finish and tell me there is an error. Does that step take that long?
The error tells me that some file doesn't exist, inside "NIPYPE/diffusion_pipeline/preprocessing_stage/motion_correction/0x_2e57599550ab... _unfinished.json"

In segmentation I use existing freesurfer data.
Laussane2008 in parcellation.

Any guidance?
Thanks!
Marc

Improve logging

Add visual information about the current state of processing to the GUI (tip: add a custom logger node after each full stage that writes a message to a variable displayed in the GUI... feasible?). Enhance the .log file with colors.

Screen Res.

Hello,
I am using Connectome Mapper 2.1.0 and I am having trouble seeing the whole program window; I cannot see the buttons at the bottom or resize the window to make it smaller. Is this due to the resolution of my screen? Can you guys do something about it?

Thank you,
Marc.

Window too big

Hello again,
I am reporting an issue similar to one I mentioned before. Now, when I enter a configuration step (edit_stage_configuration), I can't click on the buttons at the bottom; they are out of sight.

Thanks

bug in tracking.py - hardcoded curvature value

Running CMP from the command line, the curvature value is not recognized from the .ini file. Instead it appears to be hardcoded depending on whether you run probabilistic or deterministic tractography.

If I comment out the _SD_changed() and _tracking_mode_changed() methods in cmp_nipype/cmp/stages/diffusion/tracking.py (lines 66 and 74), that fixes the problem. But assuming these are intended to provide default values in the GUI, I'm guessing there's a better solution. There's also a similar looking method for the Camino_tracking_config - I don't know if that will pose a similar problem.
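Reading the value back from the .ini instead of hardcoding it could look roughly like this (Python 3 spelling; the section and option names are guesses based on the report, not CMP's actual config schema):

```python
from configparser import ConfigParser

def curvature_from_ini(ini_path, section='diffusion_stage', default=2.0):
    """Return the tracking curvature from the config file, or a default.

    Section/option names are hypothetical; adapt to the real .ini layout.
    """
    cp = ConfigParser()
    cp.read(ini_path)  # silently ignores missing files
    if cp.has_section(section) and cp.has_option(section, 'curvature'):
        return cp.getfloat(section, 'curvature')
    return default
```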

FLIRT out_file must exist.

I have run the flirt command on its own several times and I don't get this error, but the pipeline gives:

Exception: Subnodes of node: apply_registration_roivs failed:
Subnode 0 failed
Error:
The 'out_file' trait of a FLIRTOutputSpec instance must be an existing file name, but a value of u'/media/nimbus/106_TOC/MRS_020512_P/tp1/NIPYPE/diffusion_pipeline/registration_stage/apply_registration_roivs/mapflow/_apply_registration_roivs0/atlas_registered_flirt.nii.gz' <type 'unicode'> was specified.

What has happened?
Thanks!

libboost1.48.0 error

I have libboost1.54.0 installed and tried to create a symbolic link using "sudo ln -s libboost.so.1.54.0 libboost.so.1.48.0", but then an error/warning comes up when I try to run DTB_dtk2dir saying "DTB_dtk2dir: Symbol `_ZTVN5boost15program_options16validation_errorE' has different size in shared object, consider re-linking"... Any ideas how to fix this? libboost 1.48.0 is very old, and I'm wondering if I can avoid having to download and build it myself.

I'm using Ubuntu 14.04, by the way.

Thanks
Nick

csf_mask needed after 6/3/2014 commit

Hi,
After I downloaded the latest commit (on June 6, 2014), ConnectomeMapper is failing at the Parcellation stage because it cannot find csf_mask.nii.gz. This file does not exist, and it was not required before. I am wondering why this change was made and what kind of csf_mask ConnectomeMapper is expecting. I am currently creating a csf_mask using mri_binarize as a workaround. I am using existing Freesurfer data (created during a previous ConnectomeMapper run) and the Lausanne2008 parcellation.

Thanks,
Lisa

fMRI pipeline doesn't work - csf_mask

I still get an error saying "No such file ../Freesurfer/mri/csf_mask.nii.gz" when trying to run the fMRI pipeline. I have the latest commit... Is there a fix for this?

wants to check input on every start

When loading a project, all buttons are greyed out until the user checks the input data.
As this completes instantly once the data has been checked once, the result seems to be cached/marked as checked. It would be nice if cmp checked whether the data had already been verified when loading a project.

A lot of problems installing...

Yesterday I got some errors about files that cmp doesn't find even though they are there. I tried re-installing the latest version, but now I have the following error:
fsl.Orient
AttributeError: 'module' object has no attribute 'Orient'
Interface SwapAndReorient failed to run.
I remember having that error before. It's weird because everything was fine yesterday, but in the afternoon it started to fail. I am trying to process DTI with a custom parcellation.
Best,
Marc

Additional maps

Addition of other connectivity-weighted maps, such as RD, MD, etc., for DTI.

apply_roivs error mrtrix

When running tracking with MRTRIX I always get this error:

Exception: Subnodes of node: apply_registration_roivs failed:
Subnode 0 failed
Error:
The 'out_file' trait of a FLIRTOutputSpec instance must be an existing file name, but a value of u'/media/nimbus/106_TOC/MRS_020512_P/tp2mrtrix/NIPYPE/diffusion_pipeline/registration_stage/apply_registration_roivs/mapflow/_apply_registration_roivs0/atlas_registered_flirt.nii.gz' <type 'unicode'> was specified.

If I close and reopen Connectome Mapper, it works fine.

LOG file > 40MB. Suggestion.

Hey,
I was checking the log to track down something weird happening with the tracking in mrtrix, and I realised that the log file is way too big; most of it is just mrtrix reporting found fibers and percentages, useless info I think.
I suggest this solution, which brings the log file down to only 20 kB (using custom parcellation): redirect stderr to null in this command (around line 396 of tracking.py):
mrtrix_tracking.inputs.args = '2>/dev/null'
For large collections, I think this alone will save a reasonable amount of space.
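An alternative to the stderr redirect, if the messages go through Python's logging, would be a filter that drops the progress chatter (a sketch; whether nipype routes mrtrix output through a filterable logger here is an assumption):

```python
import logging

class DropProgressLines(logging.Filter):
    """Drop mrtrix progress chatter like '7828702 generated, 491468 selected'."""
    def filter(self, record):
        msg = record.getMessage()
        # Keep the record unless it looks like a tracking progress line.
        return not ('generated,' in msg and 'selected' in msg)
```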

Cheers,
Marc
