
miykael / atlasreader


Python interface for generating coordinate tables and region labels from statistical MRI images

License: BSD 3-Clause "New" or "Revised" License

Languages: Jupyter Notebook 96.72%, Python 3.00%, TeX 0.28%

atlasreader's Introduction

Hi there 👋

I'm Michael, a senior machine learning researcher & neuroscientist fascinated by hidden patterns in the digital world. My curiosity and expertise extend across neuroimaging, computer vision, vital signs, AR/VR, and multi-sensor sensing. With a strong background in signal processing, open source, and Python, I explore these domains with an open and innovative mindset.

Eager to push boundaries and think outside the box, I welcome opportunities to craft unique solutions and collaborate on new projects. Don't hesitate to contact me!

For more about me, check out my personal page at https://miykael.github.io/

atlasreader's People

Contributors

annad15, danjgale, dependabot[bot], kirstiejane, miykael, peerherholz, remi-gau, rmarkello


atlasreader's Issues

Is default cluster_extent of 20 good or bad?

I'm not sure if it's a good idea to have the default value of cluster_extent at 20 here.

I personally prefer to have it at around 5. We could put it at 0, but I agree, creating an output plot for every 1-voxel cluster might be overkill.

What does everybody think?

Add ability to query atlases with coordinate

It'd be nice to have some basic CLI or even just easily exposed Python functionality to query the available atlases for a given coordinate. Right now users are required to provide a statistical map, but it might be nice just to say "Hey, I wonder what anatomical region coordinate [x, y, z] is in the Harvard-Oxford atlas" without opening up e.g., FSL or AFNI or whatnot.

This shouldn't take too much work, since read_atlas_cluster() basically already does this!
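For reference, a label lookup at a single coordinate mostly comes down to inverting the affine. A minimal sketch, assuming a deterministic (3D label) atlas and a hypothetical file path:

import numpy as np
import nibabel as nib
from nibabel.affines import apply_affine

def label_at_coord(atlas_file, coord):
    """Return the atlas label value at an MNI coordinate (x, y, z)."""
    img = nib.load(atlas_file)
    # Map the world-space coordinate to voxel indices via the inverse affine
    i, j, k = np.round(apply_affine(np.linalg.inv(img.affine), coord)).astype(int)
    return int(img.get_fdata()[i, j, k])

# e.g. label_at_coord('atlases/atlas_harvard_oxford.nii.gz', (-24, -31, 73))

The returned integer would still need to be mapped to a region name via the atlas's label table.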

Data file paths need updating

While I was working on documentation and testing out some functionality, I had to fix the data file paths because the atlas and templates directories have been relocated. My solution, intended only as a stopgap, was just to add relative paths in the existing code. A longer-term, more stable solution is needed.

JOSS REVIEW: redundant install requires?

Install requires are listed in the info.py file as well as a requirements.txt file. This seems redundant, but perhaps there is a good reason to have them in both places? Also, the requirements.txt file specifies a version of nilearn (0.5.0a), whereas the info.py version does not.

Ideas for other atlases?

During a discussion with a colleague, I was asked if atlasreader also outputs all the Brodmann areas. He also mentioned that he would be interested in the Colin27, Conte69, and PALS atlases.

Without overloading the data/atlas folder too much, which other atlases do you think we should include?

Does anyone know where we can easily and without license issues get the Colin27, Conte69, or PALS atlases? Or one that has all Brodmann areas in MNI space?

Better error handling for when stats image is empty after thresholding

As mentioned in #74, if the threshold and cluster extent parameters kill off all clusters in a stats image, the image is empty and leads to the following error message:

/home/line/anaconda3/lib/python3.6/site-packages/nilearn/plotting/displays.py:684:
UserWarning: empty mask
  get_mask_bounds(new_img_like(img, not_mask, affine))

We should handle this better. I think it should either return an empty glass brain image or give a clear message such as: "No cluster survived the restrictions of threshold = 100 and cluster_extent = 50. Highest value in image is 55."

Also, why does it crash? atlasreader should also work if a completely empty image is given as input.
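For what it's worth, an early check could be as simple as the following sketch (function name and message wording are illustrative, not actual atlasreader code):

import numpy as np

def check_clusters_survive(orig_img, thresh_img, threshold, cluster_extent):
    """Fail informatively if thresholding removed every voxel."""
    if not np.any(np.asarray(thresh_img.dataobj)):
        peak = np.abs(np.asarray(orig_img.dataobj)).max()
        raise ValueError(
            f'No cluster survived threshold={threshold} and '
            f'cluster_extent={cluster_extent}; the highest absolute '
            f'value in the input image is {peak:.2f}.')

Whether this should raise, warn, or fall back to an empty glass brain plot is exactly the open question above.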

Split `atlas_reader.create_output` function into multiple funcs

The create_output() function is rather sizeable and aims to do multiple things, including running the cluster analyses, generating the output CSV file, and creating the cluster map images; it would be great to split it up a bit to improve modularity and ease testing and readability! In my mind those could be three functions (at least!), something like:

  1. gen_cluster_df(), which does most of the work (wrapping the first part of the function)
  2. save_table(), which outputs the CSV file (and maybe other table formats, like LaTeX, as desired)
  3. save_cluster_images(), which outputs the cluster map images (allowing more control over image formatting!)

I'm happy to discuss the best mechanism for doing this, but would love to see it chopped up a bit before proceeding too far.
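To make the proposal concrete, the split might look something like this skeleton (signatures are illustrative, not a final API; defaults mirror the current create_output values):

def gen_cluster_df(stat_img, atlas='all', voxel_thresh=1.96,
                   cluster_extent=20, prob_thresh=5, min_distance=None):
    """Run the cluster analysis and return cluster + peak tables."""
    ...

def save_table(table, outname, fmt='csv'):
    """Write a cluster/peak table to disk (CSV now, maybe LaTeX later)."""
    ...

def save_cluster_images(clust_img, outdir, **plot_kwargs):
    """Save the glass brain and per-cluster images."""
    ...

def create_output(filename, **kwargs):
    """Thin wrapper chaining the three steps above."""
    ...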

Cluster extent parameter seems to be broken

There seems to be a bug in the code on how it handles cluster extent.

For example, if I take this stat_map_01.nii.gz, which has 4 clusters with sizes 878, 726, 587, and 1, and set the minimum cluster extent to 600, the whole thing crashes with:

Traceback (most recent call last):
  File "/home/line/anaconda3/lib/python3.6/site-packages/nilearn/_utils/niimg_conversions.py", line 428, in concat_niimgs
    first_niimg = check_niimg(next(literator), ensure_ndim=ndim)
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/line/anaconda3/bin/atlasreader", line 11, in <module>
    sys.exit(main())
  File "/home/line/anaconda3/lib/python3.6/site-packages/atlasreader/cli.py", line 93, in main
    min_distance=opts.min_distance)
  File "/home/line/anaconda3/lib/python3.6/site-packages/atlasreader/atlasreader.py", line 665, in create_output
    cluster_extent=cluster_extent)
  File "/home/line/anaconda3/lib/python3.6/site-packages/atlasreader/atlasreader.py", line 424, in process_img
    extract_type='connected_components')[0]]
  File "/home/line/anaconda3/lib/python3.6/site-packages/nilearn/regions/region_extractor.py", line 237, in connected_regions
    regions_extracted_img = concat_niimgs(all_regions_imgs)
  File "/home/line/anaconda3/lib/python3.6/site-packages/nilearn/_utils/niimg_conversions.py", line 430, in concat_niimgs
    raise TypeError('Cannot concatenate empty objects')
TypeError: Cannot concatenate empty objects

It seems that this is due to nilearn's connected_regions function here. But it's not clear to me yet how to solve this issue or what causes it in the first place.
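Until the root cause is clear, a defensive guard around the nilearn call would at least avoid the hard crash. A sketch against the logic shown in the traceback (not a proposed final fix):

from nilearn.regions import connected_regions

def extract_clusters(thresh_img, min_region_size):
    """Return a 4D image of extracted clusters, or None if nothing survives."""
    try:
        regions, _ = connected_regions(
            thresh_img, min_region_size=min_region_size,
            extract_type='connected_components')
    except TypeError:  # nilearn raises 'Cannot concatenate empty objects'
        return None
    return regions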

JOSS REVIEW: repo is large

Repo is rather large (~34MB) because the atlases are included. Not required for acceptance (by me at least), but I'd suggest moving them to a separate location and fetching them programmatically instead of storing a local copy in the repo. Also, are there any license considerations for the atlases, given that you copied them into the repo? I see the warning that each atlas has its own license, but have you made certain it's acceptable to store a copy of them in your package?

Duplication of atlas names in cli help text

If you specify a wrong atlas name while using the CLI, you get the following message:

atlasreader: error: argument -a/--atlas: invalid choice: 'destrieuxx' (choose from
    'AAL', 'Desikan_Killiany', 'Destrieux', 'Harvard_Oxford', 'Juelich',
    'Neuromorphometrics', 'aal', 'desikan_killiany', 'destrieux', 'harvard_oxford',
    'juelich', 'neuromorphometrics', 'all')

As you can see, this lists each atlas twice, once in its normal and once in its lowercase form. I recommend getting rid of the duplication in the help text.
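One way to keep the help text clean while still accepting either case is to lowercase the input before argparse checks it against the choices. A sketch (not the current implementation):

import argparse

parser = argparse.ArgumentParser(prog='atlasreader')
parser.add_argument(
    '-a', '--atlas', nargs='+', default='all',
    type=str.lower,  # normalize case before the choices check
    choices=['aal', 'desikan_killiany', 'destrieux', 'harvard_oxford',
             'juelich', 'neuromorphometrics', 'all'])

With this, both 'Destrieux' and 'destrieux' are accepted, but the error and help text list each atlas only once.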

What about 4D images?

Should atlasreader also accept 4D images? At the moment it accepts 4D images, but it only uses the first volume. I think I'm ok with that (for the moment).

Otherwise, I could imagine looping through the volumes and adding an additional postfix after the filename.

Any thoughts?
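If we went the looping route, nilearn's iter_img would make it straightforward. A sketch, assuming a hypothetical 4D input file and that create_output keeps accepting image objects:

import nibabel as nib
from nilearn import image
from atlasreader import create_output

img = nib.load('stat_map_4d.nii.gz')  # hypothetical input
if len(img.shape) == 4:
    for i, vol in enumerate(image.iter_img(img)):
        # one output set per volume, distinguished by a postfix
        create_output(vol, outdir=f'results_vol{i:02d}')
else:
    create_output(img)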

MNI152 overlay of atlases

Preparation for the question

I've plotted all atlases on the MNI template with the following command:

from nilearn import plotting
from glob import glob
import nibabel as nb

atlases = glob('atlases/*gz')
template = 'templates/MNI152_T1_1mm_brain.nii.gz'

for a in atlases:
    atlas_dim = len(nb.load(a).shape)
    name = a[8:-7]  # strip the 'atlases/' prefix and '.nii.gz' suffix

    if atlas_dim == 4:  # probabilistic atlas
        plotting.plot_prob_atlas(a, bg_img=template, title=name,
                                 cut_coords=[0, 0, 0], threshold=0.25,
                                 draw_cross=False, output_file=name + '.png')
    else:  # deterministic (label) atlas
        plotting.plot_roi(a, bg_img=template, title=name,
                          cut_coords=[10, 0, 0], draw_cross=False,
                          output_file=name + '.png')

This generates the following figures:

[One overlay figure per atlas: AAL, AICHA, Desikan-Killiany, Destrieux, Harvard-Oxford, Juelich, Neuromorphometrics, Talairach BA (Brodmann), and Talairach Gyrus]

Question

  1. Do you think we need to point out in the code that not all atlases are perfectly aligned to the MNI template? See, for example, FSL's Talairach atlases.
  2. Should we include such figures in the JOSS paper?

Personally, I don't think we should put effort into better normalizing Talairach (and similar atlases) to the MNI template. People who use a particular atlas should be aware of its defaults. What do you think?

Function crashes with empty stat_maps

It's possible that I load empty stat maps into atlasreader (when I'm looping through some results).

Currently, this leads to the following error:

---------------------------------------------------------------------------
~/anaconda3/lib/python3.6/site-packages/atlasreader/atlasreader.py in create_output(filename, atlas, voxel_thresh, cluster_extent, prob_thresh, min_distance, outdir)
    650                                                 cluster_extent=cluster_extent,
    651                                                 prob_thresh=prob_thresh,
--> 652                                                 min_distance=min_distance)
    653 
    654     # write output .csv files

~/anaconda3/lib/python3.6/site-packages/atlasreader/atlasreader.py in get_statmap_info(stat_img, atlas, voxel_thresh, cluster_extent, prob_thresh, min_distance)
    560     clust_img = process_img(stat_img,
    561                             voxel_thresh=voxel_thresh,
--> 562                             cluster_extent=cluster_extent)
    563 
    564     clust_info, peaks_info = [], []

~/anaconda3/lib/python3.6/site-packages/atlasreader/atlasreader.py in process_img(stat_img, voxel_thresh, cluster_extent)
    418         clusters += [connected_regions(image.new_img_like(thresh_img, data),
    419                                        min_region_size=min_region_size,
--> 420                                        extract_type='connected_components')[0]]
    421 
    422     return image.concat_imgs(clusters)

~/anaconda3/lib/python3.6/site-packages/nilearn/regions/region_extractor.py in connected_regions(maps_img, min_region_size, extract_type, smoothing_fwhm, mask_img)
    235         all_regions_imgs.extend(regions)
    236 
--> 237     regions_extracted_img = concat_niimgs(all_regions_imgs)
    238 
    239     return regions_extracted_img, index_of_each_map

~/anaconda3/lib/python3.6/site-packages/nilearn/_utils/niimg_conversions.py in concat_niimgs(niimgs, dtype, ensure_ndim, memory, memory_level, auto_resample, verbose)
    428         first_niimg = check_niimg(next(literator), ensure_ndim=ndim)
    429     except StopIteration:
--> 430         raise TypeError('Cannot concatenate empty objects')
    431     except DimensionError as exc:
    432         # Keep track of the additional dimension in the error

TypeError: Cannot concatenate empty objects

I guess the best solution is to either catch this condition early enough or to create an empty dummy image that can be passed further through atlasreader?

My desired outcome is to nonetheless get a glass brain plot, just an empty one. No peak-wise plots or CSVs are required. What do you guys think?

Cannot install through pip

Hi,

I'm getting this error while installing through pip:

Collecting atlasreader
Could not find a version that satisfies the requirement atlasreader (from versions: )
No matching distribution found for atlasreader

Python packaging

Currently all the work in this repo exists as a single file (atlas_reader.py). I think this will eventually become somewhat unwieldy as functionality is added and updated, so I'm planning to convert everything into a more standard Python package structure. I'll try to whip this up quickly so edits can still be made; I'll open a PR soon!

ENH: Switch default filename to atlasreader and adjust relevant documentation

This is a bit of a silly issue to submit, which I stumbled upon when I provided atlasreader.create_output with a nibabel.nifti1.Nifti1Image object instead of a file name.

Currently create_output uses the same approach as nilearn; it will take "NIfTI-like" inputs. The documentation right now doesn't state that (pretty sure I forgot to add this back at NH18!) so it needs to be updated. Additionally, when a nibabel object is passed, the default output files use a mniatlasreader prefix. I suggest changing this to just atlasreader to reflect the (new!) name of the project.

I'll submit a PR shortly unless there's any reason to keep things as they are!
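For the prefix logic itself, the change is small. A sketch (the helper name is hypothetical, not the actual code):

import os.path as op
import nibabel as nib

def output_prefix(stat_img):
    """Pick a prefix for output files."""
    if isinstance(stat_img, nib.Nifti1Image):
        return 'atlasreader'  # no filename available to reuse
    # otherwise assume a path like '/some/dir/stat_map.nii.gz'
    return op.basename(stat_img).split('.')[0]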

Code of conduct

I think it would be great to have a Code of Conduct (for so many reasons, including all those listed here), especially given this is a collaborative project being actively developed at a hackathon! Contributor Covenant has a wonderful template available here that I use in some of my projects.

@miykael I think it would make sense, as the owner of the repo, for you to be the main person for handling reporting, but I am also happy to be listed for this.

Does anyone else who is interested in getting involved in the project have any thoughts on this?

Leverage nilearn.datasets for atlases + templates

Right now we're limiting the atlas + template options to the ones that are shipped with the repository (though see #4 for discussion of removing those), but we could also expand this to include any of the atlases / templates that are accessible via the nilearn.datasets package! If one of those atlases is requested (say, atlas_msdl), then the appropriate nilearn fetch-style command could be called and the atlas downloaded + used.

I think this would be really worthwhile since nilearn is incredibly good about incorporating new atlases quickly after they're released. This may (at least temporarily) resolve the need for a separate repo in #4 (sorry, corgi)!
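For example, fetching the Harvard-Oxford atlas on demand is a one-liner in nilearn (the specific atlas-name string below is one of the variants listed in nilearn's docs):

from nilearn import datasets

# Downloads on first use and caches locally, instead of shipping in the repo
ho = datasets.fetch_atlas_harvard_oxford('cort-maxprob-thr25-2mm')
atlas_img, labels = ho.maps, ho.labels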

CLI in demo notebook not up to date

The demo notebook hasn't been updated to work with the new CLI. I guess the older version I had running for #32 was still using old files I had from a while ago, so it appeared to work despite the changes. Oops. I'm working on fixing it right now, with new files.

Relatedly, if we make any changes, I think it's a good idea to run the demo notebook as a last test (in addition to testing the code). That way we can integrate changes into the notebook right away. Thoughts?

Thoughts on licensing?

Is there any reason the license is GPL? I'm more a fan of MIT or BSD, especially for these sorts of projects that could very reasonably be incorporated into other projects!

What are your thoughts about potentially changing it to something more permissive?

Clean up output .csv file

Hi all,

Just tossing out a few ideas regarding the output file. Currently the output file contains both peak and cluster information, as shown below:

ClusterID Peak_Location Cluster_Mean Volume Harvard_Oxford
Cluster01 60.0_-19.0_46.0 7.2740965 27324 48.91% Postcentral_Gyrus
Cluster02 -24.0_-31.0_73.0 -7.2317147 11772 63.53% Postcentral_Gyrus; 36.24% Precentral_Gyrus
Cluster03 -9.0_-58.0_-17.0 6.786522 5049 88.77% No_label
Cluster04 51.0_-22.0_19.0 6.641388 4293 39.62% Parietal_Operculum_Cortex; 37.74% Central_Opercular_Cortex
Cluster05 24.0_-49.0_-26.0 -6.504359 3618 88.06% No_label
Cluster06 6.0_-10.0_52.0 6.0106235 2646 63.27% Juxtapositional_Lobule_Cortex_(formerly_Supplementary_Motor_Cortex)
Cluster07 33.0_-7.0_-2.0 6.2991576 378 85.71% Right_Putamen
PeakID Peak_Location Peak_Value Volume Harvard_Oxford
Peak01 60.0_-19.0_46.0 7.941345 27324 44% Postcentral_Gyrus
Peak02 -24.0_-31.0_73.0 -7.9414444 11772 47% Postcentral_Gyrus
Peak03 -9.0_-58.0_-17.0 7.941345 5049 0% No_label
Peak04 51.0_-22.0_19.0 7.941345 4293 55% Parietal_Operculum_Cortex
Peak05 24.0_-49.0_-26.0 -7.9414444 3618 0% No_label
Peak06 6.0_-10.0_52.0 7.941345 2646 53% Juxtapositional_Lobule_Cortex_(formerly_Supplementary_Motor_Cortex)
Peak07 33.0_-7.0_-2.0 7.9053116 378 71% Right_Putamen

You'll notice a second header row with PeakID is followed by the peak information rows. This makes the file difficult to use with, say, pandas, because the column types will all be set as object and won't reflect the actual data type (e.g., Cluster_Mean should be float).

So, my first suggestion is to break up these tables into a peak.csv and a cluster.csv with a way to link the peaks with their clusters across tables (i.e. using a column that matches a peak to its cluster).

My second suggestion is to break coordinate information into separate x, y, and z columns (i.e. peak_x, peak_y, peak_z) so that these can fit cleanly in a table.

Let me know what you guys think or what other improvements might be worthwhile. I'm happy to work on this.
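For the coordinate columns, the change on the output side could be as small as this pandas sketch (column names are suggestions):

import pandas as pd

# Suppose `peaks` is the peak table with the current combined location column
peaks = pd.DataFrame({'PeakID': ['Peak01'],
                      'Peak_Location': ['60.0_-19.0_46.0']})

# Split 'x_y_z' into three float columns and drop the combined one
peaks[['peak_x', 'peak_y', 'peak_z']] = (
    peaks['Peak_Location'].str.split('_', expand=True).astype(float))
peaks = peaks.drop(columns='Peak_Location')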

Demo notebook breaks on up-to-date version of nilearn

The fetch_neurovault_motor_task() and fetch_neurovault_auditory_computation_task() functions no longer exist in nilearn. It would be great to update the demo notebook to reflect the newest fetch functions from nilearn, like fetch_localizer_button_task() and fetch_localizer_calculation_task()!

Cutting atlases out of `mni_atlas_reader`

There was some talk here at the hackathon about trying to splice the atlases out of the current repository and make them into their own repository, which could serve as a very useful resource for anyone looking for brain atlases! This question was previously raised on the BIDS Discussion List and some relevant discussion occurred there.

@PeerHerholz mentioned that Lead DBS has already collected a good number of atlases which might be a good place to start!

What are people's thoughts? Should a new repo be created to host atlases (this package could then grab that repo as additional "package data" when downloaded)? If so, what should the name be? @miykael

Using this to label something other than a stat image

Hey guys. So awesome that this project is going strong!

So I am trying to use this tool to give anatomical labels to all of the ROIs in the Power 264-node functional atlas. The Power atlas is just a CSV file of x/y/z MNI coordinates, but I turned it into an image with 5mm spheres around each ROI (and then smoothed it). When I run this image through create_output with these settings:

create_output(img, cluster_extent=0, atlas=['aal', 'Harvard-Oxford'], outdir='.')

I get this error:

/Users/saigerutherford/.local/lib/python3.6/site-packages/nilearn/plotting/displays.py:684:
UserWarning: empty mask
  get_mask_bounds(new_img_like(img, not_mask, affine))

Is this just too many clusters to identify, and they're too small? When I plot the image using plotting.plot_glass_brain(img), it looks fine. Thanks for any feedback you can give :)

Hyphens or underscores in atlas names

Currently the atlas files have mixed uses of hyphens (e.g., desikan-killiany) and underscores (e.g., harvard_oxford). I think we should likely settle on either hyphens or underscores and rename all instances of the other to fit!

My preference is underscores, but I am happy to do whatever everyone thinks best!

Argparse, non sys.argv

User-provided input in atlas_reader.py is currently being pulled directly from the command line via calls to sys.argv. It would be wonderful to see this handled instead by argparse, which provides a load of benefits over directly accessing sys.argv, including detailed documentation on what inputs need to (versus can) be provided!

This would require replacing all the calls to sys.argv in atlas_reader.py with arguments in an argparse.ArgumentParser object. Ideally, some information about what those arguments should look like would also be included! A how-to guide for argparse is available here, for those who are new to it.
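A minimal sketch of the conversion (option names follow the CLI discussed elsewhere in this tracker; defaults are illustrative):

import argparse

def get_parser():
    parser = argparse.ArgumentParser(
        prog='atlasreader',
        description='Generate coordinate tables and region labels '
                    'from statistical MRI images.')
    parser.add_argument('file',
                        help='Path to the statistical map to process.')
    parser.add_argument('-a', '--atlas', nargs='+', default='all',
                        help='Atlas(es) to use. Default: all.')
    parser.add_argument('-t', '--threshold', type=float, default=1.96,
                        help='Voxel-level threshold.')
    parser.add_argument('-c', '--cluster', type=int, default=20,
                        help='Minimum cluster extent in voxels.')
    return parser

def main():
    opts = get_parser().parse_args()  # replaces manual sys.argv handling
    ...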

`no_label` category in tables should be combined

I'm currently working on the example notebook, and when I tried to use FreeSurfer's Destrieux atlas, I saw that some clusters have multiple no_label regions (see cluster 2):

[table screenshot]

This shouldn't happen. So first we need to figure out why this is the case and then make sure that those percentages/labels are unified.

Should atlasreader CLI include defaults in the help message?

When you run the CLI with the help flag, atlasreader gives the following output:

usage: atlasreader [-h] [-a atlas [atlas ...]] [-t threshold] [-c extent]
                   [-p threshold] [-o outdir] [-d distance]
                   file

positional arguments:
  file                  The full or relative path to the statistical map from
                        which cluster information should be extracted.

optional arguments:
  -h, --help            show this help message and exit
  -a atlas [atlas ...], --atlas atlas [atlas ...]
                        Atlas(es) to use for examining anatomical delineation
                        of clusters in provided statistical map. Default: all
                        available atlases.
  -t threshold, --threshold threshold
                        Value threshold that voxels in provided file must
                        surpass in order to be considered in cluster
                        extraction.
  -c extent, --cluster extent
                        Required number of contiguous voxels for a cluster to
                        be retained for analysis.
  -p threshold, --probability threshold
                        Threshold to consider when using a probabilistic atlas
                        for extracting anatomical cluster locations. Value
                        will apply to all requested probabilistic atlases, and
                        should range between 0 and 100.
  -o outdir, --outdir outdir
                        Output directory for created files. If it is not
                        specified, then output files are created in the same
                        directory as the statistical map that is provided.
  -d distance, --mindist distance
                        If specified, the program will attempt to find
                        subpeaks within detected clusters, rather than a
                        single peak per cluster. The specified value will
                        determine the minimum distance required between
                        subpeaks.

As you can see, only the atlas flag has information about default parameters.

It's a quick fix, but we should add the default parameters to the help text here.
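argparse can append the defaults automatically, provided every argument has a help string and a default. A sketch:

import argparse

parser = argparse.ArgumentParser(
    prog='atlasreader',
    # appends '(default: ...)' to every argument's help text
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('-c', '--cluster', metavar='extent', type=int, default=20,
                    help='Minimum cluster extent in voxels.')
parser.print_help()
# prints something like:
#   -c extent, --cluster extent
#                         Minimum cluster extent in voxels. (default: 20)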

Have option to specify output directory

Currently, output files save to the directory of the input stat map. An idea is to keep this as the default, but also to offer the option of specifying a different directory if preferred.

How should `read_atlas_cluster` compute 100%?

read_atlas_cluster currently tells you the extent of a particular cluster, e.g. "100% of the voxels in the cluster are part of the occipital lobe".

What do we want to happen if a cluster extends into an area that is not part of the atlas? Something like: "90% occipital lobe; 10% unknown"?

I think the code already does this. But we should test it and make sure that this is also the behaviour that we want, as the cluster.csv is something new that we're introducing.
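Conceptually, the computation boils down to counting labels over all cluster voxels, with out-of-atlas voxels mapped to an explicit 'unknown' label so the percentages always sum to 100%. A sketch (not atlasreader's actual code):

import numpy as np

def cluster_label_percentages(cluster_labels):
    """Summarize atlas labels for the voxels of one cluster.

    `cluster_labels` holds one label string per cluster voxel, with
    voxels outside the atlas marked 'unknown'.
    """
    labels, counts = np.unique(cluster_labels, return_counts=True)
    percs = 100 * counts / counts.sum()
    order = np.argsort(percs)[::-1]
    return '; '.join(f'{percs[i]:.2f}% {labels[i]}' for i in order)

# cluster_label_percentages(['occipital lobe'] * 9 + ['unknown'])
# -> '90.00% occipital lobe; 10.00% unknown'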

Use `dataobj` instead of `get_data()` for atlas querying

As of #35, any atlases needed for the analysis are loaded into memory (by calling atlas.get_data()) once, towards the beginning of a workflow (e.g., get_statmap_info() or create_output()). By ensuring this is only done once we save a lot of time, since loading the data is quite time-consuming (we were previously loading the data on every single loop, so, hurray for that fix).

However, this entire process could be pretty dramatically sped up by never loading the atlases into memory at all (that is, never calling atlas.get_data()). Instead, we could simply access the needed data via the atlas.dataobj array proxy. This might take a bit of finessing since the dataobj object only supports integer-based indexing (i.e., atlas.dataobj[0, 0, 0], not atlas.dataobj[((0, 0, 0), (1, 1, 1), (2, 2, 2))]). Still, I think it might be a worthwhile endeavor, or, at the very least, something to look into a bit more.
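The difference in a nutshell (the atlas path is hypothetical; get_fdata() is nibabel's non-deprecated equivalent of get_data()):

import nibabel as nib

atlas = nib.load('atlases/atlas_aal.nii.gz')  # hypothetical path

# current approach: read the whole volume into memory once
data = atlas.get_fdata()
label = data[30, 40, 50]

# proposed: pull single voxels through the array proxy instead
label = atlas.dataobj[30, 40, 50]  # integer indexing only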

Write sentences in markdown on individual lines

I was just taking a look at the small phrasing updates in #90. It’s quite hard to review text where the paragraph is on one line because GitHub renders it all as having changed.

What do you all think about writing one sentence per line? It will still render as a paragraph, but it makes it easier to see which parts have changed.

If you like the idea, we can add it as a style to the contributing guidelines; it would also make a good first issue for a new contributor.

In my opinion we don't need to re-format everything from the start; rather, as we edit a paragraph, we can adjust it to have one sentence per line.

Multiple peaks per cluster

Should we extract multiple peaks per cluster, like in SPM?

An idea would be to extract all peaks per cluster (ordered by intensity) but exclude new peaks if they are too close to already included peaks.
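A greedy version of that idea in pure NumPy (a sketch; peak detection itself is assumed to have happened already):

import numpy as np

def filter_peaks(coords, values, min_distance=20):
    """Keep peaks in descending intensity order, dropping any peak closer
    than `min_distance` (same units as `coords`) to an accepted one."""
    order = np.argsort(np.abs(values))[::-1]  # strongest peaks first
    kept = []
    for idx in order:
        if all(np.linalg.norm(coords[idx] - coords[k]) >= min_distance
               for k in kept):
            kept.append(idx)
    return [coords[k] for k in kept]

# filter_peaks(np.array([[0, 0, 0], [5, 0, 0], [40, 0, 0]]),
#              np.array([7.9, 7.5, 6.0]))
# keeps [0, 0, 0] and [40, 0, 0]; drops [5, 0, 0] as too close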

Docstrings needed

We need doc-strings! The current functions in atlas_reader.py have one-line descriptions of their functionality, if that. I am personally a fan of numpy-style doc-strings since they interface so nicely with online documentation (like Sphinx!).

It would be great if someone would be willing to go through the functions and document them with at least (1) a short description, (2) parameters + types, and (3) return values.

I talked to @danjgale about doing this, but if anyone else has thoughts on doc-strings this is the place to discuss!
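For anyone picking this up, a template in the numpy style (the function name and its described behavior are only an example, not a spec):

def get_peak_coords(clust_img):
    """Find the peak coordinate of each cluster in an image.

    Parameters
    ----------
    clust_img : niimg-like object
        4D image where each volume contains one cluster.

    Returns
    -------
    coords : numpy.ndarray
        (N, 3) array of x, y, z peak coordinates, one row per cluster.
    """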

JOSS REVIEW: Not able to retrieve statmap from neurovault

Running STAT_IMG = fetch_neurovault_motor_task().images[0] in the example notebook and unit tests fails for me. Here's the full traceback:

File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1317, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1229, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1275, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1224, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1016, in _send_output
    self.send(msg)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 956, in send
    self.connect()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1392, in connect
    server_hostname=server_hostname)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 412, in wrap_socket
    session=session
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 850, in _create
    self.do_handshake()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 1108, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1045)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/nilearn-0.5.0-py3.7.egg/nilearn/datasets/neurovault.py", line 1008, in _get_batch
    resp = opener.open(request, timeout=timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 563, in error
    result = self._call_chain(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 755, in http_error_302
    return self.parent.open(new, timeout=req.timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 543, in _open
    '_open', req)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1360, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1319, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1045)>
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-3-a5e164ce6739> in <module>()
      2 from nilearn.datasets import fetch_neurovault_motor_task
      3 motor_images = fetch_neurovault_motor_task()
----> 4 stat_img = motor_images.images[0]

IndexError: list index out of range

Should additional plotting functions be included?

atlasreader's create_output function currently takes the following inputs:

  • filename
  • atlas='all'
  • voxel_thresh=1.96
  • cluster_extent=20
  • prob_thresh=5
  • min_distance=None
  • outdir=None

None of them actually specifies how the figures are plotted. I like the black-background version, but this might not be to everybody's preference. I'm also often changing some of the default plotting variables.

Should we therefore give users additional options like the following (see the sketch after the list for how they might be passed through)?

  • colorbar=True
  • figure=None
  • title=None (i.e. on/off)
  • annotate=True
  • draw_cross=True
  • black_bg='auto'
  • cmap='viridis'
  • symmetric_cbar='auto'
  • plot_abs=True
  • dim='auto'
  • alpha=0.7
  • vmin=None
  • vmax=None
  • resampling_interpolation='continuous'
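Rather than mirroring every nilearn default in our own signature, we could simply forward keyword arguments. A sketch (the function name is hypothetical):

from nilearn import plotting

def save_glass_brain(clust_img, outname, **plot_kwargs):
    """Render the glass brain, passing plotting options straight through.

    Any keyword from the list above (colorbar, black_bg, cmap, ...) goes
    to nilearn untouched.
    """
    plotting.plot_glass_brain(clust_img, output_file=outname, **plot_kwargs)

# save_glass_brain(img, 'clusters.png', black_bg=False, cmap='viridis')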

"Unnamed: 0" column in peak.csv file

Not sure if this is particular to my current case, but I have an uninformative, superfluous column in my CSV file called Unnamed: 0:

[peak.csv screenshot]

Is this just in my case or is this intended?
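That column is the classic artifact of pandas writing the DataFrame index along with the data. If that's the cause here, the fix on the writing side is one argument:

import pandas as pd

df = pd.DataFrame({'PeakID': ['Peak01'], 'Peak_Value': [7.94]})

df.to_csv('peak.csv')               # writes the index -> 'Unnamed: 0' on re-read
df.to_csv('peak.csv', index=False)  # writes only the named columns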

Is there a particular order to the clusters / csv files?

There doesn't seem to be a particular order to the cluster indices. It's neither by peak value, mean value, nor cluster extent:

[peak.csv screenshot]

Or am I missing something? Otherwise, I would recommend ordering by cluster size (as is standard, I think). Alternatives could be peak or average cluster value.
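If we go by size, it's a one-line sort before writing, plus renumbering the IDs. A pandas sketch (column names assumed):

import pandas as pd

df = pd.DataFrame({'ClusterID': ['Cluster01', 'Cluster02'],
                   'volume': [378, 27324]})

# order clusters by extent, largest first, then renumber the IDs
df = df.sort_values('volume', ascending=False).reset_index(drop=True)
df['ClusterID'] = [f'Cluster{i + 1:02d}' for i in range(len(df))]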

Packaging the toolbox

@rmarkello - I'm not really sure what the setup.py file implies. But do we want to package the toolbox with PyPI so that people can install it with a pip command? Do you otherwise know how to get it into a conda framework?

CSV Output - peak and mean cluster value

Not sure if this is just the case for me, but I ran atlasreader on some data and I observed two particular issues in the CSV tables.

  • for peaks.csv - all the peak_values were 0.0
  • for clusters.csv - mean_value is accurate to ~13 decimal places

Is that the same for others as well? Also, what do you think about restricting the number of decimal places for mean_value? Or is it just a stylistic obsession of mine to round numbers to a few decimal places?
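If we do round, to_csv can format on output without touching the underlying values. A sketch:

import pandas as pd

df = pd.DataFrame({'mean_value': [6.786522000000123]})

# format floats only when writing; full precision stays in memory
df.to_csv('clusters.csv', index=False, float_format='%.2f')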
