ohbm / hackathon2019
Website and projects for the OHBM Hackathon in Rome 2019
Home Page: https://ohbm.github.io/hackathon2019
Cool acronym to be determined
Have you ever created a design matrix and had trouble adding that one custom regressor? Or adding SPM motion parameters to an FSL design? Or wondered how to orthogonalise some specific regressors, but not others?
Traditional GUIs try to construct design matrices all at once, leaving little room for customisation. Wouldn't it be great to build a GLM design piece by piece, like stacking up Lego bricks? This project is about making such a toolbox.
My proposal is to make a toolbox with which you can connect little pieces of the design to build the full design. I started on this at some point by making a dozen components as MATLAB functions that can be connected. This is currently part of my own code toolbox, but I would like the design-matrix part to be a separate toolbox, ideally reimplemented in Python.
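To make the "Lego brick" idea concrete, here is a minimal sketch of what such composable components could look like. All names and the data layout (plain nested lists, columns combined by concatenation) are hypothetical illustrations, not the existing MATLAB components:

```python
# Hypothetical sketch of "Lego brick" design-matrix components: each
# piece returns a list of columns, and pieces are combined by simple
# concatenation into one matrix (rows = scans, columns = regressors).

def boxcar(onsets, duration, n_scans):
    """A single condition regressor as a 0/1 boxcar."""
    col = [0.0] * n_scans
    for onset in onsets:
        for t in range(onset, min(onset + duration, n_scans)):
            col[t] = 1.0
    return [col]

def motion_params(params):
    """Pass through externally estimated motion parameters
    (e.g. the six columns of an SPM rp_*.txt file)."""
    return [list(p) for p in params]

def intercept(n_scans):
    return [[1.0] * n_scans]

def build_design(*pieces):
    """Stack the column lists of all pieces into one design matrix."""
    columns = [col for piece in pieces for col in piece]
    n_scans = len(columns[0])
    return [[col[t] for col in columns] for t in range(n_scans)]

X = build_design(
    boxcar(onsets=[2, 6], duration=2, n_scans=10),
    intercept(10),
)
```

Adding "that one custom regressor" then just means writing one more small function and passing it to `build_design`, rather than regenerating the whole matrix in a GUI.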
Connect it to a visual editor such as GiraffeTools to edit and save the design in an open format:
The stretch and blue-sky goals aim to provide incremental compatibility and integration with existing software, as well as online visualisation and communication.
Does not exist yet.
Brainhack slack for now. I'll make a dedicated channel or workspace as soon as there's a team.
I might be somewhat busy as I'm also organising the OHBM Open Science Room. I can't fully commit to this project just yet, but I'll definitely help out during the Hackathon. I'm completely fine with someone else taking the lead on this.
niimasker
niimasker is a command-line wrapper for nilearn's Masker objects, which let you easily extract time series from your functional data (and give you a number of options for post-processing during extraction). I'm in a lab with a number of non-Python users who would benefit greatly from this ability, and this was a spur-of-the-moment idea I had a couple of weeks ago when discussing my fMRI pipeline with my colleagues (we're trying to get a more standardized workflow going – fmriprep, etc.). Because niimasker is run via the command line, pretty much anyone with some bash knowledge can use it (or at least that's what I'm working towards).
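As a rough illustration of the "anyone with some bash knowledge can use it" goal, here is a minimal argparse skeleton for such a wrapper. The flag names and defaults below are hypothetical placeholders, not niimasker's actual interface:

```python
# A minimal argparse skeleton illustrating how a command-line wrapper
# around a masker might look. Flag names here are hypothetical and do
# not reflect niimasker's actual interface.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Extract ROI time series from functional images."
    )
    parser.add_argument("input", help="path to a 4D functional image")
    parser.add_argument("--mask", required=True,
                        help="path to a mask or atlas image")
    parser.add_argument("--standardize", action="store_true",
                        help="z-score each extracted time series")
    parser.add_argument("--out", default="timeseries.tsv",
                        help="output file for the extracted time series")
    return parser

# The wrapper's entry point would parse sys.argv with this parser and
# hand the resulting options to a masker object for extraction.
args = build_parser().parse_args(
    ["func.nii.gz", "--mask", "atlas.nii.gz", "--standardize"]
)
```

The point is that the whole extraction becomes one shell command, with post-processing options exposed as flags rather than Python code.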
I developed much of this last week in a "mini-sprint" (i.e. a colleague needed data "yesterday"). While its core functionality is working, there's lots to be done. I've included a number of issues in the repo already: https://github.com/danjgale/nii-masker/issues. So, there are some exciting features to add (e.g., a visual report à la fmriprep) as well as some testing/CI to set up. These outline some of the things I'd like to accomplish at the hackathon.
The goal is to create a totally intuitive tool for anyone, so contributions from all backgrounds and perspectives are encouraged. Non-expert/technical users can contribute by providing feedback and design ideas to make niimasker more approachable and user-friendly.
https://github.com/danjgale/nii-masker/
I can set up a channel on the brainhack mattermost/slack if this gains interest. I would also like to keep a lot of conversation "in the open" directly in github issues as well.
While research MRI anatomical images are usually 3D (e.g. FLASH), clinical scans are typically 2D acquisitions with thick slices. In this project, we take up the challenge of longitudinal registration with 2D scans as would typically be acquired in a long-term clinical trial (e.g. for multiple sclerosis). Longitudinal brain imaging can be particularly useful in the analysis of volumetric changes or lesion burden, and shows great promise for the development of novel biomarkers.
Registration is a key step in the pipeline that affects all further downstream analysis of neuroimaging data. Although using cross-sectional tools to process longitudinal data is unbiased, this ignores the common information across scans. Longitudinal processing aims to reduce the within-subject variability. Both SPM and FreeSurfer offer tools for longitudinal registration of scans across multiple (more than two) time points and, as with most image processing tools, these have naturally been developed with research-quality data in mind. As researchers are increasingly gaining access to clinical data, however, it would be timely to determine how current longitudinal processing tools perform on lower-quality 2D MRI scans.
Using the publicly-available OASIS dataset, we would like to investigate the performance of the SPM and FreeSurfer longitudinal registration tools. The OASIS-3 (Longitudinal Neuroimaging, Clinical, and Cognitive Dataset for Normal Aging and Alzheimer’s Disease) dataset consists of images from c.1000 subjects, many of which are accompanied by volumetric segmentation files produced through FreeSurfer. With these files as a 'gold-standard', we will average slices from 3D acquisitions to simulate 2D acquisitions and assess the accuracy of each processing tool.
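The "average slices to simulate 2D acquisitions" step can be sketched very simply. This is a toy illustration on nested lists (a volume as a list of 2D slices); the real pipeline would of course operate on NIfTI volumes with proper handling of slice thickness and gaps:

```python
# Sketch: simulate a thick-slice 2D acquisition by averaging groups of
# `factor` consecutive thin slices from a 3D volume. A volume here is
# a list of 2D slices (each a list of rows).
def average_slices(volume, factor):
    """Average every `factor` consecutive slices into one thick slice."""
    thick = []
    for start in range(0, len(volume) - factor + 1, factor):
        group = volume[start:start + factor]
        n_rows, n_cols = len(group[0]), len(group[0][0])
        thick.append([
            [sum(sl[i][j] for sl in group) / factor for j in range(n_cols)]
            for i in range(n_rows)
        ])
    return thick
```

For example, averaging a 1 mm-isotropic volume with `factor=5` would approximate a 5 mm-thick clinical acquisition.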
Any of the following:
Experience in programming (mainly Matlab, C or C++)
Experience with FreeSurfer or SPM12
Experience with structural image analysis
Contributions towards any of the following milestones would be very welcome!
Downsample OASIS T1 3D data to lower-resolution 2D images
Isolate the longitudinal registration codebase from FreeSurfer
Longitudinal registration of 2D images in SPM and FreeSurfer
Assessment of segmentation performance against the original 3D images
The OASIS project
Chapter 27 (Longitudinal registration) of the SPM 12 manual
FSL longitudinal processing
Join the chat in our mattermost channel :)
In EEG neurofeedback, timing is perhaps not always so crucial that feedback needs to happen within < 1 millisecond (for that you'd need a real-time operating system), but it's still important enough that the "neurofeedback loop" needs to happen relatively fast and consistently, in pace with the data acquisition. Lag is unavoidable and acceptable, within limits. Lag can be introduced because you need to do some mathematical operation (filtering especially), or because of input/output 'clogging'. Acceptable lag in EEG NF is in the order of 150-200 milliseconds, but the faster the better.
Programming in C is more likely to give you that kind of speed, but making (and compiling) things in C makes it harder to communicate with the Python community, and Python should (in principle) be fast enough for neurofeedback purposes too. There are currently (as far as I know) two main repositories of Python-based neurofeedback software: Pyff/Wyrm, made by Bastian Venthur in 2010 (and since more or less abandoned), and nfb (see REF PAPER).
The issue is lag. In Python, one type of lag is due to the Global Interpreter Lock (GIL). Basically, it means that only one thread can execute Python code at a time, so all other work has to wait its turn. That means if something else needs to be done while the neurofeedback loop is running - writing a file, updating your screen, keeping track of parameters, or doing an analysis - the neurofeedback loop has to stop, or you delegate the work to another Python interpreter.
Delegation is done with Python's multiprocessing module. There are other ways to do parallel evaluation, with threading and asynchronous programming, but they also need to obey the GIL. Basically, you start another Python session (and interpreter) that can run on a separate core and do work while the main process handles the neurofeedback loop, thereby reducing the lag. The two processes communicate via files (bad), pipes/queues (preferred), or shared memory (probably the fastest, but care is needed).
However... starting a process takes time. Putting data into a queue and getting it out again ALSO takes time, and the bigger the data exchanged, the longer it takes. The project I propose is to do some benchmarking/stress testing: measure how long it takes to start up a process, and how much data you can move around at what speeds, in the context of some neurofeedback and real-time processing I've been working on. Ideally, such operations would be infinitely fast with zero lag. A lag of more than 30-50 milliseconds can already tamper with the consistency of the neurofeedback loop.
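A stdlib-only starting point for this kind of benchmark: queue transport between processes ultimately pickles its payload, so timing a pickle round-trip for different payload sizes gives a lower bound on the per-message cost (process start-up and pipe latency come on top of this and need their own measurements). The payload sizes below are arbitrary illustrations:

```python
# Measure how serialization cost grows with payload size. This is the
# floor of what a multiprocessing.Queue transfer costs per message.
import pickle
import time

def pickle_roundtrip_ms(payload, repeats=20):
    """Median time (ms) to serialize and deserialize `payload` once."""
    timings = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        blob = pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL)
        pickle.loads(blob)
        timings.append((time.perf_counter() - t0) * 1000.0)
    return sorted(timings)[len(timings) // 2]

small = [0.0] * 1_000        # e.g. one small block of EEG samples
large = [0.0] * 1_000_000    # e.g. a large accumulated buffer
print(f"small payload: {pickle_roundtrip_ms(small):.4f} ms")
print(f"large payload: {pickle_roundtrip_ms(large):.4f} ms")
```

Comparing these numbers against the 30-50 ms budget immediately tells you how much data you can afford to push through the loop per update.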
Multiprocessing, Queues, GIL
Docker has become one of the (if not the) most popular virtualization techniques within the realm of open & reproducible science, as well as automated analyses (e.g., BIDS apps). However, depending on individual background and training, its application and utilization can range from straightforward to "what is this sorcery?". This one-day hands-on workshop therefore aims to provide a solid and comprehensive introduction to Docker, ranging from basic concepts over managing & using existing Docker images to building Docker images from scratch, automating their respective tasks.
As this is a workshop that aims to introduce participating folks to the Docker ecosystem, the most important things to bring along are interest & curiosity. Nevertheless, a basic understanding of operating systems and computer hardware and its architecture would be helpful. The same goes for basic shell experience.
Given Docker's flexibility and sheer endless possibilities, a lot of folks with different backgrounds and research interests could benefit.
A GitHub repo with all materials can be found here. Please note that the materials will be finalized within the next few weeks based on feedback and suggestions.
If you have questions regarding this workshop, please don't hesitate to contact me by opening an issue in the workshop's repo, or join the channel and drop a message (@PeerHerholz).
Neurodocker as web application
@kaczmarj made a really nice tool, Neurodocker, that generates Dockerfiles given an input of MRI analysis toolboxes. Let's make a web application out of it. I made a very basic start here: https://neurodocker.herokuapp.com.
https://github.com/kaczmarj/neurodocker
https://github.com/TimVanMourik/NeurodockerWeb
Slack for now
Teaching an Old BIDS New Tricks - Semantic Markup of BIDS data
ReproNim / OHBM TrainTrack Untutorial option. The BIDS data representation can be extended through the use of NIDM (the Neuroimaging Data Model) in order to represent more detailed semantics of the information contained. This tutorial and hands-on demo session will start to get you up to speed with this technology. It will feature the csv2nidm tool from PyNIDM.
No specific skills are needed to use this tool; Python programming skills are necessary to contribute to the codebase.
Please help us to add links to Brainhack AMX 2015 resources to our Tutorial_Resources.md.
It would be lovely if you could add them to thematic groups as we have started (and add new groups).
Name of the tutorial (00 min).
Thank you very much for your help!
Team neuroscience is becoming increasingly common in today's research. Along with this trend, the number of individuals involved in one project/paper is increasing to hundreds and even thousands. Preparing the author block for such papers becomes challenging. In particular, if it is done manually, formatting the author affiliations is painful when co-authors ask for updates.
The basic idea of this project is to make an easy generator of author lines. With such a tool, researchers could easily generate an author line that integrates authors' first and last names, affiliations, and, when needed, highest degree and email address.
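The core of such a generator can be quite small. Here is a hedged sketch of the central step: numbering unique affiliations in first-use order and attaching the indices to each author. The input format and output style are illustrative choices, not a settled design:

```python
# Minimal sketch of the proposed generator: given an ordered author
# list, produce an author line with numeric affiliation markers plus a
# matching affiliation block. The (name, [affiliations]) input format
# is a placeholder for whatever the tool would actually ingest.
def author_block(authors):
    """authors: list of (name, [affiliations]) tuples in author order."""
    affiliations = []       # unique affiliations, in first-use order
    entries = []
    for name, affs in authors:
        indices = []
        for aff in affs:
            if aff not in affiliations:
                affiliations.append(aff)
            indices.append(str(affiliations.index(aff) + 1))
        entries.append(f"{name} ({','.join(indices)})")
    author_line = ", ".join(entries)
    aff_block = "\n".join(
        f"{i + 1}. {aff}" for i, aff in enumerate(affiliations)
    )
    return author_line, aff_block
```

Because the numbering is recomputed from the author list on every run, a co-author's late affiliation change only requires editing one entry and regenerating, instead of renumbering the whole block by hand.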
Given the recent success of team neuroscience studies, we expect this tool to be very useful for future large-scale studies.
Experience in (or willing to learn) programming in Python
You can contribute to this project by helping with
An Easy Generator of Author Line
Awesome Script to Export Freesurfer-based Parcellation/Segmentation Stats and Provenance as JSON-LD and NIDM
This project ultimately aims to facilitate both query and analysis of parcellation/segmentation-based regional statistics across popular software packages such as Freesurfer, FSL, and ANTS. Currently, each package produces its own output format, and brain region labels are specific to the atlas used in generating the regional statistics. This makes life difficult when trying to search for, say, "nucleus accumbens" volume across the different software products. Furthermore, a structured, queryable representation of which version of the software tool was used, and which atlas and atlas version, is lacking. To this end we propose augmenting the various segmentation tools with scripts that will: (1) map atlas-specific anatomical nomenclature to anatomical concepts hosted in terminology resources (e.g. InterLex); (2) capture better-structured provenance about the input image(s) and the atlases used for the segmentation; (3) export the segmentation results and the provenance as either JSON-LD, NIDM (which can then link the derived data to broader records of the original project metadata), or an additional component of a BIDS derivative.
We aim to tackle this problem in steps. For this hackathon project we'll be focusing on conversion from Freesurfer's mri_segstats program output along with some additional parsing/conversion of Freesurfer log files.
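To show the shape of the label-to-concept mapping and JSON-LD export in miniature: the toy below parses one hypothetical tabular statistics row and emits a JSON-LD-style record. The column layout, the mapping table, the `ilx:` identifier, and the `@context` URL are all placeholders, not the real `mri_segstats` output format or actual InterLex terms:

```python
# Toy sketch of the proposed conversion step: one tabular segmentation
# row -> a JSON-LD-style record with an anatomical concept attached.
import json

LABEL_TO_CONCEPT = {
    # atlas-specific label -> identifier in a terminology resource
    # (hypothetical mapping, for illustration only)
    "Left-Accumbens-area": "ilx:left_nucleus_accumbens",
}

def row_to_jsonld(row):
    """row: whitespace-separated 'index label nvoxels volume_mm3'."""
    index, label, nvoxels, volume = row.split()
    return {
        "@context": {"ilx": "http://example.org/interlex/"},
        "label": label,
        "isAbout": LABEL_TO_CONCEPT.get(label),
        "nVoxels": int(nvoxels),
        "volume_mm3": float(volume),
    }

record = row_to_jsonld("26 Left-Accumbens-area 512 601.5")
print(json.dumps(record, indent=2))
```

Once every package's output passes through such a mapping, a single query for the concept (rather than each package's label) can retrieve "nucleus accumbens" volumes across all of them.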
Python and structural neuroimaging experience. If one has experience with rdflib or PROV that would also be helpful. Any neuroanatomists in the audience? Would be helpful to have someone vet our mappings from atlas labels to anatomy terms.
This project will need expertise in programming, structural neuroimaging, and anatomy. To make this project successful we need individuals who have skills in any of these domains to help with: (1) understanding Freesurfer's segmentation results format and log files; (2) programming the script in Python; (3) understanding anatomy well enough to select the proper anatomical concept that maps to a specific atlas designation of a region, and to define new anatomy terms where needed, linking them to broader concepts to facilitate segmentation-results queries across software packages.
segstats_jsonld
with WIP ReadMe.md containing
NOTE: this temporary GitHub repo may change to be under the ReproNim space
We haven't gotten this far yet, but questions can be posted as issues in the GitHub repo linked above, or via slack (@dbkeator) / mattermost (@dbkeator) / gmail ([email protected])
There are quite a few JS brain image viewers out there, but they overwhelmingly focus on the rendering side of things rather than the UI side. The goal of this project is to develop a high-level, modular JS library that (a) defines a common API for viewers, (b) implements support for widely used viewers (e.g., Papaya), and (c) provides a set of customizable widgets/components that can be easily injected into new JS projects. If successful, users should be able to construct relatively sophisticated dashboards (including things like image thresholding and color assignment, customized orth views, multiple layers, etc.) in just a few lines of JS code.
All kinds of contributions are welcome, but the project is likely to benefit particularly from the involvement of people with JavaScript experience and/or general experience building APIs and architecting modular libraries.
There's room for contribution from folks with a wide range of backgrounds and experience levels. We will be particularly interested in soliciting opinions on what core features the package should include, and how users expect to interact with good visualization tools.
Folks with prior JavaScript experience may want to take a look at a few of the existing viewers, e.g., Papaya, PyCortex, and brainsprite.js. Participants with prior programming experience who are new to JavaScript may want to whisper a few quiet prayers and then take the plunge into a JS tutorial or six.
https://github.com/neurostuff/BVT — but that's currently just a placeholder.
Hi! This list is a great idea.
Wouldn't be useful to divide the talks at neurohackademy into more specific headlines? (e.g. terminal, containers, open-science tools, machine learning and deep learning, statistics, software development, etc.). If you think this is a good idea, I could do it (and could also add the duration of these videos next to the links).
Brain imAgiNg Analysis iN Arcana (Banana) is a collection of imaging analysis methods implemented in the Arcana framework, and is proposed as a code base for collaborative development of neuroimaging workflows. Unlike traditional "linear" workflows, analyses implemented in Arcana are constructed on the fly from cascades of modular pipelines that generate derivatives from a mixture of acquired data and prerequisite derivatives (similar to Makefiles). Given the "data-centric" architecture of this approach, there should be a natural harmony between it and the ongoing standardisation of BIDS derivatives.
The primary goal of this project is to closely align the analysis methods implemented in Banana with the BIDS standard, in particular BIDS derivatives, in order to make them familiar to new users and interoperable with other packages. Further to this, in cases where a de facto standard for a particular
workflow exists (e.g. fmriprep) Banana should aim to mirror this standard by default. The extensibility of Arcana's object-orientated architecture could then be utilised to tailor such standard workflows to the needs of specific studies (via class inheritance).
There is also plenty of scope to expand the imaging contrasts/modalities supported by Banana, so if you have expertise in a particular area and are interested in implementing it in Banana we can definitely look to do that as well.
Any of the following:
Skim through the Arcana paper for the basic concepts,
Arcana bioRxiv paper (in press at Neuroinformatics, to be 10.1007/s12021-019-09430-1)
There is also some online documentation,
Arcana is built on top of Nipype so understanding Nipype concepts would also be useful,
There is a new channel on the BrainHack mattermost here
I would like to be able to build a Nipype workflow visually. This is already possible with GiraffeTools, but only with standard Nipype nodes. It would be really cool if you could include ANY of your own functions straight away: wrap them into Nipype modules and show them to the world.
This project is largely based on this issue
This Hackathon is a particularly good moment to do this, because we can see what users and developers need in building workflows.
Jenny Rieck & Derek Beaton
C-MARINeR is a focused sub-project of MARINeR: Multivariate Analysis and Resampling Inference for Neuroimaging in R. The "C" stands generally for connectivity, but specifically and statistically for covariance or correlation. The C-MARINeR project aims to develop and distribute an R package and Shiny app. Together, R + Shiny allow for ease of use and, hopefully, simpler exploration of such complex data, and quicker adoption of the techniques.
CovSTATIS is the base method in C-MARINeR. CovSTATIS is effectively a multi-table PCA designed for covariance matrices. CovSTATIS allows for multiple connectivity (correlation or more generally covariance) matrices to be integrated into a single analysis. CovSTATIS produces component (a.k.a. factor) maps with respect to the compromise matrix (weighted average), and then projects each individual matrix back onto the components.
K+1CovSTATIS is a novel extension of CovSTATIS that allows us to use a "target" or reference matrix, for example a theoretical resting-state structure (à la Yeo/Schaefer maps). K+1CovSTATIS also produces component (a.k.a. factor) maps with respect to the compromise matrix, except that the compromise is no longer a weighted average of all matrices; rather, it is a weighted average of all matrices with respect to a "target" matrix. Each of those matrices is then projected back onto the components.
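To make the "compromise" idea concrete, here is a deliberately simplified, language-agnostic sketch (written in Python for brevity, though the project itself is in R). Real CovSTATIS derives the weights from the first eigenvector of the matrix of RV coefficients between tables; here the weights are simply passed in, to show only the structure of the weighted-average step:

```python
# Much-simplified sketch of the CovSTATIS compromise step: a weighted
# average of K covariance matrices (nested lists). In the real method
# the weights come from an eigen-decomposition of the between-table
# RV-coefficient matrix; here they are given directly.
def compromise(matrices, weights):
    """matrices: K symmetric n x n matrices; weights should sum to 1."""
    n = len(matrices[0])
    return [
        [sum(w * m[i][j] for w, m in zip(weights, matrices))
         for j in range(n)]
        for i in range(n)
    ]
```

Components are then extracted from this compromise matrix, and each individual matrix is projected back onto them.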
Our primary goal is to make a small package and ShinyApp to perform the same types of analyses we use for integrating and analyzing multiple connectivity matrices (across tasks, individuals, and groups). We want to make CovSTATIS and similar methods easily accessible.
Goals & tasks are split across multiple types, including development, design, testing, etc...
Quests: R, various R packages, git/github, RStudio, Shiny, R Markdown
Side quests: HTML, CSS, Possibly Rcpp/RcppEigen/RcppArmadillo, LaTeX, R Markdown, graphic design
For the C-MARINeR project, there are many ways to contribute across a variety of skill levels and experience across domains.
The “main quests” require at least moderate-to-high expertise and familiarity with R, Shiny, and/or principal components analysis. These tasks are the primary focus for us and where we will spend most (or all) of our time.
The “side quests” are meant to cover tasks beyond the primary requirements but still key parts of the project. These exist across generating data, writing documentation, design (graphic, interface), optimization, tests, and extensions. Some of these require at least familiarity with R, but many others can be done without programming experience, or even in other languages (i.e., translation of the project).
If you want to participate in any of the main or side quests, or even have ideas for additional tasks, please reach out to us.
Milestones for OHBM 2019 Hackathon are dependent on what is accomplished by the end of CAN/ACN BrainHackTO: 2019
Nobrainer is a TensorFlow 2.0-based framework for creating and distributing neural network models for MR image processing. The goal of this project is to discuss the structure of Nobrainer and make it easy for people to create and publish reusable models. Some of the recent work has focused on generative models for MR.
We would love for individuals to post issues describing use cases, feature requests, and contribute code or new models to the project.
In addition to the code repo, these notebooks are intended to help guide individuals:
https://github.com/neuronets/nobrainer#guide-jupyter-notebooks-
Issues on github repo
Please help us to add links to Brainhacking 101 resources to our Tutorial_Resources.md (Intro to git and GitHub has already been added, but the others are missing)
It would be lovely if you could add them to thematic groups as we have started (and add new groups).
Name of the tutorial (00 min).
Thank you very much for your help!
Omer Faruk Gulban (ORCID)
Ultra-high-field MRI (7 Tesla and above) allows researchers to acquire human brain images at mesoscopic (0.1 to 0.5 mm) isotropic voxel resolutions in vivo. Here is an example of such an image (350 micron isotropic) acquired on a 9.4T scanner using a custom-design coil at Maastricht University:
There are several interesting details that appear at this resolution which are not visible in conventional in-vivo anatomical images, such as the smaller blood vessels within gray and white matter (see the dark lines) or layers within gray matter (faintly visible in this image). Generating such images currently requires averaging across multiple repeated acquisitions. This is because the benefits of ultra-high field are traded away to increase the spatial resolution, at the cost of decreased signal-to-noise ratio (SNR). Consequently, repeating acquisitions to increase SNR takes a lot of time, so much so that there is no time left for acquiring functional images within the same scanning session.
In this project, I would like to test the possibility of replacing the repeated image acquisitions (to some extent) with a specific type of filtering to increase SNR. By saying specific, I mean a family of filters that make use of a tensor field derived from the images themselves. These tensors are called structure tensors.
I have selected this type of filter to satisfy a few constraints. The selected filter should be:
Here is an animation created from one of my pilot implementations on an artificially noised 7T T1w image:
I think this implementation can be improved, applied to other image types and validated further.
This project is by no means a novel implementation of such a filter (see Mirebeau et al. 2015). However, the application to ultra-high-field MRI in the context of multi-echo and complex-domain images might be novel. If nothing else, I think this project would help interested people gain a deeper understanding of tensor fields, their role in diffusion, and insight into some of the current challenges of in-vivo mesoscopic MRI at 7 & 9.4T.
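For those new to structure tensors, here is a bare-bones illustration of the underlying object: at each pixel, the outer product of the image gradient with itself. This toy uses central finite differences on a plain nested-list image; a real implementation (including the one proposed here) would smooth the gradients and the tensor field before using it to steer the filter:

```python
# Bare-bones 2D structure tensor at a single pixel, from central
# finite differences. The tensor encodes local orientation: its large
# eigenvalue's eigenvector points across edges, so a filter can smooth
# along structures rather than across them.
def structure_tensor(image, y, x):
    """Return [[Jxx, Jxy], [Jxy, Jyy]] at pixel (y, x)."""
    gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
    gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
    return [[gx * gx, gx * gy], [gx * gy, gy * gy]]

# Vertical edge: intensity changes along x only, so Jxx dominates.
edge = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
J = structure_tensor(edge, 1, 1)
```

A structure-tensor-guided filter uses exactly this anisotropy to average noise away along vessels and layers without blurring them out.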
People can join by contributing to the following:
Programming: Scrutinizing code by writing test cases, optimizing for faster runtime, improving user interface (see related tutorials here).
Documenting: Improving docstrings (see tutorials), application to different cases, helping in quantification of performance against other methods.
In other ways that I couldn't think of here.
Discuss conceptual and implementational details of the filter.
Implement the filter so it is usable through a command-line interface.
Apply it to empirical data (e.g. 7T & 9.4T images that I will bring) and evaluate the results.
I am planning to implement this filter as an additional feature in a small free and open source project that has a few other image processing algorithms implemented for 2D and 3D images.
Chat on gitter.
We need more neuro-memes for our software presentations!
Very limited
Featuring the hashtag #NeuroMemes
Add below!
MRtrix3 provides a set of tools to perform various types of diffusion MRI analyses, from various forms of tractography through to next-generation group-level analyses.
The majority of tools provided within MRtrix3 are built using C++, and hence those underlying APIs are only accessible to researchers with the requisite skills in that language.
More recently, however, we have incorporated a relatively simple Python API, which is intended for the automation of higher-level image processing tasks that can be achieved using a combination of existing lower-level commands (whether from MRtrix3 or other software packages). Many frequently-used commands provided with MRtrix3 already make use of this API.
It is additionally possible for stand-alone processing scripts to make use of this API, which then inherit the various benefits provided by the API:
Integrated command-line parsing capability, with an interface identical to MRtrix3 commands;
Command-line terminal output that is consistent with other MRtrix3 Python scripts, with multiple available levels of terminal verbosity;
Self-generation of inline paginated help page, as well as Markdown and ReStructured Text documentation;
Integrated management of scratch directory for intermediate data processing;
Compatibility with both Python2 and Python3;
Various convenience functions that have been accumulated over time due to their utility in tasks regularly encountered in the development of such processing scripts; e.g. wrapping functionalities of other software packages, robust parsing of user inputs, provenance management.
Note: This library does not involve the direct manipulation of image data within Python itself; it is purely dedicated to the automation of processing tasks that can be built from a sequence of existing commands.
If there were sufficient interest, I could perform an ad hoc session demonstrating the basic usage of this API, as well as provide support to anybody intending to develop tools using this API during the hackathon.
Some requisite experience with Python is necessary; an attendee without such would likely be unable to recognise the distinction between general Python capabilities and the capabilities of this specific API. Beyond that, some familiarity with MRtrix3 would be highly recommended, as knowledge of the appropriate underlying commands for basic image manipulation operations means that time can be focused on the development of higher-level functionalities.
Processing pipeline projects that are implemented in "raw" Python (i.e. without use of an established API) will tend to run into the very same implementation hurdles that justified the development of the MRtrix3 Python API. By providing a "stepping stone" to the use of this particular API, this TrainTrack may help to fast-track new projects, by avoiding the overhead of these myriad generic scripting challenges, and enabling more rapid commencement of work on the actual novel aspects of any particular project. Scripts developed against this API may later be distributed individually and executed by anyone with a valid MRtrix3 installation, or, if sufficiently novel / relevant / useful, could be integrated into the MRtrix3 package itself.
The preprint of the MRtrix3 manuscript provides simple example commands in both C++ and Python (see Appendix B).
The code for those Python scripts provided with MRtrix3 is open-source, and can give some indication of how the API is used.
(Note: this hyperlink directs to development-branch code, as the Python API will soon be undergoing changes as part of the upcoming "3.0_RC4" tag)
My BIDS App "MRtrix3_connectome" demonstrates how a relatively large and complex processing pipeline can be fully automated and provided to the research public using this API.
(Note: the current version of this App is built against the Python API in MRtrix3 version 3.0_RC3, which is the current public release; this will hopefully be updated to reflect the upcoming API changes prior to the hackathon)
MRtrix3 Python API files
(Note: hyperlink is for the development branch, where the most recent API updates currently reside in preparation for tag update 3.0_RC4)
MRtrix3_connectome BIDS App
Also: Online documentation for those Python scripts currently provided as part of MRtrix3. This documentation is self-generated from the source code, which is one of the benefits of use of this API.
MRtrix3 community forum, for general MRtrix3 information and discussion
My profile on the MRtrix3 community forum; I can be contacted there directly for questions that are specific to the Hackathon and may not be relevant to the MRtrix3 community more generally.
Remi Gau (ORCID)
In 2012, in his review of the methods and results reporting of more than 200 fMRI papers, Joshua Carp wrote: "Although many journals urge authors to describe their methods to a level of detail such that independent investigators can fully reproduce their efforts, the results described here suggest that few studies meet this criterion."
A few years ago, in order to improve the situation with respect to reproducibility in f/MRI research, the Committee on Best Practices in Data Analysis and Sharing (COBIDAS) of OHBM released a report to promote best practices for methods and results reporting. This was recently followed by a similar initiative for EEG and MEG.
So far these guidelines do not seem to have been widely adopted and anecdotal evidence (see that twitter poll and thread) suggests that even among people who know about the report few of them use it to write or review papers. One likely reason for this might be the unwieldy nature of the report. Anyone who has used this checklist tends to agree that it is a great resource but that it is a bit cumbersome to interpret and apply.
So the short term goal of this project is to facilitate the use of this checklist. But, if done right, this could also in the long term enhance the adoption of emerging neuroimaging standards (the Brain imaging data structure, fMRIprep, NIDM...), facilitate data sharing and pre-registration, help with peer-review...
The short term goal of this project is to make the COBIDAS report easier to use: we want to create a website with a clickable checklist that generates a json file at the end.
By turning the checklist into a website, users could more rapidly click through it and provide more of the information requested by the COBIDAS report. This would generate a small text file (a JSON file) that summarizes which option was chosen for each item of the checklist. This machine-readable file could then be used to automatically generate part of the methods section of an article.
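The output side of this is deliberately simple. A toy sketch of what the website would emit and re-read is below; the section and item names are placeholders, not actual COBIDAS item labels:

```python
# Toy sketch of the clickable-checklist output: answers collected by
# the website get serialized to a small machine-readable JSON file
# that can later feed a methods-section generator. Item names below
# are illustrative placeholders, not real COBIDAS items.
import json

answers = {
    "design": {"task": "event-related", "n_conditions": 3},
    "acquisition": {"field_strength_T": 3, "TR_s": 2.0},
    "preprocessing": {"software": "fMRIPrep", "smoothing_fwhm_mm": 6},
}

def save_checklist(answers, path):
    """Write the collected answers as pretty-printed JSON."""
    with open(path, "w") as f:
        json.dump(answers, f, indent=2)
    return path

def load_checklist(path):
    """Read a previously saved checklist back into a dict."""
    with open(path) as f:
        return json.load(f)
```

Because the file is plain JSON, the same record could later be cross-referenced against BIDS metadata or NIDM provenance without any extra tooling.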
Other potential goals (e.g. interaction with BIDS and NIDM, further integration with major neuroimaging software packages...) and potential applications (improving data sharing and peer review) of this project are described in this repository.
One or more of those:
Discuss conceptual and structural details of the COBIDAS-json file.
Create a template of the COBIDAS-json file
Create a proof of concept website that can:
Jeanette Mumford has a 30 min video on her youtube channel explaining the background behind the COBIDAS report and giving a run through of the checklist.
The COBIDAS report:
A spreadsheet version of the COBIDAS checklist (thanks to Cass!!!)
The secret lives of experiments: methods reporting in the fMRI literature
A manifesto for reproducible science
The github repository of this project can be found here
Come and join us on the cobidas_checklist channel on the Brainhack Mattermost.
ReproIn - the ReproNim image input management system
ReproNim / OHBM TrainTrack Untutorial option. ReproIn (https://github.com/ReproNim/reproin) provides a turnkey flexible setup for automatic generation of shareable, version-controlled BIDS datasets from MR scanners. This tutorial and hands-on demo session will start to get you up to speed with this technology.
A desire to get your MR data from the scanner into BIDS (and DataLad).
You can find ReproIn at GitHub: https://github.com/ReproNim/reproin
Although many researchers code in Python and share their code, it is still somewhat niche to properly test an implementation and package it for easy distribution via pip. That is partly due to a lack of understanding of how testing and packaging work, and of how easy they are once set up. I want to demystify them with a live coding task.
Basic Python skills would help, but anyone with reasonable programming skills will benefit from it.
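A minimal sketch of the workflow this tutorial wants to demystify: write a function, test it with plain assertions, and you are most of the way to a pip-installable package. The function and test names here are made up for illustration:

```python
# A tiny function that might live in a package module (name is hypothetical).
def slugify(title: str) -> str:
    """Turn an arbitrary title into a lowercase, dash-separated slug."""
    return "-".join(title.lower().split())

# Tests are just assertions; pytest would auto-collect functions named test_*.
def test_slugify():
    assert slugify("Brain Hack 2019") == "brain-hack-2019"
    assert slugify("  extra   spaces ") == "extra-spaces"

test_slugify()
print("all tests passed")
```

From there, packaging for pip needs only a `setup.py` or `pyproject.toml` naming the package, after which `pip install .` works locally and `twine upload` publishes to PyPI.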
To be added
To be done
I am not yet 100% sure I can make it to OHBM!
Please help us to add links to Brainhack Global 2017 resources to our Tutorial_Resources.md
It would be lovely if you could add them to the thematic groups we have started (and add new groups).
Name of the tutorial (00 min).
Thank you very much for your help!
tedana is a Python package for denoising multi-echo fMRI data. One project goal is to implement a range of denoising methods (in addition to two ICA-based decision trees under current development), so that users may choose for themselves which to use. At the hackathon, we would like to discuss a decision tree created by @cjl2007 (currently implemented in MATLAB here) and to implement a version of it in Python within tedana.
Those with an interest in (and preferably experience with) multi-echo fMRI or decomposition-based denoising (e.g., AROMA) would be able to contribute at a conceptual level. Those with Python coding skills can contribute to the actual implementation of the methods.
This project will include both a discussion of denoising strategies to apply within tedana and a hacking portion in which we hope to implement one such strategy in Python within tedana. Neuroimagers and computational scientists may be able to contribute to either part of the project.
Here is a walkthrough of tedana’s pipeline.
The tedana repository with README and contributing guidelines.
Please help us to add links to Brainhack EDT 2014 resources to our Tutorial_Resources.md (Intro to git and GitHub has already been added, but the others are missing)
It would be lovely if you could add them to the thematic groups we have started (and add new groups).
Name of the tutorial (00 min).
Thank you very much for your help!
Better data structures for machine learning in neuroimaging
See more:
raamana/pyradigm#17
Python
Object oriented programming
User experience / Designers
TBA
Play with the MLDataset from pyradigm: http://pyradigm.readthedocs.io/
https://github.com/raamana/pyradigm
pyradigm channel on brainhack slack
Ilkay Isik ORCID
Functionally defining regions of interest is a common methodology in cognitive neuroscience due to the greater sensitivity and higher functional resolution it provides over group-based methods (Nieto-Castañón and Fedorenko, 2012). In this approach, a set of functional regions is defined in each individual using a localizer contrast targeting the cognitive process of interest (e.g. the fusiform face area (FFA), obtained by contrasting faces vs. objects).
However, there is no commonly accepted, automated way of delineating and selecting these ROIs. Traditionally, subject-specific ROIs are selected by examining the activation maps for the localizer contrast and manually deciding which voxels to include, using anatomical knowledge as a guide. Even expert coders may disagree, however, because of high individual variability. Furthermore, when these ROIs happen to be located close to each other, it is not straightforward to draw the border between them.
Fedorenko et al. (2010) and Julian et al. (2012) addressed these problems by proposing the following steps to automate ROI creation algorithmically:
The authors have used Matlab to accomplish these goals.
In this project, we aim to use Python tools to create a package that automatically creates functional regions of interest.
So, this project itself is not a new idea, but I believe it will be a great learning experience for me and for anyone who wants to join and contribute.
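One common ingredient of the approach above is to intersect a subject's localizer map with a group-level parcel and keep only the most activated voxels. Here is a toy numpy sketch of that step, with synthetic arrays standing in for real NIfTI volumes (all names and the 10% threshold are illustrative, not the papers' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a subject's localizer t-map and a binary group parcel,
# both on a 10x10x10 grid (stand-ins for real NIfTI volumes).
t_map = rng.normal(size=(10, 10, 10))
parcel = np.zeros((10, 10, 10), dtype=bool)
parcel[2:6, 2:6, 2:6] = True  # 64 candidate voxels

def top_percent_roi(t_map, parcel, percent=10.0):
    """Keep the top `percent` of voxels (by t-value) inside the parcel."""
    values = t_map[parcel]
    cutoff = np.percentile(values, 100.0 - percent)
    return parcel & (t_map >= cutoff)

roi = top_percent_roi(t_map, parcel, percent=10.0)
print(roi.sum(), "voxels selected out of", parcel.sum())
```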
You can contribute to this project by helping with
Fedorenko, E., Hsieh, P.-J., Nieto-Castañón, A., Whitfield-Gabrieli, S., & Kanwisher, N. (2010). New Method for fMRI Investigations of Language: Defining ROIs Functionally in Individual Subjects. Journal of Neurophysiology, 104(2), 1177–1194. http://doi.org/10.1152/jn.00032.2010
Julian, J. B., Fedorenko, E., Webster, J., & Kanwisher, N. (2012). An algorithmic method for functionally defining regions of interest in the ventral visual pathway. NeuroImage, 60(4), 2357–2364. https://doi.org/10.1016/j.neuroimage.2012.02.055
Nieto-Castañón, A., & Fedorenko, E. (2012). Subject-specific functional localizers increase sensitivity and functional resolution of multi-subject analyses. NeuroImage, 63(3), 1646–1669. https://doi.org/10.1016/j.neuroimage.2012.06.065
Stephan Heunis
Responsible sharing of data and code that underlie the results of a scientific study is an important step towards improving research transparency, fostering inclusivity and building public trust in science. In health sciences, and neuroimaging research in particular, an important factor when sharing data is privacy of personal or sensitive data. Ethical review boards at research institutions are responsible for reviewing a study protocol and deciding whether it can continue based on its adherence to the relevant ethical and research integrity principles, which typically include regulations on personal data privacy. In the European Union, such data privacy requirements are subject to the General Data Protection Regulation (GDPR) as implemented by its member countries.
Despite the increased importance that funders and institutions are starting to place on open science practices, no clear, thorough and openly available guides exist for publicly sharing neuroimaging data under GDPR. One resource, Open Brain Consent, has rendered an important service by making template consent forms available in multiple languages with the aim of allowing "collected imaging data to be shared as openly as possible while providing adequate guarantees for subjects’ privacy". However, some aspects related to GDPR are lacking, e.g. more detailed information on the process of acquiring, processing and anonymising data; specifications on data processing and protection roles; and a detailed data privacy statement.
The overall goal of this OHBM hackathon project is to extend the content of Open Brain consent with GDPR-related templates and thorough real-world examples. Ideally, this additional information would serve as a step-by-step guide for researchers during the process of obtaining ethical approval for an EU-based study, specifically where the aim is to share neuroimaging data publicly. Some progress has been made previously, see issue 24 on the Open Brain Consent github page. Our goal is to extend this with (among others):
Anyone with experience in one or more of the following aspects could contribute:
Additionally, people with the following skills/attributes could also contribute, irrespective of previous experience:
We have started a google doc with links to background reading material, useful resources and preliminary notes. We will likely use this google doc throughout the hackathon. Please feel free to add your comments and content to this document.
This is the Github Repo of the existing Open Brain Consent website, with an explanatory ReadMe.
If you want to contribute to this project, please feel free to join the Brainhack Mattermost community server and join our existing communication channel "open_brain_gdpr" or find me (Stephan Heunis / jsheunis) with a direct message. During the hackathon we will keep a video call open continuously for remote participants. You can access this video call at any time via hangouts.
2to3: Porting your package from python 2 to 3
As Python 2 reaches end of life, the transition to Python 3 is imminent and important. This tutorial will orient those who need to migrate, with the necessary guidance and discussion of the issues involved.
I have yet to define the full scope of this (and whether I have time to do it myself). I will update this soon with the details.
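To give a flavour of what the migration involves, here are a few of the changes the `2to3` tool makes automatically, shown as runnable Python 3 code with the Python 2 forms in comments (the usual invocation is `2to3 -w mypackage/`):

```python
# Python 2: print "hello"        -> Python 3: print is a function
print("hello")

# Python 2: 3 / 2 == 1 (floor)   -> Python 3: / is true division
assert 3 / 2 == 1.5
assert 3 // 2 == 1  # explicit floor division works in both versions

# Python 2: d.iteritems()        -> Python 3: d.items() returns a lazy view
d = {"a": 1}
assert list(d.items()) == [("a", 1)]

# Python 2: xrange(n)            -> Python 3: range(n) is already lazy
assert list(range(3)) == [0, 1, 2]

# Python 2: str vs unicode       -> Python 3: str is text, bytes is binary
assert "caf\u00e9".encode("utf-8") == b"caf\xc3\xa9"
```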
TBA
Tagged under migration here
TBA
TBA
DIPY is a large community-driven open-source software project that implements many methods in computational neuroanatomy, with an emphasis on the analysis of diffusion MRI (dMRI) data. DIPY offers a new system of command-line interfaces that eases the use of the Python API for clinicians/neuroimagers. The goal is to add new functionality and simplify command-line creation. The second project is based on FURY, a scientific visualization library born as a DIPY spin-off. The goal there is to add some widgets and a function to simplify atlas visualization.
Everybody is welcome, from Python beginner to expert! Neuroimagers and computational scientists may be able to contribute to either part of the project. More details below:
Class
(pip install dipy)
(pip install fury)
This is an example of converting an in-house animal pipeline project into, hopefully, a FEAT or C-PAC for animal studies. In neuroimaging we are generally limited to working with a small range of animals. Even if you are working with macaques, you still need to do a lot of anatomical processing that is distinct from human brains. So this year we designed a surface generation pipeline that is easily adaptable to multiple animals.
We would now like to extend this pipeline, along with other in-house pipelines we have for preprocessing fMRI and DTI data. The idea is simple: following anatomical processing (which can also be just brain extraction rather than full surface generation), you set everything up through a FEAT- or C-PAC-like GUI or command line, with one key difference: an added animal option.
This project is really open to everyone.
On the technical side, a good level of bash and Python would be great.
Additionally, if you have experience making Docker containers or GUIs to make the pipeline more user friendly, that would also be great!
On the non-technical side, I recently realized that there is currently no documentation for the actual pipeline. Additionally, if you are interested in adding brains of new animals and want to share data, you absolutely can! If you just want to add brains, you can help us add some from the Brain Catalogue using BrainBox: http://brainbox.pasteur.fr/
This really is a project for everyone. As I mentioned on the non-technical side we could potentially add quite a few new animal brains in order to generate their surfaces.
Neuroimagers can help us with the design and optimization of the current and new pipelines. Specifically:
In the surface generation pipeline
Computer Science:
Part of the problem in neuroimaging pipelines is that they're not always intuitive to install or use.
I'm just here to learn:
Come with an open mind and if you can some open data!
The precon_all repo: https://github.com/recoveringyank/precon_all
Here's our mattermost link! https://mattermost.brainhack.org/brainhack/channels/precon_all
The Alzheimer’s Disease Neuroimaging Initiative (ADNI) is a longitudinal natural history study. It is a large multicenter study designed to identify clinical, MRI, genetic, and biochemical markers for the early detection and tracking of Alzheimer's disease (AD). In particular, identifying biomarkers sensitive to mild cognitive impairment (MCI) is important to better categorize the transitional stages between normal aging and AD, and to evaluate targeted treatments.
Data from ADNI is publicly available. The third phase of ADNI (ADNI-3) began in late 2016, with subject imaging beginning in mid-2017. ADNI-3 includes an advanced multi-shell diffusion MRI acquisition, besides the basic single-shell acquisition [1] (see Figure 1). Multi-shell dMRI allows for the reconstruction of diffusion models beyond Diffusion Tensor Imaging (DTI).
ADNI-3 Advanced multi-shell protocol:
Figure 1. Comparison of “basic” and “advanced” diffusion MRI protocols in ADNI-3. Taken from Reid et al. 2017 [1].
In multi-shell data, multi-compartment models can be used to delineate the signal contributions of different tissue compartments, which in turn tell us something about the tissue’s microstructural composition. Conveniently, Dmipy is an open source tool designed to modularly generate and fit any state-of-the-art multi-compartment diffusion model on the fly. Here, we aim to fit all applicable multi-shell models for the ADNI-3 advanced diffusion protocol with Dmipy and benchmark which model is best suited as an imaging biomarker to track the progression of Alzheimer's disease in the elderly.
Multi-compartment models that are relevant for multi-shell microstructure exploration are: Ball and Stick [2], NODDI-Watson [3], NODDI-Bingham [4], Multi-compartment microscopic diffusion imaging (MC-MDI) [5] and Multi-Tissue CSD [6]. Aside from parametric models, we also evaluate if signal-based markers from signal models such as MAP-MRI [7] can be valuable markers for tracking AD (RTOP, RTAP, RTPP, MSD, NG).
The aim of this project is to determine the best diffusion model (if any) to measure the intra-cellular, extracellular volume fractions, and the dispersion of fibers, whose change should correlate with the pathological progression of AD.
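To illustrate the multi-compartment idea without pulling in Dmipy itself, here is a toy two-compartment fit in plain numpy. The diffusivities, b-values and the "ball and stick"-like parameterization are assumptions chosen so the volume fraction is linear and solvable in closed form; a real analysis would use Dmipy's model classes instead:

```python
import numpy as np

# Toy two-compartment ("ball and stick"-like) model with fixed, assumed
# per-compartment diffusivities, so the volume fraction f is a linear
# parameter. Real analyses would use Dmipy's MultiCompartmentModel.
b = np.array([0.0, 1000.0, 2000.0, 3000.0])  # b-values in s/mm^2
d_intra, d_extra = 0.7e-3, 2.5e-3            # diffusivities in mm^2/s

A = np.exp(-b * d_intra)   # intra-cellular signal basis
B = np.exp(-b * d_extra)   # extra-cellular signal basis

# Synthetic "measured" signal with a true volume fraction of 0.6.
f_true = 0.6
signal = f_true * A + (1 - f_true) * B

# Least-squares estimate from S - B = f * (A - B).
f_hat = np.dot(signal - B, A - B) / np.dot(A - B, A - B)
print(round(f_hat, 3))
```

With noiseless synthetic data the estimate recovers the true fraction; with real data, the nonlinear parameters (diffusivities, dispersion) make optimization far harder, which is exactly what Dmipy handles.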
For each dMRI measure, we will run a logistic regression with TV-L1 regularization (Nilearn package) across voxels to classify individuals with mild cognitive impairment (MCI; N=17; mean age: 76.8±7.5 yrs; 14M/3F) from those who are cognitively normal (CN; N=39; mean age: 73.2±7.2 yrs; 25M/14F) to identify which dMRI measure gives the highest classification accuracy. Among dMRI measures yielding >80% accuracy we will compare the Jaccard/Dice similarity coefficient from the resulting maps of classifying regions to identify which dMRI measures give similar information in similar regions and which offer additional information about underlying pathological changes.
We may use different classification labels between groups, based on commonly used screening tools for detecting dementia and AD, such as the Alzheimer’s Disease Assessment Scale (ADAS-Cog 13), the Mini-Mental State Examination (MMSE), and the Clinical Dementia Rating scale sum-of-boxes (CDR-SB), as well as amyloid PET scores or cerebrospinal fluid (CSF) markers.
We welcome any curious brainhacker who is interested in improving the understanding of Alzheimer's disease and/or wants to see how simple it can be to study tissue microstructure with Python.
The goal is to track the changes of tissue microstructure in AD. Ideally, we will find a microstructural biomarker that lets us anticipate the classical symptoms of AD, giving us the possibility to set up the corresponding therapy in advance. We will be analyzing many different models for each subject; this will raise problems related to dimensionality reduction and feature selection.
Your collaboration will be precious in:
You can have a look at the ADNI website to learn more about the data we are processing. For information about fitting tissue microstructure models, you can look at the Dmipy website.
This issue will be kept as the reference discussion channel. Questions can also be addressed directly to @villalonreina (ADNI) and @rutgerfick (Dmipy).
Understanding the maturation of the human brain from a smooth surface to its highly convoluted state at birth is an essential quest in the field of neuroscience. In the last decade the development of fast MR imaging protocols and advanced image processing methods has enabled imaging of the fetal brain at unprecedented detail. However, data availability is very limited due to comparatively rare examinations, small study sizes and high population variability.
In the spirit of open and repeatable research, we present the preliminary release of a dataset of 33 pre-processed MRI acquisitions of healthy fetal brains of 26 individuals imaged between GW 20 and GW 36. Furthermore, we provide cortical surface models of human fetal cerebral hemispheres, consisting of densely sampled surface triangulations that are matched between hemispheres and across time, to serve as a standardized reference frame for surface-based analysis of cerebral development in utero.
During the hackathon, I'd welcome anyone interested to get in touch and bounce around ideas on how to get the most out of this data.
Since this is a very open project, people with all types of skills can contribute, but experience with visualization and perhaps computational geometry might come in handy.
Brainstorming on how to visualize and interpret the growth of the fetal brain in utero and what methods to apply for fun and profit.
Unfortunately, I cannot (yet) put the data online - people interested in working on it will have to provide their names and contact email and I will provide a download link.
https://mattermost.brainhack.org/brainhack/channels/ohbm19_hackaton_fetal
The hMRI toolbox allows you to generate quantitative MRI data from a series of "raw" multi-echo structural images and field maps, i.e. the Multi-Parametric Mapping (MPM) protocol. So far, the toolbox is not BIDS compliant but it would clearly help everyone if it did...
Anyone with some experience in Matlab, quantitative MRI, SPM-extension toolbox development or the will to learn these skills.
The hMRI project has been supported by a few labs already and used by a few more. Harmonizing the way the sequence parameters are saved and accessed would help data management, QA, and sharing.
One BIDS Extension Proposal (BEP001) focuses on standardizing such structural acquisitions that include multiple contrasts (multi-echo, flip angle, inversion time). This effort thus aims to integrate the hMRI toolbox with BEP001. The latter is still in development, so adjustments are still possible if needed.
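As a rough sketch of what a BIDS-compliant MPM volume might carry, here is a hypothetical JSON sidecar using entity and field names drawn from the qMRI additions proposed in BEP001 (e.g. a file like `sub-01_acq-MPM_echo-1_flip-1_mt-off_MPM.nii.gz`). The exact names and values are illustrative and may change as BEP001 evolves:

```json
{
  "EchoTime": 0.0023,
  "FlipAngle": 6,
  "MTState": false,
  "RepetitionTimeExcitation": 0.025,
  "MagneticFieldStrength": 3
}
```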
Intermediate steps:
The public distribution of hMRI toolbox code is available here but I'll make the latest private version available for the development.
Example data are available here, specifically the "800µm 64 channel protocol" data set.
Currently, large scale imaging studies are becoming increasingly popular within the Neuroimaging community. As datasets grow larger and larger, however, performing standard GLM analysis is becoming increasingly challenging. Heavy demands are placed on memory usage and computation time, and variability in masks from each subject can cause severe erosion of the analysis mask unless the model allows for missing data.
To address these issues we recently created BLM, a tool for computing "Big" Linear Models in a parallel (cluster) setting, implemented in Python. However, this project is still in its early days and there are many features we would like to add to it.
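The core trick behind running a GLM on data too big for memory is that the OLS solution only needs the accumulated sufficient statistics X'X and X'Y, which can be summed over chunks (e.g. per-node cluster jobs). A toy numpy sketch of that idea, with made-up names rather than BLM's actual API:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions: rows per chunk, number of chunks, regressors, "voxels".
n_per_chunk, n_chunks, p, v = 100, 5, 3, 4

beta_true = rng.normal(size=(p, v))
XtX = np.zeros((p, p))
XtY = np.zeros((p, v))

for _ in range(n_chunks):
    # Each chunk is generated (or loaded) independently; the full data
    # never needs to sit in memory at once.
    X = rng.normal(size=(n_per_chunk, p))
    Y = X @ beta_true + 0.01 * rng.normal(size=(n_per_chunk, v))
    XtX += X.T @ X   # accumulate sufficient statistics
    XtY += X.T @ Y

# Identical to OLS on the fully stacked design and data.
beta_hat = np.linalg.solve(XtX, XtY)
print(np.max(np.abs(beta_hat - beta_true)))
```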
The ideal prerequisites for this project would be familiarity with Python, computer clusters and linear models. However, anyone who wants to give BLM a try or make suggestions is welcome to join!
Predominantly, we are looking for computational scientists and statisticians as much of what needs to be done is code-based. However, anyone and everyone is welcome to join and try running BLM and let us know how they get on. If any neuroimagers or psychologists have any suggestions for features they would find useful and would like to discuss implementing as well please feel free to come talk to us!
Our intermediate goal is to complete at least 2-3 of the items we listed in the project description section.
In terms of preparation, the best thing to do would be to read the readme.md file in the BLM repository and try out BLM for yourself!
The GitHub repository can be found here.
I have set up a Mattermost channel named "BLM" on the Hackathon Mattermost.
The high anatomic specificity of MRI can depict focal lesions, which can be expertly assessed by neuroradiologists through visual analysis (Von Oertzen et al. 2002). Still, it is important to find ways to improve the diagnostic yield of MRI through optimized MRI protocols, expert neuroradiological assessment, and quantitative analysis of post-processed volumetric MRI (Sisodiya et al. 1995, Huppertz et al. 2005).
This project focuses on quantitative analysis to improve detection of focal cortical dysplasia (FCD), a common and often epileptogenic lesion associated with medically refractory epilepsy. FCD is a type of cortical malformation that is neuroradiologically characterized by cortical thickening, GM/WM blurring and the transmantle sign, an abnormal extension of GM towards the ventricles (Barkovich et al. 1997, Huppertz et al. 2005). FCD is the most common lesion in children and the third most common in adult patients, after hippocampal sclerosis (HS) and tumors.
Within our study, a dedicated epilepsy MRI research protocol including isotropic 3D T1-weighted and FLAIR was performed on patients with medically refractory focal epilepsy, who were deemed to be non-lesional based on previous MRI. The most recent MRIs conducted in context of this study allowed (i) a clinical diagnostic assessment by an experienced neuroradiologist and (ii) the application of an automated quantitative voxel-based lesion detection technique on patients' MRIs in order to find potentially epileptogenic lesions such as FCDs.
I have used MATLAB to program an automatic cortical lesion finder tool and would like to translate it into Python together with you!
Experience in Python (and possibly MATLAB, not a requirement)
Creativity for incorporating SPM12, nipype and nilearn (for voxel-based morphometry)
As of now, we only have a limited number of cortical lesions. The idea is to make this project available to clinicians as collaborators, incorporate their feedback and improve the detection rate and usability of the software.
(i) Design a user-friendly and low-level Graphical User Interface;
(ii) read in MRI data (NIfTI or, preferably, DICOM);
(iii) translate MATLAB/SPM12 algorithms using GitHub packages
Carpet plot
Carpet plots are amazing tools for "unrolling" a 4D dataset such as an fMRI scan, making visualization really easy, especially for detecting anomalies for QC purposes. Their full potential has not yet been realized, owing to a lack of good tools and a lack of application to new and interesting aspects/modalities (such as DWI, where the 4th dimension is gradient rather than time as in fMRI). An attempt has been made to provide a self-contained Carpet class in mrivis with a generic yet convenient interface to realize this full potential; however, more work needs to be done to implement some features and smooth out existing ones.
Look at : raamana/mrivis#13
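The core computation behind a carpet plot is simple enough to sketch in numpy: flatten the 4D image to a voxels-by-time matrix and z-score each voxel's time series, so global artefacts show up as vertical stripes. This is a generic illustration with synthetic data, not mrivis's actual implementation (plotting, e.g. `plt.imshow(carpet)`, is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for a 4D fMRI image (x, y, z, t) and a brain mask.
img4d = rng.normal(size=(8, 8, 4, 50))
mask = rng.random((8, 8, 4)) > 0.5

# Flatten to (n_voxels, n_timepoints) and z-score each row.
carpet = img4d[mask]
carpet = (carpet - carpet.mean(axis=1, keepdims=True)) / carpet.std(
    axis=1, keepdims=True
)
print(carpet.shape)
```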
Neuroimaging (basic)
Python (intermediate and basic)
Data viz (basic and advanced)
This project can be a wonderful collaboration between neuroimagers, CS and artists.
Take a look at the docs and repo:
https://raamana.github.io/mrivis/readme.html
This is a demo notebook for the different vis. classes in mrivis:
https://nbviewer.jupyter.org/github/raamana/mrivis/blob/master/docs/example_notebooks/mrivis_demo_vis_classes.ipynb#Carpet
https://github.com/raamana/mrivis
mrivis channel on brainhack mattermost
Adding the ability to regress out covariates in neuropredict
This needs to happen within the inner CV loop
Provide different popular options for handling covariates
See raamana/neuropredict#7 for more details
Statistics
Python
Some Machine learning doesn’t hurt, but not required
To be expanded: I’d collaborate with neuroimagers, clinicians, and psychologists to identify the challenges they face, define the problem better, and offer a viable solution after interactive consultation.
TBA
Pyff is a Python module that can be combined with PsychoPy to perform neurofeedback experiments. Pyff can load and run stimulus paradigms and communicate via TCP/IP with other computers to update stimuli in real time. To do so, it starts up a separate process with a main thread (since all screen-refresh/3D/graphical work needs to happen in a main thread) and a separate thread that monitors incoming network traffic.
This separate thread relies heavily on asynchat/asyncore to prevent it from killing itself if something goes wrong with the network traffic part (which it usually does). Asynchat is a style of asynchronous programming in which the interpreter can continue with other code while a line dealing with network traffic is waiting. Asynchronous programming has undergone many iterations, and one of the major changes is that Python 3 now provides the asyncio module with async/await syntax, with the old asynchat/asyncore modules deprecated in its favor.
The documentation is, however, quite poor. The work I propose is to see whether pyff's Python 2 asynchat/asyncore code can be translated into equivalent async/await code, and furthermore to more fully port pyff to Python 3.
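For orientation, here is a minimal sketch of the async/await style that replaces the asynchat handler approach: a server-side handler and a client talking over TCP via asyncio streams. This is illustrative only, not pyff's actual code, and the payload string is made up:

```python
import asyncio

async def handle(reader, writer):
    data = await reader.read(1024)   # yields to the event loop while waiting
    writer.write(data)               # echo the message back
    await writer.drain()
    writer.close()

async def main():
    # Port 0 lets the OS pick a free port.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()

    # A client, in the same loop for demonstration purposes.
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b"feedback-value:0.7")
    await writer.drain()
    reply = await reader.read(1024)
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
print(reply)
```

Nothing here blocks a graphics main thread: every `await` hands control back to the loop, which is the property the old asynchat thread was providing.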
Preferably something about async programming
https://github.com/jnvandermeer/nf-stim-review
2to3 program
This would be a good match for the traintrack python2 to python3 issue raised earlier (issue #25)
Do two people who watch the same movie have similar patterns of neural activity? If we were to describe the movie to someone else, are the neural patterns when we describe the movie similar to when we watch the movie? To help understand the neural correlates of narrative comprehension and event structures in these stories, we will use Inter-Subject Correlations (ISC) (Hasson et al., 2004; Simony et al., 2016), Shared Response Modeling (SRM) (Chen et al., 2015), and Event Segmentation methods (Baldassano et al., 2017) and apply them to datasets wherein subjects watch movies or listen to stories.
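The simplest of the listed methods, leave-one-out ISC, fits in a few lines of numpy: correlate each subject's time series with the mean of everyone else's. The synthetic data below (a shared "movie-driven" signal plus subject-specific noise) is an illustrative stand-in for real fMRI time series:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data for one voxel/region: a shared stimulus-driven signal plus
# independent subject noise.
n_subjects, n_timepoints = 10, 200
shared = rng.normal(size=n_timepoints)
data = shared + 0.5 * rng.normal(size=(n_subjects, n_timepoints))

# Leave-one-out ISC: each subject vs. the average of all others.
isc = np.empty(n_subjects)
for s in range(n_subjects):
    others = np.delete(data, s, axis=0).mean(axis=0)
    isc[s] = np.corrcoef(data[s], others)[0, 1]

print(isc.mean().round(2))
```

BrainIAK provides vectorized, statistically rigorous versions of this (and of SRM and event segmentation), which is what the project will actually use.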
Knowledge of python and basic MVPA applied to fMRI datasets.
TBA.
Intermediate goals (milestones).
Install a working version of BrainIAK and a ready-to-use dataset.
Execute tutorials related to ISC, SRM, and Event Segmentation.
Python
ISC, SRM, and Event Segmentation Tutorials
BrainIAK
Baldassano, C., Chen, J., Zadbood, A., Pillow, J. W., Hasson, U., & Norman, K. A. (2017). Discovering Event Structure in Continuous Narrative Perception and Memory. Neuron, 95(3), 709-721.e5. https://doi.org/10.1016/j.neuron.2017.06.041
Chen, P.-H. (Cameron), Chen, J., Yeshurun, Y., Hasson, U., Haxby, J., & Ramadge, P. J. (2015). A Reduced-Dimension fMRI Shared Response Model. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 28 (pp. 460–468). Curran Associates, Inc. Retrieved from http://papers.nips.cc/paper/5855-a-reduced-dimension-fmri-shared-response-model.pdf
Hasson, U., Nir, Y., Levy, I., Fuhrmann, G., & Malach, R. (2004). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, 303(5664), 1634–1640. https://doi.org/10.1126/science.1089506
Simony, E., Honey, C. J., Chen, J., Lositsky, O., Yeshurun, Y., Wiesel, A., & Hasson, U. (2016). Dynamic Reconfiguration of the Default Mode Network During Narrative Comprehension. Nature Communications, 7, 12141. https://doi.org/10.1038/ncomms12141
BrainIAK
Tutorials: We have released a set of educational materials for public use.
https://mattermost.brainhack.org/brainhack/channels/brainiak
TrainTrack: DataLad - Everything you ever wanted to know, but were afraid to ask...
ReproNim / OHBM TrainTrack Untutorial option. DataLad (https://www.datalad.org/) builds on top of git-annex and extends it with an intuitive command-line interface. It enables users to operate on data using familiar concepts, such as files and directories, while transparently managing data access and authorization with underlying hosting providers. This tutorial and hands-on demo session will start to get you up to speed with this technology.
A desire to better manage your data and processing.
You can find DataLad on GitHub: https://github.com/datalad
and on the web at: https://www.datalad.org/
Documentation at: http://docs.datalad.org/en/latest/
C-PAC - fMRI Preprocessing
Introduction to C-PAC, the Configurable Pipeline for the Analysis of Connectomes.
Basic shell experience. BIDS is a plus!
fMRI preprocessing made easy - C-PAC's goal is to provide an accessible interface for a customizable preprocessing pipeline without requiring programming skills. Some parameters can encompass a list of choices, leaving C-PAC to preprocess your data with each combination of parameters (e.g. global signal regression on and off).
Install Docker: https://docs.docker.com/install/
Download the latest C-PAC version: docker pull fcpindi/c-pac
Download a raw BIDS dataset locally.
C-PAC documentation: https://fcp-indi.github.com
https://mattermost.brainhack.org/brainhack/channels/cpac
If you face a problem or have questions, you can open an issue on Github and we can help you asap: https://github.com/FCP-INDI/C-PAC/issues
Hi! Thanks for putting together this great list of resources.
Have you already considered adding links to videos from previous Brainhacks as listed at: http://www.brainhack.org/lectures.html?