ohbm / hackathon2019


Home Page: https://ohbm.github.io/hackathon2019


hackathon2019's Introduction


hackathon2019

Website and projects for the OHBM Hackathon in Rome 2019

Gitter chat | Join Brainhack's Mattermost

Brainhacks have been part of our way of doing science in a connected world since we took part in the first ever Brainhack. By creating a friendly and inclusive environment over three days of intense collaboration, they provide the means to create and strengthen working relationships that can be pursued online for the rest of the year.

Brainhacks are an opportunity to try new ideas, to discover new tools, to adopt open science best practices and actively push their boundaries, and to meet people from all disciplines who are inventing tomorrow’s brain mapping.

We aim to provide a space where we can all thrive, whether it is our first hackathon or we have never missed one, sharing the best of the support we have received from the open science community over the years.

Open science is essential for being able to work together effectively. We want to help build a community where openness is the norm and silos the exception.

List of projects

You can find a living document of the Hackathon projects in this Google Sheet.

It contains information on:

  • Project name
  • Issue in OHBM Hackathon GitHub repository (link)
  • Other project links
  • Team leader (point person to tag if you have a question!)
  • Names of team members
  • Mattermost channel name (https://mattermost.brainhack.org)
  • Other online contact
  • Physical location (Thurs PM)
  • Physical location (Fri AM)
  • Physical location (Fri PM)
  • Physical location (Sat AM)

Please make sure your group is represented there to make it easy for others to find you and join in!

Venues

There are four physical locations for the largest OHBM Hackathon yet!

  • Mercato Centrale, Via Giovanni Giolitti, 36, 00185 Roma RM
  • Sapienza, Piazzale Aldo Moro, 5, 00185 Roma RM
  • Thursday only: Una Hotels Deco Roma, Via Giovanni Amendola, 57, 00185 Roma RM
  • Friday and Saturday only: Palazzo Montemartini Rome, Largo Giovanni Montemartini, 00185 Roma RM

Code of conduct

The following text is taken from the OHBM Code of Conduct, available in full at https://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=3846:

OHBM stands against discrimination in all forms and at every organizational level. Discrimination based on, but not limited to geographic location, gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, accent, race, ethnicity, age or religion does not abide by OHBM’s values. We do not tolerate discrimination or harassment of conference participants and organizers.

All hackathon participants are expected to follow the OHBM Code of Conduct at all times during the OHBM Hackathon.

Reporting and enforcement information is detailed at the Code of Conduct page. You may complete an online report (anonymous or not, just add your contact details if you'd like the executive team to get in touch) or contact the OHBM executive team at +1 612-749-1154.

hackathon2019's People

Contributors

avakiai, complexbrains, crocodoyle, danjgale, dnkennedy, jdkent, katjaq, kirstiejane, lestropie, manojneuro, martinagvilas, pbellec, r03ert0, raamana, remi-gau, rutgerfick, timvanmourik, vartikaj


hackathon2019's Issues

Automated Cortical Lesion Detection using Python Tools

Cortical Lesion Finder

Barbara A.K. Kreilkamp

Project Description

The high anatomical specificity of MRI can depict focal lesions, which can be expertly assessed by visual analysis through neuroradiologists (Von Oertzen et al. 2002). Still, it is important to find ways to improve the diagnostic yield of MRI through optimized MRI protocols, expert neuroradiological assessment and quantitative analysis of post-processed volumetric MRI (Sisodiya et al. 1995, Huppertz et al. 2005).

This project focuses on quantitative analysis to improve detection of focal cortical dysplasia (FCD), a common lesion associated with medically refractory epilepsy that is often epileptogenic. FCD is a type of cortical malformation that is neuroradiologically characterized by cortical thickening, GM/WM blurring and transmantle signs, which are abnormal extensions of GM towards the ventricles (Barkovich et al. 1997, Huppertz et al. 2005). FCD is the most common lesion in children and the third most common lesion, after hippocampal sclerosis (HS) and tumors, in adult patients.

Within our study, a dedicated epilepsy MRI research protocol including isotropic 3D T1-weighted and FLAIR images was performed on patients with medically refractory focal epilepsy who were deemed non-lesional based on previous MRI. The most recent MRIs conducted in the context of this study allowed (i) a clinical diagnostic assessment by an experienced neuroradiologist and (ii) the application of an automated quantitative voxel-based lesion detection technique on patients' MRIs in order to find potentially epileptogenic lesions such as FCDs.

I have used MATLAB to program an automatic cortical lesion finder tool and would like to translate it into Python together with you!


Skills required to participate

Experience in Python (and possibly MATLAB, not a requirement)
Creativity for incorporating SPM12, nipype and nilearn (for voxel-based morphometry)

Integration

As of now, we only have a limited number of cortical lesions. The idea is to make this project available to clinicians as collaborators, incorporate their feedback and improve the detection rate and usability of the software.

Milestones:

(i) Design a user-friendly, lightweight graphical user interface;
(ii) read in MRI data (NIfTI or, preferably, DICOM);
(iii) translate the MATLAB/SPM12 algorithms into Python using existing GitHub packages (see the sketch below).
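As a hedged starting point for milestone (iii): the SPM12 segmentation step that underlies voxel-based morphometry can already be driven from Python via nipype. The sketch below assumes a local SPM12/MATLAB installation and a hypothetical t1.nii input; it is an illustration, not the project's actual code.

```python
# Sketch: calling SPM12's unified segmentation from Python via nipype,
# one building block of a voxel-based lesion-detection pipeline.
from nipype.interfaces import spm

seg = spm.NewSegment()
seg.inputs.channel_files = "t1.nii"   # hypothetical T1-weighted input
result = seg.run()                    # produces GM/WM/CSF tissue maps

# The resulting tissue-probability maps could then be compared voxel-wise
# against a normative sample (e.g. with nilearn) to flag FCD-like deviations.
```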

Preparation material

Join Mattermost channel

@cortical_lesion_finder

GitHub Repository

Cortical Lesion Finder

More References

  • Von Oertzen, T.J., Urbach, H., Jungbluth, S., Kurthen, M., Reuber, M., Fernandez, G. et al. (2002). Standard magnetic resonance imaging is inadequate for patients with refractory focal epilepsy. Journal of Neurology, Neurosurgery, and Psychiatry, 73(6), 643-647. http://doi.org/10.1136/jnnp.73.6.643
  • Sisodiya, S.M., Free, S.L., Fish, D.R., Shorvon, S.D. (1995) Increasing the yield from volumetric MRI in patients with epilepsy. Magnetic Resonance Imaging 13:1147–1152.
  • Barkovich, A.J., Kuzniecky, R.I., Bollen, A.W., Grant, P.E. (1997) Focal transmantle dysplasia: a specific malformation of cortical development. Neurology 49, 1148–1152.

nii-masker: A command-line wrapper for nilearn's Masker tools

nii-masker

Project Description

niimasker is a command-line wrapper for nilearn's Masker objects, which let you easily extract time series from your functional data (and give you a number of options for post-processing during extraction). I'm in a lab with a number of non-Python users who would benefit greatly from this ability, and this was a spur-of-the-moment idea I had a couple of weeks ago when discussing my fMRI pipeline with my colleagues (we're trying to get a more standardized workflow going – fmriprep, etc.). Because niimasker is run via the command line, pretty much anyone with some bash knowledge can use it (or at least that's what I'm working towards).

I developed much of this last week in a "mini-sprint" (i.e. a colleague needed data "yesterday"). While its core functionality is working, there's lots to be done. I've included a number of issues in the repo already: https://github.com/danjgale/nii-masker/issues. So, there are some exciting features to add (e.g., a visual report à la fmriprep) as well as some testing/CI to set up. These outline some of the things I'd like to accomplish at the hackathon.
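For anyone new to the underlying library, the core nilearn operation that niimasker wraps looks roughly like this (a minimal sketch with hypothetical file names, not niimasker's own code):

```python
# Extract one time series per atlas region from a 4D functional image.
from nilearn.input_data import NiftiLabelsMasker

masker = NiftiLabelsMasker(
    labels_img="atlas.nii.gz",   # parcellation defining the regions
    standardize=True,            # z-score each extracted time series
    detrend=True,                # remove linear trends
)
timeseries = masker.fit_transform("func.nii.gz")  # (n_volumes, n_regions)
```

niimasker's job is to expose options like these through a command-line interface so the same extraction is available to non-Python users.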

Skills required to participate

  • Any experience with Python and nilearn
  • Any HTML knowledge would be helpful for development of the visual report!

Integration

The goal is to create a totally intuitive tool for anyone, so all contributions from all backgrounds and perspectives are encouraged. Non-expert/technical users can contribute by providing feedback and design ideas to make niimasker more approachable and user-friendly.

Preparation material

Link to your GitHub repo

https://github.com/danjgale/nii-masker/

Communication

I can set up a channel on the Brainhack Mattermost/Slack if this gains interest. I would also like to keep a lot of the conversation "in the open", directly in GitHub issues.

BIDS-ifying the hMRI toolbox

BIDS-ifying the hMRI toolbox

Project Description

The hMRI toolbox allows you to generate quantitative MRI data from a series of "raw" multi-echo structural images and field maps, i.e. the Multi-Parametric Mapping (MPM) protocol. So far, the toolbox is not BIDS compliant, but it would clearly help everyone if it were...


Skills required to participate

Anyone with some experience in Matlab, quantitative MRI, SPM-extension toolbox development or the will to learn these skills.

Integration

The hMRI project has been supported by a few labs already and used by a few more. Harmonizing the way the sequence parameters are saved and accessed would help data management, QA, and sharing.

One BIDS Extension Proposal (BEP001) focuses on standardizing such structural acquisitions that include multiple contrasts (multi-echo, flip angle, inversion time). This effort thus aims at integrating the hMRI toolbox within BEP001. The latter is still in development, so adjustments are still possible if needed.

Intermediate steps:

  • BIDS-ify the provided example data
  • update the toolbox to read its parameters from the BIDS form (sketched below)
  • extend the DICOM-to-NIfTI conversion according to BIDS
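As a minimal illustration of the second step: BIDS stores acquisition parameters such as echo time and flip angle in JSON sidecars next to each image, so the toolbox could read them from there rather than from DICOM headers. The filename below follows a hypothetical BEP001-style naming and is only an example:

```python
# Hedged sketch: reading standard BIDS metadata fields from a JSON sidecar.
import json

with open("sub-01_echo-1_flip-1_MPM.json") as f:   # hypothetical filename
    meta = json.load(f)

print(meta["EchoTime"], meta["FlipAngle"])          # standard BIDS fields
```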

Preparation material

The public distribution of the hMRI toolbox code is available here, but I'll make the latest private version available for development.
Example data are available here, specifically the "800µm 64 channel protocol" data set.

Nipype + GiraffeTools, support for custom functions

Nipype + GiraffeTools, support for custom functions

Project Description

I would like to visually build a Nipype workflow. This is already possible with GiraffeTools but only with standard Nipype nodes. It would be really cool if you could include ANY of your own functions straight away: wrap them into Nipype-modules and show them to the world.

This project is largely based on this issue
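For context, Nipype already lets you wrap an arbitrary Python function as a workflow node; this is the kind of custom node the project wants GiraffeTools to render visually. A minimal sketch:

```python
# Wrap a plain Python function as a Nipype node.
from nipype import Node, Function

def double(x):
    return 2 * x

node = Node(
    Function(input_names=["x"], output_names=["out"], function=double),
    name="double",
)
node.inputs.x = 21
result = node.run()
print(result.outputs.out)  # 42
```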

Skills required to participate

  • Heard of the word 'Nipype'

Integration

This Hackathon is a particularly good moment to do this, because we can see what users and developers need in building workflows.

Preparation material

Link to your GitHub repo

https://github.com/TimVanMourik/GiraffeTools

TrainTrack: Help transitioning from Python 2 to 3

Name of your awesome project

2to3: Porting your package from python 2 to 3

Project Description

As Python 2 is reaching end of life, the need to transition to Python 3 is imminent and important. This tutorial would orient those who need to migrate, with sufficient guidance and discussion of the issues involved.
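As a taste of what the standard-library 2to3 tool automates (run as, e.g., `2to3 -w mypackage/` to rewrite files in place), here is a before/after of two common idioms:

```python
# Python 2 (before):
#   print "mean:", total / n
#   for key, value in table.iteritems():
#       print key, value
# Python 3 (after):
total, n = 7, 2
print("mean:", total / n)     # true division: 3.5, not 3
table = {"a": 1}
for key, value in table.items():
    print(key, value)
```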

Skills required to participate

  • Basic python skills
  • Patience to debug and test

Integration

I am yet to define the full scope of this (and whether I will have time to do it myself). I will update this soon with all the details.

TBA

Preparation material

Tagged under migration here

Link to your GitHub repo

TBA

Communication

TBA

TrainTrack: DataLad

Name of your awesome project

TrainTrack: DataLad - Everything you ever wanted to know, but were afraid to ask...

Project Description

ReproNim / OHBM TrainTrack Untutorial option. DataLad (https://www.datalad.org/) builds on top of git-annex and extends it with an intuitive command-line interface. It enables users to operate on data using familiar concepts, such as files and directories, while transparently managing data access and authorization with underlying hosting providers. This tutorial and hands-on demo session will start to get you up to speed with this technology.
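To give a flavour ahead of the session, here is a minimal sketch of DataLad's Python API (command-line equivalents: `datalad create` and `datalad save`); the dataset path is hypothetical:

```python
# Create a dataset, add content, and save a versioned state.
import datalad.api as dl

ds = dl.create(path="my-dataset")                      # initialize a dataset
# ... place files under my-dataset/ ...
dl.save(dataset="my-dataset", message="Add raw data")  # commit the new state
```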

Skills required to participate

A desire to better manage your data and processing.

Link to your GitHub repo

You can find DataLad on GitHub: https://github.com/datalad
and on the web at: https://www.datalad.org/

Documentation at: http://docs.datalad.org/en/latest/

TrainTrack: ReproIn - The ReproNim image input management system

Name of your awesome project

ReproIn - the ReproNim image input management system

Project Description

ReproNim / OHBM TrainTrack Untutorial option. ReproIn (https://github.com/ReproNim/reproin) provides a turnkey flexible setup for automatic generation of shareable, version-controlled BIDS datasets from MR scanners. This tutorial and hands-on demo session will start to get you up to speed with this technology.

Skills required to participate

A desire to get your MR data from the scanner into BIDS (and DataLad).

Link to your GitHub repo

You can find ReproIn at GitHub: https://github.com/ReproNim/reproin

TrainTrack: Reproducible Science. Schedule Day 2 morning.

Please create your schedule! What would you like to see? Who could provide this expertise? Would you like to offer a workshop?
It is important that the sessions be as hands-on as possible :) – the rest is left entirely to your creativity! We are looking forward to learning about your interests.


headlines for resources

Hi! This list is a great idea.

Wouldn't it be useful to divide the talks at Neurohackademy into more specific headings (e.g. terminal, containers, open-science tools, machine learning and deep learning, statistics, software development, etc.)? If you think this is a good idea, I could do it (and could also add the duration of these videos next to the links).

Multi-table PCA methods for group and individual functional connectivity

C-MARINeR

Jenny Rieck & Derek Beaton

Project Description

C-MARINeR is a focused sub-project of MARINeR: Multivariate Analysis and Resampling Inference for Neuroimaging in R. The "C" stands generally for connectivity, but specifically and statistically for covariance or correlation. The C-MARINeR project aims to develop and distribute an R package and ShinyApp. Together, R + Shiny allow for ease of use and, hopefully, simpler exploration of such complex data and quicker adoption of the techniques.

Background

CovSTATIS is the base method in C-MARINeR. CovSTATIS is effectively a multi-table PCA designed for covariance matrices. It allows multiple connectivity (correlation or, more generally, covariance) matrices to be integrated into a single analysis. CovSTATIS produces component (a.k.a. factor) maps with respect to the compromise matrix (a weighted average of all matrices), and then projects each individual matrix back onto the components.


K+1CovSTATIS is a novel extension of CovSTATIS that allows us to use a "target" or reference matrix, for example a theoretical resting-state structure (à la Yeo/Schaefer maps). K+1CovSTATIS also produces component (a.k.a. factor) maps with respect to the compromise matrix, except that the compromise is no longer a weighted average of all matrices; rather, it is a weighted average of all matrices with respect to the "target" matrix. Each of those matrices is then projected back onto the components.
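To make the core computation concrete, here is an illustrative NumPy sketch of the CovSTATIS idea: derive table weights from the similarity between matrices, build the weighted-average compromise, and factor it with an eigendecomposition. This is a simplification for intuition, not the planned R API:

```python
import numpy as np

def covstatis_sketch(mats):
    """mats: list of symmetric (n x n) covariance/correlation matrices."""
    # RV-like similarity between tables, normalized to 1 on the diagonal
    S = np.array([[np.sum(A * B) for B in mats] for A in mats])
    S = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
    w = np.linalg.eigh(S)[1][:, -1]            # leading eigenvector of S
    w = np.abs(w) / np.abs(w).sum()            # table weights, summing to 1
    compromise = sum(wk * A for wk, A in zip(w, mats))
    evals, evecs = np.linalg.eigh(compromise)
    order = np.argsort(evals)[::-1]
    scores = evecs[:, order] * np.sqrt(np.clip(evals[order], 0, None))
    return compromise, scores   # each table can be projected onto the components
```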

Quests and missions

Overview

Our primary goal is to make a small package and ShinyApp to perform the same types of analyses we use for integrating and analyzing multiple connectivity matrices (across tasks, individuals, and groups). We want to make CovSTATIS and similar methods easily accessible.

Goals & tasks are split across multiple types, including development, design, testing, etc...

Main quests (ordered)

  • CovSTATIS R package: fully functional covstatis() function
  • ShinyApp to interface with CovSTATIS package for covstatis() input and output (visualization, exploration)
  • Visualizers: both in the R package and Shiny
  • CovSTATIS R package: fully functional kplus1_covstatis() function
  • ShinyApp to interface with CovSTATIS package for kplus1_covstatis() input and output (visualization, exploration)
  • CovSTATIS R package: distance-based equivalents of covstatis() and kplus1_covstatis() (i.e., DiSTATIS, the distance-matrix version of CovSTATIS).

Hard mode

  • CovSTATIS R package: resampling estimates through bootstrapping

Side quests

  • CovSTATIS R package: Speed & memory optimization
  • CovSTATIS R package: Formal (unit) tests
  • R & General: Data (for possible inclusion and distribution in package)
  • R & General: Documentation & vignettes
  • ShinyApp: Reactive plots
  • ShinyApp: Tool tips
  • General: Logo design
  • NIfTI and other imaging formats I/O
  • Translation of project into other languages (e.g., Python, Matlab/Octave, Java, Assembly, LOLCode)

Tools

Quests: R, various R packages, git/github, RStudio, Shiny, R Markdown

Side quests: HTML, CSS, Possibly Rcpp/RcppEigen/RcppArmadillo, LaTeX, R Markdown, graphic design

Skills

For the C-MARINeR project, there are many ways to contribute across a variety of skill levels and experience across domains.

How to participate

The “main quests” require at least moderate-to-high expertise and familiarity with R, Shiny, and/or principal components analysis. These tasks are the primary focus for us and where we will spend most (or all) of our time.

The “side quests” are meant to cover tasks beyond the primary requirements but still key parts of the project. These exist across generating data, writing documentation, design (graphic, interface), optimization, tests, and extensions. Some of these require at least familiarity with R, but many others can be done without programming experience, or even in other languages (i.e., translation of the project).

If you want to participate in any of the main or side quests, or even have ideas for additional tasks, please reach out to us.

Milestones

Milestones for OHBM 2019 Hackathon are dependent on what is accomplished by the end of CAN/ACN BrainHackTO: 2019

Links and Materials

Generic Carpet class for visualization of higher-dimensional MR scans (4D)

Name of your awesome project

Carpet plot

Project Description

Carpet plots are an amazing tool to "unroll" a 4D dataset such as an fMRI scan, making visualization really easy, especially for detecting anomalies for QC purposes. Their full potential is not yet realized, due to a lack of good tools as well as a lack of application to new and interesting aspects/modalities (such as DWI, with the 4th dimension being gradient instead of time as in fMRI). An attempt has been made to provide a self-contained Carpet class in mrivis with a generic yet convenient interface to realize this full potential; however, more work needs to be done to implement some features and smooth out existing ones.

Look at raamana/mrivis#13
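For intuition, a bare-bones carpet plot takes only a few lines, independent of mrivis (a hedged sketch with a hypothetical input file):

```python
# Reshape a 4D image to (voxels, time), z-score each voxel, show as an image.
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np

img = nib.load("func.nii.gz")                  # hypothetical 4D scan
data = img.get_fdata()
carpet = data.reshape(-1, data.shape[-1])      # (voxels, time)
carpet = carpet[carpet.std(axis=1) > 0]        # drop empty voxels
carpet = (carpet - carpet.mean(axis=1, keepdims=True)) / carpet.std(axis=1, keepdims=True)

plt.imshow(carpet, aspect="auto", cmap="gray")
plt.xlabel("volume (time)")
plt.ylabel("voxel")
plt.show()
```

A dedicated Carpet class can then add conveniences this sketch lacks, such as row ordering by ROI or support for a gradient (rather than time) fourth axis.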

Skills required to participate

Neuroimaging (basic)
Python (intermediate and basic)
Data viz (basic and advanced)

Integration

This project can be a wonderful collaboration between neuroimagers, computer scientists and artists.

Preparation material

Take a look at the docs and repo:
https://raamana.github.io/mrivis/readme.html

This is a demo notebook for the different visualization classes in mrivis:
https://nbviewer.jupyter.org/github/raamana/mrivis/blob/master/docs/example_notebooks/mrivis_demo_vis_classes.ipynb#Carpet

Link to your GitHub repo

https://github.com/raamana/mrivis

Communication

mrivis channel on brainhack mattermost

Extending DIPY Visualization and workflows (command line) framework

Extending DIPY Visualization or workflows framework

Project Description

DIPY is a large community-driven open-source software project that implements many methods in computational neuroanatomy, with an emphasis on the analysis of diffusion MRI (dMRI) data. DIPY offers a new system of command-line interfaces that eases the use of the Python API for clinicians/neuroimagers. The first goal is to add new functionality and simplify command-line creation. The second project is based on FURY, a scientific visualization library born as a DIPY spin-off. The goal there is to add some widgets and a function to simplify atlas visualization.

Skills required to participate

Everybody is welcome, from Python beginner to expert! Join us if you are interested in:

  • improving your understanding of, or extending, DIPY visualization
  • extending the DIPY command-line framework

Integration

Neuroimagers and computational scientists may be able to contribute to either part of the project. More details below:

Workflows Project:

  • Create workflows from a decorator instead of a class (see the sketch below)
  • Create new DIPY workflows (command line)
  • Beginners will be able to create and contribute their own command line to DIPY
  • Expand documentation
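For reference, this is roughly what the current class-based workflow API looks like; DIPY builds the command-line interface from the run() signature and its docstring. A hedged sketch (details vary across DIPY versions):

```python
# A toy DIPY workflow; the docstring becomes the CLI help text.
from dipy.workflows.workflow import Workflow

class ToyFlow(Workflow):
    def run(self, input_files, out_file="out.nii.gz"):
        """Toy workflow that would process an image.

        Parameters
        ----------
        input_files : string
            Path to the input image.
        out_file : string, optional
            Path to the output image.
        """
        # ... load input_files, process, save to out_file ...

if __name__ == "__main__":
    from dipy.workflows.flow_runner import run_flow
    run_flow(ToyFlow())   # exposes ToyFlow as a command-line program
```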

Visualization Project:

  • Visualize dynamic atlases easily (medium)
  • Dynamic mosaic visualization (medium)
  • Create a new widget (combobox) (medium)
  • Create new brain effects with shaders (advanced users)
  • Expand documentation

Preparation material

Link to your GitHub repo

Communication

https://gitter.im/nipy/dipy

TrainTrack: Teaching an Old BIDS New Tricks - Semantic Markup of BIDS data

Name of your awesome project

Teaching an Old BIDS New Tricks - Semantic Markup of BIDS data

Project Description

ReproNim / OHBM TrainTrack Untutorial option. The BIDS data representation can be extended through use of NIDM (the NeuroImage Data Model) in order to represent more detailed semantics of the information contained. This tutorial and hands-on demo session will start to get you up to speed with this technology. This will feature the csv2nidm tool from PyNIDM.

Skills required to participate

No specific skills should be needed to use this tool; Python programming skills are necessary to contribute to the codebase.

Link to your GitHub repo

GitHub: https://github.com/incf-nidash/PyNIDM

Explorative analysis of fetal cortical surface data

Explorative analysis of fetal cortical surface data

Project Description

Understanding the maturation of the human brain from a smooth surface to its highly convoluted state at birth is an essential quest in the field of neuroscience. In the last decade the development of fast MR imaging protocols and advanced image processing methods has enabled imaging of the fetal brain at unprecedented detail. However, data availability is very limited due to comparatively rare examinations, small study sizes and high population variability.

In the spirit of open and repeatable research, we present the preliminary release of a dataset of 33 pre-processed MRI acquisitions of healthy fetal brains from 26 individuals, imaged between GW 20 and GW 36. Furthermore, we provide cortical surface models of the human fetal cerebral hemispheres, consisting of densely sampled surface triangulations that are matched between hemispheres and across time, to serve as a standardized reference frame for surface-based analysis of cerebral development in utero.

During the hackathon, I'd welcome anyone interested to get in touch and bounce around ideas about how to get the most out of this data.

Skills required to participate

Since this is a very open project, people with all types of skills can contribute, but experience with visualization and maybe computational geometry might come in handy.

Aims

Brainstorming on how to visualize and interpret the growth of the fetal brain in utero and what methods to apply for fun and profit.

Data availability

Unfortunately, I cannot (yet) put the data online - people interested in working on it will have to provide their names and contact email and I will provide a download link.

Communication

https://mattermost.brainhack.org/brainhack/channels/ohbm19_hackaton_fetal

JavaScript toolkit for modular brain visualization

Name TBD

Project Description

There are quite a few JS brain image viewers out there, but they overwhelmingly focus on the rendering side of things rather than the UI side. The goal of this project is to develop a high-level, modular JS library that (a) defines a common API for viewers, (b) implements support for widely used viewers (e.g., Papaya), and (c) provides a set of customizable widgets/components that can be easily injected into new JS projects. If successful, users should be able to construct relatively sophisticated dashboards (including things like image thresholding and color assignment, customized orth views, multiple layers, etc.) in just a few lines of JS code.

Skills required to participate

All kinds of contributions are welcome, but the project is likely to benefit particularly from the involvement of people with JavaScript experience and/or general experience building APIs and architecting modular libraries.

Integration

There's room for contribution from folks with a wide range of backgrounds and experience levels. We will be particularly interested in soliciting opinions on what core features the package should include, and how users expect to interact with good visualization tools.

Preparation material

Folks with prior JavaScript experience may want to take a look at a few of the existing viewers, e.g., Papaya, PyCortex, and brainsprite.js. Participants with prior programming experience who are new to JavaScript may want to whisper a few quiet prayers and then take the plunge into a JS tutorial or six.

Link to your GitHub repo

https://github.com/neurostuff/BVT — but that's currently just a placeholder.

TrainTrack: Development & distribution of Python scripts using MRtrix3

Development & distribution of Python scripts using MRtrix3

Project Description

MRtrix3 provides a set of tools to perform various types of diffusion MRI analyses, from various forms of tractography through to next-generation group-level analyses.

The majority of tools provided within MRtrix3 are built using C++, and hence those underlying APIs are only accessible to researchers with the requisite skills in that language.

More recently, however, we have incorporated a relatively simple Python API, which is intended for the automation of higher-level image processing tasks that can be achieved using a combination of existing lower-level commands (whether from MRtrix3 or other software packages). Many frequently-used commands provided with MRtrix3 already make use of this API.

It is additionally possible for stand-alone processing scripts to make use of this API, and thereby inherit its various benefits:

  • Integrated command-line parsing capability, with an interface identical to MRtrix3 commands;

  • Command-line terminal output that is consistent with other MRtrix3 Python scripts, with multiple available levels of terminal verbosity;

  • Self-generation of inline paginated help page, as well as Markdown and ReStructured Text documentation;

  • Integrated management of scratch directory for intermediate data processing;

  • Compatibility with both Python2 and Python3;

  • Various convenience functions that have been accumulated over time due to their utility in tasks regularly encountered in the development of such processing scripts; e.g. wrapping functionalities of other software packages, robust parsing of user inputs, provenance management.

Note: This library does not involve the direct manipulation of image data within Python itself; it is purely dedicated to the automation of processing tasks that can be built from a sequence of existing commands.

If there were sufficient interest, I could perform an ad hoc session demonstrating the basic usage of this API, as well as provide support to anybody intending to develop tools using this API during the hackathon.
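For orientation, a stand-alone script built on this API follows roughly the pattern below. Treat it strictly as a sketch: the exact entry points differ between MRtrix3 versions and, as noted above, the API is about to change for the 3.0_RC4 tag.

```python
# Hedged sketch of a stand-alone MRtrix3 Python script: the API builds the
# command-line interface from usage(), then calls execute(); run.command()
# invokes existing MRtrix3 (or other) binaries.
from mrtrix3 import app, run

def usage(cmdline):
    cmdline.set_author("Jane Example")               # hypothetical author
    cmdline.set_synopsis("Toy demonstration script")
    cmdline.add_argument("input", help="the input image")
    cmdline.add_argument("output", help="the output image")

def execute():
    run.command("mrconvert " + app.ARGS.input + " " + app.ARGS.output)

# (A real script would also invoke the API's entry point to parse arguments
# and dispatch to execute(); that boilerplate is omitted here.)
```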

Skills required to participate

Some requisite experience with Python is necessary; an attendee without such would likely be unable to recognise the distinction between general Python capabilities and the capabilities of this specific API. Beyond that, some familiarity with MRtrix3 would be highly recommended, as knowledge of the appropriate underlying commands for basic image manipulation operations means that time can be focused on the development of higher-level functionalities.

Integration

Processing pipeline projects that are implemented in "raw" Python (i.e. without use of an established API) will tend to run into the very same implementation hurdles that justified the development of the MRtrix3 Python API. By providing a "stepping stone" to the use of this particular API, this TrainTrack may help to fast-track new projects, by avoiding the overhead of these myriad generic scripting challenges, and enabling more rapid commencement of work on the actual novel aspects of any particular project. Scripts developed against this API may later be distributed individually and executed by anyone with a valid MRtrix3 installation, or, if sufficiently novel / relevant / useful, could be integrated into the MRtrix3 package itself.

Preparation material

  • The preprint of the MRtrix3 manuscript provides simple example commands in both C++ and Python (see Appendix B).

  • The code for those Python scripts provided with MRtrix3 is open-source, and can give some indication of how the API is used.
    (Note: this hyperlink directs to development branch code, as the Python API will soon be undergoing changes as part of the upcoming "3.0_RC4" tag)

  • My BIDS App "MRtrix3_connectome" demonstrates how a relatively large and complex processing pipeline can be fully automated and provided to the research public using this API.
    (Note: the current version of this App is built against the Python API in MRtrix3 version 3.0_RC3, which is the current public release; this will hopefully be updated to reflect the upcoming API changes prior to the hackathon)

Link to your GitHub repo

Also: Online documentation for those Python scripts currently provided as part of MRtrix3. This documentation is self-generated from the source code, which is one of the benefits of use of this API.

Communication

  • MRtrix3 community forum, for general MRtrix3 information and discussion

  • My profile on the MRtrix3 community forum; I can be contacted there directly for questions that are specific to the Hackathon and may not be relevant to the MRtrix3 community more generally.

Extending Open-Brain-Consent to support GDPR requirements and include step-by-step examples

Extending Open-Brain-Consent to support GDPR requirements and include step-by-step examples

Stephan Heunis

Project Description

Responsible sharing of the data and code that underlie the results of a scientific study is an important step towards improving research transparency, fostering inclusivity and building public trust in science. In health sciences, and neuroimaging research in particular, an important factor when sharing data is the privacy of personal or sensitive data. Ethical review boards at research institutions are responsible for reviewing a study protocol and deciding whether it can continue based on its adherence to the relevant ethical and research integrity principles, which typically include regulations on personal data privacy. In the European Union, such data privacy requirements are subject to the General Data Protection Regulation (GDPR) as implemented by its member countries.

Despite the increased importance that funders and institutions are starting to place on open science practices, no clear, thorough and openly available guides exist for publicly sharing neuroimaging data under the GDPR. One resource, Open Brain Consent, has rendered an important service by making template consent forms available in multiple languages, with the aim of allowing "collected imaging data to be shared as openly as possible while providing adequate guarantees for subjects’ privacy". However, some aspects related to the GDPR are lacking, e.g. more detailed information on the process of acquiring, processing and anonymising data; specifications on data processing and protection roles; and a detailed data privacy statement.

The overall goal of this OHBM hackathon project is to extend the content of Open Brain Consent with GDPR-related templates and thorough real-world examples. Ideally, this additional information would serve as a step-by-step guide for researchers during the process of obtaining ethical approval for an EU-based study, specifically where the aim is to share neuroimaging data publicly. Some progress has been made previously; see issue 24 on the Open Brain Consent GitHub page. Our goal is to extend this with (among others):

  • A primer on the implications of specific GDPR recitals for open neuroimage data sharing
  • A template information letter
  • A template data privacy statement
  • A template informed consent letter
  • Updated resources regarding neuroimage anonymisation tools and processes
  • A step-by-step guide to putting the above information together for an ethical review board, and a similar guide to executing the steps once approval is granted.

Skills required to participate

Anyone with experience in one or more of the following aspects could contribute:

  • Neuroimage data anonymisation, open data structures (BIDS), and data sharing. (tool use, scripting/programming, and/or process knowledge)
  • Neuroimaging-related ethical approval processes in EU institutions
  • Ensuring adherence of (neuroimaging) studies to GDPR regulations
  • Data stewards or data protection officers

Additionally, people with the following skills/attributes could also contribute, irrespective of previous experience:

  • A passion for open data and code and for helping others achieve this in practice
  • Technical writing
  • Tutorial writing
  • Graphic design (to create illustrative examples and explanations)

Preparation material

We have started a Google Doc with links to background reading material, useful resources and preliminary notes. We will likely use this Google Doc throughout the hackathon. Please feel free to add your comments and content to this document.

Link to GitHub repo

This is the Github Repo of the existing Open Brain Consent website, with an explanatory ReadMe.

Communication

If you want to contribute to this project, please feel free to join the Brainhack Mattermost community server and join our existing communication channel "open_brain_gdpr", or find me (Stephan Heunis / jsheunis) with a direct message. During the hackathon we will keep a video call open continuously for remote participants. You can access this video call at any time via Hangouts.

TrainTrack: C-PAC - fMRI Preprocessing

C-PAC - fMRI Preprocessing

Project Description

Introduction to C-PAC, the Configurable Pipeline for the Analysis of Connectomes.

Skills required to participate

Basic shell experience. BIDS is a plus!

Integration

fMRI preprocessing made easy - C-PAC's goal is to provide an accessible interface for a customizable preprocessing pipeline without requiring programming skills. Some parameters can encompass a list of choices, leaving C-PAC to preprocess your data with each combination of parameter settings (e.g. global signal regression on and off).

Preparation material

Install Docker: https://docs.docker.com/install/
Download the latest C-PAC version: docker pull fcpindi/c-pac
Download a raw BIDS dataset locally.

C-PAC documentation: https://fcp-indi.github.com

Link to your GitHub repo

C-PAC

Communication

https://mattermost.brainhack.org/brainhack/channels/cpac

If you face a problem or have questions, you can open an issue on GitHub and we will help you as soon as possible: https://github.com/FCP-INDI/C-PAC/issues

Neurodocker web

Name

Neurodocker as web application

Project Description

@kaczmarj made a really nice tool, Neurodocker, that generates Dockerfiles given an input of MRI analysis toolboxes. Let's make a web application out of it. I made a very basic start on this here: https://neurodocker.herokuapp.com.

Skills required to participate

  • Some Docker knowledge
  • Interest in learning a little bit of web development

Link to your GitHub repo

https://github.com/kaczmarj/neurodocker
https://github.com/TimVanMourik/NeurodockerWeb

Communication

Slack for now

new resources

Hi again!

There are a bunch of great talks dealing with open-science tools and neuroimaging at mind 2018 and mind 2017. Should I add them?

I listed some of them here, along with other computational resources.

Tracking microstructural biomarkers of Alzheimer’s Disease via diffusion MRI

Tracking microstructural biomarkers of Alzheimer’s Disease via diffusion MRI

The Alzheimer’s Disease Neuroimaging Initiative (ADNI) is a longitudinal natural history study. It is a large multicenter study designed to identify clinical, MRI, genetic, and biochemical markers for the early detection and tracking of Alzheimer's disease (AD). In particular, identifying biomarkers sensitive to mild cognitive impairment (MCI) is important to better categorize the transitional stages between normal aging and AD, and to evaluate targeted treatments.

Data from ADNI is publicly available. The third phase of ADNI (ADNI-3) began in late 2016, with subject imaging beginning in mid-2017. ADNI-3 includes an advanced multi-shell diffusion MRI acquisition, besides the basic single-shell acquisition [1] (see Figure 1). Multi-shell dMRI allows for the reconstruction of diffusion models beyond Diffusion Tensor Imaging (DTI).

ADNI-3 Advanced multi-shell protocol:

  • Siemens Prisma/PrismaFit
  • 32 or 64 channel receive array
  • SW VE11C and up
  • Simultaneous Multi-Slice (SMS, a.k.a multi-band)
  • Field of view: 232 × 232 × 160 mm
  • Voxel size: 2 × 2 × 2 mm
  • TE = 71 ms; TR = 3300 ms
  • Three shells (112 total diffusion-weighted directions):
    • 13 b=0
    • 6 b=500 s/mm2
    • 48 b=1000 s/mm2
    • 60 b=2000 s/mm2
  • Small delta = 13.6 ms; Big delta = 35.0 ms

Figure 1. Comparison of “basic” and “advanced” diffusion MRI protocols in ADNI-3. Taken from Reid et al. 2017 [1].

In multi-shell data, multi-compartment models can be used to delineate the signal contributions of different tissue compartments, which in turn tell us something about the tissue’s microstructural composition. Conveniently, Dmipy is an open-source tool designed to modularly generate and fit any state-of-the-art multi-compartment diffusion model on the fly. Here, we aim to fit all multi-shell models applicable to the ADNI-3 advanced diffusion protocol with Dmipy and benchmark which model is best used as an imaging biomarker to track the progression of Alzheimer’s Disease in the elderly.

Multi-compartment models that are relevant for multi-shell microstructure exploration are: Ball and Stick [2], NODDI-Watson [3], NODDI-Bingham [4], Multi-compartment microscopic diffusion imaging (MC-MDI) [5] and Multi-Tissue CSD [6]. Aside from parametric models, we also evaluate if signal-based markers from signal models such as MAP-MRI [7] can be valuable markers for tracking AD (RTOP, RTAP, RTPP, MSD, NG).
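As a taste of how compactly these models can be composed, here is a hedged Dmipy sketch of the Ball and Stick model from the list above; loading the ADNI-3 acquisition scheme and data is omitted:

```python
# Compose and fit a Ball & Stick multi-compartment model with Dmipy.
from dmipy.core.modeling_framework import MultiCompartmentModel
from dmipy.signal_models import cylinder_models, gaussian_models

stick = cylinder_models.C1Stick()   # intra-axonal compartment
ball = gaussian_models.G1Ball()     # isotropic extra-cellular compartment
model = MultiCompartmentModel(models=[stick, ball])

# scheme: a Dmipy acquisition scheme built from the ADNI-3 bvals/bvecs;
# data: the 4D diffusion-weighted volume (both omitted here).
# fit = model.fit(scheme, data)
# intra_axonal_fraction = fit.fitted_parameters["partial_volume_0"]
```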

The aim of this project is to determine the best diffusion model (if any) to measure the intra-cellular, extracellular volume fractions, and the dispersion of fibers, whose change should correlate with the pathological progression of AD.

Comparison across dMRI models

For each dMRI measure, we will run a logistic regression with TV-L1 regularization (Nilearn package) across voxels to classify individuals with mild cognitive impairment (MCI; N=17; mean age: 76.8±7.5 yrs; 14M/3F) from those who are cognitively normal (CN; N=39; mean age: 73.2±7.2 yrs; 25M/14F) to identify which dMRI measure gives the highest classification accuracy. Among dMRI measures yielding >80% accuracy we will compare the Jaccard/Dice similarity coefficient from the resulting maps of classifying regions to identify which dMRI measures give similar information in similar regions and which offer additional information about underlying pathological changes.
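In nilearn this classification step might look roughly like the following; the variable names are hypothetical, and SpaceNetClassifier is nilearn's decoder implementing logistic regression with a TV-L1 penalty:

```python
# Voxel-wise MCI-vs-CN classification with TV-L1-regularized logistic regression.
from nilearn.decoding import SpaceNetClassifier

decoder = SpaceNetClassifier(penalty="tv-l1")
decoder.fit(train_maps, train_labels)     # train_maps: 3D dMRI-measure images
accuracy = decoder.score(test_maps, test_labels)
coef_img = decoder.coef_img_              # map of classifying regions
```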

Note

We may use different classification labels between groups, based on commonly used screening tools for detecting dementia and AD such as the 13-item Alzheimer’s Disease Assessment Scale (ADAS-Cog 13) [8], the Mini-Mental State Examination (MMSE) [9], and the Clinical Dementia Rating scale sum-of-boxes (CDR-sob) [10], or on amyloid PET scores or cerebrospinal fluid (CSF) markers.

Skills required to participate

We welcome any curious brainhacker who is interested in improving our understanding of Alzheimer's disease and/or wants to see how simple it can be to study tissue microstructure with Python.

Integration

The goal is to track the changes of tissue microstructure in AD. Ideally, we will find a microstructural biomarker that lets us anticipate the classical symptoms of AD, giving us the possibility to set up the corresponding therapy in advance. We will be analyzing many different models for each subject; this will raise problems related to dimensionality reduction and feature selection.

Your collaboration will be precious in:

  • selecting the relevant features to be analyzed;
  • performing an accurate statistical analysis of the results.

Preparation material

You can have a look at the ADNI website to get to know more about the data we are processing. For information about the fitting of tissue microstructure models, you can look at the Dmipy website.

Links

Communication

This issue will be kept as the reference discussion channel. Questions can also be directly addressed to @villalonreina (ADNI) and @rutgerfick (Dmipy).

References

  1. Reid, R. I. et al. The ADNI3 diffusion MRI protocol: basic + advanced. Alzheimers Dement. 13, P1075–P1076 (2017).
  2. Behrens, T. E. J. et al. Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine 50, 1077–1088 (2003).
  3. Zhang, H., Schneider, T., Wheeler-Kingshott, C. A. & Alexander, D. C. NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain. Neuroimage 61, 1000–1016 (2012).
  4. Tariq, M., Schneider, T., Alexander, D. C., Gandini Wheeler-Kingshott, C. A. & Zhang, H. Bingham–NODDI: Mapping anisotropic orientation dispersion of neurites using diffusion MRI. Neuroimage 133, 207–223 (2016).
  5. Kaden, E., Kelm, N. D., Carson, R. P., Does, M. D. & Alexander, D. C. Multi-compartment microscopic diffusion imaging. Neuroimage 139, 346–359 (2016).
  6. Jeurissen, B., Tournier, J.-D., Dhollander, T., Connelly, A. & Sijbers, J. Multi-tissue constrained spherical deconvolution for improved analysis of multi-shell diffusion MRI data. Neuroimage 103, 411–426 (2014).
  7. Fick, R. H. J., Wassermann, D., Caruyer, E. & Deriche, R. MAPL: Tissue microstructure estimation using Laplacian-regularized MAP-MRI and its application to HCP data. Neuroimage 134, 365–385 (2016).
  8. Rosen, W. G., Mohs, R. C. & Davis, K. L. A new rating scale for Alzheimer’s disease. Am. J. Psychiatry 141, 1356–1364 (1984).
  9. Folstein, M. F., Folstein, S. E. & McHugh, P. R. ‘Mini-mental state’: a practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 12, 189–198 (1975).
  10. Berg, L. Clinical Dementia Rating (CDR). Psychopharmacol. Bull. 24, 637–639 (1988).

Arbitrary user-defined Attributes for pyradigm

Name of your awesome project

Better data structures for machine learning in NI

Project Description

See more:
raamana/pyradigm#17

Skills required to participate

Python
Object oriented programming
User experience / Designers

Integration


TBA

Preparation material

Play with the MLDataset from pyradigm: http://pyradigm.readthedocs.io/

Link to your GitHub repo

https://github.com/raamana/pyradigm

Communication

pyradigm channel on brainhack slack

A checklist for improving neuroimaging methods and results reporting

Improving the COBIDAS checklist for better neuroimaging methods and results reporting

Remi Gau (ORCID)

Project Description

In 2012, in his review of the methods and results reporting of more than 200 fMRI papers, Joshua Carp wrote: "Although many journals urge authors to describe their methods to a level of detail such that independent investigators can fully reproduce their efforts, the results described here suggest that few studies meet this criterion."

A few years ago, in order to improve the situation with respect to reproducibility in f/MRI research, the Committee on Best Practices in Data Analysis and Sharing (COBIDAS) of OHBM released a report to promote best practices for methods and results reporting. This was recently followed by a similar initiative for EEG and MEG.

So far these guidelines do not seem to have been widely adopted, and anecdotal evidence (see this Twitter poll and thread) suggests that even among people who know about the report, few use it to write or review papers. One likely reason is the unwieldy nature of the report: anyone who has used the checklist tends to agree that it is a great resource, but a bit cumbersome to interpret and apply.

So the short-term goal of this project is to facilitate the use of this checklist. But, if done right, this could in the long term also enhance the adoption of emerging neuroimaging standards (the Brain Imaging Data Structure, fMRIprep, NIDM...), facilitate data sharing and pre-registration, and help with peer review...

Short term goal

The short term goal of this project is to make the COBIDAS report easier to use: we want to create a website with a clickable checklist that generates a json file at the end.

By turning the checklist into a website, users could more rapidly click through it and provide more of the information requested by the COBIDAS report. This would generate a small text file (a JSON file) that summarizes which option was chosen for each item of the checklist. This machine-readable file could then be used to automatically generate part of the methods section of an article.
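To illustrate the idea (with invented field names, not a proposed schema), the populated file could be as simple as:

```python
# Write a minimal machine-readable summary of checklist answers.
import json

answers = {
    "DesignType": "event-related",             # invented example fields
    "SmoothingFWHMmm": 6,
    "MultipleComparisonCorrection": "cluster-level FWE",
}
with open("cobidas.json", "w") as f:
    json.dump(answers, f, indent=2)
```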

Other potential goals (e.g. interaction with BIDS and NIDM, further integration with the main neuroimaging software packages...) and potential applications (improving data sharing and peer review) of this project are described in this repository.

Skills required to participate

One or more of those:

  • To be enthusiastic about reproducibility
  • Familiarity with the COBIDAS report for f/MRI or M/EEG,
  • To know something about web design,
  • Familiarity with one or more of the main neuroimaging software for fMRI (SPM, FSL...) or for M/EEG (Fieldtrip, EEGlab...)

Milestones

  • Discuss conceptual and structural details of the COBIDAS-json file.

  • Create a template of the COBIDAS-json file

  • Create a proof of concept website that can:

    • given a template COBIDAS-json file, generates a checklist that users can click through,
    • outputs a populated COBIDAS-json file once the user is done,
    • generates a methods section from a populated COBIDAS-json file.

Preparation material

Jeanette Mumford has a 30-minute video on her YouTube channel explaining the background behind the COBIDAS report and giving a run-through of the checklist.

The COBIDAS report:

A spreadsheet version of the COBIDAS checklist (thanks to Cass!!!)

The secret lives of experiments: methods reporting in the fMRI literature

A manifesto for reproducible science

GitHub repo

The github repository of this project can be found here

Communication

Come and join us in the cobidas_checklist channel on the Brainhack Mattermost.

Neurofeedback in Python - how to transform Pyff (stimulus delivery) from the old Python 2 into the new Python 3 Realm.

Neurofeedback in Python - how to transform Pyff (stimulus delivery) from the old Python 2 into the new Python 3 Realm.

Background

Pyff is a Python module that can be combined with PsychoPy to perform neurofeedback experiments. Pyff can load and run stimulus paradigms and communicate via TCP/IP with other computers to update stimuli in real time. In order to do so, it starts up a separate process with a main thread (since all screen-refresh/3D/graphical work needs to be in a main thread) and a separate thread that monitors incoming network traffic.

This separate thread relies heavily on asyncore/asynchat to prevent it from killing itself if something goes wrong with the network traffic (which it usually does). Asyncore/asynchat is a style of asynchronous programming in which the interpreter can continue with other code while a line dealing with network traffic is waiting. Asynchronous programming has undergone many iterations, and one of the major changes is that it is now implemented through Python 3's asyncio module and async/await syntax; the old asyncore/asynchat modules are deprecated in favor of this more general approach.

The documentation for this transition is, however, quite sparse. The work I propose is to see whether Pyff's Python 2 asyncore/asynchat code can be translated into equivalent asyncio code, and furthermore to more fully port Pyff into the Python 3 realm.
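For reference, the Python 3 replacement pattern might look roughly like the minimal asyncio server below, standing in for the old asynchat network thread (the port is hypothetical; requires Python 3.7+):

```python
import asyncio

async def handle_client(reader, writer):
    # Replaces the old asynchat "collect incoming data" callbacks.
    while True:
        line = await reader.readline()
        if not line:                 # client disconnected
            break
        # ... parse the stimulus-control message here ...
        writer.write(b"ok\n")
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 12345)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```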

Knowledge

Preferably something about async programming

GitHub repository

https://github.com/jnvandermeer/nf-stim-review
2to3 program

This would be a good match for the traintrack python2 to python3 issue raised earlier (issue #25)

Comparing longitudinal registration tools for 2D MRI

Comparing longitudinal registration tools for 2D MRI

Project Description

While research MRI anatomical images are usually 3D (e.g. FLASH), clinical scans are typically 2D acquisitions with thick slices. In this project, we take up the challenge of longitudinal registration with 2D scans as would typically be acquired in a long-term clinical trial (e.g. for multiple sclerosis). Longitudinal brain imaging can be particularly useful in the analysis of volumetric changes or lesion burden, and shows great promise for the development of novel biomarkers.

Registration is a key step in the pipeline that affects all further downstream analysis of neuroimaging data. Although using cross-sectional tools to process longitudinal data is unbiased, this ignores the common information across scans. Longitudinal processing aims to reduce the within-subject variability. Both SPM and FreeSurfer offer tools for longitudinal registration of scans across multiple (more than two) time points and, as with most image processing tools, these have naturally been developed with research-quality data in mind. As researchers are increasingly gaining access to clinical data, however, it would be timely to determine how current longitudinal processing tools perform on lower-quality 2D MRI scans.

Using the publicly available OASIS dataset, we would like to investigate the performance of the SPM and FreeSurfer longitudinal registration tools. The OASIS-3 (Longitudinal Neuroimaging, Clinical, and Cognitive Dataset for Normal Aging and Alzheimer’s Disease) dataset consists of images from c. 1,000 subjects, many of which are accompanied by volumetric segmentation files produced through FreeSurfer. With these files as a 'gold standard', we will average slices from the 3D acquisitions to simulate 2D acquisitions and assess the accuracy of each processing tool.

Skills required to participate

Any of the following:

  • Experience in programming (mainly Matlab, C or C++)

  • Experience with FreeSurfer or SPM12

  • Experience with structural image analysis

Integration

Contributions towards any of the following milestones would be very welcome!

Milestones

  1. Downsample OASIS T1 3D data to lower-resolution 2D images (sketched below)

  2. Isolate the longitudinal registration codebase from FreeSurfer

  3. Longitudinal registration of 2D images in SPM and FreeSurfer

  4. Assessment of segmentation performance against the original 3D images
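As a concrete starting point for milestone 1, thick-slice 2D acquisitions can be simulated by averaging groups of adjacent slices; a hedged sketch with a hypothetical OASIS filename and an assumed 1 mm slice thickness:

```python
# Simulate 5 mm-thick slices by averaging groups of 5 adjacent 1 mm slices.
import nibabel as nib
import numpy as np

img = nib.load("sub-01_T1w.nii.gz")        # hypothetical OASIS T1 volume
data = img.get_fdata()
factor = 5                                 # slices averaged per thick slice
nz = (data.shape[2] // factor) * factor    # trim to a multiple of factor
thick = data[:, :, :nz].reshape(data.shape[:2] + (nz // factor, factor)).mean(axis=-1)

affine = img.affine.copy()
affine[:3, 2] *= factor                    # stretch the slice direction
nib.save(nib.Nifti1Image(thick, affine), "sub-01_T1w_thickslice.nii.gz")
```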

Preparation material

The OASIS project
Chapter 27 (Longitudinal registration) of the SPM 12 manual
FSL longitudinal processing

Papers

Communication

Join the chat in our mattermost channel :)

Animal and Non-standard brain pipelines

Animal / Nonstandard Brain Pipelines

Project Description

This project is an example of converting an in-house animal pipeline into, hopefully, a FEAT- or C-PAC-like tool for animal studies. In neuroimaging we're generally limited to working with a small range of animals, and even if you're working with macaques you still need to do a lot of anatomical processing that's distinct from human brains. So this year we designed a surface generation pipeline that is easily adaptable to multiple animals.

What we'd like to do now is extend that pipeline, along with some other in-house pipelines we have for preprocessing fMRI and DTI data. The simple idea is that, following anatomical processing (which can also be just brain extraction rather than full surface generation), you have either a FEAT- or C-PAC-like GUI or command line where you can set everything up, with one key difference: an added animal option.

Skills required to participate

This project is really open to everyone.
On the technical side, a good level of bash and Python would be great.
Additionally, if you have experience in making Docker containers or GUIs to make it a more user-friendly pipeline, that would also be great!

On the non-technical side, I realized recently that there is currently no documentation for the actual pipeline. Additionally, if you're interested in adding brains of new animals and want to share data, you absolutely can! If you just want to add brains, you can help us add some from the Brain Catalogue using BrainBox: http://brainbox.pasteur.fr/

Integration

This really is a project for everyone. As I mentioned, on the non-technical side we could potentially add quite a few new animal brains in order to generate their surfaces.

Neuroimagers can help us with the design and optimization of the current and new pipelines. Specifically, in the surface generation pipeline:

  • Add ex-vivo support for surface generation
  • Add support for non-T1 images, i.e. high-res T2
  • Add support for small animal imaging. Currently the pipeline has only been tested on images using clinical scanners and more or less standard resolution. A pre-pipeline step to adjust voxel sizes for rodents would be great!
  • Add support to more easily use hand-edited WM masks when necessary

Computer Science:
Part of the problem with neuroimaging pipelines is that they're not always intuitive to install or use.

  • Help create a Docker container for an animal-friendly computing environment
  • Help us create a GUI for using the pipelines so that working with animals can be as easy as working with humans

I'm just here to learn:

  • The documentation is largely lacking. A manual on how to create the prerequisite images for the pipeline would be great!
  • Thanks to the Brain Catalogue we have a ton of animal brains we can try the surface pipeline on. Create the initial masks required to run everything and get comfortable running and editing pipelines!

Preparation material

Come with an open mind and if you can some open data!

Link to your GitHub repo

The precon_all repo: https://github.com/recoveringyank/precon_all

Communication

Here's our mattermost link! https://mattermost.brainhack.org/brainhack/channels/precon_all

BLM: Parallelised Computing for Big Linear Models

BLM: Parallelised Computing for Big Linear Models

Project Description

Large-scale imaging studies are becoming increasingly popular within the neuroimaging community. As datasets grow larger and larger, however, performing standard GLM analyses is becoming increasingly challenging. Heavy demands are placed on memory usage and computation time, and variability in masks from each subject can cause severe erosion of the analysis mask unless the model allows for missing data.

To address these issues we recently created BLM, a tool for computing "Big" Linear Models in a parallel (cluster) setting, implemented in Python (a toy sketch of the underlying chunked computation follows the list below). However, this project is still in its early days and there are many features we would like to add to it. For example:

  • Imputation models
  • Permutation and Bootstrapping
  • Spatially varying predictors
  • RFT smoothness estimation
  • Optimization for models with hundreds of predictors (some possibly nuisance regressors not requiring inference)
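
To see why a GLM can be made "big-data friendly" at all, note that the sufficient statistics X'X and X'Y can be accumulated over chunks that never have to fit in memory at once. A toy numpy sketch of that idea (an illustration only, not BLM's API):

```python
# Toy sketch of the chunked "big GLM" trick: accumulate X'X and X'Y
# over data chunks, then solve once for the full-model beta-hat.
import numpy as np

def chunked_ols(chunks):
    """chunks yields (X_i, Y_i) pairs; returns beta-hat for the full model."""
    xtx, xty = None, None
    for X, Y in chunks:
        if xtx is None:
            xtx, xty = X.T @ X, X.T @ Y
        else:
            xtx += X.T @ X
            xty += X.T @ Y
    return np.linalg.solve(xtx, xty)

# Example: 10 chunks of 1000 "subjects", 5 predictors, 2 voxels.
rng = np.random.default_rng(0)
chunks = [(rng.normal(size=(1000, 5)), rng.normal(size=(1000, 2)))
          for _ in range(10)]
beta = chunked_ols(iter(chunks))
print(beta.shape)  # (5, 2)
```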

Skills required to participate

The ideal prerequisites for this project would be familiarity with Python, computer clusters and linear models. However, anyone who wants to give BLM a try or make suggestions is welcome to join!

Integration

Predominantly, we are looking for computational scientists and statisticians, as much of what needs to be done is code-based. However, anyone and everyone is welcome to join, try running BLM, and let us know how they get on. If any neuroimagers or psychologists have suggestions for features they would find useful and would like to discuss implementing them, please feel free to come talk to us!

Our intermediate goal is to complete at least 2-3 of the items we listed in the project description section.

Preparation material

In terms of preparation, the best thing to do would be to have a read of the readme.md file on the BLM repository and try out BLM for yourself!

Link to your GitHub repo

The GitHub repository can be found here.

Communication

I have set up a Mattermost channel named "BLM" on the Hackathon Mattermost.

Generating BIDS derivatives with (a) Banana

Generating BIDS derivatives with (a) Banana

Project Description

Brain imAgiNg Analysis iN Arcana (Banana) is a collection of imaging analysis methods implemented in the Arcana framework, and is proposed as a code-base for collaborative development of neuroimaging workflows. Unlike traditional "linear" workflows, analyses implemented in Arcana are constructed on-the-fly from cascades of modular pipelines that generate derivatives from a mixture of acquired data and prerequisite derivatives (similar to Makefiles). Given the "data-centric" architecture of this approach, there should be a natural harmony between it and the ongoing standardisation of BIDS derivatives.

The primary goal of this project is to closely align the analysis methods implemented in Banana with the BIDS standard, in particular BIDS derivatives, in order to make them familiar to new users and interoperable with other packages. Further to this, in cases where a de facto standard for a particular
workflow exists (e.g. fmriprep), Banana should aim to mirror this standard by default. The extensibility of Arcana's object-oriented architecture could then be utilised to tailor such standard workflows to the needs of specific studies (via class inheritance).
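
As a purely hypothetical illustration of that inheritance pattern (the class and method names below are invented for this sketch and are not Banana's actual API):

```python
# Hypothetical sketch of tailoring a standard workflow via subclassing;
# names are invented for illustration, not Banana's API.
class FmriStudy:
    """Generic fMRI analysis with fmriprep-style defaults."""
    def smoothing_fwhm(self) -> float:
        return 6.0  # default spatial smoothing in mm

class HighResStudy(FmriStudy):
    """Study-specific subclass: override only the parameters that differ."""
    def smoothing_fwhm(self) -> float:
        return 2.0  # a high-resolution acquisition needs less smoothing

print(HighResStudy().smoothing_fwhm())  # 2.0
```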

There is also plenty of scope to expand the imaging contrasts/modalities supported by Banana, so if you have expertise in a particular area and are interested in implementing it in Banana, we can definitely look to do that as well.

Skills required to participate

Any of the following:

  • Python
  • Workflow design (preferably some Nipype but not essential)
  • Detailed knowledge of the BIDS specification (or part thereof)
  • Domain-specific knowledge of the analysis of a particular imaging modality
    that you would like to see implemented in Banana (e.g. EEG, MEG, etc.)

Integration

  • Python programmers and workflow designers who are looking to implement and maintain a suite of generic analysis methods should be able to help extend existing classes and implement new ones for different imaging contrasts/modalities not currently covered.
  • Domain experts (e.g. EEG, MEG, pre-clinical MRI) who are interested in implementing existing workflows within a portable, extensible framework could help to guide the implementation, check that the derivatives they create are correct, etc.
  • 1st and 2nd year PhD students who are planning the analysis for their thesis could look to create their own customised "study" classes that extend the generic base classes in Banana, integrating all their analysis in the same code-base (re-using common derivatives/QC and maintaining provenance records).

Preparation material

Skim through the Arcana paper for the basic concepts,

Arcana bioRxiv paper (in press at Neuroinformatics, DOI to be 10.1007/s12021-019-09430-1)

There is also some online documentation,

arcana docs

Arcana is built on top of Nipype so understanding Nipype concepts would also be useful,

nipype docs

Link to your GitHub repo

Banana Github Repo

Communication

There is a new channel on the BrainHack mattermost here

Automatized creation of functional regions of interest (ROIs) using Python tools

Automatized creation of functional regions of interest (ROIs) using Python tools

Ilkay Isik ORCID

Project Description

Functionally defining regions of interest is a common methodology in cognitive neuroscience, due to the greater sensitivity and higher functional resolution it provides over group-based methods (Nieto-Castañón and Fedorenko, 2012). In this approach, a set of functional regions is defined in each individual using a localizer contrast targeting the cognitive process of interest (e.g. the fusiform face area (FFA), obtained by contrasting Faces vs Objects).
However, there is no commonly accepted, automated way of delineating and selecting these ROIs. The traditional method is to select subject-specific ROIs by examining the activation maps for the localizer contrast and manually deciding which voxels to include, using anatomical knowledge as a guide. However, even expert coders might disagree, owing to high individual variability. Furthermore, when these ROIs happen to be located close to each other, it is not straightforward to draw the border between them.

Fedorenko et al. (2010) and Julian et al. (2012) addressed these problems and proposed the following steps to automate the creation of ROIs algorithmically:

[Figure: the ROI-definition steps proposed by Fedorenko et al. (2010)]

Image used with permission from Dr. Evelina Fedorenko

The authors used MATLAB to accomplish these goals.
In this project, we aim to use Python tools to create a Python package which automatically creates functional regions of interest.
So, this project itself is not a new idea, but I believe it will be a great learning experience for me and for anyone who wants to join and contribute.
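
As a rough numpy sketch of one core selection step from these papers (keeping each subject's top localizer voxels within a group-level constraint region); the threshold fraction and array shapes are illustrative:

```python
# Toy sketch: keep the top 10% of localizer-contrast voxels inside a
# group parcel. Real data would come from co-registered NIfTI images.
import numpy as np

def subject_roi(contrast, parcel_mask, top_fraction=0.10):
    """Boolean ROI: top `top_fraction` of contrast voxels within the parcel."""
    cutoff = np.quantile(contrast[parcel_mask], 1.0 - top_fraction)
    return parcel_mask & (contrast >= cutoff)

# Synthetic demo: a random "t-map" and a box-shaped group parcel.
rng = np.random.default_rng(0)
tmap = rng.normal(size=(20, 20, 20))
parcel = np.zeros_like(tmap, dtype=bool)
parcel[5:15, 5:15, 5:15] = True
roi = subject_roi(tmap, parcel)
print(roi.sum(), "voxels selected of", parcel.sum())
```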

Skills required to participate

  • Experience in programming in Python
  • Familiarity with the functional localizer literature (helpful, but not necessary)

Integration

You can contribute to this project by helping with

Preparation material

  • Fedorenko, E., Hsieh, P.-J., Nieto-Castañón, A., Whitfield-Gabrieli, S., & Kanwisher, N. (2010). New Method for fMRI Investigations of Language: Defining ROIs Functionally in Individual Subjects. Journal of Neurophysiology, 104(2), 1177–1194. https://doi.org/10.1152/jn.00032.2010

  • Julian, J. B., Fedorenko, E., Webster, J., & Kanwisher, N. (2012). An algorithmic method for functionally defining regions of interest in the ventral visual pathway. NeuroImage, 60(4), 2357–2364. https://doi.org/10.1016/j.neuroimage.2012.02.055

  • Nieto-Castañón, A., & Fedorenko, E. (2012). Subject-specific functional localizers increase sensitivity and functional resolution of multi-subject analyses. NeuroImage, 63(3), 1646–1669. https://doi.org/10.1016/j.neuroimage.2012.06.065

Link to your GitHub repo

ProjectGitHubRepo

Communication

Join slack_brainhack_3 and find channel #pyfuncroi

Adding ability to regress covariates in neuropredict

Adding the ability to regress covariates in neuropredict

Project Description

  • Needs to be within the inner CV loop
  • Provide different popular options to handle covariates

See raamana/neuropredict#7 for more details
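
To make the inner-CV requirement concrete, here is a hedged scikit-learn sketch of covariate deconfounding performed inside the cross-validation loop, so the covariate model never sees test data (an illustration of the idea, not neuropredict's API; see the issue above for the actual design discussion):

```python
# Sketch: fit the covariate (confound) regression on training folds only,
# then residualize both train and test features with that same model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))    # features
C = rng.normal(size=(100, 1))     # covariate, e.g. age
y = rng.integers(0, 2, size=100)  # labels (used by the eventual classifier)

for train, test in KFold(n_splits=5).split(X):
    deconf = LinearRegression().fit(C[train], X[train])
    X_train = X[train] - deconf.predict(C[train])  # residualize train
    X_test = X[test] - deconf.predict(C[test])     # same model on test
    # ...fit the classifier on (X_train, y[train]), evaluate on X_test...
```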

Skills required to participate

  • Statistics
  • Python
  • Some machine learning doesn't hurt, but not required

Integration

How would your project integrate a neuroimager/clinician/psychologist/computational scientist/maker/artist as collaborator? You can check the Mozilla Open Leadership material on personas and contribution guidelines.
Try to define intermediate goals (milestones).

To be expanded: I'd collaborate with them to identify the challenges they face, define the problem better, and offer a viable solution after interactive consultation.

Preparation material

TBA

Link to your GitHub repo

neuropredict

Communication

#neuropredict

Gitter chat

TrainTrack: BrainIAK - The Neural Correlates of Narrative Comprehension and Event Structure in Naturalistic Stimuli

TrainTrack: The Neural Correlates of Narrative Comprehension and Event Structure in Naturalistic Stimuli

Project Description

Do two people who watch the same movie have similar patterns of neural activity? If we were to describe the movie to someone else, are the neural patterns when we describe the movie similar to when we watch the movie? To help understand the neural correlates of narrative comprehension and event structures in these stories, we will use Inter-Subject Correlations (ISC) (Hasson et al., 2004; Simony et al., 2016), Shared Response Modeling (SRM) (Chen et al., 2015), and Event Segmentation methods (Baldassano et al., 2017) and apply them to datasets wherein subjects watch movies or listen to stories.
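
As a flavor of the first of these methods, here is a bare-bones numpy sketch of leave-one-out ISC on synthetic data (BrainIAK ships an optimized and validated implementation; this is only the underlying idea):

```python
# Leave-one-out ISC: correlate each subject's time course with the
# average of everyone else's, per voxel. Data are synthetic.
import numpy as np

def loo_isc(data):
    """data: (n_TRs, n_voxels, n_subjects) -> ISC of shape (n_subjects, n_voxels)."""
    n_trs, n_vox, n_subs = data.shape
    isc = np.empty((n_subs, n_vox))
    for s in range(n_subs):
        others = data[..., np.arange(n_subs) != s].mean(axis=-1)
        a = data[..., s] - data[..., s].mean(0)
        b = others - others.mean(0)
        isc[s] = (a * b).sum(0) / (
            np.sqrt((a ** 2).sum(0)) * np.sqrt((b ** 2).sum(0)))
    return isc

data = np.random.default_rng(1).normal(size=(200, 50, 10))
print(loo_isc(data).shape)  # (10, 50)
```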

Skills required to participate

Knowledge of Python and basic MVPA applied to fMRI datasets.

Integration

TBA.

Intermediate goals (milestones):
Install a working version of BrainIAK and a ready-to-use dataset.
Execute the tutorials related to ISC, SRM, and Event Segmentation.

Preparation material

Python
ISC, SRM, and Event Segmentation Tutorials
BrainIAK

Baldassano, C., Chen, J., Zadbood, A., Pillow, J. W., Hasson, U., & Norman, K. A. (2017). Discovering Event Structure in Continuous Narrative Perception and Memory. Neuron, 95(3), 709-721.e5. https://doi.org/10.1016/j.neuron.2017.06.041

Chen, P.-H. (Cameron), Chen, J., Yeshurun, Y., Hasson, U., Haxby, J., & Ramadge, P. J. (2015). A Reduced-Dimension fMRI Shared Response Model. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 28 (pp. 460–468). Curran Associates, Inc. Retrieved from http://papers.nips.cc/paper/5855-a-reduced-dimension-fmri-shared-response-model.pdf

Hasson, U., Nir, Y., Levy, I., Fuhrmann, G., & Malach, R. (2004). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, 303(5664), 1634–1640. https://doi.org/10.1126/science.1089506

Simony, E., Honey, C. J., Chen, J., Lositsky, O., Yeshurun, Y., Wiesel, A., & Hasson, U. (2016). Dynamic Reconfiguration of the Default Mode Network During Narrative Comprehension. Nature Communications, 7, 12141. https://doi.org/10.1038/ncomms12141

Link to your GitHub repo

BrainIAK
Tutorials: We have released a set of educational materials for public use.

Communication

https://mattermost.brainhack.org/brainhack/channels/brainiak

Spatial filter design based on structure tensors for mesoscopic MR images

Spatial filter design based on structure tensors for mesoscopic MR images

Omer Faruk Gulban (ORCID)

Project Description

Ultra-high field MRI (7 Tesla and above) allows researchers to acquire human brain images at mesoscopic (0.1 to 0.5 mm) isotropic voxel resolutions in-vivo. Here is an example of such an image (350 micron isotropic) acquired on a 9.4T scanner using a custom-designed coil at Maastricht University:

[Figure 1: 350 micron isotropic image acquired at 9.4T]

There are several interesting details that appear at this resolution which are not visible in conventional in-vivo anatomical images, such as the smaller blood vessels within gray and white matter (see the dark lines) or layers within gray matter (faintly visible in this image). Generating such images currently requires averaging across multiple repeated acquisitions. This is because the benefits of ultra-high field are traded away to increase the spatial resolution at the cost of decreased signal-to-noise ratio (SNR). Consequently, repeating acquisitions to increase SNR takes so much time that none is left for acquiring functional images within the same scanning session.

In this project, I would like to test the possibility of replacing the repeated image acquisitions (to some extent) with a specific type of filtering to increase SNR. By specific, I mean a family of filters that make use of a tensor field derived from the images themselves. These tensors are called structure tensors (a minimal numerical sketch follows the list of constraints below).

I have selected this type of filter to satisfy a few constraints. The selected filter should be:

  • able to preserve important edges while mitigating noise.
  • applicable to partial coverage (custom coil) images.
  • applicable to multi-echo MR images.
  • applicable to complex-domain images.
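
To make the central object concrete, here is a minimal numpy/scipy sketch of a 3D structure tensor, i.e. the Gaussian-smoothed outer product of the image gradient; the smoothing parameters are illustrative:

```python
# 3D structure tensor: smooth, take gradients, form the per-voxel outer
# product of gradients, and smooth that field over a local window.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_3d(image, grad_sigma=1.0, window_sigma=2.0):
    grads = np.gradient(gaussian_filter(image, grad_sigma))
    tensor = np.empty(image.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            tensor[..., i, j] = gaussian_filter(grads[i] * grads[j],
                                                window_sigma)
    return tensor  # eigendecompose per voxel to get local orientation

vol = np.random.default_rng(2).normal(size=(32, 32, 32))
S = structure_tensor_3d(vol)
print(S.shape)  # (32, 32, 32, 3, 3)
```

Edge-enhancing diffusion filters of the kind referenced above steer their smoothing along this tensor's eigenvectors, which is what lets them suppress noise while preserving important edges.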

Here is an animation created from one of my pilot implementations on an artificially noised 7T T1w image:

I think this implementation can be improved, applied to other image types and validated further.

This project is by no means a novel implementation of such a filter (see Mirebeau et al. 2015). However, the application to ultra-high field MRI in the context of multi-echo and complex-domain images might be novel. If nothing else, I think this project would help interested people gain a deeper understanding of tensor fields, their role in diffusion, and insight into some of the current challenges of in-vivo mesoscopic MRI at 7 & 9.4T.


Skills required to participate

  • Experience in programming (mainly Python; Cython or C might come in handy; see related tutorials here).
  • Experience in (anatomical) MR image analysis or acquisition.

Integration

People can join by contributing to the following:

  • Programming: Scrutinizing code by writing test cases, optimizing for faster runtime, improving user interface (see related tutorials here).

  • Documenting: Improving docstrings (see tutorials), application to different cases, helping in quantification of performance against other methods.

  • In other ways that I couldn't think of here.

Milestones

  1. Discuss conceptual and implementational details of the filter.

  2. Implement the filter so that it is usable through a command-line interface.

  3. Apply it to empirical data (e.g. 7T & 9.4T images that I will bring) and evaluate the results.

Preparation material

GitHub repository

I am planning to implement this filter as an additional feature in a small free and open source project that has a few other image processing algorithms implemented for 2D and 3D images.

Communication

Chat on gitter.

Extending Nobrainer and neuronets org - deep learning MR models

Extending Nobrainer - a deep learning framework for neuroimaging

Project Description

Nobrainer is a TensorFlow 2.0-based framework for creating and distributing neural network models for MR image processing. The goal of this project is to discuss the structure of Nobrainer and to make it easy for people to create and publish reusable models. Some of the recent work has focused on generative models for MR.
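
As a hedged sketch of the kind of model Nobrainer trades in, here is a tiny 3D convolutional network in Keras (TensorFlow 2.x) for voxelwise labelling; the shapes and layers are illustrative and do not reproduce any Nobrainer architecture:

```python
# Minimal 3D CNN for voxelwise labeling (e.g., a brain mask).
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 64, 1))  # one-channel MR block
x = tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu")(x)
outputs = tf.keras.layers.Conv3D(1, 1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```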

Skills required to participate

  • Python
  • Tensorflow 2.0 (Keras layers)
  • MR image processing

Integration

How would your project integrate a neuroimager/clinician/psychologist/computational scientist/maker/artist as collaborator?

We would love for individuals to post issues describing use cases and feature requests, and to contribute code or new models to the project.

Try to define intermediate goals (milestones).

  • Increase the number of trained models provided by Nobrainer
  • Increase the number of baseline architectures available
  • Provide model comparison

Preparation material

In addition to the code repo, these notebooks are intended to help guide individuals:

https://github.com/neuronets/nobrainer#guide-jupyter-notebooks-

Link to your GitHub repo

https://github.com/neuronets

Communication

Issues on the GitHub repo

TrainTrack: Using docker for open & reproducible science - an introduction

Using docker for open & reproducible science - an introduction

Project Description

Docker has become one of the (if not the) most widely used virtualization techniques within the realm of open & reproducible science, as well as automated analyses (e.g., BIDS apps). However, depending on one's individual background and training, applying and utilizing it can range from straightforward to what is this sorcery?. This one-day hands-on workshop therefore aims to provide a solid and comprehensive introduction to Docker, ranging from basic concepts, over managing & using existing Docker images, to building Docker images from scratch that automate their respective tasks.

Skills required to participate

As this is a workshop that aims to introduce participating folks to the Docker ecosystem, the most important things to bring along are interest & curiosity. Nevertheless, a basic understanding of operating systems and computer hardware and its architecture would be helpful. The same goes for basic shell experience.

Integration

Given Docker's flexibility and sheer endless possibilities, a lot of folks with different backgrounds and research interests could benefit.

Link to your GitHub repo

A GitHub repo with all materials can be found here. Please note that the materials will be finalized within the next few weeks based on feedback and suggestions.

Communication

If you have questions about this workshop, please don't hesitate to contact me by opening an issue in the workshop's repo, or join the slack_brainhack_3 channel and drop a message (@PeerHerholz).

TrainTrack: Working open. Schedule Day 1 afternoon session 2.

Please create your schedule! What would you like to see? Who could provide this expertise? Would you like to offer a workshop?
It is important that the sessions be as hands-on as possible :) – the rest is left fully to your creativity! We are looking forward to learning about your interests.

Neurofeedback in Python

Neurofeedback in Python - the need for speed

Background

In EEG neurofeedback, timing is perhaps not always so critical that feedback needs to happen within < 1 millisecond (for that you'd need a real-time operating system), but it's still important enough that the "neurofeedback loop" needs to happen relatively fast, consistently, and in pace with the data acquisition. Lag is unavoidable and acceptable, within limits. Lag can be introduced because you need to do some mathematical operation (filtering especially), or because of input/output 'clogging'. Acceptable lag in EEG NF is on the order of 150-200 milliseconds, but the faster the better.

Programming in C will more likely give you that kind of speed, but making (and compiling) things in C makes it a bit harder to communicate with the Python community, and Python should (in principle) be fast enough for neurofeedback purposes too. There are currently (as far as I know) two main repositories of Python-based neurofeedback software: Pyff/Wyrm, made by Bastian Venthur in 2010 (and since more-or-less abandoned), and nfb (see REF PAPER).

Issues

The issue is lag. In Python, one source of lag is the Global Interpreter Lock (GIL). Basically, it means that the Python interpreter can read & interpret only one Python line at a time, so all other lines have to wait their turn. That means that if something else needs to be done while the neurofeedback loop is running - writing a file, updating your screen, keeping track of parameters, or doing an analysis - the neurofeedback loop has to stop, unless you delegate work to another Python interpreter.

Delegation is done with Python's multiprocessing module. There are other ways to do parallel evaluations, with threading and asynchronous programming, but they also need to obey the GIL. Basically, you start another Python session (and interpreter) that can run on a separate core and do stuff while the main process handles the neurofeedback loop, thereby reducing the lag. The two processes communicate via files (bad), pipes/queues (preferred), or shared memory (probably the fastest, but care is needed).

However... starting a process takes time. And putting stuff into a queue and picking stuff out of a queue ALSO takes time. The bigger the data exchanged, the longer it takes. The project I propose is to do some benchmarking/stress testing and measure how long it takes to start up a process, and how much data you can move around at what speeds, in the context of some neurofeedback and real-time processing that I've been working on. Ideally, such operations would have infinite speed and zero lag. A lag of more than 30-50 milliseconds can already tamper with the consistency of the neurofeedback loop.
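
A minimal benchmark in the spirit of this proposal, assuming only the standard library and numpy; the array size and repetition count are illustrative:

```python
# Measure (1) process start-up time and (2) round-trip latency of pushing
# a numpy array through a multiprocessing Queue and back.
import time
import numpy as np
import multiprocessing as mp

def echo(q_in, q_out):
    while True:
        item = q_in.get()
        if item is None:
            break
        q_out.put(item)

if __name__ == "__main__":
    q_in, q_out = mp.Queue(), mp.Queue()
    t0 = time.perf_counter()
    p = mp.Process(target=echo, args=(q_in, q_out))
    p.start()
    print(f"process start: {time.perf_counter() - t0:.4f} s")

    chunk = np.zeros((32, 1000))  # e.g. 32 channels x 1000 samples
    t0 = time.perf_counter()
    for _ in range(100):
        q_in.put(chunk)
        q_out.get()
    print(f"mean queue round-trip: {(time.perf_counter() - t0) / 100:.6f} s")

    q_in.put(None)
    p.join()
```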

Aims

  • Get an idea of what typical lags are when shuffling data around and delegating work
  • a decorator or a module for timing a typical NF experiment

Knowledge

Multiprocessing, Queues, GIL

Github Repository

https://github.com/jnvandermeer/nf-rtime-preview

TrainTrack: Turning a Python Script to a Sharable and Testable Package

Turning Your Python Script into a Sharable and Testable Package

Project Description

Although many of us code in Python and share our code, it's still a bit niche to take up proper testing of an implementation and package it for easy distribution via pip. That's partly due to a lack of understanding of how testing and packaging work, and of how easy they are once they work. I want to demystify them with a live coding task.

Skills required to participate

Basic Python skills would help, but anyone with reasonable programming skills will benefit from it.

Preparation material

  • Install Python 3
  • Install pytest and hypothesis (a small example of both follows below)
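
For a taste of what the session covers, here is a tiny hedged example combining a plain pytest assertion with a hypothesis property-based test for a toy function (save as test_example.py and run pytest):

```python
# A toy function plus two kinds of tests: explicit examples (pytest)
# and a generated property check (hypothesis).
from hypothesis import given, strategies as st

def clip01(x: float) -> float:
    """Clamp a value into the [0, 1] interval."""
    return min(1.0, max(0.0, x))

def test_clip01_examples():
    assert clip01(-3.0) == 0.0
    assert clip01(0.5) == 0.5

@given(st.floats(allow_nan=False))
def test_clip01_always_in_range(x):
    assert 0.0 <= clip01(x) <= 1.0
```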

Link to your GitHub repo

To be added

Communication

To be done


PS

I am not 100% sure I can make it to OHBM yet!

An Easy Generator of Author Line

An Easy Generator of Author Line

Project Description

Team neuroscience is becoming increasingly common in today's research. Along with this trend, the number of individuals involved in a single project/paper is also increasing, to hundreds and even thousands. Preparing the author block for such papers becomes a challenging job. In particular, if it is done manually, reformatting the author affiliations is painful whenever co-authors ask for updates.
The basic idea of this project is to make an easy author-line generator. With such a tool, researchers could easily generate an author line which integrates authors' first and last names, affiliations, and, when needed, highest degree and email address.
Given the recent big success of team neuroscience studies, we expect that this tool would be very useful for future large-scale studies.
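
A minimal sketch of the core bookkeeping, assuming a simple list-of-tuples input format (illustrative, not the project's planned interface):

```python
# Number unique affiliations in order of first appearance, then emit an
# author line plus an affiliation list.
authors = [
    ("Ada Lovelace", ["University A"]),
    ("Alan Turing", ["University B", "University A"]),
]

affil_index = {}
parts = []
for name, affils in authors:
    nums = []
    for a in affils:
        if a not in affil_index:
            affil_index[a] = len(affil_index) + 1
        nums.append(str(affil_index[a]))
    parts.append(f"{name} ({','.join(nums)})")

print(", ".join(parts))          # Ada Lovelace (1), Alan Turing (2,1)
for a, n in affil_index.items():
    print(f"{n}. {a}")
```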

Skills required to participate

Experience in (or willingness to learn) programming in Python

Integration

You can contribute to this project by helping with

  • Programming
  • Documenting
  • In other ways that I couldn't think of here.

Preparation material

  • Python programming tutorials
  • Proper documenting/commenting
  • Olds, James L. "The rise of team neuroscience." Nature Reviews Neuroscience 17.10 (2016): 601.
  • Thompson, Paul M., et al. "The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data." Brain imaging and behavior 8.2 (2014): 153-182.
  • Open Science Collaboration. "Estimating the reproducibility of psychological science." Science 349.6251 (2015): aac4716.

Link to your GitHub repo

An Easy Generator of Author Line

Communication

Join the chat at https://gitter.im/easyAuthorLine/community

Awesome Script to Export Freesurfer-based Parcellation/Segmentation Stats and Provenance as JSON-LD and NIDM

Awesome Script to Export Freesurfer-based Parcellation/Segmentation Stats and Provenance as JSON-LD and NIDM

Project Description

This project ultimately aims to facilitate both query and analysis of parcellation/segmentation-based regional statistics across popular software packages such as Freesurfer, FSL, and ANTS. Currently each package produces its own output format, and brain region labels are specific to the atlas used in generating the regional statistics. This makes life difficult when trying to search for, say, "nucleus accumbens" volume across the different software products. Further, a structured, query-friendly representation of which version of the software tool was used, and which atlas and atlas version, is lacking. To this end we propose augmenting the various segmentation tools with scripts that will: (1) map atlas-specific anatomical nomenclature to anatomical concepts hosted in terminology resources (e.g. InterLex); (2) capture better-structured provenance about the input image(s) and the atlases used for the segmentation; (3) export the segmentation results and the provenance either as JSON-LD or NIDM, which can then link the derived data to broader records of the original project metadata, or as an additional component of a BIDS derivative.

We aim to tackle this problem in steps. For this hackathon project, we'll focus on converting Freesurfer's mri_segstats program output, along with some additional parsing/conversion of Freesurfer log files.
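
As a hedged sketch of that conversion step, here is a toy parse of one aseg.stats-style row into JSON-LD; the InterLex IRI and context terms below are placeholders, not vetted mappings:

```python
# Toy conversion: one segmentation-stats row -> a JSON-LD record whose
# term IRI would come from a curated label-to-concept mapping.
import json

LABEL_TO_IRI = {
    # Placeholder IRI, not a vetted InterLex mapping.
    "Left-Accumbens-area": "http://uri.interlex.org/base/ilx_EXAMPLE",
}

row = "  26  Left-Accumbens-area  512.3"  # index, label, volume in mm^3
idx, label, volume = row.split()

record = {
    "@context": {
        "label": "http://www.w3.org/2000/01/rdf-schema#label",
        "volume_mm3": "http://example.org/volume_mm3",  # placeholder term
    },
    "@id": LABEL_TO_IRI.get(label, "urn:unmapped:" + label),
    "label": label,
    "volume_mm3": float(volume),
}
print(json.dumps(record, indent=2))
```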

Skills required to participate

Python and structural neuroimaging experience. If you have experience with rdflib or PROV, that would also be helpful. Any neuroanatomists in the audience? It would be helpful to have someone vet our mappings from atlas labels to anatomy terms.

Integration

This project will need expertise in programming, structural neuroimaging, and anatomy. To make this project successful we need individuals who have skills in any of these domains to help with: (1) understanding Freesurfer's segmentation results format and log files; (2) programming a script in Python; (3) understanding anatomy well enough to select the proper anatomical concept that maps to a specific atlas designation of a region, and defining new anatomy terms where needed, linking them to broader concepts to facilitate queries of segmentation results across software packages.

Preparation material

Link to your GitHub repo

segstats_jsonld, with a work-in-progress ReadMe.md
NOTE: this temporary GitHub repo may move under the ReproNim space

Communication

Haven't gotten this far yet, but questions can be posted as issues in the GitHub repo linked above, or via Slack (@dbkeator) / Mattermost (@dbkeator) / Gmail ([email protected])

A modular design matrix toolbox

A Modular Design Matrix Toolbox

Cool acronym to be determined

Project Description

Have you ever created a design matrix and had trouble adding that one custom regressor? Or adding SPM motion parameters to an FSL design? Or wondered how to orthogonalise some specific regressors, but not the others?

Traditional GUIs try to construct design matrices all at once, leaving little room for customisation. Wouldn't it be great to build a GLM design piece by piece, like stacking up Lego bricks? This project is about making such a toolbox.

My proposal is to make a toolbox with which you can connect little pieces of the design to build the full design (a minimal sketch of the idea follows below). I started on this at some point by making a dozen components as MATLAB functions
that can be connected. This is part of my own code toolbox, but I would like the design matrix part to be a separate toolbox, ideally reimplemented in Python.
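
A minimal Python sketch of that Lego-brick idea, with invented names (this is the composition pattern only, not a proposed API):

```python
# Each component returns named columns; a design is just a composition
# of components stacked side by side.
import numpy as np
from dataclasses import dataclass

@dataclass
class Component:
    names: list
    columns: np.ndarray  # shape (n_scans, n_columns)

def boxcar(onsets, duration, n_scans, name):
    col = np.zeros(n_scans)
    for o in onsets:
        col[o:o + duration] = 1.0
    return Component([name], col[:, None])

def motion(params, prefix="mot"):
    names = [f"{prefix}{i}" for i in range(params.shape[1])]
    return Component(names, params)

def combine(*components):
    names = sum((c.names for c in components), [])
    return names, np.hstack([c.columns for c in components])

n_scans = 100
names, X = combine(
    boxcar([10, 50], 5, n_scans, "task"),
    motion(np.random.default_rng(0).normal(size=(n_scans, 6))),
)
print(names, X.shape)  # ['task', 'mot0', ..., 'mot5'] (100, 7)
```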

Goals

Main goals

  • Think about the minimal needs for specifying a design matrix (e.g., time unit, regressor specification, etc.).
  • Implement a structure (e.g., Python class, file type) to put this into code
  • Think of several components that could be implemented (e.g., see these MATLAB functions)

Stretch goals

  • Write compatibility functions to SPM, FSL, etc. to export the design to use it in standard pipelines.
  • Write design matrix modules for all existing SPM/FSL functionality
  • Implement different HRF shapes for the toolbox.
  • Think about first- and second-level designs.

Blue sky goals

Connect it to a visual editor such as GiraffeTools to edit and save the design in an open format.

Skills required to participate

  • Some knowledge of GLMs (ideally at least one expert on the team)
  • Basic Python or interested to learn it. Ideally one Python expert to do the toolbox overhead.
    • As each function can be considered separately (=modular), this is a really good project to hop on if you're a Python beginner.

Integration

The stretch goals and blue sky goals aim to provide incremental compatibility and integration with existing software, and with online visualisation and communication.

Preparation material

Link to your GitHub repo

Non-existent yet.

Communication

Brainhack slack for now. I'll make a dedicated channel or workspace as soon as there's a team.

Additional information

I might be somewhat busy as I'm also organising the OHBM Open Science Room. I can't fully commit to this project just yet, but I'll definitely help out during the Hackathon. I'm completely fine with someone else taking the lead on this.

Neuroscience memes (pre-hackathon)

Neuroscience memes

Project Description

We need more neuro-memes for our software presentations!

Skills required to participate

Very limited

Integration

Featuring the hashtag #NeuroMemes

Contributing

Add below!

Extending denoising strategies in tedana

Extending denoising strategies in tedana

Project Description

tedana is a Python package for denoising multi-echo fMRI data. One project goal is to implement a range of denoising methods (in addition to two ICA-based decision trees under current development), so that users may choose for themselves which to use. At the hackathon, we would like to discuss a decision tree created by @cjl2007 (currently implemented in MATLAB here) and to implement a version of it in Python within tedana.

Skills required to participate

Those with an interest in (and preferably experience with) multi-echo fMRI or decomposition-based denoising (e.g., AROMA) would be able to contribute at a conceptual level. Those with Python coding skills can contribute to the actual implementation of the methods.

Integration

This project will include both a discussion of denoising strategies to apply within tedana and a hacking portion in which we hope to implement one such strategy in Python within tedana. Neuroimagers and computational scientists may be able to contribute to either part of the project.

Preparation material

Here is a walkthrough of tedana’s pipeline.

Link to your GitHub repo

The tedana repository with README and contributing guidelines.

Communication

Gitter chat

readme

"openness is the norm and siloes the exception"

it might be a misspelling, unless you're really referring to siloes :-P

