
SDNet

The code in this repository implements SDNet, a model-driven FOD reconstruction network; for further details, see the accompanying paper at https://arxiv.org/pdf/2307.15273.pdf. The code in this repo is currently being updated to improve usability.

Abstract

Fibre orientation distribution (FOD) reconstruction using deep learning has the potential to produce accurate FODs from a reduced number of diffusion-weighted images (DWIs), decreasing total imaging time. Diffusion acquisition invariant representations of the DWI signals are typically used as input to these methods to ensure that they can be applied flexibly to data with different b-vectors and b-values; however, this means the network cannot condition its output directly on the DWI signal. In this work, we propose a spherical deconvolution network, a model-driven deep learning FOD reconstruction architecture, that ensures intermediate and output FODs produced by the network are consistent with the input DWI signals. Furthermore, we implement a fixel classification penalty within our loss function, encouraging the network to produce FODs that can subsequently be segmented into the correct number of fixels, improving downstream fixel-based analysis. Our results show that the model-based deep learning architecture achieves competitive performance compared to a state-of-the-art FOD super-resolution network, FOD-Net. Moreover, we show that the fixel classification penalty can be tuned to offer improved performance with respect to metrics that rely on accurately segmented FODs.


Figure 1: SDNet architecture

User Guide

File Structure

The directory the code comes in is named SDNet_dir and has the following structure:

SDNet_dir
├── data.py
├── util.py
├── inference checkpoints
│   └── experiment_name
│       ├── inference
│       ├── runs
│       └── model_saves
├── models
│   └── csdnet
└── model_app
    ├── inference.py
    └── train.py

The network is designed to be trained on the WU-Minn Human Connectome Project (HCP) dataset; as a consequence, the data directory is designed to mirror the HCP data as downloaded. The main data directory, named hcp in this case, contains subjects labelled as per the HCP. The file path of this directory should be specified in options.py under the attribute data_dir. To specify which subjects in data_dir are used for training, validation, and testing, adjust the train_subject_list, val_subject_list, and test_subject_list attributes (a sketch follows the tree below).

.
└── hcp
    └── subject
        └── T1w
            ├── T1w_acpc_dc_restore_1.25.nii.gz
            ├── 5ttgen.nii.gz
            ├── white_matter_mask.nii.gz
            └── Diffusion
                ├── bvals
                ├── bvecs
                ├── data.nii.gz
                ├── nodif_brain_mask.nii.gz
                ├── csf_response.txt
                ├── gm_response.txt
                ├── wm_response.txt
                ├── csf.nii.gz
                ├── gm.nii.gz
                ├── wmfod.nii.gz
                ├── gt_fod.nii.gz
                ├── fixel_directory
                │   ├── afd.nii.gz
                │   ├── peak_amp.nii.gz
                │   ├── index.nii.gz
                │   ├── directions.nii.gz
                │   └── fixnet_targets
                │       └── gt_threshold_fixels.nii.gz
                ├── undersampled_fod
                │   ├── bvals
                │   ├── bvecs
                │   ├── data.nii.gz
                │   ├── normalised_data.nii.gz
                │   ├── csf_response.txt
                │   ├── gm_response.txt
                │   ├── wm_response.txt
                │   ├── csf.nii.gz
                │   ├── gm.nii.gz
                │   └── wm.nii.gz
                └── tractseg *
                    ├── peaks.nii.gz *
                    └── bundle_segmentations *
                        └── ** bundle_segmentation_masks **
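
As a minimal sketch of how these attributes might be set in options.py (only data_dir and the three subject list attributes are named in this README; the paths and subject IDs below are illustrative assumptions, not values shipped with the repository):

```python
# Illustrative sketch only: data_dir, train_subject_list, val_subject_list
# and test_subject_list come from this README; paths and IDs are placeholders.
data_dir = "/path/to/hcp"  # root of the data tree shown above

train_subject_list = ["100206", "100307"]  # hypothetical HCP subject IDs
val_subject_list = ["100408"]
test_subject_list = ["130821", "581450"]   # IDs that appear later in this README
```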

Network Training

Prior to network training, ensure the file paths have been set up as above and that options.py contains the desired network configuration. To train the network, run train.py.

Network Inference

Prior to using the network for inference, ensure the file paths have been set up as above and that options.py contains the desired network configuration. To perform inference, update the test_subject_list attribute in options.py and run the eval.py script.

Network Configuration

Network configuration is performed by changing the attributes in options.py; documentation is included in the form of comments in this script.


Issues

Results should be stored correctly every time inference/evaluation is run

Each time inference/evaluation is run, the FODs are inferred from the undersampled input data. These output FODs act as qualitative results from the network. To obtain quantitative results, the FODs must be compared to the ground truth FODs and different scalar performance metrics calculated. For each metric, a value is calculated for each individual voxel, and the values are then averaged over regions of the brain. The regions of the brain they should be averaged over are:

  • White Matter
  • Corpus Callosum (CC)
  • Middle Cerebellar Peduncle (MCP)
  • Corticospinal Tract (CST)
  • ROI 1 - CC containing a single fibre
  • ROI 2 - Intersection of the MCP and CST containing 2 fixels
  • ROI 3 - Intersection of the CC, CST and Superior Longitudinal Fascicle (SLF) containing 3 fixels

The quantitative metrics we want to measure in these regions are (per-voxel sketches for SSE and ACC follow the list):

  • Sum of squared errors (SSE)
  • Angular correlation coefficient (ACC)
  • Apparent Fibre Density Error (AFDE)
  • Peak Amplitude Error (PAE)
  • Absolute fixel error
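
As a rough illustration, SSE and ACC can be computed voxel-wise from the spherical harmonic (SH) coefficients. This is a hedged sketch, not the repository's implementation, and it assumes ACC follows the usual convention of excluding the l = 0 term; verify against the paper.

```python
# Hedged sketch: per-voxel SSE and ACC between predicted and ground-truth FODs,
# both stored as arrays whose last axis holds the SH coefficients.
import numpy as np

def sse(pred_sh, gt_sh):
    """Sum of squared errors over SH coefficients, per voxel."""
    return np.sum((pred_sh - gt_sh) ** 2, axis=-1)

def acc(pred_sh, gt_sh, eps=1e-12):
    """Angular correlation coefficient, per voxel, excluding the l=0 term."""
    u, v = pred_sh[..., 1:], gt_sh[..., 1:]
    num = np.sum(u * v, axis=-1)
    den = np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1)
    return num / (den + eps)
```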

The data should be stored in CSV files to facilitate easy manipulation and interpretation.

Inference Folder

inference
├── 130821
├── ...
├── 581450
└── Quantitative Results

Quantitative Results

quantitative_results
├── AFDE.csv
├── PAE.csv
├── ACC.csv
├── Fixel_accuracy.csv
└── SSE.csv

Each individual quantitative results CSV file should be organised as follows: it should contain all of the results for the relevant metric, for all subjects, over all regions.

| Subject | Region 1 | ... | Region k |
| ------- | -------- | --- | -------- |
| 130821  |          |     |          |
| ...     |          |     |          |
| 581450  |          |     |          |
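
A minimal sketch of assembling these files, assuming the per-region means have already been computed (pandas, the region names, and the compute_region_mean helper are assumptions for illustration):

```python
# Hedged sketch: one CSV per metric, subjects as rows and regions as columns.
from pathlib import Path
import pandas as pd

subjects = ["130821", "581450"]  # illustrative subject IDs
regions = ["white_matter", "cc", "mcp", "cst", "roi_1", "roi_2", "roi_3"]

def compute_region_mean(subject, metric, region):
    """Placeholder: average the metric's voxel map over the region mask."""
    return 0.0

out_dir = Path("quantitative_results")
out_dir.mkdir(exist_ok=True)
for metric in ["SSE", "ACC", "AFDE", "PAE", "Fixel_accuracy"]:
    rows = [[compute_region_mean(s, metric, r) for r in regions] for s in subjects]
    table = pd.DataFrame(rows, index=pd.Index(subjects, name="Subject"),
                         columns=regions)
    table.to_csv(out_dir / f"{metric}.csv")
```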

Subject Folders

130821
├── fixel_dir
├── AFD_im.nii.gz
├── PA_im.nii.gz
├── ACC.nii.gz
├── AFDE.nii.gz
├── PAE.nii.gz
├── SSE.nii.gz
├── inf_fod.nii.gz
└── inf_wm_fod.nii.gz

Code/Data Considerations

  • Everything should be done in Python, to make keeping track of variable names etc. easier and to improve readability.
  • All data should be stored as NIfTI files so it can be easily handled in Python.

Add inference options to the options.py script.

Currently the options.py script is only applicable to training; it should be adjusted to also include options related to inference. However, it may be easier to do this directly in the script, as comparatively few options may need to be set at the inference stage.

Refine the TensorBoard plots and training-state storage, specifically for continuing training.

Firstly, we want an individual experiment's TensorBoard plots to be in the experiment folder itself rather than in a global folder. Additionally, we want TensorBoard to continue plotting/adding scalars to the previous plot when we continue training, rather than starting again. We would also like to store some information about the current state of the model in a .yml file (or an alternative), such as how many epochs it has been through, when the best score was achieved, and what the best scores are.
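
A hedged sketch of one way to achieve this with torch.utils.tensorboard and a .yml state file (the directory layout and the state keys are assumptions):

```python
# Hedged sketch: log into the experiment folder and resume the global step,
# so curves continue rather than restart when training is continued.
import os
import yaml
from torch.utils.tensorboard import SummaryWriter

experiment_dir = "inference checkpoints/experiment_name"  # per the tree above
state_path = os.path.join(experiment_dir, "training_state.yml")

# Load the previous training state if we are continuing training.
if os.path.exists(state_path):
    with open(state_path) as f:
        state = yaml.safe_load(f)
else:
    state = {"epochs_completed": 0, "best_epoch": 0, "best_score": None}

# Reusing the same log_dir appends to the existing event files.
writer = SummaryWriter(log_dir=os.path.join(experiment_dir, "runs"))

def train_one_epoch():
    return 0.0  # stand-in for the real training step

start = state["epochs_completed"]
for epoch in range(start, start + 10):
    writer.add_scalar("loss/train", train_one_epoch(), global_step=epoch)
    state["epochs_completed"] = epoch + 1

with open(state_path, "w") as f:
    yaml.safe_dump(state, f)
writer.close()
```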

Write a Python script to do all of the pre-processing on the dataset once it has been downloaded from AWS.

Once the data has been downloaded from the HCP, there is a series of pre-processing steps which may need to be run on it before a model can be trained. They consist of:

  • calculating the ground truth FOD
  • undersampling
  • calculating the undersampled FOD
  • calculating the 5ttgen mask from the T1w image
  • normalising the images

This list may not be exhaustive, but a script should be written to cover it. Additionally, the script should be able to write to a different folder (specified in the arguments) depending on the experiment, as well as undersample at a different rate depending on the argument. This should allow more flexibility for testing different undersampling rates. A sketch of such a driver follows.
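
A hedged sketch of the proposed driver (the MRtrix3 commands dwi2response and dwi2fod are real tools; the argument names, file layout, and the omitted steps are assumptions):

```python
# Hedged sketch of the proposed pre-processing driver; only the first two
# steps are shown, and the CLI argument names are illustrative.
import argparse
import subprocess
from pathlib import Path

def run(cmd):
    """Run an external command, raising if it fails."""
    subprocess.run(cmd, check=True)

def preprocess(subject_dir: Path, out_dir: Path, rate: int):
    diff = subject_dir / "T1w" / "Diffusion"
    out_dir.mkdir(parents=True, exist_ok=True)
    # Response functions for the ground-truth (fully sampled) FOD.
    run(["dwi2response", "dhollander", str(diff / "data.nii.gz"),
         str(out_dir / "wm_response.txt"), str(out_dir / "gm_response.txt"),
         str(out_dir / "csf_response.txt"),
         "-fslgrad", str(diff / "bvecs"), str(diff / "bvals")])
    # Ground-truth FOD via multi-shell multi-tissue CSD.
    run(["dwi2fod", "msmt_csd", str(diff / "data.nii.gz"),
         str(out_dir / "wm_response.txt"), str(out_dir / "gt_fod.nii.gz"),
         str(out_dir / "csf_response.txt"), str(out_dir / "csf.nii.gz"),
         "-fslgrad", str(diff / "bvecs"), str(diff / "bvals")])
    # Undersampling (at `rate`), the undersampled FOD, the 5ttgen mask and
    # normalisation would follow here.

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("subject_dir", type=Path)
    parser.add_argument("--out_dir", type=Path, required=True)
    parser.add_argument("--undersample_rate", type=int, default=4)
    args = parser.parse_args()
    preprocess(args.subject_dir, args.out_dir, args.undersample_rate)
```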

Another useful extension would be to write a script to convert the files in the dataset for CSDNet into the correct structure to be used by FODNet.

Adding FOD 'signal' loss option

Using an input option to determine the loss used, let the user choose between calculating the L2 loss between the spherical harmonic coefficients, or between the FODs evaluated at the 300 directions used to apply the non-negativity constraint.
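
A hedged sketch of what this option might look like (the sh_to_amp matrix mapping SH coefficients to amplitudes at the 300 constraint directions is assumed to be precomputed; all names are illustrative):

```python
# Hedged sketch: choose between an L2 loss on SH coefficients and an L2 loss
# on FOD amplitudes at the 300 non-negativity constraint directions.
import torch

def fod_loss(pred_sh, target_sh, sh_to_amp=None, mode="sh"):
    if mode == "sh":
        # L2 loss directly between spherical harmonic coefficients.
        return torch.mean((pred_sh - target_sh) ** 2)
    if mode == "signal":
        # Map coefficients to amplitudes: (..., n_coeff) @ (n_coeff, 300).
        pred_amp = pred_sh @ sh_to_amp.T
        target_amp = target_sh @ sh_to_amp.T
        return torch.mean((pred_amp - target_amp) ** 2)
    raise ValueError(f"unknown loss mode: {mode}")
```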

Adjust the FOD/CSD net combination to take the full undersampled FOD as input.

Currently the network only sees the lower-order spherical harmonic coefficients which have been fit to the data. However, this means some of the higher-frequency information is lost from the input, potentially holding back the performance of the network; this needs adjusting so the whole undersampled FOD can be taken as input. Note that this is only an issue because there is only one cascade: if there were multiple cascades, the network predicting the final deep regularisation would see all of the data anyway, so this only needs applying to models which have only a final data consistency term.

Extract performance measures from MRStats output once ACC or FBA has been performed.

Each time ACC and FBA are performed, a measure is obtained for each of the voxels; the average is then calculated over all the voxels in the brain or over some mask. However, these values are output to the shell and cannot be processed there. A way is needed to store these values so they can be processed further, e.g. averaged over the subjects in a folder. One possible approach is sketched below.
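
One option is to capture the mrstats output with subprocess and write it to a CSV (mrstats and its -mask/-output options are real MRtrix3 features; the file names and subject IDs here are illustrative):

```python
# Hedged sketch: read a per-mask mean from MRtrix3's mrstats instead of
# leaving it on the shell, then collect the values into a CSV.
import csv
import subprocess

def masked_mean(image, mask):
    """Return the mean of `image` within `mask` as a float."""
    out = subprocess.run(
        ["mrstats", image, "-mask", mask, "-output", "mean"],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

subjects = ["130821", "581450"]  # illustrative subject IDs from this README
with open("ACC_means.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["subject", "wm_mean_acc"])
    for s in subjects:
        writer.writerow([s, masked_mean(f"{s}/ACC.nii.gz",
                                        f"{s}/white_matter_mask.nii.gz")])
```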

Complete the argument parser.

Complete the utils/options.py script, which loads the arguments for the model. It does so using the Network_Options class, which first initialises a set of default options in its init() method, then loads the arguments passed into the script using argparse. There is an additional option to load a .yml config file, which will override the options parsed using argparse.
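
A hedged sketch of how this class might be completed, based only on the description above (attribute names beyond those already mentioned in this README are assumptions):

```python
# Hedged sketch of Network_Options: defaults in __init__, overridden by
# argparse, optionally overridden again by a .yml config file.
import argparse
import yaml

class Network_Options:
    def __init__(self):
        # 1. Default options (names other than data_dir and the subject
        #    lists are illustrative).
        self.data_dir = "hcp"
        self.train_subject_list = []
        self.config = None  # optional path to a .yml config file

    def parse(self):
        # 2. Command-line arguments override the defaults.
        parser = argparse.ArgumentParser()
        for name, default in list(vars(self).items()):
            parser.add_argument(f"--{name}", default=default)
        vars(self).update(vars(parser.parse_args()))
        # 3. A .yml config file, if given, overrides the parsed arguments.
        if self.config:
            with open(self.config) as f:
                vars(self).update(yaml.safe_load(f))
        return self

# Usage: options = Network_Options().parse()
```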
