A deep-learning approach to fat/water reconstruction of multi-echo gradient-echo MR images, using regional regularization.
The inputs to the model are patches derived from the multi-echo images.
This software requires Python, and a CUDA-enabled GPU (for patch extraction).
This file contains the configuration of the model (the extent of the patches and the spacing between the voxels) and helper functions for data preparation. Modify this file to try out different parameters. Note that the patch extends from -PATCHEXTENT to +PATCHEXTENT, so the actual patch size is (2*PATCHEXTENT+1)^2.
GPU-based patch generation.
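A minimal CPU sketch of the patch geometry described above, using NumPy (the actual generator runs on the GPU; `PATCHEXTENT` comes from the configuration file, while `extract_patch` and the example value are hypothetical):

```python
import numpy as np

PATCHEXTENT = 2  # example value; the real value lives in the configuration file

def extract_patch(image, row, col, extent=PATCHEXTENT):
    """Extract a 2D patch spanning -extent..+extent around (row, col),
    so the patch is (2*extent+1) x (2*extent+1) voxels."""
    return image[row - extent:row + extent + 1,
                 col - extent:col + extent + 1]

image = np.arange(100, dtype=np.float32).reshape(10, 10)
patch = extract_patch(image, 5, 5)
assert patch.shape == (2 * PATCHEXTENT + 1, 2 * PATCHEXTENT + 1)
```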
Executable file for the generation of the training data. The input data must be in the format:
- images.npy : 4D complex array of the multi-echo dataset with dimensions rows-cols-slc-eco
- WATER.npy : 3D real array representing the ground truth (water component)
- FAT.npy : as above for the fat component
The program is invoked as
`./deepfat_preparedata.py <path_to_directory> [1 if append]`
where passing 1 as the second argument appends the new data to an existing training set.
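A sketch of how the expected input arrays could be assembled and saved with NumPy; the shapes below (64x64 matrix, 8 slices, 6 echoes) are arbitrary example values, not values the program requires:

```python
import numpy as np

rows, cols, slices, echoes = 64, 64, 8, 6

# 4D complex multi-echo dataset (rows-cols-slc-eco)
images = np.zeros((rows, cols, slices, echoes), dtype=np.complex64)

# 3D real ground-truth maps for the water and fat components
water = np.zeros((rows, cols, slices), dtype=np.float32)
fat = np.zeros((rows, cols, slices), dtype=np.float32)

np.save('images.npy', images)
np.save('WATER.npy', water)
np.save('FAT.npy', fat)
```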
Trains the model on the data generated by deepfat_preparedata.py. Modify this file to specify the model. If called with 1 as a parameter, it trains a new model from scratch; otherwise it attempts to continue training an existing model.
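The restart logic could look like the following sketch; `should_train_from_scratch` and the checkpoint name `deepfat_model.h5` are hypothetical, since the actual model and file names are defined in this script:

```python
import os
import sys

MODEL_PATH = 'deepfat_model.h5'  # hypothetical checkpoint file name

def should_train_from_scratch(argv, model_path=MODEL_PATH):
    """Train a new model if '1' was passed on the command line, or if
    no previously saved model exists; otherwise resume training."""
    if len(argv) > 1 and argv[1] == '1':
        return True
    return not os.path.exists(model_path)
```

In the training script, this flag would then decide between building a fresh model and calling the framework's model-loading routine on the saved checkpoint.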
Runs the model on a new dataset.
Loads the DICOM data into an npy array for the deepfat_preparedata.py program above. You may need to modify this, as it is currently tailored to a custom MR sequence.
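The key step when preparing data for deepfat_preparedata.py is assembling the 4D complex array; a minimal NumPy sketch of that step is shown below (the magnitude/phase decomposition and the `phase_scale` constant are assumptions, since the actual conversion is sequence-specific):

```python
import numpy as np

def to_complex(magnitude, phase, phase_scale=np.pi / 4096.0):
    """Combine magnitude and integer-scaled phase volumes into a complex
    array; phase_scale maps stored phase values to radians and is
    scanner-dependent."""
    return (magnitude * np.exp(1j * phase * phase_scale)).astype(np.complex64)

# Example: rows-cols-slc-eco volumes as expected by deepfat_preparedata.py
mag = np.ones((4, 4, 2, 3), dtype=np.float32)
phs = np.zeros((4, 4, 2, 3), dtype=np.float32)
images = to_complex(mag, phs)
```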
This project was supported by the SNF (grant number 320030_172876)