mind-inria / mri-nufft
Doing non-Cartesian MR Imaging has never been so easy.
Home Page: https://mind-inria.github.io/mri-nufft/
License: BSD 3-Clause "New" or "Revised" License
Currently we are using pyNFFT as the reference backend to check the fidelity of the other supported backends.
This results from a speed/precision compromise, because comparing all backends to the NDFT would be expensive.
However, this causes some trouble: each comparison in the chain NDFT/NFFT/another_backend introduces more imprecision. An alternative would be to use finufft as the reference implementation, as they have first-class support for Python bindings.
Hey there,
It seems that mri-nufft has some typing errors when running on Python 3.8.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/swolf/miniconda3/envs/test/lib/python3.8/site-packages/mrinufft/__init__.py", line 9, in <module>
from .operators import (
File "/home/swolf/miniconda3/envs/test/lib/python3.8/site-packages/mrinufft/operators/__init__.py", line 7, in <module>
from .base import (
File "/home/swolf/miniconda3/envs/test/lib/python3.8/site-packages/mrinufft/operators/base.py", line 184, in <module>
class FourierOperatorBase(ABC):
File "/home/swolf/miniconda3/envs/test/lib/python3.8/site-packages/mrinufft/operators/base.py", line 192, in FourierOperatorBase
interfaces: dict[str, tuple] = {}
TypeError: 'type' object is not subscriptable
This syntax is only officially supported since Python 3.9 (PEP 585), so please consider raising the minimum required Python version to 3.9, or adding the following line to both the operators/base.py and io/nsp.py files:
from __future__ import annotations
It should fix the typing errors.
ISMRMRD is the standard for sharing k-space data acquired with an MRI machine [1].
There exist Python bindings [2], as well as a toolbox for handling the data [3]; its examples are quite instructive on how we could read (and write!) ISMRMRD files.
Ideally, MRI-NUFFT should be able to read the k-space data and trajectories in ISMRMRD files and process them to make them compatible with our standards. It should also be able to write ISMRMRD files, for interoperability with other toolboxes.
There are also vendor conversion tools, for instance for Siemens [4].
From the discussions in #77, #105, #106, #113, and more, it seems that a better display of the trajectories in the MRI-NUFFT documentation is needed (building on top of the amazing work of @Daval-G). As more and more trajectories are added, the question of their discoverability is also at hand.
Technically speaking, Sphinx-Gallery seems to be a good base to work on; an inspiring example is matplotlib's own gallery (https://matplotlib.org/stable/gallery/index.html)! Using sections and tagging may be a good idea as well.
Writing a script for each trajectory may be too repetitive, so maybe some templating system can be used (one solution may be to describe the trajectory in .json/.yaml and generate a .py file from it with jinja). Also, some extra CI configuration could be helpful.
However, it is necessary first to discuss exactly what we want to display for each trajectory.
We need a way to send isign to the plans, which will help with autodiff (#116).
torch-kbnufft is used in the DL community, yet better NUFFT libraries are available (e.g. cufinufft, gpuNUFFT). The idea would be to add torch-kbnufft to the benchmark and show which NUFFT is currently the best.
Note that torch-kbnufft also has some interesting capabilities, notably differentiation and support for Toeplitz embedding (which speeds up data-consistency evaluation).
Since you're using the finufft package, is it possible to wrap their type-3 nufft?
The main idea is to implement the differentiation of the NUFFT forward and adjoint operators with respect to the k-space trajectory locations.
Some earlier implementations:
TorchKBNUFFT: https://github.com/mmuckley/torchkbnufft/blob/main/torchkbnufft/_autograd/interp.py
TFKBNUFFT: https://github.com/zaccharieramzi/tfkbnufft/blob/master/tfkbnufft/kbnufft.py
tensorflow-nufft: https://github.com/mrphys/tensorflow-nufft/blob/7d2f502c6f5f4ba305e7a389ef8794b65a5b4f29/tensorflow_nufft/python/ops/nufft_ops.py#L126-L232
1) and 2) are very slow and inefficient. 3), while much faster, is based on Google_CUDA and a reimplementation of cufinufft.
When implemented in mri-nufft, we expect a much better interface with all the backends, giving a faster, more efficient, and more flexible way to compute these gradients.
Underlying math: https://web.eecs.umich.edu/~fessler/papers/files/jour/23/wang-23-eao.pdf
At ISMRM 2024, people asked if there was any Pulseq integration.
I am not familiar with Pulseq, but let's keep this here for future reference and discussion.
After creating 3D cones trajectories, I have observed that the min value of the trajectory is out of the [-0.5, 0.5] range. Here is the code to reproduce this issue:
import mrinufft
# Trajectory parameters
Nc = int(208*384/10) # Number of shots
Ns = 384 # Number of samples per shot
in_out = False # Choose between in-out or center-out trajectories
tilt = "uniform" # Angular distance between shots
trajectory = mrinufft.initialize_3D_cones(Nc, Ns, in_out=in_out)
trajectory = trajectory.reshape(Nc*Ns,3)
# the min value is slightly outside the [-0.5, 0.5] range:
trajectory.min()
# -0.5000150076299451
With all the recent developments we should deploy a new release, but there is still some work to be done.
The documentation should give more information on the mathematics of the NUFFT, e.g. the gridding/spreading of non-uniform points, the oversampled grid, the deconvolution, etc.
This issue will track a bunch of features needed for learning k-space sampling trajectories.
For now, what we need is:
a self.samples property setter that updates the corresponding plans, so that the trajectory is updated in the raw_op of each backend.
This can be done easily for finufft and cufinufft, as they expose a set_pts function; @alineyyy will handle a PR for it. For the other backends, please raise an error for now; this can be done by raising in the main setter of the base class.
For gpuNUFFT, we need some updates on the gpuNUFFT side. @chaithyagr will handle this (#133).

Presented in:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9546489/
This is not equivalent to a stack of 2D radial trajectories, due to the interleaving of the blades (see figure).
Following the discussion in getkeops/keops#328, adding a proper NDFT backend (with GPU support!) should be doable now.
The documentation is in good shape but can be better.
Switch to a better theme, like https://sphinx-book-theme.readthedocs.io/en/latest/index.html (see the config from benchopt for instance).
Clearly separate the contents following the convention of https://diataxis.fr/:
Another non-cartesian trajectory that has some nice properties
With the common interface over the numerous backend libraries, it would be silly not to implement a nice benchmark.
The benchmark should focus on speed performance under various setups:
The accuracy of the methods is already tested by the backends themselves, and to a lesser extent in our unit tests.
Before any proper release on PyPI, we should provide unit tests of the bindings.
Simply checking each interface against the NUDFT on a small set of points should be enough and fairly easy. However, the parametrization of each interface might be tricky.
A package like pytest-cases might help.
Test autodiff for operators with sensitivity maps (different settings with multi-coil).
We need a baseline to compare the results against. For this, we need the NDFT implemented correctly in the same way; it is slow, but it can then help a lot in benchmarking.
Here, method can also be a bool:
mri-nufft/src/mrinufft/operators/base.py
Lines 323 to 329 in f5a6ba0
and here :
mri-nufft/src/mrinufft/operators/base.py
Lines 334 to 336 in f5a6ba0
PyNUFFT has support for both CPU and GPU:
https://jyhmiinlin.github.io/pynufft/index.html
CPU bindings are already done, but adding another GPU library binding won't hurt.
Currently, in
mri-nufft/src/mrinufft/operators/off_resonnance.py
Lines 19 to 58 in f5a6ba0
we just generate the fourier_op, which takes the B and C values into account.
The helper in mrinufft.operators.off_resonance is called get_interpolators_from_fieldmap, and the corrected operator is built as MRIFourierCorrected(fourier_op, b0_field, mask).
NumPy 2.0 was released on 16/06/2024. Some NUFFT backends did not survive the ABI change.
The following list is to be checked:
The goal of this issue is to list and discuss trajectories from the literature to implement. I gathered a bunch of articles, and I would be curious to know what you think.
Feel free to raise new questions, suggest new papers for discussion, etc., and I will edit this post.
Here is a list of tasks related to trajectories from the literature that are not already implemented in MRI-NUFFT, along with a checklist for stuff that needs further exploration:
Hi, while looking for other stuff, we found another NUFFT backend with @chaithyagr:
https://github.com/mritools/mrrt.nufft
It's an interesting one: fully written with a Python stack (Cython / Numba / CuPy), and basically a port of Fessler's NUFFT from MATLAB. It has:
The density compensation vector can be estimated using several methods:
A good list of methods is provided by Jeff Fessler: https://web.eecs.umich.edu/~fessler/book/c-four.pdf , section 6.4.2
example_stacked.py and example_density.py can't pass the test-examples step because of some network connection errors.
Some alternatives should be proposed to solve these issues.
mri-nufft/src/mrinufft/operators/interfaces/tfnufft.py
Lines 48 to 51 in 8e15f05
Here, estimate_density just estimates the density, not the density compensators. What we need is the reciprocal of these values.
All we need to do is change this line to:
tf.math.reciprocal_no_nan(tfmri.estimate_density(
    samples, shape, method="pipe", max_iter=15
))
Also, some tests on this would be helpful.
Currently the pipe density needs normalization. The best approach, I feel, is to take some template data, compute FH(D * F(x)), and normalize based on the difference in means.
This can help keep the Lipschitz constants consistent, and also gives a good, consistent candidate for the lambda/mu normalization values.
Recently, with #90, we gained an Smaps estimation module. However, it was not tested; it would be helpful to have tests for it.
Wild NUFFT appears! Catch them all!
https://github.com/spinicist/riesling
It is CPU-based, with a CLI similar to BART (but aims for better 3D support).
Hi all,
I have just started trying out MRI-NUFFT after ISMRM, previously having used just BART.
As far as I understand, BART expects non-uniform trajectories to be normalized to 1/FOV. In most cases for me this ends up such that distance between k-space points is just 1 (dk=1), which seems to work.
However, MRI-NUFFT by default normalizes trajectories to [-0.5,0.5]
and then scales this based on the requested image size in traj2cfl
(traj_ = traj * (np.array(shape) - 1)
). This couples the image size to the reconstructed FOV and generally complicates things. Also, for my BART trajectory, the wrapping step in proper_trajectory (new_traj = (new_traj + 0.5) % 1 - 0.5) essentially corrupts the trajectory, as the later scaling does not undo the wrapping.
Maybe I am missing something; if not, an easy fix would be to remove the call to proper_trajectory(samples, normalize="unit") from the BART operator, as the normalization does not seem to fit the BART formalism.
Curious to see what others think.
Along with the ISMRMRD format (#64) we could also refactor/expose the routine used in the BART interface.
We could have possibly the following examples:
There are a bunch of scripts to better view k-space sampling trajectories, particularly, but not limited to, 3D:
Ideally, we need a script to handle this, and it could become a good GUI for most of what we need (with or without the help of napari).
@paquiteau Opinions?
Density compensation weights act as a preconditioner for the adjoint operator and lead to better image quality (and thus faster convergence in iterative schemes).
Currently we support Voronoi estimation and Pipe's iterative scheme. Yet these estimations are simple heuristics and do not optimize any criterion, so there is no way of knowing whether one is "best" in some sense. Also, the current Voronoi implementation is not applicable in 3D (too much memory and computation required).
Jeff Fessler's work (https://web.eecs.umich.edu/~fessler/book/c-four.pdf) provides a good overview of the available methods (see notably section 6.4.2).
Implementing these methods would be a great asset to mri-nufft, as they can be made agnostic to the NUFFT backend. Most of them are iterative, so a benchmark using benchopt could also be done.
Currently we also do an FFT here, which is not ideal.
GPU CI is broken (due to pynfft).
Interface observed: gpuNUFFT, though it could potentially impact others as well, since it seems to be a cupy issue.
I recently saw nearly double to triple the GPU memory usage compared to what was needed. On closer inspection, I think the problem is when we convert a cupy array to complex64: this creates new, unwanted memory. We need a better converter, which should either free the earlier memory or, ideally, not create a copy at all when we are given the right data sizes. I feel we need a separate to-complex64 utility function to handle this.
Some acquisition paradigms use stacks of non-Cartesian trajectories (e.g. stack of spirals, stack of radials, etc.). Even if we know that fully 3D non-Cartesian trajectories are better, having support for such trajectories for comparison would be nice. In particular, the "stacking" can be leveraged for faster reconstruction: by performing a sequence of (potentially batched) 2D NUFFTs followed by a 1D FFT (or a DFT if only a few slices are acquired).
Wild NUFFT backend appears! Gotta Catch them all
https://gitlab.mpcdf.mpg.de/mtr/ducc
Similar to finufft, but aims to be faster and more accurate (a benchmark is needed).
For now, imports seem really slow; make them faster for quicker debug cycles.
The MRI-NUFFT documentation is great, but it lacks a document describing the common interface of the operators (basically the bare-minimum API that makes an MRI-NUFFT operator), as well as the expected data formats.
This is also the case for trajectory generation (where we always work in a (Nc, Ns, d) layout).
This would help the community to include new backends/trajectories in MRI-NUFFT, and the other way around, helping the integration of MRI-NUFFT in existing and future tools.
Hi,
I have the following error when using "import snkf" on a Windows computer:
import snkf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\snkf\__init__.py", line 6, in <module>
    from .handlers import (
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\snkf\handlers\__init__.py", line 25, in <module>
    importlib.import_module("." + name, name)
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\snkf\handlers\acquisition\__init__.py", line 3, in <module>
    from .base import (
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\snkf\handlers\acquisition\base.py", line 28, in <module>
    from .workers import acq_cartesian, acq_noncartesian
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\snkf\handlers\acquisition\workers.py", line 11, in <module>
    from fmri.operators.fourier import FFT_Sense
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\fmri\operators\fourier.py", line 12, in <module>
    from mrinufft import get_operator
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\mrinufft\__init__.py", line 9, in <module>
    from .operators import (
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\mrinufft\operators\__init__.py", line 21, in <module>
    importlib.import_module(".interfaces." + name, name)
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\site-packages\mrinufft\operators\interfaces\bart.py", line 20, in <module>
    BART_AVAILABLE = not subp.call(
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\subprocess.py", line 389, in call
    with Popen(*popenargs, **kwargs) as p:
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Users\benja\anaconda3\envs\fRMI\Lib\subprocess.py", line 1538, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] Le fichier spécifié est introuvable (the system cannot find the file specified)
Adding PROPELLER trajectories would be a nice addition to the current collection. Also, this might require some additional reconstruction methods to benefit from the motion correction.
References:
After cloning & installing the current repository, I get the following error when trying to import the package as a whole:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/guillaume/miniconda3/envs/general/lib/python3.10/site-packages/mrinufft/__init__.py", line 9, in <module>
    from .operators import (
  File "/home/guillaume/miniconda3/envs/general/lib/python3.10/site-packages/mrinufft/operators/__init__.py", line 2, in <module>
    from .interfaces import (
  File "/home/guillaume/miniconda3/envs/general/lib/python3.10/site-packages/mrinufft/operators/interfaces/__init__.py", line 4, in <module>
    from .cufinufft import MRICufiNUFFT, CUFINUFFT_AVAILABLE
  File "/home/guillaume/miniconda3/envs/general/lib/python3.10/site-packages/mrinufft/operators/interfaces/cufinufft.py", line 8, in <module>
    from .utils.gpu_utils import (
  File "/home/guillaume/miniconda3/envs/general/lib/python3.10/site-packages/mrinufft/operators/interfaces/utils/__init__.py", line 7, in <module>
    from .gpu_utils import (
  File "/home/guillaume/miniconda3/envs/general/lib/python3.10/site-packages/mrinufft/operators/interfaces/utils/gpu_utils.py", line 53, in <module>
    with open(str(Path(__file__).parent / "css_colors.txt")) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/guillaume/miniconda3/envs/general/lib/python3.10/site-packages/mrinufft/operators/interfaces/utils/css_colors.txt'
I guess the css_colors.txt file was simply not pushed.
We will soon support autodiff with respect to the trajectory.
However, we should perhaps also handle the following:
We need users to be able to use autodiff with respect to the trajectory more directly, as:
trajectory = torch.Tensor(...)
trajectory.requires_grad = True
get_operator('cufinufft')(trajectory)
Note that if trajectory.requires_grad is True, we can enable grad_wrt_traj internally. We don't want users to write it twice.