enet4 / nifti-rs

Rust implementation of the NIfTI-1 format

License: Apache License 2.0

rust nifti-format neuroimaging medical-imaging nifti hacktoberfest

nifti-rs's Introduction

NIFTI-rs

This library is a pure Rust implementation for reading files in the NIfTI format (more specifically NIfTI-1.1).

Example

Please see the documentation for more.

use nifti::{NiftiObject, ReaderOptions, NiftiVolume};

let obj = ReaderOptions::new().read_file("myvolume.nii.gz")?;
// use obj
let header = obj.header();
let volume = obj.volume();
let dims = volume.dim();

The library will automatically look for the respective volume when specifying just the header file:

use nifti::{NiftiObject, ReaderOptions};

let obj = ReaderOptions::new().read_file("myvolume.hdr.gz")?;

With the ndarray_volumes feature (enabled by default), you can also convert a volume to an ndarray::Array and work from there:

let volume = obj.into_volume().into_ndarray::<f32>();

In addition, the nalgebra_affine feature unlocks the affine module, for useful affine transformations.

Roadmap

This library should hopefully fulfil a good number of use cases. However, not all features of the format are fully available. There are no deadlines for these features, so your help is much appreciated. Please visit the issue tracker and the tracker for version 1.0. If something is missing for your use case, please find an equivalent issue or file a new one. Pull requests are also welcome.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

nifti-rs's People

Contributors

enet4 · nilgoyette · twitzelbos


nifti-rs's Issues

Remove "Reading volume of ..."

I understand that this println can be useful for debugging purposes but we don't want it when we pipe the output of our programs :) Would you mind if I remove it? (Or you can do it, as you wish)

Windows support

I've been using your library on Windows and Debian and everything works well except one test. minimal_by_hdr_and_img_gz() uses resources/minimal2.hdr, which is a link to minimal.hdr. Of course, Windows doesn't support symbolic links, so it tries to read the link as if it were the actual header.

I hesitate to change it myself because I don't know why you chose a link in the first place. It seems to me that this test would work just as well using resources/minimal.hdr, but you already have this test, so you must be testing something else... Anyway, I can fix it if you can't test on Windows.

Rethink some of the header's default attribute values

This was discussed in #40, but I'm filing an issue so that this concern is not forgotten.

  • pixdim should be [1.0; 8] by default.
  • srow_x and the other srow_* fields should have an "identity" default.
  • sform_code should be 1 by default IF that's really the normal way to encode the transformation. Maybe qform_code is the way, I don't know.

I believe we're still undecided on what should be the defaults for sform_code and qform_code.
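For illustration, the proposed defaults could be sketched as follows. This is a hypothetical struct mirroring the relevant NIfTI-1 header fields, not the actual nifti-rs NiftiHeader:

```rust
// Hypothetical sketch of the proposed defaults; the field names follow
// the NIfTI-1 header, but this is not the nifti-rs NiftiHeader itself.
#[derive(Debug, PartialEq)]
struct HeaderDefaults {
    pixdim: [f32; 8],
    srow_x: [f32; 4],
    srow_y: [f32; 4],
    srow_z: [f32; 4],
    sform_code: i16,
}

impl Default for HeaderDefaults {
    fn default() -> Self {
        HeaderDefaults {
            // every grid spacing defaults to 1.0 instead of 0.0
            pixdim: [1.0; 8],
            // the three srow_* rows form an identity affine
            srow_x: [1.0, 0.0, 0.0, 0.0],
            srow_y: [0.0, 1.0, 0.0, 0.0],
            srow_z: [0.0, 0.0, 1.0, 0.0],
            // whether this should be 1, or whether qform is preferred,
            // is exactly the open question of this issue
            sform_code: 1,
        }
    }
}

fn main() {
    println!("{:?}", HeaderDefaults::default());
}
```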

Inverse Linear Transform on writing

The current implementation of slope/inter is too simple and produces wrong results if T is an integer type and slope/inter are not exactly equal to an integer.

let slope = T::from_f32(slope).unwrap();
let inter = T::from_f32(header.scl_inter).unwrap();
for arr_data in data.axis_iter(Axis(0)) {
    write_slice(writer, arr_data.sub(inter).div(slope))?;
}

For example, a slope of 0.5 on a u16 image would produce a black image (filled with 0).

We should use a concept similar to data::element. See this comment from @Enet4 for more information.
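A minimal sketch of the proposed direction: apply the inverse linear transform in floating point and only round and cast at the end, so a fractional slope does not collapse integer data to zero. The function name `inverse_transform_u16` is hypothetical, not the nifti-rs API:

```rust
// Sketch of the fix: do the inverse scaling in f32, then round and
// clamp into the integer output type. A slope of 0.5 on u16 data then
// produces 2*value instead of 0.
fn inverse_transform_u16(value: f32, slope: f32, inter: f32) -> u16 {
    let raw = (value - inter) / slope;
    raw.round().max(0.0).min(u16::MAX as f32) as u16
}

fn main() {
    // with slope 0.5 and inter 0.0, a stored value of 3.5 maps back to 7
    println!("{}", inverse_transform_u16(3.5, 0.5, 0.0));
}
```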

Resampling of the Image Volume

Some helper functions might be beneficial to resample the image volume for common use cases. E.g., nibabel implements resample_to_output (resample to a given resolution) and resample_from_to (resample into the reference space of another image). I am currently looking into implementing these functions in Rust. Is this functionality that you would integrate into nifti-rs?

C or F

I've been working with this library for several months now and I have written my own NIfTI writer. It has been tested extensively, but it's quite slow! Slower than NiBabel, in fact. One of the reasons my writer is slow is that the elements are contiguous BUT not in logical order. I searched and finally found what I was looking for in convert_bytes_and_cast_to

Ok(Array::from_shape_vec(IxDyn(&dim).f(), data)
//                                  ^^^^ fortran!

I was wondering why I needed a transpose in my writer but never took the time to investigate. So, after all this text... Do you remember why you wanted the memory to be in Fortran mode? It's kind of surprising to me, and probably to many users of ndarray.
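For what it's worth, NIfTI voxel data on disk is stored with the first axis varying fastest, which matches Fortran (column-major) rather than C (row-major) order, so reading into an `.f()` array avoids a transpose at load time. The difference can be shown with plain index arithmetic (helper names are hypothetical):

```rust
// Row-major (C) vs column-major (Fortran) linear index for a 3-D shape.
// NIfTI files lay out voxels with the first axis varying fastest, i.e.
// the Fortran convention below.
fn index_c(shape: [usize; 3], i: [usize; 3]) -> usize {
    // last axis varies fastest
    (i[0] * shape[1] + i[1]) * shape[2] + i[2]
}

fn index_f(shape: [usize; 3], i: [usize; 3]) -> usize {
    // first axis varies fastest
    i[0] + shape[0] * (i[1] + shape[1] * i[2])
}

fn main() {
    let shape = [4, 3, 2];
    // the same logical element lands at different linear offsets
    println!("C: {}", index_c(shape, [1, 2, 1]));
    println!("F: {}", index_f(shape, [1, 2, 1]));
}
```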

ndarray 0.13

I tried updating nifti-rs to ndarray 0.13 because we also want to update our enterprise project, but there's a problem. Now that "approx" is a feature flag of ndarray, we need a feature that doesn't exist: optional dev-dependencies. Without this feature, the "approx" crate will always be included, even in normal builds. I understand that we do not want that, but I don't see what we're supposed to do about it. Nobody's working on cargo issue 1596, so we shouldn't wait. Thoughts?

Huge gz file

We received a big image from the Human Connectome Project, nothing huge, but we needed to resample it to 1x1x1, and now it's 2.3 GB as .nii.gz and 8.0 GB as .nii. It's a 181x218x181x288 f32 image, thus allocating 8 227 466 496 bytes and reading from a Gz source, here

let mut raw_data = vec![0u8; nb_bytes_for_data(header)?];
source.read_exact(&mut raw_data)?;

I tested and it doesn't seem to be a memory issue, in the sense that it does reach the read_exact line, but then it's stuck for, err, long enough that I killed the job. 7zip decodes it in ~1m40s; nifti-rs reads the non-gz version in ~10s. For the gz version, it allocates ~3750 MB, then runs indefinitely (the longest we waited was 1 hour) while always using one process, so it's doing something.

We will probably work with HCP image in the future so we might want to contribute a solution to this problem. I'm not sure how to solve this though! Do you think a chunk version would work? Something like:

out = image of right dimension
buffer = vec![0; 1024]
while not eof
    read chunk
    reinterpret to input type
    cast to requested type
    linear_transform
    assign to out at the right place
return out

It might slow down the reading of "normal"/smaller images, but we can probably create a different code path for "big" images. What do you think?
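The I/O half of the proposed loop might look roughly like this in Rust, over any `Read` source (such as a gzip decoder). This is a sketch with hypothetical names; the reinterpret/cast/transform steps are elided and the buffer size is arbitrary:

```rust
use std::io::Read;

// Pull a bounded buffer at a time instead of one giant read_exact, so
// a multi-gigabyte gz stream is consumed incrementally.
fn read_in_chunks<R: Read>(mut source: R, total: usize) -> std::io::Result<Vec<u8>> {
    let mut out = Vec::with_capacity(total.min(1 << 20));
    let mut buffer = [0u8; 8192];
    while out.len() < total {
        let want = buffer.len().min(total - out.len());
        let n = source.read(&mut buffer[..want])?;
        if n == 0 {
            break; // EOF before the expected number of bytes
        }
        // here each chunk would be reinterpreted to the input type,
        // cast to the requested type, linearly transformed, and
        // assigned into the output volume at the right offset
        out.extend_from_slice(&buffer[..n]);
    }
    Ok(out)
}

fn main() {
    let data = vec![7u8; 20000];
    let out = read_in_chunks(&data[..], 20000).unwrap();
    println!("read {} bytes", out.len());
}
```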

Writing 4D images with a single volume

This snippet currently produces a mangled image, as if the bytes are written in the wrong order.

let mut data = Array4::zeros((5, 5, 4, 3));
data.slice_mut(s![.., 2, 2, 0]).fill(1.0);
data.slice_mut(s![.., 3, 2, 0]).fill(1.1);
write_nifti("/tmp/test.nii.gz", &data.select(Axis(3), &[0]), None).unwrap();

Changing the indices to &[0, 1] works well. This tells me that writing a 4D image with a single volume is broken. I had no success fixing it yet but I'm working on it.

Header fix

There are several automatic fixes done on the header in NiBabel. I don't think it's relevant to have them all. IMO, fixing the magic number is wrong, but some may disagree.

I would add _chk_qfac because one NIfTI file I have crashes nifti-rs when I ask for the qform affine, because pixdim[0] == 0.0. It works with NiBabel. I could fix it in all my outside tools, but I think it should be fixed here.

But maybe we're against all automatic fixes? What's your opinion on this?
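A `_chk_qfac`-style fix (the name is borrowed from NiBabel) could be as small as this sketch. Per nifti1.h, qfac is pixdim[0] and should be -1.0 or 1.0, with 0.0 conventionally treated as 1.0:

```rust
// If pixdim[0] (qfac) is 0.0, fall back to 1.0 before computing the
// qform affine, instead of producing a degenerate transform.
fn fix_qfac(pixdim0: f32) -> f32 {
    if pixdim0 == 0.0 { 1.0 } else { pixdim0 }
}

fn main() {
    println!("{}", fix_qfac(0.0));
}
```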

Documentation Examples Do Not Compile

Hi,

I am interested in testing this crate, but when I copy and paste the first example from the documentation directly into a main.rs file, I get the following error:

 1  error: expected item, found keyword `let`
  --> src/main.rs:3:1
   |
 3 | let obj = ReaderOptions::new().read_file("myvolume.nii.gz")?;
   | ^^^ consider using `const` or `static` instead of `let` for global variables

 error: could not compile `nifti-test` due to previous error

If I instead wrap the first line in a main() function I receive:

error[E0277]: the `?` operator can only be used in a function that returns `Result` or `Option` (or another type that implements `FromResidual`)
 --> src/main.rs:5:64
  |
3 | fn main() {
  | --------- this function should return `Result` or `Option` to accept `?`
4 |     // let obj = ReaderStreamedOptions::new().read_file("test.nii")?;
5 |     let obj = ReaderOptions::new().read_file("myvolume.nii.gz")?;
  |                                                                ^ cannot use the `?` operator in a function that returns `()`
  |
  = help: the trait `FromResidual<Result<Infallible, NiftiError>>` is not implemented for `()`

Since this is my first contact with the package, it's unclear how I would go about debugging these issues or what a working example would look like.
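For reference, both errors come from general Rust rules: statements must live inside a function, and `?` needs a fallible return type on `main`. A minimal sketch of the pattern, with a stdlib parse standing in for the nifti call so it compiles on its own:

```rust
use std::error::Error;

// Stand-in for the fallible read; in real code this would be
// `ReaderOptions::new().read_file("myvolume.nii.gz")?`.
fn load() -> Result<i32, Box<dyn Error>> {
    let value: i32 = "42".parse()?; // `?` propagates the parse error
    Ok(value)
}

// `main` returns a Result, so `?` is allowed inside it
fn main() -> Result<(), Box<dyn Error>> {
    let value = load()?;
    println!("{}", value);
    Ok(())
}
```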

Thanks.

Automatic extension

One of my users complained that ".nii" wasn't appended to the filename he provided to my program. I understand that there's no obvious answer to this question, but I think that a lib called nifti-rs should probably save ".nii" files by default instead of using the provided filename without question. What do you think about that?
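One possible behavior, sketched with std only: append ".nii" when the filename has no recognized NIfTI extension, and leave it untouched otherwise. The function name and the set of recognized extensions are assumptions for illustration:

```rust
use std::path::{Path, PathBuf};

// Append ".nii" to bare filenames; keep paths that already end in a
// known NIfTI-related extension as-is.
fn with_default_extension(path: &Path) -> PathBuf {
    match path.extension().and_then(|e| e.to_str()) {
        Some("nii") | Some("hdr") | Some("img") | Some("gz") => path.to_path_buf(),
        _ => path.with_extension("nii"),
    }
}

fn main() {
    println!("{:?}", with_default_extension(Path::new("myvolume")));
}
```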

minimal_by_hdr_and_img_gz fails on Windows

This is not a new problem, I just waited a long time to report it.

When I run cargo test --release --features ndarray_volumes, minimal_by_hdr_and_img_gz fails on Windows and WSL. Moreover, it stops the other tests from running. To see the tests from volume.rs and writer.rs, I must remove the minimal_by_hdr_and_img_gz test.

I didn't investigate yet, but I'm probably the only one with Windows, so I'll do that "soon". What we could do to confirm the problem is to tell Travis to test all OSes. They added Windows support in recent months.

os:
  - windows
  - linux
  - osx
language: rust
...

support nifti-2 files

The nifti-1 file format was finalized in 2007. In 2011, the nifti-2 format was created to support larger data sets. Nifti-2 is now widely used in the neuroimaging field and, furthermore, serves as the basis for the CIFTI-2 format.

A decade later, nifti-rs supports nifti-1 but not nifti-2. Attempting to read a nifti-2 file produces a NiftiError::InvalidFormat. This is unfortunate, because nifti-rs is otherwise an excellent library with many advantages over its C/C++ and Python counterparts.

Is there interest in adding nifti-2 support to nifti-rs? The differences between nifti-1 and nifti-2 are not great. See also Anderson Winkler's blog and nifti2.h. There are test data here.

  • No new header fields are added
  • The types of 30 existing header fields are enlarged (e.g., 32-bit float --> 64-bit double)
  • 7 existing header fields that were unused are removed
  • The header itself is (obviously) larger
  • The header fields are stored in a different order
  • The magic string is different

Supporting nifti-2 should therefore not be too difficult, but would require some deliberate changes to the API. Since the two headers are so similar, it might make sense to create concrete Nifti1Header and Nifti2Header types, and make NiftiHeader an enum over both types with getter methods to extract the values of shared fields as the larger type. This would let users who are reading an image not have to care about the underlying version (nifti1 vs nifti2). Users writing an image would still have to manually populate the header fields for whichever version they intend to write out.

enum NiftiHeader {
    Nifti1Header(Nifti1Header),
    Nifti2Header(Nifti2Header),
}

impl NiftiHeader {
    pub fn slice_duration(&self) -> f64 {
        match *self {
            // promote the nifti-1 field to the size of the larger nifti-2 type
            NiftiHeader::Nifti1Header(ref header) => header.slice_duration as f64,
            NiftiHeader::Nifti2Header(ref header) => header.slice_duration,
        }
    }
}

I welcome feedback about the API changes and, although my time is limited, I would be happy to submit incremental pull requests against a nifti-2 branch.

Volume data type support

I will use this issue to track currently supported volume data types.

type       | NiftiVolume (get_*) | IntoNdArray | Tested | Issue / PR
-----------|---------------------|-------------|--------|-------------------
Uint8      |                     |             |        |
Int8       |                     |             |        | #5
Uint16     |                     |             |        | #5
Int16      |                     |             |        | #5
Uint32     |                     |             |        | #5
Int32      |                     |             |        | #5
Uint64     |                     |             |        | #5
Int64      |                     |             |        | #5
Float32    |                     |             |        | #1
Float64    |                     |             |        | #5
Complex64  |                     |             |        |
Rgb24      |                     |             |        | #26 (writing only)
Rgba32     |                     |             |        |
Float128   |                     |             |        |
Complex128 |                     |             |        |
Complex256 |                     |             |        |

Tracker issue towards 1.0 -- API stability

NIFTI-rs is already two and a half years old. Despite not having many known dependents, I think it's at least time to start thinking about what we'd like to have in 1.0. The principle of this version is that it should transmit a stronger signal of API stability and a sufficient level of completeness. New features can be introduced later on, of course, but if those require significant public API changes, it would be better to push them before 1.0.

The list below is currently a draft. Please let me know of what should be taken care of in this initiative towards crate stability.

  • Volume access API: will the current signature of NiftiVolume satisfy future implementations?
  • Object/Volume writing API: currently only available in combination with ndarray, and is also not uniform across data types (RGB); is a uniform volume saving method feasible?
  • #60 AsPrimitive ergonomics: most volume manipulation methods infect other functions in order to check whether the data element type is OK. Making this better is likely to require a breaking change of the public API.
  • NiftiHeader design: should this stay as a repr(Rust) struct with public fields? Should descrip become an 80-element array to avoid a heap allocation?
  • Dependencies in public API: I can count at least the following crates that are part of this library's public API: ndarray, nalgebra, simba, and byteordered. It is conventional to only release a 1.0 if the public API dependencies are not pre-1.0, since that could disrupt the intended signal of stability.

Wrong dimension

Some images that we receive are strangely defined. For example, we have an image

dim: [4, 133, 133, 16, 1, 1, 1, 1]

where nifti-rs thinks that it's a 4D image with dimensions [133, 133, 16, 1]. Maybe it's totally domain-related, but here at my job we consider this a 3D image! I tried fixing it myself

let mut header = nifti_object.header().clone();
let mut volume = nifti_object.into_volume();

// Fix bad dimension on some 3D images
if header.dim[header.dim[0] as usize] == 1 {
    header.dim[0] -= 1;
    volume.dim[0] -= 1; // ERROR
}

but, as you know, everything in InMemNiftiVolume is private (as it should be). We use these 3 objects, InMemNiftiVolume -> ArrayD -> Array<D, T>, in all kinds of contexts, so I must fix InMemNiftiVolume, not the others.

What do you think? Should all "false" 4D images be considered 3D images? If not, should nifti-rs offer a constructor to fix this? Or something else?

Data writing is always done in native byte order regardless of header

If the target's endianness is Big Endian, the voxel data will be stored in the system's native order, while the header is stored in little endian. This will make the header/volume pair inconsistent, leading to garbage voxel data.

This can be reproduced by building and testing for the mips64-unknown-linux-gnuabi64 target.

cross test --features ndarray_volumes --target mips64-unknown-linux-gnuabi64
running 5 tests
test tests::test_c_writing ... FAILED
test tests::test_fortran_writing ... FAILED
test tests::test_header_slope_inter ... FAILED
test tests::test_write_3d_rgb ... ok
test tests::test_write_4d_rgb ... ok

failures:

---- tests::test_c_writing stdout ----
thread 'tests::test_c_writing' panicked at 'assertion failed: read_nifti.all_close(&arr, 1e-10)', tests/writer.rs:74:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.

---- tests::test_fortran_writing stdout ----
thread 'tests::test_fortran_writing' panicked at 'assertion failed: read_nifti.all_close(&arr, 1e-10)', tests/writer.rs:74:9

---- tests::test_header_slope_inter stdout ----
thread 'tests::test_header_slope_inter' panicked at 'assertion failed: read_nifti.all_close(&transformed_data, 1e-10)', tests/writer.rs:142:9


failures:
    tests::test_c_writing
    tests::test_fortran_writing
    tests::test_header_slope_inter

test result: FAILED. 2 passed; 3 failed; 0 ignored; 0 measured; 0 filtered out

The best way to fix this is to write the header in the endianness indicated by header.endianness, and then write the volume data in the same byte order. We can still use write_all for each slice if we first reverse the bytes of each voxel value in case of a non-native order.
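A std-only sketch of order-aware serialization: select the byte conversion from the endianness recorded in the header, instead of always using the target's native order. The `Endianness` enum here is a stand-in for illustration, not the byteordered type:

```rust
// Serialize a voxel value in an explicit byte order, so the volume
// bytes always match what the header declares.
#[derive(Clone, Copy)]
enum Endianness {
    Little,
    Big,
}

fn voxel_bytes(value: f32, order: Endianness) -> [u8; 4] {
    match order {
        Endianness::Little => value.to_le_bytes(),
        Endianness::Big => value.to_be_bytes(),
    }
}

fn main() {
    // the same value serializes to reversed byte sequences
    println!("{:?}", voxel_bytes(1.0, Endianness::Little));
    println!("{:?}", voxel_bytes(1.0, Endianness::Big));
}
```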

I will be adjusting our Travis CI job matrix to also cover big endian systems, as we really have a fair amount of code that depends on the target system's byte order (while taking away some other less important jobs).

IncompatibleLength

Hi Eduardo,

nifti-rs panics at an unexpected place and I would like to have your opinion. We have the following method in inmem.rs:

pub fn from_reader<R: Read>(source: R, header: &NiftiHeader) -> Result<Self> {
    // rather than pre-allocating for the full volume size, this will
    // pre-allocate up to a more reliable amount and feed the vector
    // sequentially, to prevent some trivial OOM attacks
    let nb_bytes = nb_bytes_for_data(header)?;
    let mut raw_data = Vec::with_capacity(nb_bytes.min(PREALLOC_MAX_SIZE));
    let nb_bytes_written =
        std::io::copy(&mut source.take(nb_bytes as u64), &mut raw_data)? as usize;

    if nb_bytes_written != nb_bytes {
        return Err(NiftiError::IncompatibleLength(nb_bytes_written, nb_bytes));
    }

    ...

Reading the comment and reading the code, I don't understand how it's supposed to work.

  • The comment tells us it's ok to not read the whole data in a single read
  • The line with PREALLOC_MAX_SIZE tells us the same thing
  • Then there's the condition saying "Hey, if you didn't read the whole thing, explode"

What was the intention behind this code?

Malformed header writing when `descrip` has a different size

When writing the header, we are currently writing all bytes of the descrip field here, but this can produce an invalid NIfTI file if the vector's length differs from 80 bytes. It's a public field, which means that this can happen in user land.

We can do one of two things to fix this:

  • Make an a priori validation, raising an error if the vector is not 80 bytes long.
  • Or, when writing, pad with trailing zeros if it's too short and truncate if it's too long.
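The pad-or-truncate option could be as simple as this sketch (the function name is hypothetical):

```rust
// Force `descrip` to exactly 80 bytes on disk: Vec::resize pads with
// zeros when the vector is shorter and truncates when it is longer.
fn normalize_descrip(mut descrip: Vec<u8>) -> Vec<u8> {
    descrip.resize(80, 0);
    descrip
}

fn main() {
    println!("{}", normalize_descrip(b"short".to_vec()).len());
}
```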

CC @nilgoyette

Affine transformation

We would like to contribute "affine" code to nifti-rs. Part of the code is in our private company codebase and the rest is here.

I think it belongs in nifti-rs because:

  • it doesn't belong in trk-io, nor in our private codebase :)
  • it totally depends on NiftiHeader struct.
  • some people might need it and this is where they will search first.

In fact, I consider nifti-rs to be the Rust version of NiBabel, so of course it should be in nifti-rs. What's your opinion on this?

Disclaimer: Most of the code has been ported from NiBabel. I don't know much about licensing and I don't like to care about it.

into_img_file_gz

Any reason into_img_file_gz is so complicated? I feel it could be replaced by

pub fn into_img_file_gz(mut path: PathBuf) -> PathBuf {
    if is_gz_file(&path) {
        // Leave only the first extension (.hdr)
        let _ = path.set_extension("");
    }
    path.with_extension("img.gz")
}

It's shorter and it can't panic anymore. Am I forgetting a special case?

Writing NIfTI files

In order to write these files, we need methods for:

  • writing a NiftiHeader to a file (".hdr") or writer;
  • writing a volume to a file (".img") or writer, with or without gzip;
  • writing a full object to a file (".nii") or writer, with or without gzip;
  • writing a full object (minus extensions) to a file (".nii"), with or without gzip, out of a header and an ndarray;
  • writing extensions.

Unused dimension 0 or 1

We were looking at the nifti specification to know if we should write

dim = [N, w, h, d, 0, 0, 0]
OR
dim = [N, w, h, d, 1, 1, 1]

but it's not entirely clear! The standard says

dim[i] = length of dimension #i, for i=1..dim[0]  (must be positive)

Is zero positive or negative? Such is the question :) A quick search returns a math.stackexchange page. It seems that there's a difference between "positive" and "strictly positive", but the standard doesn't use the clearer term "strictly positive"... 0 seems to be both "positive" and "negative" AND neither, depending on who you ask, the context and the position of Venus.

What do we have in nifti-rs?

  • The default NiftiHeader creates a dim [1, 0, 0, 0, 0, 0, 0]
  • The Dim and Idx structs both receive a [u16; 8] without modifying it. It's validated for length and negative values, but not for unused values. It currently accepts [3, 100, 100, 100, 0, 42, 1].

What do we do with this? It's not a bug, nifti-rs still works as intended, but... do we "fix" it?
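If we did decide to "fix" it, one possible validation is sketched below, assuming (as one reading of the spec) that used dimensions must be strictly positive and unused ones must be 0 or 1. The function name and the exact policy are assumptions, not nifti-rs behavior:

```rust
// Check a NIfTI dim array: dim[0] is the number of used dimensions,
// dim[1..=dim[0]] must be >= 1, and trailing entries must be 0 or 1.
fn dim_is_consistent(dim: [u16; 8]) -> bool {
    let n = dim[0] as usize;
    if n < 1 || n > 7 {
        return false;
    }
    let used_ok = dim[1..=n].iter().all(|&d| d >= 1);
    let unused_ok = dim[n + 1..].iter().all(|&d| d <= 1);
    used_ok && unused_ok
}

fn main() {
    // the example from the issue: a stray 42 in an unused slot
    println!("{}", dim_is_consistent([3, 100, 100, 100, 0, 42, 1, 1]));
}
```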

Shortcut for the 10 AsPrimitive<T>

You know that group of 10 AsPrimitive bounds that we see quite often in nifti-rs (8 times), and probably in all other projects that use nifti-rs with generics?

u8: AsPrimitive<T>,
...
f64: AsPrimitive<T>,

I tried removing them, but of course I can't, because the library reads from all possible types and converts to all possible types. This is an important requirement :)

Maybe this is more a question about Rust, but do you think that it's possible to create a "group" trait and use it everywhere? A kind of shortcut? I tested it

trait AllTypesToPrimitive<T>
where
    T: DataElement, // Added because it's used everywhere
    u8: AsPrimitive<T>,
    ...
    f64: AsPrimitive<T>,
{}

impl IntoNdArray for InMemNiftiVolume {
    fn into_ndarray<T>(self) -> Result<Array<T, IxDyn>>
    where T: AllTypesToPrimitive<T> { ... }

and tried using it, but it seems that I don't know the right syntax or that it's impossible. I get several errors that look like

^ the trait `typedef::_IMPL_NUM_FromPrimitive_FOR_NiftiType::_num_traits::AsPrimitive<T>` is not implemented for `f64`

It would be really nice to have this shortcut! Do you think it's possible?
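The usual pattern for a "group" trait is a blanket impl, but there is a catch that explains the errors above: Rust does not imply a trait's where-clauses at use sites (implied bounds are not stable), so the bounds still have to be repeated on each generic function. The sketch below shows the pattern with std's `Into` standing in for num-traits' `AsPrimitive`:

```rust
// A "group" trait declared with the bounds we want...
trait FromAllSmallInts: Sized
where
    u8: Into<Self>,
    u16: Into<Self>,
    u32: Into<Self>,
{
}

// ...and a blanket impl, so every T meeting the bounds gets it for free.
impl<T> FromAllSmallInts for T
where
    u8: Into<T>,
    u16: Into<T>,
    u32: Into<T>,
{
}

// The catch: the compiler still requires the bounds to be restated
// here, because the trait's where-clauses are not implied at use sites.
fn sum_as<T>(a: u8, b: u16) -> T
where
    T: FromAllSmallInts + std::ops::Add<Output = T>,
    u8: Into<T>,
    u16: Into<T>,
    u32: Into<T>,
{
    let a: T = a.into();
    let b: T = b.into();
    a + b
}

fn main() {
    let s: u64 = sum_as(1u8, 2u16);
    println!("{}", s);
}
```

So the shortcut can be defined, but on stable Rust it does not remove the repetition, which is likely why the attempt above fails.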

FSL intent

I was wondering if we should add the remaining intent codes in nifti-rs, which stops at

#define NIFTI_INTENT_SHAPE        2005

According to the standard, there are 7 other intents (support is OPTIONAL for conforming implementations):

#define NIFTI_INTENT_FSL_FNIRT_DISPLACEMENT_FIELD               2006
#define NIFTI_INTENT_FSL_CUBIC_SPLINE_COEFFICIENTS              2007
#define NIFTI_INTENT_FSL_DCT_COEFFICIENTS                       2008
#define NIFTI_INTENT_FSL_QUADRATIC_SPLINE_COEFFICIENTS          2009
#define NIFTI_INTENT_FSL_TOPUP_CUBIC_SPLINE_COEFFICIENTS        2016
#define NIFTI_INTENT_FSL_TOPUP_QUADRATIC_SPLINE_COEFFICIENTS    2017
#define NIFTI_INTENT_FSL_TOPUP_FIELD                            2018

I ask because, of course, we deal with some of those images. One made my nifti header printer crash. I don't need to modify nifti-rs to repair it, I'll fix it whatever happens, but I think we should add those intents because they do appear in the standard.

f32 images

Hi, thank you for working on this. It's the only NIfTI loader in Rust! I tried using it to test an image denoising algo, but it's not possible to load an f32 image. I know it's only a side project for you and I won't be waiting for a fix.

I tried with the simple InMemNiftiObject and by creating a NiftiHeader with the right values. Both give really sparse data: a whole lot of zeros with some random floats.

Improve to_ndarray with better dimensionality awareness

At the end of #1, there was a discussion of whether it would be possible to specify the dimensionality for the Array created with to_ndarray. In particular, it might be more useful to have this method prototype instead:

fn to_ndarray<T, D>(self) -> Result<Array<T, D>>
where
    T: ...,
    D: IntoDimension;

I will leave this issue for discussion on the feasibility of this. Basically, this requires a conversion (which can fail) from a dynamic shape to an arbitrary dimension type D. I haven't found a way to do this yet in ndarray. There is also the small drawback that one has to specify two parameter types instead of just one. IxDyn is a good default, but default type parameters do not work on functions (rust/#36887).
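The failure mode in question can be shown in miniature with std only: converting a dynamic shape (a slice of unknown length) into a fixed-dimension shape is inherently fallible. The helper name is hypothetical:

```rust
// Fallible conversion from a dynamic shape to a fixed 3-D shape, the
// same kind of check a D: IntoDimension-based to_ndarray would need.
fn to_3d_shape(shape: &[usize]) -> Result<[usize; 3], String> {
    <[usize; 3]>::try_from(shape)
        .map_err(|_| format!("expected 3 dimensions, got {}", shape.len()))
}

fn main() {
    println!("{:?}", to_3d_shape(&[4, 5, 6]));
    println!("{:?}", to_3d_shape(&[4, 5]));
}
```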
