
High-throughput PET image reconstruction with high quantitative accuracy and precision

License: Apache License 2.0

Python 45.28% CMake 1.56% C 3.32% Cuda 49.83%
python processing analysis gpu cuda medical-imaging pet image-reconstruction mlem

nipet's Introduction

NIPET: high-throughput Neuro-Image PET reconstruction


NIPET is a Python sub-package of NiftyPET, offering high-throughput PET image reconstruction as well as image processing and analysis (nimpa: https://github.com/NiftyPET/NIMPA) for PET/MR imaging with high quantitative accuracy and precision. The software is written in CUDA C and embedded in Python C extensions.

The scientific aspects of this software are covered in two open-access publications.

Although the two stand-alone, independent packages, nipet and nimpa, are dedicated to brain imaging, they can equally well be used for whole-body imaging. Strong emphasis is placed on data acquired using positron emission tomography (PET) and magnetic resonance (MR), especially on hybrid, simultaneous PET/MR scanners.

This software platform and Python name-space NiftyPET covers the entire processing pipeline, from the raw list-mode (LM) PET data through to the final image statistic of interest (e.g., regional SUV), including LM bootstrapping and multiple reconstructions to facilitate voxel-wise estimation of uncertainties.
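To illustrate the bootstrapping idea (this is not NIPET's API): the list-mode events are resampled with replacement, each resampled stream is histogrammed and reconstructed, and the spread across the resulting images gives a voxel-wise uncertainty estimate. A minimal conceptual sketch in NumPy, with purely synthetic event data:

import numpy as np

rng = np.random.default_rng(42)
# stand-in for decoded list-mode event bin addresses (purely synthetic)
events = rng.integers(0, 1000, size=100_000)

hists = []
for _ in range(20):  # bootstrap replicates
    resampled = rng.choice(events, size=events.size, replace=True)
    hists.append(np.bincount(resampled, minlength=1000))

# each replicate histogram would then be reconstructed; the spread across
# the resulting images estimates the voxel-wise uncertainty
spread = np.std(hists, axis=0)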

To support all this functionality, NiftyPET relies on third-party software for image conversion from DICOM to NIfTI (dcm2niix) and for image registration (NiftyReg). This additional software is installed automatically to a user-specified location.

Documentation with installation manual and tutorials: https://niftypet.readthedocs.io/

Quick Install

Note that installation prompts for setting the path to NiftyPET_tools and hardware attenuation maps. This can be avoided by setting the environment variables PATHTOOLS and HMUDIR, respectively. It's also recommended (but not required) to use conda.

# optional (Linux syntax) to avoid prompts
export PATHTOOLS=$HOME/NiftyPET_tools
export HMUDIR=$HOME/mmr_hardwareumaps
# cross-platform install
conda install -c conda-forge python=3 \
  ipykernel numpy scipy scikit-image matplotlib ipywidgets dipy nibabel pydicom
pip install dcm2niix
pip install "nipet>=2"

External CMake Projects

The raw C/CUDA libraries may be included in external projects using CMake: simply build the project and use find_package(NiftyPETnipet).

# print installation directory (after `pip install nipet`)...
python -c "from niftypet.nipet import cmake_prefix; print(cmake_prefix)"

# ... or build & install directly with cmake
mkdir build && cd build
cmake ../niftypet && cmake --build . && cmake --install . --prefix /my/install/dir

Any external project may then include NIPET as follows (after setting -DCMAKE_PREFIX_PATH=<installation prefix from above>):

cmake_minimum_required(VERSION 3.3 FATAL_ERROR)
project(myproj)
find_package(NiftyPETnipet COMPONENTS mmr_auxe mmr_lmproc petprj nifty_scatter REQUIRED)
add_executable(myexe ...)
target_link_libraries(myexe PRIVATE
  NiftyPET::mmr_auxe NiftyPET::mmr_lmproc NiftyPET::petprj NiftyPET::nifty_scatter)

Licence


Copyright 2018-21

nipet's People

Contributors

casperdcl, davecash75, github-actions[bot], pjmark


nipet's Issues

C++/CUDA: Use CuVec unified memory

As with NiftyPET/NIMPA#16 and NiftyPET/NIMPA#15.

This would allow implementing MLEM & OSEM reconstruction in pure Python (using the projectors and resolution-modelling filters directly) without any unnecessary memory copies; a sketch follows the task list below.

  • upgrade petprj.fprj & petprj.bprj #36
  • replace petprj.osem with direct calls to the above
  • update vsm to work without convert2e7
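As a sketch of what this would enable, here is a pure-Python MLEM loop built directly on the existing projector wrappers (nipet.frwd_prj and nipet.back_prj, as used in the streak-artifacts issue below); with CuVec-backed projectors the intermediate arrays could stay on the GPU instead of being copied each iteration:

import numpy as np
from niftypet import nipet

def mlem(prompts, mMRpars, itr=10, eps=np.float32(1e-8)):
    # sensitivity image: backprojection of a ones sinogram
    sens = np.maximum(nipet.back_prj(np.ones_like(prompts), mMRpars), eps)
    recon = np.ones_like(sens)
    for _ in range(itr):
        fprj = np.maximum(nipet.frwd_prj(recon, mMRpars), eps)
        recon *= nipet.back_prj(prompts / fprj, mMRpars) / sens
    return recon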

Problem during dynamic reconstruction

When running nipet.mmrchain() I get an error that seems to come from the CUDA file prjf.cu. I am inexperienced with C, so I hope you can help me out.

My parameters are:
{'corepath': './ads054-petmr01/', 'mumapDCM': 'ads054-petmr01', '#mumapDCM': 192, 'lm_dcm': 'ads054-petmr01/30000016062206483381200000003_anonym.dcm', 'lm_bf': 'ads054-petmr01/30000016062206483381200000003_anonym.bf', 'pCT': 'ads054-petmr01/sub-ads054_t1w_pct.nii.gz', 'nrm_dcm': 'ads054-petmr01/30000016062206483381200000002_anonym.dcm', 'nrm_bf': 'ads054-petmr01/30000016062206483381200000002_anonym.bf'}

The output is the following:

the given histogram does not contain a prompt sinogram--will generate a histogram.
i> the list-mode file: ads054-petmr01/30000016062206483381200000003_anonym.bf
i> number of elements in the list mode file: 383979868
i> the first time tag is:       3887688 at positon 7099.
i> the last time tag is:        4787714 at positon 383979739.
i> using time offset:           3887688
i> number of report itags is:   901
i> # chunks of data (initial):  41

i> # elechnk:  12582912

i> setting up data chunks:
i> break time tag [31] is:     900027ms at position 383979868. 
i> frame start time: 15
i> frame stop  time: 60
i> using CUDA device #0

i> setting up CUDA pseudorandom number generator... DONE.

i> creating 3 CUDA streams... DONE.

i> reading the first chunks of LM data from:
   ads054-petmr01/30000016062206483381200000003_anonym.bf  DONE.

+> histogramming the LM data:
wc> couldn't find time tag from this position onwards: 0, 
    assuming the last one.
   +> stream[2]:   3 chunks of data are DONE.  
ic> total prompt single slice rebinned sinogram:  P = 15250000

ic> total prompt and delayeds sinogram   events:  P = 15250000, D = 4444359

ic> total prompt and delayeds head-curve events:  P = 15250000, D = 4444359

ic> maximum prompt sino value:  13 
i> using CUDA device #0
i> and removing the gaps and reordering sino for GPU... DONE in 0.017165s
i> calculating normalisation sinogram using device #0... DONE in 0.089437s.

After which I get the error:

invalid argument in /tmp/pip-install-r1u6rc7y/nipet_f727f69030e8494d9faf2f743df5c3d7/niftypet/nipet/prj/src/prjf.cu at line 276

Vertical streak artifacts

I've noticed vertical streak artifacts on my MLEM reconstructions (also those in STIR using the NP projectors). I'm using the example Zenodo data. Am I doing something wrong or is this a bug?

I've created a small jupyter notebook (attached, saved as .txt) that demonstrates what I mean. Very similar to @casperdcl's demo, but with no cylindrical truncation or PSF. I've used norm and randoms, but no scatter or attenuation.

The important chunk of code is below; the orthogonal views after 50 iterations follow it.

import numpy as np
from tqdm import trange
from niftypet import nipet

iters = 50
prompts = m['psino'].astype(np.float32)  # prompt sinogram from the mmrhist output
additive = rands        # estimated randoms sinogram
multiplicative = norm   # normalisation factors

def make_non_zero(A):
    """Make an array non-zero"""
    A[A <= 0] = np.min(A[A > 0]) * 0.001
    return A

def fwd(im):
    """Forward project"""
    out = nipet.frwd_prj(im, mMRpars)
    if multiplicative is not None:
        out *= multiplicative
    if additive is not None:
        out += additive
    out = make_non_zero(out)
    return out

def bck(sino):
    """Back project"""
    sino_to_proj = np.copy(sino)
    if multiplicative is not None:
        sino_to_proj *= multiplicative
    out = nipet.back_prj(sino_to_proj, mMRpars)
    return out

bck_1s = bck(np.ones_like(prompts))
bck_1s = make_non_zero(bck_1s)
inv_bck_1s = 1 / bck_1s

recon_mlem = [np.ones_like(bck_1s)]
for k in trange(iters, desc="MLEM"):
    fprj = fwd(recon_mlem[-1])
    recon_mlem.append(recon_mlem[-1] * inv_bck_1s * bck(prompts / fprj))

[image: orthogonal views of the reconstruction after 50 iterations, showing vertical streaks]

Lastly, there are also some horizontal artifacts, which can be seen in this zoomed image of the chin:

[image: zoomed view of the chin, showing horizontal artifacts]

Issues with installation

I am trying to install nipet but am facing issues. I am using CUDA v12.3, which does not ship a static cuRAND library. This is the error that I come across:

CMake Error at nipet/lm/CMakeLists.txt:13 (target_link_libraries):
  Target "mmr_lmproc" links to:
    CUDA::curand_static
  but the target was not found. Possible reasons include:
    * There is a typo in the target name.
    * A find_package call is missing for an IMPORTED target.
    * An ALIAS target is missing.
-- Generating done (0.1s)
CMake Generate step failed. Build files cannot be regenerated correctly.

My CUDA library path is linked correctly and I do have a cudart_static.lib file; I am just missing the curand_static.lib file.

How to obtain the hardware and object 𝜇-maps

When I run:

# obtain the hardware mu-map (the bed and the head & neck coil)
muhdct = nipet.hdw_mumap(datain, [1,2,4], mMRpars, outpath=opth, use_stored=True)

# obtain the MR-based human mu-map
muodct = nipet.obj_mumap(datain, mMRpars, outpath=opth, store=True)

I get the following:

IOError Traceback (most recent call last)
in ()
1 # obtain the hardware mu-map (the bed and the head&neck coil)
----> 2 muhdct = nipet.hdw_mumap(datain, [1,2,4], mMRpars, outpath=opth, use_stored=False)
3 # obtain the MR-based human mu-map
4 muodct = nipet.obj_mumap(datain, mMRpars, outpath=opth, store=True)

/public/lixin/.local/lib/python2.7/site-packages/niftypet/nipet/img/mmrimg.pyc in hdw_mumap(datain, hparts, params, outpath, use_stored, del_interm)
1288 # otherwise generate it from the parts through resampling the high resolution CT images
1289 else:
-> 1290 hmupos = get_hmupos(datain, hparts, Cnt, outpath=outpath)
1291 # just to get the dims, get the ref image
1292 nimo = nib.load(hmupos[0]['niipath'])

/public/lixin/.local/lib/python2.7/site-packages/niftypet/nipet/img/mmrimg.pyc in get_hmupos(datain, parts, Cnt, outpath)
1203 fh = os.path.join(Cnt['HMUDIR'], Cnt['HMULIST'][i-1])
1204 # get the interfile header and binary data
-> 1205 hdr, im = rd_hmu(fh)
1206 #get shape, origin, offset and voxel size
1207 s = hmu_shape(hdr)

/public/lixin/.local/lib/python2.7/site-packages/niftypet/nipet/img/mmrimg.pyc in rd_hmu(fh)
1066 def rd_hmu(fh):
1067 #--read hdr file--
-> 1068 f = open(fh, 'r')
1069 hdr = f.read()
1070 f.close()

IOError: [Errno 2] No such file or directory: '/public/admin/umap_HNMCL_10606489.v.hdr'`

mmr_lmproc.hist

Hello,
I have a problem with "mmr_lmproc.hist". Although I've installed NiftyPET without any problem or error, my notebook crashes when I try to produce a sinogram with "mmr_lmproc.hist". I am using a Tesla V100-SXM2-32GB GPU.
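For reference, the public entry point wrapping this histogramming step is nipet.mmrhist (used elsewhere in these issues); a minimal call, assuming datain and mMRpars are set up as in the demo (the data path below is a placeholder):

from niftypet import nipet

mMRpars = nipet.get_mmrparams()                         # scanner constants and LUTs
datain = nipet.classify_input("/path/to/raw", mMRpars)  # placeholder path
hst = nipet.mmrhist(datain, mMRpars)                    # histogram the list-mode data
print(hst['psino'].shape)                               # prompt sinogram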

ImportError: cannot import name 'mmr_auxe' from partially initialized module 'niftypet.nipet'

Dear developers,
I have installed niftypet inside a docker container using the attached Dockerfile. However, if I start it and enter the Python3 venv (/venv/bin/python3) and import nipet from niftypet as given in the example scripts (from niftypet import nipet), I get the following error:

>>> from niftypet import nipet
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/venv/NIPET/niftypet/nipet/__init__.py", line 45, in <module>
    from . import img, lm, mmr_auxe, mmraux, mmrnorm, prj
  File "/venv/NIPET/niftypet/nipet/img/__init__.py", line 4, in <module>
    from . import auximg, mmrimg
  File "/venv/NIPET/niftypet/nipet/img/mmrimg.py", line 16, in <module>
[Dockerfile.txt](https://github.com/NiftyPET/NiftyPET/files/12313850/Dockerfile.txt)

    from .. import mmraux
  File "/venv/NIPET/niftypet/nipet/mmraux.py", line 21, in <module>
    from . import mmr_auxe, resources
ImportError: cannot import name 'mmr_auxe' from partially initialized module 'niftypet.nipet' (most likely due to a circular import) (/venv/NIPET/niftypet/nipet/__init__.py)

I have also tried to install nipet via niftypet and directly via pip but the error still occurs. Any feedback or suggestions are appreciated!
Best, Melissa
[Dockerfile.txt](https://github.com/NiftyPET/NiftyPET/files/12313850/Dockerfile.txt)

Support for Mac OS?

Dear Pawel,

I would like to ask: is there a reason why it is not possible to install this library on macOS?

Best,

Gabriel

Bug in reconstruction pipeline

The function mmrchain in the pipeline code pipe.py takes in a time-frame definition, converts it into a dictionary and then tries to slice it (e.g. here).
This results in an error, since dictionaries cannot be sliced.

Shouldn't the timing variable be t_frms = dfrms['timings'][1:]?
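A minimal illustration of the failure mode (values hypothetical, mirroring the 'timings' layout shown in the dynamic_timings issue below):

dfrms = {'total': 3600,
         'timings': ['timings', [0, 15], [15, 30], [30, 45]]}

# slicing the dictionary itself fails:
try:
    t_frms = dfrms[1:]
except TypeError as exc:
    print(exc)  # unhashable type: 'slice'

# slicing the list under 'timings' works and drops the leading label:
t_frms = dfrms['timings'][1:]
print(t_frms)  # [[0, 15], [15, 30], [30, 45]]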

NVIDIA CUDA Toolkit 9.1 from Ubuntu build error

On a Ubuntu 18.04 I cannot build NiftyPET. See SyneRBI/SIRF-SuperBuild#494

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:        18.04
Codename:       bionic

I ran

sudo apt update
sudo apt upgrade
sudo apt install python3-dev
sudo apt install nvidia-cuda-toolkit
sudo snap install --classic cmake
wget https://raw.githubusercontent.com/pypa/get-pip/master/get-pip.py
python3 get-pip.py --user
python3 -m pip install --user 'numpy==1.19.5'  # no need to specify version. just testing...
git clone https://github.com/NiftyPET/NIPET nipet
mkdir nipet/build
cd nipet/build
cmake ../niftypet && cmake --build .

Error

[  4%] Building CUDA object nipet/CMakeFiles/mmr_auxe.dir/src/aux_module.cu.o
cd /home/ofn77899/nipet/build/nipet && /usr/bin/nvcc  -Dmmr_auxe_EXPORTS -I/home/ofn77899/nipet/niftypet/nipet/src -I/usr/include/python3.6m -I/home/ofn77899/.local/lib/python3.6/site-packages/numpy/core/include -I/home/ofn77899/nipet/niftypet/nipet/include -O3 -DNDEBUG --generate-code=arch=compute_30,code=[compute_30,sm_30] -Xcompiler=-fPIC -std=c++14 -x cu -c /home/ofn77899/nipet/niftypet/nipet/src/aux_module.cu -o CMakeFiles/mmr_auxe.dir/src/aux_module.cu.o
/home/ofn77899/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h(84): error: expected a "}"

/home/ofn77899/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h(89): warning: parsing restarts here after previous syntax error

/home/ofn77899/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h(450): error: identifier "NPY_NTYPES_ABI_COMPATIBLE" is undefined

2 errors detected in the compilation of "/tmp/tmpxft_00005c59_00000000-6_aux_module.cpp1.ii".

I then tried to install the CUDA Toolkit from NVIDIA and got 10.1. With it the build works, both with conda and stock Python.

Maybe 9.1, the version installed by Ubuntu, is not sufficient for NiftyPET?

pip install error

Hi,

I am trying to install your package with pip on Ubuntu 16.04 with CUDA 9.1 and CMake 3.5, in a conda environment with Python 2.7, but the installation fails. It seems that the downloaded dcm2niix archive is a bad zip file. Any idea why, given that the link (https://github.com/rordenlab/dcm2niix/releases/download/v1.0.20180622/dcm2niix_27-Jun-2018_lnx.zip) seems to be working?

  ---------------------------------------------
    i> setting up NiftyPET tools ...
    i> conda environment found: niftypet
    ---------------------------------------------
    i> installing dcm2niix:
    >>>>> DISPLAY :0
    =================================================
    i> dcm2niix will be installed directly from:
       https://github.com/rordenlab/dcm2niix/releases
    =================================================
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-KlaYDt/nimpa/setup.py", line 70, in <module>
        Cnt = tls.install_tool('dcm2niix', Cnt)
      File "install_tools.py", line 219, in install_tool
        Cnt = download_dcm2niix(Cnt, path)
      File "install_tools.py", line 167, in download_dcm2niix
        zipf = zipfile.ZipFile(os.path.join(path, 'dcm2niix.zip'), 'r')
      File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/zipfile.py", line 770, in __init__
        self._RealGetContents()
      File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/zipfile.py", line 811, in _RealGetContents
        raise BadZipfile, "File is not a zip file"
    zipfile.BadZipfile: File is not a zip file
Cleaning up...
  Removing source in /tmp/pip-install-KlaYDt/nimpa
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-KlaYDt/nimpa/
Exception information:
Traceback (most recent call last):
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/basecommand.py", line 228, in main
    status = self.run(options, args)
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/commands/install.py", line 291, in run
    resolver.resolve(requirement_set)
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/resolve.py", line 103, in resolve
    self._resolve_one(requirement_set, req)
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/resolve.py", line 257, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/resolve.py", line 210, in _get_abstract_dist_for
    self.require_hashes
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 324, in prepare_linked_requirement
    abstract_dist.prep_for_dist(finder, self.build_isolation)
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 154, in prep_for_dist
    self.req.run_egg_info()
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/req/req_install.py", line 486, in run_egg_info
    command_desc='python setup.py egg_info')
  File "/home/hmn-mednuc/.conda/envs/niftypet/lib/python2.7/site-packages/pip/_internal/utils/misc.py", line 698, in call_subprocess
    % (command_desc, proc.returncode, cwd))
InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-KlaYDt/nimpa/

Thanks for your help
Best regards
Paul

mCT reconstruction

Hello,

I'm wondering how I can use NiftyPET to reconstruct Biograph mCT images.

Thanks,
Sanaz

dynamic_timings function does not include offset

Hi @pjmark-
I was using nipet.lm.get_time_offset to get the time offset due to injection delay, but when I then incorporate that value in nipet.lm.dynamic_timings per the documentation, the frame timings remain unchanged. When I had a bit of a dig into the code, it also seems that the resulting frame timings depend on whether you pass a 1-D list or the 2-D 'def' list. Since this function is pretty straightforward and doesn't require any of the actual PET data to run, I've produced a toy example below.
Also, with the 1-D list approach, would you expect there to be a first frame containing the data before the injection?

from niftypet import nipet

dframes = ['def', [4, 15],[8, 30], [9, 60], [2, 180], [8, 300]]
lframes = [ 15,15,15,15,
            30,30,30,30,30,30,30,30,
            60,60,60,60,60,60,60,60,60,
            180,180,
            300,300,300,300,300,300,300,300 ]

offset=0
# First assume no offset
frames = nipet.lm.dynamic_timings(lframes)
print(frames)
{'total': 3600, 'frames': array([ 15,  15,  15,  15,  30,  30,  30,  30,  30,  30,  30,  30,  60,
        60,  60,  60,  60,  60,  60,  60,  60, 180, 180, 300, 300, 300,
       300, 300, 300, 300, 300], dtype=uint16), 'timings': ['timings', [0, 15], [15, 30], [30, 45], [45, 60], [60, 90], [90, 120], [120, 150], [150, 180], [180, 210], [210, 240], [240, 270], [270, 300], [300, 360], [360, 420], [420, 480], [480, 540], [540, 600], [600, 660], [660, 720], [720, 780], [780, 840], [840, 1020], [1020, 1200], [1200, 1500], [1500, 1800], [1800, 2100], [2100, 2400], [2400, 2700], [2700, 3000], [3000, 3300], [3300, 3600]]}

# Get the same result when using the 2-D or 1-D list approach when no offset is provided
frames = nipet.lm.dynamic_timings(dframes)
print(frames)
{'total': 3600, 'frames': array([ 15,  15,  15,  15,  30,  30,  30,  30,  30,  30,  30,  30,  60,
        60,  60,  60,  60,  60,  60,  60,  60, 180, 180, 300, 300, 300,
       300, 300, 300, 300, 300], dtype=uint16), 'timings': ['timings', [0, 15], [15, 30], [30, 45], [45, 60], [60, 90], [90, 120], [120, 150], [150, 180], [180, 210], [210, 240], [240, 270], [270, 300], [300, 360], [360, 420], [420, 480], [480, 540], [540, 600], [600, 660], [660, 720], [720, 780], [780, 840], [840, 1020], [1020, 1200], [1200, 1500], [1500, 1800], [1800, 2100], [2100, 2400], [2400, 2700], [2700, 3000], [3000, 3300], [3300, 3600]]}

# Now let's pick an arbitrary offset value
offset=26
frames = nipet.lm.dynamic_timings(lframes,offset=offset)
# With a 1-D list, it produces a "zero-frame" the duration of the offset, but this means there
# is a dimension mismatch between frames and timings
print(frames)
{'total': 3626, 'frames': array([ 15,  15,  15,  15,  30,  30,  30,  30,  30,  30,  30,  30,  60,
        60,  60,  60,  60,  60,  60,  60,  60, 180, 180, 300, 300, 300,
       300, 300, 300, 300, 300], dtype=uint16), 'timings': ['timings', [0, 26], [26, 41], [41, 56], [56, 71], [71, 86], [86, 116], [116, 146], [146, 176], [176, 206], [206, 236], [236, 266], [266, 296], [296, 326], [326, 386], [386, 446], [446, 506], [506, 566], [566, 626], [626, 686], [686, 746], [746, 806], [806, 866], [866, 1046], [1046, 1226], [1226, 1526], [1526, 1826], [1826, 2126], [2126, 2426], [2426, 2726], [2726, 3026], [3026, 3326], [3326, 3626]]}

# With the 2-D dframes approach, the values are the same as when the offset is 0
frames = nipet.lm.dynamic_timings(dframes,offset=offset)
print(frames)
{'total': 3600, 'frames': array([ 15,  15,  15,  15,  30,  30,  30,  30,  30,  30,  30,  30,  60,
        60,  60,  60,  60,  60,  60,  60,  60, 180, 180, 300, 300, 300,
       300, 300, 300, 300, 300], dtype=uint16), 'timings': ['timings', [0, 15], [15, 30], [30, 45], [45, 60], [60, 90], [90, 120], [120, 150], [150, 180], [180, 210], [210, 240], [240, 270], [270, 300], [300, 360], [360, 420], [420, 480], [480, 540], [540, 600], [600, 660], [660, 720], [720, 780], [780, 840], [840, 1020], [1020, 1200], [1200, 1500], [1500, 1800], [1800, 2100], [2100, 2400], [2400, 2700], [2700, 3000], [3000, 3300], [3300, 3600]]}

Problem during static image reconstruction

Dear developer,
When I try to replicate the static image reconstruction test case, I get the error "invalid device ordinal in /tmp/pip-install-nen1_a8s/nipet_5bfb9eace6c945fda5fad5c3be0617ac/niftypet/nipet/lm/src/lm_module.cu at line 357" when using nipet.mmrchain. I commented out mu_h=muhdct, and I can't get the reconstructed image.
Any feedback or suggestions are appreciated!
Best, Hewen

Recon using timing offset produce very similar frames for all dynamic study

Hi Pawel-
As discussed on Friday, when I tried to account for the delay in injection, I got a funny output in which all of the frames were essentially identical. This does not happen when I just include the data before injection (which produces blank frames = 0 until a reliable reconstruction can be generated).
This happened when using the following code:

# obtain histogram dictionary:
hst = nipet.mmrhist(datain, mMRparams)

# from the histogram, obtain the offset in seconds:
time_offset = nipet.lm.get_time_offset(hst)
print('Time offset for start is ' + str(time_offset))

# adjust the frames, getting a new frame dictionary:
fdic = nipet.mmraux.timings_from_list(dframes, offset=time_offset)
print(fdic)

# create customised frames:
dframes_c = fdic['timings']

# not sure it's needed, but insert a flag for customised dynamic reconstruction:
dframes_c[0] = 'fluid'

where dframes_c was passed into mmrchain instead of dframes, which was originally set to:
dframes = ['def',[4, 15], [8, 30], [9, 60], [2, 180], [8, 300]]
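For context, a sketch of the reconstruction call in question (argument names follow the mmrchain examples in the docs; the mu-maps, iteration count and output path are borrowed from the other issues here and are assumptions):

recon = nipet.mmrchain(
    datain, mMRparams,
    frames=dframes_c,  # customised timings including the injection-delay offset
    mu_h=muhdct,       # hardware mu-map
    mu_o=muodct,       # object (human) mu-map
    itr=4,             # assumed iteration count
    outpath=opth)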

Documentation on outputs from function

Hi Pawel-
I think it would really help if you put in the comments at the beginning of your function definitions what they return, particularly in the cases where you return a dictionary with a lot of different entries, for example the output dictionary of mmrchain. The input is really well documented, but people may also want to use things from the output, especially since many people will run that function from their own scripts.
Thanks
Dave

No module named dinf

Normally I simply pip install niftypet and everything works fine. Now I want to modify the source code, so I git clone both nipet and nimpa and use the editable pip install given in the readthedocs and the readme:

pip install --no-binary :all: --verbose -e .

and then test:

python -c "from niftypet import nipet"

Error:

No module named dinf

Managed memory on /src/norm.cu

Hi!
I am just looking around the CUDA code to understand a few things, and in the normalisation code I found several instances of the following:

https://github.com/NiftyPET/NIPET/blob/master/niftypet/nipet/src/norm.cu#L94-L98

It looks like you are allocating managed memory if it's not a 32-bit Windows OS. I have some questions about why this is done:

  1. Managed memory behaves differently on pre- and post-6.x compute capability, so you may get different performance "in the general case".

  2. This code does not seem to need managed memory (if I am not wrong, please correct me!). You allocate an array on the GPU, use it in the kernel, then free it. Managed memory will likely just cause slower execution. Why are you using it? Is it maybe because you are worried about running out of RAM, and it may help on architectures with compute capability >= 6.0?

error when run recon

Dear all,

I can install niftypet with conda, and it works to load data and show images.

But when I run the recon, it shows: invalid argument in /tmp/pip-install-mrfa8dgj/nipet_1671cb602a014072bdf23049ba61457d/niftypet/nipet/prj/src/prjf.cu at line 254.

I am using CUDA 12.5 and Python 3.12; is it possible the problem is caused by these versions?

I also checked PyCUDA; it prints correct information about my GPU devices.

Thanks for any reply.

Hannah

Outputting auxiliary information

Hi Pawel-
Since the NIfTI format cannot hold information about time frames or the tracer, and much modelling software needs that information, would it be possible to create a BIDS sidecar or a SIF file as part of the output of mmrchain, so that it would be easier to use these files as inputs to tracer-kinetics programs?
Many thanks
David Cash
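A generic sketch of what such a sidecar could contain (field names follow the BIDS PET specification; the values and filename are hypothetical):

import json

sidecar = {
    "TracerName": "florbetapir",         # hypothetical tracer
    "FrameTimesStart": [0, 15, 30, 45],  # seconds from scan start
    "FrameDuration": [15, 15, 15, 15],   # seconds
}
with open("recon_pet.json", "w") as f:
    json.dump(sidecar, f, indent=2)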

BLAS : Program is Terminated. Because you tried to allocate too many memory regions.

During nipet.mmrchain, the program often crashes unpredictably (it could be after any number of iterations), despite there being ample free memory.

i> using CUDA device #0
i> put gaps in and reorder sino...DONE in 0.001838s.
BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
...
BLAS : Program is Terminated. Because you tried to allocate too many memory regions.

This seems to be fixed by running export OMP_NUM_THREADS=1 before launch.
Unfortunately running os.environ["OMP_NUM_THREADS"] = "1" does not work as an alternative.

The easiest fix is to add this to NiftyPET:

if os.getenv("OMP_NUM_THREADS", None) != "1":
    raise EnvironmentError("should run `export OMP_NUM_THREADS=1` before launch")

Otherwise, a proper fix would involve passing that environment flag in somehow when calling the crashing libraries.
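A hedged variant of the same idea (untested here): OpenMP typically reads OMP_NUM_THREADS when the runtime is first loaded, so setting it at the very top of the script, before any OpenMP-linked library is imported, may also work:

import os
os.environ["OMP_NUM_THREADS"] = "1"  # must be set before the OpenMP runtime loads

from niftypet import nipet  # imported only after the environment tweak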

The content of amyloidPET_FBP_TP0 dataset is strange?

https://niftypet.readthedocs.io/en/latest/tutorials/demo/
{'corepath': 'amyloidPET_FBP_TP0',
'lm_dcm': 'amyloidPET_FBP_TP0/LM/17598013_1946_20150604155500.000000.dcm',
'lm_bf': 'amyloidPET_FBP_TP0/LM/17598013_1946_20150604155500.000000.bf',
'hmumap': 'amyloidPET_FBP_TP0/niftyout/mumap-hdw/hmumap.npz',
'sinos': 'amyloidPET_FBP_TP0/niftyout/sino/sinos_s11_frm-0-0.npz',
'nrm_dcm': 'amyloidPET_FBP_TP0/norm/17598013_1946_20150604082431.000000.dcm',
'nrm_bf': 'amyloidPET_FBP_TP0/norm/17598013_1946_20150604082431.000000.bf',
'pCT': 'amyloidPET_FBP_TP0/pCT/17598013_pCT_synth.nii.gz',
'T1lbl': 'amyloidPET_FBP_TP0/prcl/n4_bias_corrected_T1_NeuroMorph_Parcellation.nii.gz',
'T1nii': 'amyloidPET_FBP_TP0/T1w_N4/t1_S00113_17598013_N4bias_cut.nii.gz',
'T1N4': 'amyloidPET_FBP_TP0/T1w_N4/t1_S00113_17598013_N4bias_cut.nii.gz',
'mumapDCM': 'amyloidPET_FBP_TP0/umap',
'#mumapDCM': 192}

but the content of the amyloidPET_FBP_TP0 dataset I downloaded is:
{'corepath': '/home/wangxiao/Downloads/projects/PET_reconstuction/NIPET/dataset/amyloidPET_FBP_TP0/',
'lm_dcm': '/home/wangxiao/Downloads/projects/PET_reconstuction/NIPET/dataset/amyloidPET_FBP_TP0/LM/17598013_1946_20150604155500.000000.dcm',
'lm_bf': '/home/wangxiao/Downloads/projects/PET_reconstuction/NIPET/dataset/amyloidPET_FBP_TP0/LM/17598013_1946_20150604155500.000000.bf',
'mumapDCM': '/home/wangxiao/Downloads/projects/PET_reconstuction/NIPET/dataset/amyloidPET_FBP_TP0/umap',
'#mumapDCM': 192,
'nrm_dcm': '/home/wangxiao/Downloads/projects/PET_reconstuction/NIPET/dataset/amyloidPET_FBP_TP0/norm/17598013_1946_20150604082431.000000.dcm',
'nrm_bf': '/home/wangxiao/Downloads/projects/PET_reconstuction/NIPET/dataset/amyloidPET_FBP_TP0/norm/17598013_1946_20150604082431.000000.bf'}

mmr_hardwareumaps

How can I acquire these four .hdr files for mmr_hardwareumaps?
'umap_HNMCL_10606489.v.hdr',
'umap_HNMCU_10606489.v.hdr',
'umap_SPMC_10606491.v.hdr',
'umap_PT_2291734.v.hdr'

I tried several cells in demo.ipynb, but when I ran the second cell in the "load & process raw data" section, it reported "No such file or directory: '~/mmr_hardwareumaps/umap_HNMCL_10606489.v.hdr'". The directory is indeed empty.

using via CMake failed (doc incomplete?)

I installed nipet from PyPI and used

    find_package(NiftyPETnipet REQUIRED COMPONENTS mmr_auxe mmr_lmproc petprj CONFIG)

and get

CMake Warning (dev) at my-env/lib/python3.10/site-packages/niftypet/nipet/cmake/NiftyPETnipetConfig.cmake:30 (if):
  Policy CMP0057 is not set: Support new IN_LIST if() operator.  Run "cmake
  --help-policy CMP0057" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  IN_LIST will be interpreted as an operator when the policy is set to NEW.
  Since the policy is not set the OLD behavior will be used.
Call Stack (most recent call first):
  CMakeLists.txt:186 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Error at my-env/lib/python3.10/site-packages/niftypet/nipet/cmake/NiftyPETnipetConfig.cmake:30 (if):
  if given arguments:
    "NOT" "_comp" "IN_LIST" "_supported_components"
  Unknown arguments specified

I had the same error when just using

find_package(NiftyPETnipet REQUIRED COMPONENTS mmr_auxe mmr_lmproc petprj)

STIR's NiftyPET /NIPET support cannot be upgraded to NiftyPET 2

@rijobro wrapped NiftyPET into STIR. He had to hard-wire some of the python NIPET initialisations into the STIR C++, but it wasn't too bad.

I've now looked at updating to NIPET2. It seems that more and more initialisation is now only done in Python. For instance, we used get_txlut() to initialise txLUT structure. This is now done in transaxial_lut in mmraux.py.
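For reference, a sketch of the Python-side initialisation being discussed (assuming the parameter layout from the tutorials, where get_mmrparams returns the constants dictionary under 'Cnt'; the exact transaxial_lut signature is an assumption):

from niftypet import nipet

mMRpars = nipet.get_mmrparams()                      # constants + LUTs
txLUT = nipet.mmraux.transaxial_lut(mMRpars['Cnt'])  # now done Python-side in NIPET 2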

It makes no sense to "copy" all that Python code into STIR's C++ wrapper. It would mean that we have even more hard-wiring, and presumably will have broken code with the next NIPET update. But I'm assuming you don't want to put all that Python code back into C++...

It therefore sadly looks like we will have to remove NiftyPET's wrapper from STIR. @pjmark @casperdcl, any ideas?
