analysiscenter / radio

RadIO is a library for data science research of computed tomography imaging

Home Page: https://analysiscenter.github.io/radio/

License: Apache License 2.0

Python 9.98% Jupyter Notebook 90.02%
computed-tomography data-science deep-learning machine-learning medical-imaging neural-networks tensorflow

radio's People

Contributors

akoryagin, anton-br, dearkafka, dmylzenova, henrique, kirillemelyanov, kirillemilio, roman-kh


radio's Issues

Memory Issue using load (blosc)

I used the package to load .raw images, do some computations, and save them again as blosc files. However, now that I want to reload these images to inspect them, this uses all of my memory. Even with a small batch size of two, memory usage shoots to 99% for a while and then either settles at a value much higher than before or the program crashes completely. I am using a computer with 12 GB of RAM, so I didn't expect this kind of problem, especially because the first part, loading my .raw images, works fine.
Do you know if there is an explanation / fix for this problem?

I used the following code:

# assumed imports (not shown in the original report); depending on the install,
# ds may be the standalone dataset package or radio.dataset
from radio import CTImagesMaskedBatch as CTIMB
from radio import dataset as ds
from radio.dataset import Pipeline

# set up the dataset structure
crops_index = ds.FilesIndex(path='C:/subset3 - split/training/*', dirs=True)
crops_dataset = ds.Dataset(index=crops_index, batch_class=CTIMB)

# create a pipeline that loads the images plus the spacing & origin information
pipeline_load = Pipeline().load(fmt='blosc', components=['spacing', 'origin', 'images', 'masks'])

# attach the dataset to the pipeline and run it batch by batch
ctline = crops_dataset >> pipeline_load

batch = ctline.next_batch(batch_size=2, shuffle=False)
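
In case it helps narrow things down, a minimal sketch for looking at a few batches while dropping each one before loading the next, reusing ctline from the snippet above (the explicit del and gc.collect are only there to rule out lingering Python references; whether they actually lower the peak usage is exactly what is in question here):

import gc

for _ in range(5):                                     # inspect a handful of batches only
    batch = ctline.next_batch(batch_size=2, shuffle=False)
    print(batch.images.shape, batch.images.dtype)      # quick look at the loaded data
    del batch                                          # drop the reference to the loaded arrays
    gc.collect()                                       # force collection before the next load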

TypeError: 'FilesIndex' object is not iterable

The code is from Tutorial 1.

Traceback (most recent call last):
File "C:/Users/Predator/Documents/WORK/Project_CT/radio/radio_tut1.py", line 17, in
batch = (luna_dataset >> preprocessing).next_batch(batch_size=2, shuffle=False) # execution starts when you call next_batch
File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\dataset\pipeline.py", line 1080, in next_batch
batch_res = next(self._batch_generator)
File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\dataset\pipeline.py", line 1053, in gen_batch
batch_res = self.execute_for(batch)
File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\dataset\pipeline.py", line 597, in execute_for
batch_res = self._exec_all_actions(batch)
File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\dataset\pipeline.py", line 569, in _exec_all_actions
batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\dataset\pipeline.py", line 517, in _exec_one_action
batch = action_method(*args, **kwargs)
File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\dataset\dataset\decorators.py", line 37, in _action_wrapper
_res = action_method(action_self, *args, **kwargs)
File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\preprocessing\ct_batch.py", line 367, in load
self._load_raw() # pylint: disable=no-value-for-parameter
File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\preprocessing\ct_batch.py", line 541, in _load_raw
for patient_id in self.indices:
TypeError: 'FilesIndex' object is not iterable
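
For reference, the Tutorial 1-style setup that leads into this load call looks roughly like the sketch below; the path and scan mask are placeholders, and the import locations follow the usage elsewhere on this page, so they may differ in your radio version:

from radio import CTImagesMaskedBatch
from radio.dataset import FilesIndex, Dataset, Pipeline

luna_index = FilesIndex(path='/path/to/LUNA16/subset*/*.mhd', no_ext=True)   # one index entry per .mhd scan
luna_dataset = Dataset(index=luna_index, batch_class=CTImagesMaskedBatch)

preprocessing = Pipeline().load(fmt='raw')
batch = (luna_dataset >> preprocessing).next_batch(batch_size=2, shuffle=False)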

reading nifti files

Does this library support reading NIfTI (.nii.gz) files?
I tried it just like the tutorial for DICOM files but got the error described in this issue.
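
The load formats that appear in these issues are raw, dicom and blosc, so NIfTI would likely need to be read separately. A hedged sketch of getting an .nii.gz volume into a NumPy array with nibabel (an external library, not part of RadIO); the axis reordering is an assumption about a slices-first layout:

import nibabel as nib
import numpy as np

img = nib.load('/path/to/scan.nii.gz')          # lazy NIfTI handle
volume = np.asarray(img.get_fdata())            # (x, y, z) float array
volume = np.transpose(volume, (2, 1, 0))        # reorder to (z, y, x) if slices-first is expected
print(volume.shape, img.header.get_zooms())     # voxel spacing comes from the header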

Tutorial I: Error in preprocessing pipeline

Hi all, @roman-kh @akoryagin
I cannot get past Tutorial I. The preprocessing pipeline worked when it only had the load operation, but when I chained it with the resize operation I got the following error. Please help, thanks.
(Screenshot from 2019-05-29 16-40-57 attached.)

The traceback is the following:

AttributeError                            Traceback (most recent call last)
<ipython-input-9-fbc764c06c25> in <module>
----> 1 batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in next_batch(self, *args, **kwargs)
   1242                 self._lazy_run = args, kwargs
   1243                 self._batch_generator = self.gen_batch(*args, **kwargs)
-> 1244             batch_res = next(self._batch_generator)
   1245         else:
   1246             _kwargs = kwargs.copy()

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _gen_batch(self, *args, **kwargs)
   1212             for batch in batch_generator:
   1213                 try:
-> 1214                     batch_res = self.execute_for(batch)
   1215                 except SkipBatchException:
   1216                     pass

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in execute_for(self, batch, new_loop)
    607             asyncio.set_event_loop(asyncio.new_event_loop())
    608         batch.pipeline = self
--> 609         batch_res = self._exec_all_actions(batch)
    610         batch_res.pipeline = self
    611         return batch_res

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _exec_all_actions(self, batch, action_list)
    579                     join_batches = None
    580 
--> 581                 batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
    582 
    583             batch.pipeline = self

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _exec_one_action(self, batch, action, args, kwargs)
    527                 batch.pipeline = self
    528                 action_method, _ = self._get_action_method(batch, action['name'])
--> 529                 batch = action_method(*args, **kwargs)
    530                 batch.pipeline = self
    531         return batch

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in _action_wrapper(action_self, *args, **kwargs)
     42                 action_self.pipeline.get_variable(_lock_name).acquire()
     43 
---> 44         _res = action_method(action_self, *args, **kwargs)
     45 
     46         if _use_lock is not None:

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in wrapped_method(self, *args, **kwargs)
    325                 x = wrap_with_async(self, args, kwargs)
    326             elif _target in ['threads', 't']:
--> 327                 x = wrap_with_threads(self, args, kwargs)
    328             elif _target in ['mpc', 'm']:
    329                 x = wrap_with_mpc(self, args, kwargs)

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in wrap_with_threads(self, args, kwargs)
    228                 cf.wait(futures, timeout=timeout, return_when=cf.ALL_COMPLETED)
    229 
--> 230             return _call_post_fn(self, post_fn, futures, args, full_kwargs)
    231 
    232         def wrap_with_mpc(self, args, kwargs):

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in _call_post_fn(self, post_fn, futures, args, kwargs)
    153                     traceback.print_tb(all_errors[0].__traceback__)
    154                 return self
--> 155             return post_fn(all_results, *args, **kwargs)
    156 
    157         def _prepare_args(self, args, kwargs):

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/preprocessing/ct_masked_batch.py in _post_rebuild(self, all_outputs, new_batch, **kwargs)
    974         batch._rescale_spacing()  # pylint: disable=protected-access
    975         if self.masks is not None:
--> 976             batch.create_mask()
    977         return batch
    978 

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in _action_wrapper(action_self, *args, **kwargs)
     42                 action_self.pipeline.get_variable(_lock_name).acquire()
     43 
---> 44         _res = action_method(action_self, *args, **kwargs)
     45 
     46         if _use_lock is not None:

/media/roguka/big_drive/workspace/tensorflow/venv/lib/python3.6/site-packages/radio/preprocessing/ct_masked_batch.py in create_mask(self, mode)
    502         self.masks = np.zeros_like(self.images)
    503 
--> 504         center_pix = np.abs(self.nodules.nodule_center -
    505                             self.nodules.origin) / self.nodules.spacing
    506         radius_pix = np.rint(self.nodules.nodule_size / self.nodules.spacing / 2)

AttributeError: 'NoneType' object has no attribute 'nodule_center'

PyTorch

Is it possible to train a custom PyTorch model with this framework?
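
Nothing in these issues mentions a PyTorch model class, but the batches unpack to plain NumPy arrays, so one hedged option is to keep RadIO for preprocessing and feed the unpacked crops into an ordinary PyTorch training loop. A sketch: luna_dataset and crop_pipeline are assumed to exist as in the tutorials, and the unpack keyword arguments mirror the Keras example further down this page:

import torch

model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)   # stand-in for a real 3D segmentation network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()

training = luna_dataset >> crop_pipeline                   # names assumed from the tutorials

for _ in range(10):
    batch = training.next_batch(batch_size=4, shuffle=True)
    images = torch.from_numpy(batch.unpack(component='images', data_format='channels_first')).float()
    masks = torch.from_numpy(batch.unpack(component='masks', data_format='channels_first')).float()
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)                   # voxel-wise loss on the predicted mask
    loss.backward()
    optimizer.step()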

Kernel crashes after using all available RAM

Hi guys!
I am trying to run Tutorial 3 (and 4) in Google Colaboratory notebooks (GPU runtime with 12 GB RAM), using only subset0 from LUNA16. The kernel crashes when I execute the "Shortcut to RadIO capabilities: pipelines submodule" part of Tutorial 3.
In [34]: batch_crops = (luna_dataset >> crops_sampling).next_batch(7)

Is the data from one subset really too big for 12 GB of RAM?

Timestamp Level Message
Feb 17, 2019, 12:41:07 PM WARNING WARNING:root:kernel 29f58e92-c82d-4d59-8881-97130a866161 restarted
Feb 17, 2019, 12:41:07 PM INFO KernelRestarter: restarting kernel (1/5), keep random ports
Feb 17, 2019, 12:40:43 PM WARNING tcmalloc: large alloc 5872025600 bytes == 0xfe3e8000 @ 0x7fc77217c001 0x7fc76686fb85 0x7fc7668d2b43 0x7fc7668d4a86 0x7fc76696c868 0x5030d5 0x506859 0x504c28 0x501b2e 0x591461 0x59ebbe 0x507c17 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x58650d 0x59ebbe 0x507c17 0x504c28 0x501b2e 0x591461 0x59ebbe 0x507c17 0x502209 0x502f3d

My notebook is https://drive.google.com/open?id=1j3xoSUvKF-Z6rl6lO3HcJ-dLxBKUUkzx

Thank you for your help.

ValueError('image has wrong mode',)

Hi, I am trying the same code from the tutorial on different platforms. On Ubuntu 16 the first lines of code run much faster than on Windows or macOS, but I got a different error:

Traceback (most recent call last):
  File "/home/~/Documents/work/radio/testRADIO.py", line 47, in <module>
    (lunaset.train >> crop_pipeline).run()
  File "/home/~/Documents/work/radio/radio/dataset/dataset/pipeline.py", line 1088, in run
    for _ in self.gen_batch(*args, **kwargs):
  File "/home/~/Documents/work/radio/radio/dataset/dataset/pipeline.py", line 1035, in gen_batch
    batch_res = self._exec(batch)
  File "/home/~/Documents/work/radio/radio/dataset/dataset/pipeline.py", line 579, in _exec
    batch_res = self._exec_all_actions(batch)
  File "/home/~/Documents/work/radio/radio/dataset/dataset/pipeline.py", line 565, in _exec_all_actions
    batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
  File "/home/~/Documents/work/radio/radio/dataset/dataset/pipeline.py", line 516, in _exec_one_action
    batch = action_method(*args, **kwargs)
  File "/home/~/Documents/work/radio/radio/dataset/dataset/decorators.py", line 37, in _action_wrapper
    _res = action_method(action_self, *args, **kwargs)
  File "/home/~/Documents/work/radio/radio/dataset/dataset/decorators.py", line 238, in wrapped_method
    return wrap_with_threads(self, args, kwargs)
  File "/home/~/Documents/work/radio/radio/dataset/dataset/decorators.py", line 171, in wrap_with_threads
    return _call_post_fn(self, post_fn, futures, args, full_kwargs)
  File "/home/~/Documents/work/radio/radio/dataset/dataset/decorators.py", line 136, in _call_post_fn
    return post_fn(all_results, *args, **kwargs)
  File "/home/~/Documents/work/radio/radio/preprocessing/ct_masked_batch.py", line 887, in _post_rebuild
    batch = super()._post_rebuild(all_outputs, new_batch, **kwargs)
  File "/home/~/Documents/work/radio/radio/preprocessing/ct_batch.py", line 962, in _post_rebuild
    self._reraise_worker_exceptions(all_outputs)
  File "/home/~/Documents/work/radio/radio/preprocessing/ct_batch.py", line 785, in _reraise_worker_exceptions
    raise RuntimeError("Failed parallelizing. Some of the workers failed with following errors: ", all_errors)
RuntimeError: ('Failed parallelizing. Some of the workers failed with following errors: ', [ValueError('image has wrong mode',), ValueError('image has wrong mode',), ValueError('image has wrong mode',), ValueError('image has wrong mode',), ValueError('image has wrong mode',), ValueError('image has wrong mode',), ValueError('image has wrong mode',)])

Created Masks are not being resized

Hello, I faced a problem in Tutorial 3. When I run the code from the tutorial (screenshots of the code and its output were attached), the nodule location in the mask does not correspond to any nodule in the images. I suspect that the mask is not being resized when I use the unify_spacing function. This is a problem because when I tried to use split_dump to generate training examples for a neural network, the masks obtained were wrong, which makes the predictions wrong as well.

How do I resolve this issue? Has anyone else encountered this problem?
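
For comparison, the ordering the tutorials appear to rely on is to fetch the nodule info first, rescale, and only then build the masks, so that create_mask works in the rescaled coordinates. A sketch; annotations_df is a placeholder for the LUNA16 annotations DataFrame, and the keyword name of fetch_nodules_info is an assumption, so check the signature in your version:

preprocessing = (Pipeline()
                 .load(fmt='raw')
                 .fetch_nodules_info(nodules=annotations_df)   # nodule centres/diameters in world coordinates
                 .unify_spacing(shape=(92, 256, 256), spacing=(1, 1, 1))
                 .create_mask())                               # mask built after rescaling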

llvmlite, numba errors

Traceback (most recent call last):
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\errors.py", line 259, in new_error_context
    yield
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 216, in lower_block
    self.lower_inst(inst)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 265, in lower_inst
    val = self.lower_assign(ty, inst)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 411, in lower_assign
    return self.lower_expr(ty, value)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 743, in lower_expr
    res = self.lower_call(resty, expr)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 709, in lower_call
    res = impl(self.builder, argvals)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\targets\base.py", line 1043, in __call__
    return self._imp(self._context, builder, self._sig, args)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\targets\arrayobj.py", line 3158, in numpy_zeros_nd
    ary = _empty_nd_impl(context, builder, arrtype, shapes)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\targets\arrayobj.py", line 3045, in _empty_nd_impl
    arrlen = builder.mul(arrlen, s)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\llvmlite\ir\builder.py", line 24, in wrapped
    % (lhs.type, rhs.type))
ValueError: Operands must be the same type, got (i64, i32)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/Predator/Documents/WORK/Project_CT/radio/radioInput.py", line 48, in <module>
    (lunaset.train >> crop_pipeline).run()
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\dataset\dataset\pipeline.py", line 1088, in run
    for _ in self.gen_batch(*args, **kwargs):
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\dataset\dataset\pipeline.py", line 1035, in gen_batch
    batch_res = self._exec(batch)
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\dataset\dataset\pipeline.py", line 579, in _exec
    batch_res = self._exec_all_actions(batch)
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\dataset\dataset\pipeline.py", line 565, in _exec_all_actions
    batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\dataset\dataset\pipeline.py", line 516, in _exec_one_action
    batch = action_method(*args, **kwargs)
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\dataset\dataset\decorators.py", line 37, in _action_wrapper
    _res = action_method(action_self, *args, **kwargs)
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\preprocessing\ct_masked_batch.py", line 719, in sample_dump
    nodules = self.sample_nodules(batch_size=batch_size, nodule_size=nodule_size, share=share, **kwargs)
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\dataset\dataset\decorators.py", line 37, in _action_wrapper
    _res = action_method(action_self, *args, **kwargs)
  File "C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\preprocessing\ct_masked_batch.py", line 650, in sample_nodules
    images = get_nodules_numba(self.images, nodules_st_pos, nodule_size)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\dispatcher.py", line 307, in _compile_for_args
    return self.compile(tuple(argtypes))
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\dispatcher.py", line 579, in compile
    cres = self._compiler.compile(args, return_type)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\dispatcher.py", line 80, in compile
    flags=flags, locals=self.locals)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 779, in compile_extra
    return pipeline.compile_extra(func)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 362, in compile_extra
    return self._compile_bytecode()
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 738, in _compile_bytecode
    return self._compile_core()
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 725, in _compile_core
    res = pm.run(self.status)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 248, in run
    raise patched_exception
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 240, in run
    stage()
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 658, in stage_nopython_backend
    self._backend(lowerfn, objectmode=False)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 613, in _backend
    lowered = lowerfn()
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 600, in backend_nopython_mode
    self.flags)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\compiler.py", line 898, in native_lowering_stage
    lower.lower()
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 135, in lower
    self.lower_normal_function(self.fndesc)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 176, in lower_normal_function
    entry_block_tail = self.lower_function_body()
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 201, in lower_function_body
    self.lower_block(block)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 216, in lower_block
    self.lower_inst(inst)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\contextlib.py", line 77, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\errors.py", line 265, in new_error_context
    six.reraise(type(newerr), newerr, sys.exc_info()[2])
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\six.py", line 658, in reraise
    raise value.with_traceback(tb)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\errors.py", line 259, in new_error_context
    yield
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 216, in lower_block
    self.lower_inst(inst)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 265, in lower_inst
    val = self.lower_assign(ty, inst)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 411, in lower_assign
    return self.lower_expr(ty, value)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 743, in lower_expr
    res = self.lower_call(resty, expr)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\lowering.py", line 709, in lower_call
    res = impl(self.builder, argvals)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\targets\base.py", line 1043, in __call__
    return self._imp(self._context, builder, self._sig, args)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\targets\arrayobj.py", line 3158, in numpy_zeros_nd
    ary = _empty_nd_impl(context, builder, arrtype, shapes)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\numba\targets\arrayobj.py", line 3045, in _empty_nd_impl
    arrlen = builder.mul(arrlen, s)
  File "C:\Users\Predator\AppData\Local\Programs\Python\Python35\lib\site-packages\llvmlite\ir\builder.py", line 24, in wrapped
    % (lhs.type, rhs.type))
numba.errors.LoweringError: Failed at nopython (nopython mode backend)
Operands must be the same type, got (i64, i32)
File "radio\preprocessing\ct_masked_batch.py", line 59
[1] During: lowering "$0.20 = call $0.2($0.19, func=$0.2, args=[Var($0.19, C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\preprocessing\ct_masked_batch.py (59))], vararg=None, kws=[])" at C:\Users\Predator\Documents\WORK\Project_CT\radio\radio\preprocessing\ct_masked_batch.py (59)

The code above was tested on a very small subset of images from the LUNA16 dataset, because '(lunaset.train >> crop_pipeline).run()' on the whole dataset just freezes my PC indefinitely (it even restarts after being frozen for a long time). I am running Windows 10 x64 with an i7 processor.

Everything is up to date: I upgraded to the newest numpy+mkl, numba and llvmlite, and I am running Python 3.5.

Is anyone else experiencing the same problem?

Tutorial 4: Memory error while running crop pipeline

Hi,

While running the crop pipeline (from Tutorial 4, input 8) over the test (or training) data, I am getting the following error:

MemoryError                               Traceback (most recent call last)
<ipython-input-17-489b801ca09d> in <module>()
----> 1 (lunaset.test >> crop_pipeline).run()

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/pipeline.py in run(self, *args, **kwargs)
   1086             if len(args) == 0 and len(kwargs) == 0:
   1087                 args, kwargs = self._lazy_run
-> 1088             for _ in self.gen_batch(*args, **kwargs):
   1089                 pass
   1090         return self

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/pipeline.py in gen_batch(self, batch_size, shuffle, n_epochs, drop_last, prefetch, on_iter, *args, **kwargs)
   1033             for batch in batch_generator:
   1034                 try:
-> 1035                     batch_res = self._exec(batch)
   1036                 except SkipBatchException:
   1037                     pass

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/pipeline.py in _exec(self, batch, new_loop)
    577             asyncio.set_event_loop(asyncio.new_event_loop())
    578         batch.pipeline = self
--> 579         batch_res = self._exec_all_actions(batch)
    580         batch_res.pipeline = self
    581         return batch_res

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/pipeline.py in _exec_all_actions(self, batch, action_list)
    563                     join_batches = None
    564 
--> 565                 batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
    566 
    567             batch.pipeline = self

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/pipeline.py in _exec_one_action(self, batch, action, args, kwargs)
    514                 batch.pipeline = self
    515                 action_method, _ = self._get_action_method(batch, action['name'])
--> 516                 batch = action_method(*args, **kwargs)
    517                 batch.pipeline = self
    518         return batch

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/decorators.py in _action_wrapper(action_self, *args, **kwargs)
     35                 action_self.pipeline.get_variable(_lock_name).acquire()
     36 
---> 37         _res = action_method(action_self, *args, **kwargs)
     38 
     39         if _use_lock is not None:

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/decorators.py in wrapped_method(self, *args, **kwargs)
    236                 return wrap_with_async(self, args, kwargs)
    237             elif target in ['threads', 't']:
--> 238                 return wrap_with_threads(self, args, kwargs)
    239             elif target in ['mpc', 'm']:
    240                 return wrap_with_mpc(self, args, kwargs)

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/decorators.py in wrap_with_threads(self, args, kwargs)
    161                 futures = []
    162                 full_kwargs = {**dec_kwargs, **kwargs}
--> 163                 for arg in _call_init_fn(init_fn, args, full_kwargs):
    164                     margs, mkwargs = _make_args(arg, args, kwargs)
    165                     one_ft = executor.submit(method, self, *margs, **mkwargs)

~/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/decorators.py in _call_init_fn(init_fn, args, kwargs)
    111         def _call_init_fn(init_fn, args, kwargs):
    112             if callable(init_fn):
--> 113                 return init_fn(*args, **kwargs)
    114             return init_fn
    115 

~/anaconda3/lib/python3.6/site-packages/radio/preprocessing/ct_batch.py in _init_rebuild(self, **kwargs)
    893             num_slices, y, x = kwargs['shape']
    894             new_bounds = num_slices * np.arange(len(self) + 1)
--> 895             new_data = np.zeros((num_slices * len(self), y, x))
    896         else:
    897             new_bounds = self._bounds

MemoryError: 

I have tried experimenting with several split ratios ([#training, #test]), ranging from [0.9, 0.1] to [0.995, 0.005], all resulting in the same error. For larger datasets, (lunaset.train >> crop_pipeline).run() does not terminate in reasonable time. I am running Ubuntu 16.04, 64-bit, on a PC with 6 GB RAM.
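
For a rough sense of scale: the allocation that fails is num_slices * len(batch) * y * x float64 values. A back-of-the-envelope check, assuming the (92, 256, 256) target shape used elsewhere on this page (the real Tutorial 4 shapes may differ):

num_slices, y, x = 92, 256, 256
bytes_per_scan = num_slices * y * x * 8                      # np.zeros allocates float64 by default
for batch_len in (3, 10, 50):
    print(batch_len, 'scans ->', round(batch_len * bytes_per_scan / 1024**3, 2), 'GiB')

At about 46 MiB per rebuilt scan, the new array plus the original data and any intermediate copies can exhaust 6 GB of RAM well before the split ratio matters.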

AttributeError: Method 'fetch_nodules_info' has not been found in the CTImagesBatch class

If I use
luna_dataset = Dataset(index=luna_index, batch_class=CTImagesBatch)
I get the error below:

---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
1 luna_dataset = Dataset(index=luna_index, batch_class=CTImagesBatch)
----> 2 batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in next_batch(self, *args, **kwargs)
1347 self.reset_iter()
1348 self._batch_generator = self.gen_batch(*args, **kwargs)
-> 1349 batch_res = next(self._batch_generator)
1350 else:
1351 _kwargs = kwargs.copy()

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in _gen_batch(self, *args, **kwargs)
1312 for batch in batch_generator:
1313 try:
-> 1314 batch_res = self.execute_for(batch)
1315 except SkipBatchException:
1316 pass

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in execute_for(self, batch, new_loop)
670 asyncio.set_event_loop(asyncio.new_event_loop())
671 batch.pipeline = self
--> 672 batch_res = self._exec_all_actions(batch)
673 batch_res.pipeline = self
674 return batch_res

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in _exec_all_actions(self, batch, actions)
642 join_batches = None
643
--> 644 batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
645
646 batch.pipeline = self

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in _exec_one_action(self, batch, action, args, kwargs)
589 for _ in range(repeat):
590 batch.pipeline = self
--> 591 action_method, _ = self._get_action_method(batch, action['name'])
592 batch = action_method(*args, **kwargs)
593 batch.pipeline = self

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in _get_action_method(batch, name)
581 raise TypeError("%s is not a method" % name)
582 else:
--> 583 raise AttributeError("Method '%s' has not been found in the %s class" % (name, type(batch).name))
584 return action_method, action_spec
585

AttributeError: Method 'fetch_nodules_info' has not been found in the CTImagesBatch class
If I use luna_dataset = Dataset(index=luna_index, batch_class=CTImagesMaskedBatch)

I get the error below:

Info about nodules location must be loaded before calling this method. Nothing happened.
None

AttributeError Traceback (most recent call last)
in
1
----> 2 batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in next_batch(self, *args, **kwargs)
1347 self.reset_iter()
1348 self._batch_generator = self.gen_batch(*args, **kwargs)
-> 1349 batch_res = next(self._batch_generator)
1350 else:
1351 _kwargs = kwargs.copy()

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in _gen_batch(self, *args, **kwargs)
1312 for batch in batch_generator:
1313 try:
-> 1314 batch_res = self.execute_for(batch)
1315 except SkipBatchException:
1316 pass

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in execute_for(self, batch, new_loop)
670 asyncio.set_event_loop(asyncio.new_event_loop())
671 batch.pipeline = self
--> 672 batch_res = self._exec_all_actions(batch)
673 batch_res.pipeline = self
674 return batch_res

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in _exec_all_actions(self, batch, actions)
642 join_batches = None
643
--> 644 batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
645
646 batch.pipeline = self

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/pipeline.py in _exec_one_action(self, batch, action, args, kwargs)
590 batch.pipeline = self
591 action_method, _ = self._get_action_method(batch, action['name'])
--> 592 batch = action_method(*args, **kwargs)
593 batch.pipeline = self
594 return batch

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/decorators.py in _action_wrapper(action_self, *args, **kwargs)
42 action_self.pipeline.get_variable(_lock_name).acquire()
43
---> 44 _res = action_method(action_self, *args, **kwargs)
45
46 if _use_lock is not None:

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/decorators.py in wrapped_method(self, *args, **kwargs)
325 x = wrap_with_async(self, args, kwargs)
326 elif _target in ['threads', 't']:
--> 327 x = wrap_with_threads(self, args, kwargs)
328 elif _target in ['mpc', 'm']:
329 x = wrap_with_mpc(self, args, kwargs)

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/decorators.py in wrap_with_threads(self, args, kwargs)
228 cf.wait(futures, timeout=timeout, return_when=cf.ALL_COMPLETED)
229
--> 230 return _call_post_fn(self, post_fn, futures, args, full_kwargs)
231
232 def wrap_with_mpc(self, args, kwargs):

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/decorators.py in _call_post_fn(self, post_fn, futures, args, kwargs)
153 traceback.print_tb(all_errors[0].traceback)
154 return self
--> 155 return post_fn(all_results, *args, **kwargs)
156
157 def _prepare_args(self, args, kwargs):

/host/Akhila/Models/radio/tutorials/../radio/preprocessing/ct_masked_batch.py in _post_rebuild(self, all_outputs, new_batch, **kwargs)
974 batch._rescale_spacing() # pylint: disable=protected-access
975 if self.masks is not None:
--> 976 batch.create_mask()
977 return batch
978

/host/Akhila/Models/radio/tutorials/../radio/batchflow/batchflow/decorators.py in _action_wrapper(action_self, *args, **kwargs)
42 action_self.pipeline.get_variable(_lock_name).acquire()
43
---> 44 _res = action_method(action_self, *args, **kwargs)
45
46 if _use_lock is not None:

/host/Akhila/Models/radio/tutorials/../radio/preprocessing/ct_masked_batch.py in create_mask(self, mode)
502 self.masks = np.zeros_like(self.images)
503 print(self)
--> 504 center_pix = np.abs(self.nodules.nodule_center -
505 self.nodules.origin) / self.nodules.spacing
506 radius_pix = np.rint(self.nodules.nodule_size / self.nodules.spacing / 2)

AttributeError: 'NoneType' object has no attribute 'nodule_center'

Tutorial 4: how to predict masks using Keras3DUNet?

Please help: which parameter should I pass to the predict function?

unet_config.update({'path': './data/weights'})

train_unet_pipeline = (
    combine_crops(cancerset, ncancerset, batch_sizes=(4, 4))
    .init_model(
        name='3dunet', model_class=Keras3DUNet,
        config=unet_config, mode='static'
    )
    .train_model(
        name='3dunet',
        x=F(CTIMB.unpack, component='images', data_format='channels_first'),
        y=F(CTIMB.unpack, component='masks', data_format='channels_first'),
    )
)

unet_model = train_unet_pipeline.get_model_by_name('3dunet')

predicted_masks = unet_model.predict(feed_dict={'images': batch.unpack('images')})  # the code in the tutorial doesn't work
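
Not a confirmed answer, but one route that mirrors the train_model call above is to run prediction inside a pipeline too, with predict_model writing into a pipeline variable. The predict_model / init_variable / save_to signatures depend on the batchflow version bundled with radio, the import line is a guess, and test_dataset stands in for whatever dataset holds the scans to predict on, so treat this purely as a sketch:

from radio.batchflow import Pipeline, F, V   # import location varies across radio versions

predict_pipeline = (
    Pipeline()
    .load(fmt='blosc', components=['spacing', 'origin', 'images'])
    .import_model('3dunet', train_unet_pipeline)
    .init_variable('predictions', init_on_each_run=list)
    .predict_model(
        '3dunet',
        x=F(CTIMB.unpack, component='images', data_format='channels_first'),
        save_to=V('predictions'), mode='a',
    )
)

(test_dataset >> predict_pipeline).run(batch_size=4, shuffle=False)
predicted_masks = predict_pipeline.get_variable('predictions')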

convert blk files to png or jpg

Please let me know how to save LUNA16 data to JPG or PNG format through RadIO.

I have preprocessed the .mhd files in LUNA16 using RadIO and obtained four folders: images, masks, origin, and spacing. I would like to get the images and masks in an image format (PNG or JPG).
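
No image-export action comes up in these issues, but once a batch is loaded the scans sit in batch.images as a stacked array of axial slices, so a hedged workaround is to rescale each slice and save it with Pillow. A sketch; how batch was obtained (e.g. from a load(fmt='blosc') pipeline like the one in the first issue above) and the min/max rescaling to 8 bit are assumptions:

import numpy as np
from PIL import Image

volume = np.asarray(batch.images)                          # stacked (total_slices, y, x) array of HU values
for i, axial_slice in enumerate(volume):
    low, high = float(axial_slice.min()), float(axial_slice.max())
    as_uint8 = ((axial_slice - low) / max(high - low, 1e-6) * 255.0).astype(np.uint8)
    Image.fromarray(as_uint8).save(f'slice_{i:04d}.png')   # one grayscale PNG per axial slice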

Preprocessing resize

I was following Tutorial 1 and got the following exception when executing this line (it works if I remove the resize part):

preprocessing = (Pipeline()                     # Initialize empty pipeline 
                 .load(fmt='raw')               # load data from disk
                 .resize(shape=(92, 256, 256))) # resize to shape (92, 256, 256)

batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)

It throws the same error for

preprocessing = preprocessing.resize(shape=(92, 256, 256)) # remember, nothing is executed here

batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)

RuntimeError Traceback (most recent call last)
in
----> 1 batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in next_batch(self, *args, **kwargs)
1242 self._lazy_run = args, kwargs
1243 self._batch_generator = self.gen_batch(*args, **kwargs)
-> 1244 batch_res = next(self._batch_generator)
1245 else:
1246 _kwargs = kwargs.copy()

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _gen_batch(self, *args, **kwargs)
1212 for batch in batch_generator:
1213 try:
-> 1214 batch_res = self.execute_for(batch)
1215 except SkipBatchException:
1216 pass

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in execute_for(self, batch, new_loop)
607 asyncio.set_event_loop(asyncio.new_event_loop())
608 batch.pipeline = self
--> 609 batch_res = self._exec_all_actions(batch)
610 batch_res.pipeline = self
611 return batch_res

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _exec_all_actions(self, batch, action_list)
579 join_batches = None
580
--> 581 batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
582
583 batch.pipeline = self

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _exec_one_action(self, batch, action, args, kwargs)
527 batch.pipeline = self
528 action_method, _ = self._get_action_method(batch, action['name'])
--> 529 batch = action_method(*args, **kwargs)
530 batch.pipeline = self
531 return batch

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in _action_wrapper(action_self, *args, **kwargs)
42 action_self.pipeline.get_variable(_lock_name).acquire()
43
---> 44 _res = action_method(action_self, *args, **kwargs)
45
46 if _use_lock is not None:

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in wrapped_method(self, *args, **kwargs)
325 x = wrap_with_async(self, args, kwargs)
326 elif _target in ['threads', 't']:
--> 327 x = wrap_with_threads(self, args, kwargs)
328 elif _target in ['mpc', 'm']:
329 x = wrap_with_mpc(self, args, kwargs)

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in wrap_with_threads(self, args, kwargs)
228 cf.wait(futures, timeout=timeout, return_when=cf.ALL_COMPLETED)
229
--> 230 return _call_post_fn(self, post_fn, futures, args, full_kwargs)
231
232 def wrap_with_mpc(self, args, kwargs):

/usr/local/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in _call_post_fn(self, post_fn, futures, args, kwargs)
153 traceback.print_tb(all_errors[0].traceback)
154 return self
--> 155 return post_fn(all_results, *args, **kwargs)
156
157 def _prepare_args(self, args, kwargs):

/usr/local/lib/python3.6/site-packages/radio/preprocessing/ct_masked_batch.py in _post_rebuild(self, all_outputs, new_batch, **kwargs)
970 """
971 # TODO: process errors
--> 972 batch = super()._post_rebuild(all_outputs, new_batch, **kwargs)
973 batch.nodules = self.nodules
974 batch._rescale_spacing() # pylint: disable=protected-access

/usr/local/lib/python3.6/site-packages/radio/preprocessing/ct_batch.py in _post_rebuild(self, all_outputs, new_batch, **kwargs)
1071 unify_spacing is performed.
1072 """
-> 1073 self._reraise_worker_exceptions(all_outputs)
1074
1075 new_bounds = np.cumsum([patient_shape[0] for _, patient_shape

/usr/local/lib/python3.6/site-packages/radio/preprocessing/ct_batch.py in _reraise_worker_exceptions(self, worker_outputs)
865 if any_action_failed(worker_outputs):
866 all_errors = self.get_errors(worker_outputs)
--> 867 raise RuntimeError("Failed parallelizing. Some of the workers failed with following errors: ", all_errors)
868
869 def _post_custom_components(self, list_of_dicts, **kwargs):

RuntimeError: ('Failed parallelizing. Some of the workers failed with following errors: ', [LoweringError('Failed in object mode pipeline (step: object mode frontend)\nFailed in object mode pipeline (step: object mode backend)\n(<class 'numba.ir.StaticSetItem'>, output_array[(slice(None, None, None), slice(None, None, None), slice(None, None, None))] = $196.21)\n\nFile "../../../../../usr/local/lib/python3.6/site-packages/radio/preprocessing/resize.py", line 124:\ndef resize_pil(input_array, output_array, res, axes_pairs=None, shape_resize=None,\n \n # normalize result of resize (average over resizes with different pairs of axes)\n output_array[:, :, :] /= len(axes_pairs)\n ^\n\n[1] During: lowering "output_array[(slice(None, None, None), slice(None, None, None), slice(None, None, None))] = $196.21" at /usr/local/lib/python3.6/site-packages/radio/preprocessing/resize.py (124)\n-------------------------------------------------------------------------------\nThis should not have happened, a problem has occurred in Numba's internals.\n\nPlease report the error message and traceback, along with a minimal reproducer\nat: https://github.com/numba/numba/issues/new\n\nIf more help is needed please feel free to speak to the Numba core developers\ndirectly at: https://gitter.im/numba/numba\n\nThanks in advance for your help in improving Numba!\n\n',), LoweringError('Failed in object mode pipeline (step: object mode frontend)\nFailed in object mode pipeline (step: object mode backend)\n(<class 'numba.ir.StaticSetItem'>, output_array[(slice(None, None, None), slice(None, None, None), slice(None, None, None))] = $196.21)\n\nFile "../../../../../usr/local/lib/python3.6/site-packages/radio/preprocessing/resize.py", line 124:\ndef resize_pil(input_array, output_array, res, axes_pairs=None, shape_resize=None,\n \n # normalize result of resize (average over resizes with different pairs of axes)\n output_array[:, :, :] /= len(axes_pairs)\n ^\n\n[1] During: lowering "output_array[(slice(None, None, None), slice(None, None, None), slice(None, None, None))] = $196.21" at /usr/local/lib/python3.6/site-packages/radio/preprocessing/resize.py (124)\n-------------------------------------------------------------------------------\nThis should not have happened, a problem has occurred in Numba's internals.\n\nPlease report the error message and traceback, along with a minimal reproducer\nat: https://github.com/numba/numba/issues/new\n\nIf more help is needed please feel free to speak to the Numba core developers\ndirectly at: https://gitter.im/numba/numba\n\nThanks in advance for your help in improving Numba!\n\n',), LoweringError('Failed in object mode pipeline (step: object mode frontend)\nFailed in object mode pipeline (step: object mode backend)\n(<class 'numba.ir.StaticSetItem'>, output_array[(slice(None, None, None), slice(None, None, None), slice(None, None, None))] = $196.21)\n\nFile "../../../../../usr/local/lib/python3.6/site-packages/radio/preprocessing/resize.py", line 124:\ndef resize_pil(input_array, output_array, res, axes_pairs=None, shape_resize=None,\n \n # normalize result of resize (average over resizes with different pairs of axes)\n output_array[:, :, :] /= len(axes_pairs)\n ^\n\n[1] During: lowering "output_array[(slice(None, None, None), slice(None, None, None), slice(None, None, None))] = $196.21" at /usr/local/lib/python3.6/site-packages/radio/preprocessing/resize.py (124)\n-------------------------------------------------------------------------------\nThis should not have happened, a 
problem has occurred in Numba's internals.\n\nPlease report the error message and traceback, along with a minimal reproducer\nat: https://github.com/numba/numba/issues/new\n\nIf more help is needed please feel free to speak to the Numba core developers\ndirectly at: https://gitter.im/numba/numba\n\nThanks in advance for your help in improving Numba!\n\n',)])

Question: Use RadIO with TIFF/JPG

Hello experts:

Would I be able to use RadIO without the raw DICOM slices/files? I am working with CT scan images that have been cleaned and exported from DICOM to TIFF/JPEG format. How would I use RadIO in this case? Please advise.

Thanks,

Indus
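
The loaders referenced on this page are raw, dicom and blosc, so TIFF/JPEG slices would most likely need to be assembled into a volume yourself before any RadIO-style processing. A sketch of stacking sorted slice files into a (z, y, x) NumPy array with Pillow; the filename pattern and the assumption that alphabetical order equals slice order are placeholders:

import glob
import numpy as np
from PIL import Image

slice_paths = sorted(glob.glob('/path/to/scan_folder/*.tif'))        # one file per axial slice
volume = np.stack([np.asarray(Image.open(p)) for p in slice_paths])  # shape (z, y, x)
print(volume.shape, volume.dtype)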

Numba errors

I am not very familiar with git yet, but this issue is a continuation of "llvmlite, numba errors" #7, which was closed.

I have the same error that Screenium encountered the first time. @AlexeyKozhevin I changed the function
def get_nodules_numba(data, positions, size): according to your suggestion, but I still get the same error:


Traceback (most recent call last):
  File "<ipython-input-30-1a5a735f9284>", line 3, in <module>
    batch_crops = (luna_dataset >> crops_sampling_pipeline).next_batch(2, shuffle=False)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\radio-0.1.0-py3.6.egg\radio\dataset\dataset\pipeline.py", line 1062, in next_batch
    batch_res = next(self._batch_generator)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\radio-0.1.0-py3.6.egg\radio\dataset\dataset\pipeline.py", line 1035, in gen_batch
    batch_res = self._exec(batch)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\radio-0.1.0-py3.6.egg\radio\dataset\dataset\pipeline.py", line 579, in _exec
    batch_res = self._exec_all_actions(batch)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\radio-0.1.0-py3.6.egg\radio\dataset\dataset\pipeline.py", line 565, in _exec_all_actions
    batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\radio-0.1.0-py3.6.egg\radio\dataset\dataset\pipeline.py", line 516, in _exec_one_action
    batch = action_method(*args, **kwargs)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\radio-0.1.0-py3.6.egg\radio\dataset\dataset\decorators.py", line 37, in _action_wrapper
    _res = action_method(action_self, *args, **kwargs)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\radio-0.1.0-py3.6.egg\radio\preprocessing\ct_masked_batch.py", line 650, in sample_nodules
    images = get_nodules_numba(self.images, nodules_st_pos, nodule_size)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\dispatcher.py", line 307, in _compile_for_args
    return self.compile(tuple(argtypes))
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\dispatcher.py", line 579, in compile
    cres = self._compiler.compile(args, return_type)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\dispatcher.py", line 80, in compile
    flags=flags, locals=self.locals)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 766, in compile_extra
    return pipeline.compile_extra(func)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 362, in compile_extra
    return self._compile_bytecode()
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 725, in _compile_bytecode
    return self._compile_core()
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 712, in _compile_core
    res = pm.run(self.status)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 248, in run
    raise patched_exception
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 240, in run
    stage()
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 647, in stage_nopython_backend
    self._backend(lowerfn, objectmode=False)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 602, in _backend
    lowered = lowerfn()
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 589, in backend_nopython_mode
    self.flags)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\compiler.py", line 885, in native_lowering_stage
    lower.lower()
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 135, in lower
    self.lower_normal_function(self.fndesc)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 176, in lower_normal_function
    entry_block_tail = self.lower_function_body()
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 201, in lower_function_body
    self.lower_block(block)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 216, in lower_block
    self.lower_inst(inst)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\errors.py", line 265, in new_error_context
    six.reraise(type(newerr), newerr, sys.exc_info()[2])
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\six.py", line 658, in reraise
    raise value.with_traceback(tb)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\errors.py", line 259, in new_error_context
    yield
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 216, in lower_block
    self.lower_inst(inst)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 265, in lower_inst
    val = self.lower_assign(ty, inst)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 411, in lower_assign
    return self.lower_expr(ty, value)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 735, in lower_expr
    res = self.lower_call(resty, expr)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\lowering.py", line 701, in lower_call
    res = impl(self.builder, argvals)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\targets\base.py", line 1046, in __call__
    return self._imp(self._context, builder, self._sig, args)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\targets\arrayobj.py", line 3117, in numpy_zeros_nd
    ary = _empty_nd_impl(context, builder, arrtype, shapes)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\numba\targets\arrayobj.py", line 3004, in _empty_nd_impl
    arrlen = builder.mul(arrlen, s)
  File "C:\Users\s120116\AppData\Local\Continuum\anaconda3\lib\site-packages\llvmlite\ir\builder.py", line 24, in wrapped
    % (lhs.type, rhs.type))
LoweringError: Operands must be the same type, got (i64, i32)

Do you have any idea how to solve this problem in another way?
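
As a fallback while the numba problem is open, the crop extraction that get_nodules_numba performs can be reproduced with plain NumPy slicing. A sketch, assuming the start positions are integer (z, y, x) indices into the stacked images array and every crop of nodule_size fits inside it:

import numpy as np

def get_nodules_plain(images, start_positions, nodule_size):
    """Cut fixed-size crops out of the stacked scan array without numba."""
    size_z, size_y, size_x = (int(s) for s in nodule_size)
    crops = np.empty((len(start_positions), size_z, size_y, size_x), dtype=images.dtype)
    for i, (z, y, x) in enumerate(np.asarray(start_positions, dtype=np.int64)):
        crops[i] = images[z:z + size_z, y:y + size_y, x:x + size_x]
    return crops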

Radio's version of dataset is out of date

I'd like to add some metrics (described here: https://analysiscenter.github.io/dataset/api/dataset.models.metrics.html#dataset.models.metrics.Metrics), but it seems that the version of dataset that is shipped with radio is out of date.

To reproduce, attempt to
import radio.dataset.models.metrics

Returns:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
/media/nvme/TRAIN/notebooks/PE/John's Exploration/trainModel.py in <module>()
----> 1 import radio.dataset.models.metrics

ModuleNotFoundError: No module named 'radio.dataset.models.metrics'

Expected:

The module imports without error.

NotADirectoryError

I am using the same code as in the RadIO description (I replaced the paths, of course).
I tried to import DICOM files, and when I run the pipeline I get the exception below.
Can you help me figure out what I did wrong?
(Python 3.6.3, RadIO 0.1.0, dataset 0.3.0)

from radio import CTImagesBatch
from radio.dataset import FilesIndex, Dataset

dicom_ix = FilesIndex(path='path/to/dicom/*', no_ext=True)
dicom_dataset = Dataset(index=dicom_ix, batch_class=CTImagesBatch)

pipeline = (
    dicom_dataset.p
    .load(fmt='dicom')
    .resize(shape=(128, 256, 256))
    .normalize_hu()
    .dump('/path/to/preprocessed/scans/')
)

pipeline.run(batch_size=10)
Traceback (most recent call last):
File "", line 1, in
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/pipeline.py", line 1088, in run
for _ in self.gen_batch(*args, **kwargs):
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/pipeline.py", line 1035, in gen_batch
batch_res = self._exec(batch)
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/pipeline.py", line 579, in _exec
batch_res = self._exec_all_actions(batch)
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/pipeline.py", line 565, in _exec_all_actions
batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/pipeline.py", line 516, in _exec_one_action
batch = action_method(*args, **kwargs)
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/decorators.py", line 37, in _action_wrapper
_res = action_method(action_self, *args, **kwargs)
File "/home/cloudera/Downloads/radio-0.1.0/radio/preprocessing/ct_batch.py", line 354, in load
self._load_dicom() # pylint: disable=no-value-for-parameter
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/decorators.py", line 238, in wrapped_method
return wrap_with_threads(self, args, kwargs)
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/decorators.py", line 171, in wrap_with_threads
return _call_post_fn(self, post_fn, futures, args, full_kwargs)
File "/home/cloudera/Downloads/radio-0.1.0/radio/dataset/dataset/decorators.py", line 136, in _call_post_fn
return post_fn(all_results, *args, **kwargs)
File "/home/cloudera/Downloads/radio-0.1.0/radio/preprocessing/ct_batch.py", line 804, in _post_default
self._reraise_worker_exceptions(list_of_arrs)
File "/home/cloudera/Downloads/radio-0.1.0/radio/preprocessing/ct_batch.py", line 780, in _reraise_worker_exceptions
raise RuntimeError("Failed parallelizing. Some of the workers failed with following errors: ", all_errors)
RuntimeError: ('Failed parallelizing. Some of the workers failed with following errors: ', [NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory'), NotADirectoryError(20, 'Not a directory')])
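
One thing worth checking (a guess, not a confirmed fix): the NotADirectoryError is raised inside the per-patient DICOM loading step, and with fmt='dicom' each index entry is normally a folder of per-slice files, so the index may need to be built over directories rather than files. A sketch reusing the dirs=True option shown in the blosc example near the top of this page:

from radio import CTImagesBatch
from radio.dataset import FilesIndex, Dataset

# each index entry is a patient directory containing that patient's DICOM slices
dicom_ix = FilesIndex(path='/path/to/dicom/*', dirs=True)
dicom_dataset = Dataset(index=dicom_ix, batch_class=CTImagesBatch)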

(get_patches function) AssertionError: Failed in nopython mode pipeline (step: convert to parfors)

While trying out the get_patches function defined under CTImagesBatch class, I got an assertion error.

luna_dataset = Dataset(index=luna_index, batch_class=CTImagesBatch)  # create a dataset structure

preprocessing = (Pipeline()
                 .load(fmt='raw')
                 .normalize_hu(min_hu=-1000, max_hu=400)
                 .unify_spacing(shape=(92, 256, 256), spacing=(1, 1, 1))
                )
batch = (luna_dataset >> preprocessing).next_batch(1, shuffle=False)
patches = batch.get_patches(patch_shape=(32, 64, 64), stride=(32, 64, 64))

After I run this code, I got this:

AssertionError Traceback (most recent call last)
in
10 )
11 batch = (luna_dataset >> preprocessing).next_batch(1, shuffle=False)
---> 12 patches = batch.get_patches(patch_shape = (32,64,64) ,stride = (32,64,64))
13 # AssertionError: Failed in nopython mode pipeline (step: convert to parfors)

~/anaconda3/envs/tf2/lib/python3.8/site-packages/radio/preprocessing/ct_batch.py in get_patches(self, patch_shape, stride, padding, data_attr)
1669
1670 # put patches into the tensor
-> 1671 get_patches_numba(data_padded, patch_shape, stride, patches)
1672 patches = np.reshape(patches, (len(self) * np.prod(num_sections), *patch_shape))
1673 return patches

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/dispatcher.py in _compile_for_args(self, *args, **kws)
437 e.patch_message('\n'.join((str(e).rstrip(), help_msg)))
438 # ignore the FULL_TRACEBACKS config, this needs reporting!
--> 439 raise e
440 finally:
441 self._types_active_call = []

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/dispatcher.py in _compile_for_args(self, *args, **kws)
370 return_val = None
371 try:
--> 372 return_val = self.compile(tuple(argtypes))
373 except errors.ForceLiteralArg as e:
374 # Received request for compiler re-entry with the list of arguments

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/dispatcher.py in compile(self, sig)
907 with ev.trigger_event("numba:compile", data=ev_details):
908 try:
--> 909 cres = self._compiler.compile(args, return_type)
910 except errors.ForceLiteralArg as e:
911 def folded(args, kws):

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/dispatcher.py in compile(self, args, return_type)
77
78 def compile(self, args, return_type):
---> 79 status, retval = self._compile_cached(args, return_type)
80 if status:
81 return retval

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/dispatcher.py in _compile_cached(self, args, return_type)
91
92 try:
---> 93 retval = self._compile_core(args, return_type)
94 except errors.TypingError as e:
95 self._failed_cache[key] = e

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/dispatcher.py in _compile_core(self, args, return_type)
104
105 impl = self._get_implementation(args, {})
--> 106 cres = compiler.compile_extra(self.targetdescr.typing_context,
107 self.targetdescr.target_context,
108 impl,

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler.py in compile_extra(typingctx, targetctx, func, args, return_type, flags, locals, library, pipeline_class)
604 pipeline = pipeline_class(typingctx, targetctx, library,
605 args, return_type, flags, locals)
--> 606 return pipeline.compile_extra(func)
607
608

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler.py in compile_extra(self, func)
351 self.state.lifted = ()
352 self.state.lifted_from = None
--> 353 return self._compile_bytecode()
354
355 def compile_ir(self, func_ir, lifted=(), lifted_from=None):

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler.py in _compile_bytecode(self)
413 """
414 assert self.state.func_ir is None
--> 415 return self._compile_core()
416
417 def _compile_ir(self):

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler.py in _compile_core(self)
393 self.state.status.fail_reason = e
394 if is_final_pipeline:
--> 395 raise e
396 else:
397 raise CompilerError("All available pipelines exhausted")

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler.py in _compile_core(self)
384 res = None
385 try:
--> 386 pm.run(self.state)
387 if self.state.cr is not None:
388 break

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler_machinery.py in run(self, state)
337 (self.pipeline_name, pass_desc)
338 patched_exception = self._patch_error(msg, e)
--> 339 raise patched_exception
340
341 def dependency_analysis(self):

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler_machinery.py in run(self, state)
328 pass_inst = _pass_registry.get(pss).pass_inst
329 if isinstance(pass_inst, CompilerPass):
--> 330 self._runPass(idx, pass_inst, state)
331 else:
332 raise BaseException("Legacy pass in use")

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler_lock.py in _acquire_compile_lock(*args, **kwargs)
33 def _acquire_compile_lock(*args, **kwargs):
34 with self:
---> 35 return func(*args, **kwargs)
36 return _acquire_compile_lock
37

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler_machinery.py in _runPass(self, index, pss, internal_state)
287 mutated |= check(pss.run_initialization, internal_state)
288 with SimpleTimer() as pass_time:
--> 289 mutated |= check(pss.run_pass, internal_state)
290 with SimpleTimer() as finalize_time:
291 mutated |= check(pss.run_finalizer, internal_state)

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/compiler_machinery.py in check(func, compiler_state)
260
261 def check(func, compiler_state):
--> 262 mangled = func(compiler_state)
263 if mangled not in (True, False):
264 msg = ("CompilerPass implementations should return True/False. "

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/core/typed_passes.py in run_pass(self, state)
301 state.metadata,
302 state.parfor_diagnostics)
--> 303 parfor_pass.run()
304
305 # check the parfor pass worked and warn if it didn't

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/parfors/parfor.py in run(self)
2775 """run parfor conversion pass: replace Numpy calls
2776 with Parfors when possible and optimize the IR."""
-> 2777 self._pre_run()
2778 # run stencil translation to parfor
2779 if self.options.stencil:

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/parfors/parfor.py in _pre_run(self)
2767 def _pre_run(self):
2768 # run array analysis, a pre-requisite for parfor translation
-> 2769 self.array_analysis.run(self.func_ir.blocks)
2770 # NOTE: Prepare _max_label. See #6102
2771 ir_utils._max_label = max(ir_utils._max_label,

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/parfors/array_analysis.py in run(self, blocks, equiv_set)
1153 topo_order = find_topo_order(blocks, cfg=cfg)
1154 # Traverse blocks in topological order
-> 1155 self._run_on_blocks(topo_order, blocks, cfg, init_equiv_set)
1156
1157 if config.DEBUG_ARRAY_OPT >= 1:

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/parfors/array_analysis.py in _run_on_blocks(self, topo_order, blocks, cfg, init_equiv_set)
1173 block = blocks[label]
1174 scope = block.scope
-> 1175 pending_transforms = self._determine_transform(
1176 cfg, block, label, scope, init_equiv_set
1177 )

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/parfors/array_analysis.py in _determine_transform(self, cfg, block, label, scope, init_equiv_set)
1254 )
1255 self.calltypes[inst] = orig_calltype
-> 1256 pre, post = self._analyze_inst(
1257 label, scope, equiv_set, inst, redefined
1258 )

~/anaconda3/envs/tf2/lib/python3.8/site-packages/numba/parfors/array_analysis.py in _analyze_inst(self, label, scope, equiv_set, inst, redefined)
1424 n = len(shape)
1425 # shape dimension must be within target dimension
-> 1426 assert target_ndim >= n
1427 equiv_set.set_shape_setitem(inst, shape)
1428 return pre + asserts, []

AssertionError: Failed in nopython mode pipeline (step: convert to parfors)

Does anyone have the same issue, and how can it be resolved?
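While the numba-compiled get_patches fails, one possible workaround (a sketch only, not part of RadIO) is to extract non-overlapping patches with plain NumPy. It assumes stride equals patch_shape and that unify_spacing has already brought the scan to a fixed shape such as (92, 256, 256), so the first scan occupies the first 92 slices of batch.images; extract_patches_numpy is a hypothetical helper, not a library function.

import numpy as np

def extract_patches_numpy(volume, patch_shape=(32, 64, 64)):
    # pad every axis up to a multiple of the patch size, then cut patches via reshape/transpose
    pads = [(0, (-volume.shape[i]) % patch_shape[i]) for i in range(3)]
    padded = np.pad(volume, pads, mode='constant', constant_values=volume.min())
    nz, ny, nx = (padded.shape[i] // patch_shape[i] for i in range(3))
    return (padded
            .reshape(nz, patch_shape[0], ny, patch_shape[1], nx, patch_shape[2])
            .transpose(0, 2, 4, 1, 3, 5)
            .reshape(-1, *patch_shape))

patches = extract_patches_numpy(batch.images[:92], patch_shape=(32, 64, 64))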

How to predict a CT scan without nodules annotation

Hello all!

Let's say that I have successfully trained a 3D U-Net model and saved its weights. Currently, in RadIO tutorial 4, in order to predict on a CT scan we first generate cancer and non-cancer crops using the split_dump function. Afterwards, we use combine_crops to generate a batch, and then the model predicts on it. The current workflow is as follows:

[three screenshots from tutorial 4 showing the split_dump, combine_crops and prediction steps]

However, in order to generate the crops, we first need the nodule locations/annotations.

Now, if we have a CT scan with no nodule annotations, how do I use the model to predict whether there are any nodules in the scan?
I ask this because currently we need nodule annotations before doing 3D cropping (via the sample_nodules function).
So how do we process a CT scan with no nodule annotations and crop it to a volume size of (32, 64, 64) so that the model is able to make a prediction?

PS: The model input shape is (None,1,32,64,64)

Was anyone able to resolve this?
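One possible approach, as a rough sketch rather than the tutorial's method: skip nodule sampling entirely and tile the whole preprocessed scan into non-overlapping (32, 64, 64) patches with get_patches, then run the trained network over the patches. Here model stands for the trained Keras 3D U-Net loaded from the saved weights, and the batch_size value is arbitrary; both are assumptions.

import numpy as np

# batch: a single scan after load / normalize_hu / unify_spacing, no annotations required
patches = batch.get_patches(patch_shape=(32, 64, 64), stride=(32, 64, 64))
x = patches[:, np.newaxis, ...]              # add channel axis -> (n_patches, 1, 32, 64, 64)
pred_masks = model.predict(x, batch_size=4)  # per-patch mask predictions

The per-patch predictions can then be stitched back into a full-scan mask in the same order in which get_patches produced the patches.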

Multiple modalities (image channels)

Hello, from the APIs it is not clear if/how multiple image channels are supported. Given that the loader takes all the images and stacks them in a skyscraper, how do we work with multiple image channels for training and inference?

Tutorial3

I am trying to execute Tutorial 3, and at step #9 I get the error:
Info about nodules location must be loaded before calling this method. Nothing happened.

AttributeError Traceback (most recent call last)
in ()
----> 1 batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)
...

/usr/local/lib/python3.6/dist-packages/radio/preprocessing/ct_masked_batch.py in create_mask(self, mode)
502 self.masks = np.zeros_like(self.images)
503
--> 504 center_pix = np.abs(self.nodules.nodule_center -
505 self.nodules.origin) / self.nodules.spacing
506 radius_pix = np.rint(self.nodules.nodule_size / self.nodules.spacing / 2)

AttributeError: 'NoneType' object has no attribute 'nodule_center'

I can execute this step without error if I reorder the functions in the previous step #8. Instead of:
preprocessing = (Pipeline()
.load(fmt='raw')
.unify_spacing(shape=(92, 256, 256), spacing=(3.5, 1.0, 1.0))
.fetch_nodules_info(nodules_df)
.create_mask())
I use:

preprocessing = (Pipeline()
.load(fmt='raw')
.fetch_nodules_info(nodules_df)
.unify_spacing(shape=(92, 256, 256), spacing=(3.5, 1.0, 1.0))
.create_mask())

Is this right?

Trouble with Tutorial RadIO.I

I am getting the following error when executing the code:
LUNA_MASK = 'D:\LUNA16\subset0' # set mask for scans from Luna-dataset here
from radio.batchflow import FilesIndex
luna_index = FilesIndex(path=LUNA_MASK, no_ext=True) # preparing indexing structure

ImportError Traceback (most recent call last)
in
1 LUNA_MASK = 'D:\LUNA16\subset0' # set mask for scans from Luna-dataset here
----> 2 from radio.batchflow import FilesIndex
3 luna_index = FilesIndex(path=LUNA_MASK, no_ext=True) # preparing indexing structure

~\Anaconda3\lib\site-packages\radio\__init__.py in
3 """3d ct-scans preprocessing module with dataset submodule."""
4 import importlib
----> 5 from .preprocessing import *
6 from .pipelines import *
7 from . import batchflow

~\Anaconda3\lib\site-packages\radio\preprocessing\__init__.py in
1 """ CT-scans preprocessing module. """
2
----> 3 from .ct_batch import CTImagesBatch
4 from .ct_masked_batch import CTImagesMaskedBatch
5 from .augmented_batch import CTImagesAugmentedBatch

~\Anaconda3\lib\site-packages\radio\preprocessing\ct_batch.py in
21
22 import SimpleITK as sitk
---> 23 from skimage.measure import label, regionprops
24 try:
25 import nibabel as nib

~\Anaconda3\lib\site-packages\skimage\measure\__init__.py in
1 from ._find_contours import find_contours
2 from ._marching_cubes_lewiner import marching_cubes, marching_cubes_lewiner
----> 3 from ._marching_cubes_classic import (marching_cubes_classic,
4 mesh_surface_area,
5 correct_mesh_orientation)

~\Anaconda3\lib\site-packages\skimage\measure\_marching_cubes_classic.py in
1 import numpy as np
----> 2 import scipy.ndimage as ndi
3 from .._shared.utils import warn
4 from . import _marching_cubes_classic_cy
5

~\Anaconda3\lib\site-packages\scipy\ndimage\__init__.py in
159 from __future__ import division, print_function, absolute_import
160
--> 161 from .filters import *
162 from .fourier import *
163 from .interpolation import *

~\Anaconda3\lib\site-packages\scipy\ndimage\filters.py in
35 from . import _ni_support
36 from . import _nd_image
---> 37 from scipy.misc import doccer
38 from scipy._lib._version import NumpyVersion
39

~\Anaconda3\lib\site-packages\scipy\misc\__init__.py in
49 from .common import *
50 from numpy import who, source, info as _info
---> 51 from scipy.special import comb, factorial, factorial2, factorialk
52
53 import sys

~\Anaconda3\lib\site-packages\scipy\special\__init__.py in
634 from __future__ import division, print_function, absolute_import
635
--> 636 from ._ufuncs import *
637
638 from .basic import *

ImportError: DLL load failed: The specified module could not be found.

sample_nodules() works strange

from radio.preprocessing.utils import get_nodules_pixel_coords, show_slices
get_nodules_pixel_coords(batch)

returns dataframe with 8 nodules

from radio.preprocessing.utils import num_of_cancerous_pixels
num_of_cancerous_pixels(batch)

returns 1 scan with 2k nodules pixels

However, when I try to attach sample_nodules(batch_size=None, nodule_size=(16, 32, 32), share=1.0) to the current pipeline and call
batch_crops = (dicom_dataset >> crops_sampling_pipeline).next_batch(20)

I'm getting nothing here despite using share=1.0:
get_nodules_pixel_coords(batch_crops).head(1)

How can I save it in JPG or PNG format

I would like to save LUNA16 data in JPG or PNG format through RadIO, or dump it to an image format through batch processing, ideally for some or all of the slices. Could you write a tutorial for this?
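There is no built-in JPG/PNG dump, but a small post-processing loop can write slices out. A sketch, assuming the batch was preprocessed with normalize_hu and unify_spacing(shape=(92, 256, 256)) so every scan occupies 92 consecutive slices of batch.images, and that imageio is installed (pip install imageio):

import imageio
import numpy as np

n_slices = 92                                  # slices per scan after unify_spacing
scan = batch.images[:n_slices]                 # first scan in the batch
for i, axial_slice in enumerate(scan):
    # rescale each slice to 0..255 before writing it as an 8-bit PNG
    img = 255 * (axial_slice - axial_slice.min()) / max(axial_slice.ptp(), 1e-6)
    imageio.imwrite('scan0_slice{:03d}.png'.format(i), img.astype(np.uint8))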

Packages used in tutorial are missing

Hello,

I am not able to import any of these packages from the tutorial:

from radio.dataset import FilesIndex, Pipeline
from utils import show_slices

I followed the installation instructions but these seem to be missing. Are the tutorials outdated?
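A possible fix, assuming a recent version of the package: the dataset submodule is exposed as radio.batchflow (as used in other issues here), and show_slices is referenced elsewhere as living in radio.preprocessing.utils:

from radio.batchflow import FilesIndex, Pipeline
from radio.preprocessing.utils import show_slices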

I would also like to ask whether it is possible to use the LIDC-IDRI dataset instead of LUNA16 (considering the different metadata files marking the nodules, as I was not able to look inside the FilesIndex package).

Thank you.

Pipeline will never stop as n_epochs=None

Hi guys,

I'm following tutorial 4. Everything works just fine till the very end.
After preprocessing, I define a Keras 3D U-Net, exactly as in the tutorial.
So my code is:


from radio.models import Keras3DUNet
from radio.models.keras.losses import dice_loss

unet_config = dict(
input_shape = (1, 32, 64, 64),
num_targets = 1,
loss= dice_loss
)

from radio.batchflow import F

train_unet_pipeline = (
combine_crops(cancerset, ncancerset, batch_sizes=(4, 4))
.init_model(
name='3dunet', model_class=Keras3DUNet,
config=unet_config, mode='static'
)
.train_model(
name='3dunet',
x=F(CTIMB.unpack, component='images', data_format='channels_first'),
y=F(CTIMB.unpack, component='masks', data_format='channels_first')
)
)

train_unet_pipeline.run()


The training begins and seems to work fine, but at the end it stops with the error:


StopIteration Traceback (most recent call last)
~/jupyter/env/lib/python3.7/site-packages/radio/batchflow/batchflow/pipeline.py in _gen_batch(self, *args, **kwargs)
1213 try:
-> 1214 batch_res = self.execute_for(batch)
1215 except SkipBatchException:

~/jupyter/env/lib/python3.7/site-packages/radio/batchflow/batchflow/pipeline.py in execute_for(self, batch, new_loop)
608 batch.pipeline = self
--> 609 batch_res = self._exec_all_actions(batch)
610 batch_res.pipeline = self

~/jupyter/env/lib/python3.7/site-packages/radio/batchflow/batchflow/pipeline.py in _exec_all_actions(self, batch, action_list)
557 elif _action['mode'] == 'n':
--> 558 jbatch = pipe.next_batch()
559 join_batches.append(jbatch)

~/jupyter/env/lib/python3.7/site-packages/radio/batchflow/batchflow/pipeline.py in next_batch(self, *args, **kwargs)
1238 args, kwargs = self._lazy_run
-> 1239 batch_res = self.next_batch(*args, **kwargs)
1240 elif True or kwargs.get('prefetch', 0) > 0:

~/jupyter/env/lib/python3.7/site-packages/radio/batchflow/batchflow/pipeline.py in next_batch(self, *args, **kwargs)
1243 self._batch_generator = self.gen_batch(*args, **kwargs)
-> 1244 batch_res = next(self._batch_generator)
1245 else:

StopIteration:

The above exception was the direct cause of the following exception:

RuntimeError Traceback (most recent call last)
in
----> 1 train_unet_pipeline.run()

~/jupyter/env/lib/python3.7/site-packages/radio/batchflow/batchflow/pipeline.py in run(self, init_vars, *args, **kwargs)
1279 warnings.warn('Pipeline will never stop as n_epochs=None')
1280
-> 1281 for _ in self.gen_batch(*args, **kwargs):
1282 pass
1283 return self

RuntimeError: generator raised StopIteration


So the issue seems easy to fix. I then try:


train_unet_pipeline.run(batch_size=4, n_epochs=2, bar=True)


But I still get the same error, except this time it appears after around 20% of the training.

What am I doing wrong?

Thanks a lot !

Keras model.fit() integration

Is it possible to specify a train and a validation pipeline in RadIO, and then use Keras's model.fit() function to run the training loop? This way we could have the benefits of the RadIO pipeline together with the benefits of the Keras training loop.

At the moment I can't see a way to do this. It seems that you need to instantiate your model training as part of the RadIO pipeline and then run an epoch. If you wanted to check your score against your validation set or perhaps save your model weights, it looks like this needs to be done in your own loop or encoded into your pipeline. So currently it is something like:

train_pipeline = (Pipeline()
                      .init_variable('current_loss')
                      .init_variable('loss_history', init_on_each_run=list)
                      .init_model('static', ResNoduleNet, 'resnet', config=resnet_config)
                      .predict_model('resnet', x=F(Batch.unpack), save_to=V('predicted'))
                      .train_model(save_to=V('current_loss'),
                                   name='resnet',
                                   x=F(Batch.unpack),
                                   y=F(Batch.get_labels))
                      )

valid_pipeline = (Pipeline()
                      .init_variable('pred_labels', init_on_each_run=list)
                      .init_variable('predicted', init_on_each_run=list)
                      .init_variable('true_labels', init_on_each_run=list)
                      .import_model('resnet', train_pipeline)
                      .predict_model('resnet', x=F(Batch.unpack),
                                     save_to=V('predicted'))
                      .call(print_metrics)
                      .run(lazy=True)
                      )

train = (train_dataset >> train_pipeline)
valid = (valid_dataset >> valid_pipeline)

for i in range(epochs):
        print(f"epoch {i}")
        train.run(n_epochs=1, batch_size=bs)
        valid.run(n_epochs=1, batch_size=bs)

But it would be much nicer if we could just have an interface like:

model = #some keras model
model.fit(train, val_data=valid, epochs=epochs)
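One way to get close to this today, sketched under a few assumptions: keep a data-only pipeline (loading, preprocessing and crop sampling, but no init_model/train_model actions), bind it to the dataset, and wrap it in a plain Python generator that Keras can consume. Here train_data and valid_data stand for those data-only bound pipelines, the unpack/get_labels calls mirror the helpers used in the pipelines above, and the n_epochs=None argument is an assumption about the next_batch signature.

import numpy as np

def pipeline_generator(bound_pipeline, batch_size):
    # endlessly pull batches from a data-only (dataset >> pipeline) object
    # and unpack them into (x, y) arrays for Keras
    while True:
        batch = bound_pipeline.next_batch(batch_size, shuffle=True, n_epochs=None)
        x = np.asarray(batch.unpack())
        y = np.asarray(batch.get_labels())
        yield x, y

model = ...        # some compiled Keras model
train_steps = ...  # number of training batches per epoch
valid_steps = ...  # number of validation batches per epoch
model.fit(pipeline_generator(train_data, bs),
          steps_per_epoch=train_steps,
          validation_data=pipeline_generator(valid_data, bs),
          validation_steps=valid_steps,
          epochs=epochs)

On older Keras versions model.fit_generator would be used instead of model.fit.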

Trouble with the Tutorial3

I have followed the tutorials line for line. In tutorial 3, when performing split_dump (dumping to the cancerous_folder and non_cancerous_folder) and running (luna_dataset.train >> crops_dumping).run(), I get multiple alerts of "Components ['predictions'] are empty. Nothing is dumped!" in the terminal, with no idea how to continue.

Viewing preprocessed blosc scans

Hello,

I preprocessed some DICOM files and dumped them in blosc format. I did see that dumping as DICOM is not implemented. Is there a way to convert blosc to DICOM so that the preprocessed scans can be viewed with a DICOM viewer?

And are there plans to implement dumping in DICOM format?

Thanks
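A sketch of one possible workaround (not a RadIO feature): after loading the blosc scans back into a batch, write each one out as NIfTI with nibabel, which most medical viewers can open. It assumes every scan has the same fixed shape, here (92, 256, 256), and that the spacing component is ordered (z, y, x); adjust as needed.

import numpy as np
import nibabel as nib

n_slices = 92                                        # slices per scan after preprocessing
for i in range(len(batch)):
    scan = batch.images[i * n_slices:(i + 1) * n_slices]
    spacing = batch.spacing[i]                       # assumed (z, y, x) voxel spacing
    affine = np.diag([spacing[2], spacing[1], spacing[0], 1.0])
    nib.save(nib.Nifti1Image(scan.astype(np.float32), affine),
             'scan_{}.nii.gz'.format(i))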

tqdm missing from requirements

I noticed that you added tqdm recently, but it was not part of the pip install which I just ran. I suggest adding it to the requirements.

RadIO.IV 'Too many open files'

Hi guys,

Thanks for sharing your amazing work here. I was just trying to run your notebook 4 (radio/tutorials/RadIO.IV.ipynb) and ran into a weird error while parallelizing the split_dump in cell 9. I had a quick look at your code but can't see how to disable the multiprocessing in the pipeline. Any idea what could be causing this? I'm running it on a regular AWS p3 Ubuntu DL instance.

Thanks for any input


RuntimeError Traceback (most recent call last)
in ()

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in run(self, init_vars, *args, **kwargs)
1279 warnings.warn('Pipeline will never stop as n_epochs=None')
1280
-> 1281 for _ in self.gen_batch(*args, **kwargs):
1282 pass
1283 return self

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _gen_batch(self, *args, **kwargs)
1212 for batch in batch_generator:
1213 try:
-> 1214 batch_res = self.execute_for(batch)
1215 except SkipBatchException:
1216 pass

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in execute_for(self, batch, new_loop)
607 asyncio.set_event_loop(asyncio.new_event_loop())
608 batch.pipeline = self
--> 609 batch_res = self._exec_all_actions(batch)
610 batch_res.pipeline = self
611 return batch_res

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _exec_all_actions(self, batch, action_list)
579 join_batches = None
580
--> 581 batch = self._exec_one_action(batch, _action, _action_args, _action['kwargs'])
582
583 batch.pipeline = self

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/pipeline.py in _exec_one_action(self, batch, action, args, kwargs)
527 batch.pipeline = self
528 action_method, _ = self._get_action_method(batch, action['name'])
--> 529 batch = action_method(*args, **kwargs)
530 batch.pipeline = self
531 return batch

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in _action_wrapper(action_self, *args, **kwargs)
42 action_self.pipeline.get_variable(_lock_name).acquire()
43
---> 44 _res = action_method(action_self, *args, **kwargs)
45
46 if _use_lock is not None:

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in wrapped_method(self, *args, **kwargs)
323
324 if asyncio.iscoroutinefunction(method) or _target in ['async', 'a']:
--> 325 x = wrap_with_async(self, args, kwargs)
326 elif _target in ['threads', 't']:
327 x = wrap_with_threads(self, args, kwargs)

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in wrap_with_async(self, args, kwargs)
289 loop.run_until_complete(wait_for_all(futures, loop))
290
--> 291 return _call_post_fn(self, post_fn, futures, args, full_kwargs)
292
293 def wrap_with_for(self, args, kwargs):

~/anaconda3/lib/python3.6/site-packages/radio/batchflow/batchflow/decorators.py in _call_post_fn(self, post_fn, futures, args, kwargs)
153 traceback.print_tb(all_errors[0].traceback)
154 return self
--> 155 return post_fn(all_results, *args, **kwargs)
156
157 def _prepare_args(self, args, kwargs):

~/anaconda3/lib/python3.6/site-packages/radio/preprocessing/ct_batch.py in _post_default(self, list_of_arrs, update, new_batch, **kwargs)
918 Output of each worker should correspond to individual patient.
919 """
--> 920 self._reraise_worker_exceptions(list_of_arrs)
921 res = self
922 if update:

~/anaconda3/lib/python3.6/site-packages/radio/preprocessing/ct_batch.py in _reraise_worker_exceptions(self, worker_outputs)
865 if any_action_failed(worker_outputs):
866 all_errors = self.get_errors(worker_outputs)
--> 867 raise RuntimeError("Failed parallelizing. Some of the workers failed with following errors: ", all_errors)
868
869 def _post_custom_components(self, list_of_dicts, **kwargs):

RuntimeError: ('Failed parallelizing. Some of the workers failed with following errors: ', [OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files'), OSError(24, 'Too many open files')])
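The pipeline does not expose an obvious switch for this; the async dumping simply opens many files at once. A common workaround, sketched below, is to raise the per-process open-file limit before calling run() (Linux/macOS only; 4096 is an arbitrary choice and must not exceed the hard limit):

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)                                          # check the current limits first
resource.setrlimit(resource.RLIMIT_NOFILE, (4096, hard))   # raise the soft limit
# then run the split_dump pipeline as before

The same effect can be achieved with ulimit -n 4096 in the shell before starting the notebook.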

"Event Loop is already running" - inside Jupyter notebook

Whenever I try to run a function that has an async decorator inside a Jupyter notebook, I get the following error:
RuntimeError: This event loop is already running

What I'm trying to do is load a data file that I have dumped to disk in 'blosc' format. However it seems to give this event loop error when I run it using Jupyter notebook. If I run the same command in a python script it works fine.

Steps to reproduce:

1, Build a pipeline to dump files to disk in blosc format, and use it to dump the files.
2, Using Jupyter notebook, attempt to load in the data, something similar to:

data = FilesIndex(path=path+'/*', dirs=True)
batch_dumped = CITB(pe.create_subset(data.index[0 : 4]))
batch_dumped.load(fmt='blosc')

Expected output:

batch object to contain scan data.

Current output:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-16-1c87479e8aee> in <module>()
----> 1 batch_dumped.load(fmt='blosc')

/media/resonance/Data/PE/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/decorators.py in _action_wrapper(action_self, *args, **kwargs)
     35                 action_self.pipeline.get_variable(_lock_name).acquire()
     36 
---> 37         _res = action_method(action_self, *args, **kwargs)
     38 
     39         if _use_lock is not None:

/media/resonance/Data/PE/anaconda3/lib/python3.6/site-packages/radio/preprocessing/ct_batch.py in load(self, fmt, components, bounds, **kwargs)
    363             components = np.asarray(components).reshape(-1)
    364 
--> 365             self._load_blosc(components=components)              # pylint: disable=no-value-for-parameter
    366         elif fmt == 'raw':
    367             self._load_raw()                # pylint: disable=no-value-for-parameter

/media/resonance/Data/PE/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/decorators.py in wrapped_method(self, *args, **kwargs)
    234             """ Wrap a method in a required parallel engine """
    235             if asyncio.iscoroutinefunction(method) or target in ['async', 'a']:
--> 236                 return wrap_with_async(self, args, kwargs)
    237             elif target in ['threads', 't']:
    238                 return wrap_with_threads(self, args, kwargs)

/media/resonance/Data/PE/anaconda3/lib/python3.6/site-packages/radio/dataset/dataset/decorators.py in wrap_with_async(self, args, kwargs)
    209                 futures.append(asyncio.ensure_future(method(self, *margs, **mkwargs)))
    210 
--> 211             loop.run_until_complete(asyncio.gather(*futures, loop=loop, return_exceptions=True))
    212 
    213             return _call_post_fn(self, post_fn, futures, args, full_kwargs)

/media/resonance/Data/PE/anaconda3/lib/python3.6/asyncio/base_events.py in run_until_complete(self, future)
    453         future.add_done_callback(_run_until_complete_cb)
    454         try:
--> 455             self.run_forever()
    456         except:
    457             if new_task and future.done() and not future.cancelled():

/media/resonance/Data/PE/anaconda3/lib/python3.6/asyncio/base_events.py in run_forever(self)
    407         self._check_closed()
    408         if self.is_running():
--> 409             raise RuntimeError('This event loop is already running')
    410         if events._get_running_loop() is not None:
    411             raise RuntimeError(

RuntimeError: This event loop is already running
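A workaround that has helped with similar asyncio-in-Jupyter problems (a suggestion, not an official fix): allow nested event loops with the nest_asyncio package (pip install nest_asyncio) before calling the async-decorated actions.

import nest_asyncio
nest_asyncio.apply()           # patch the running notebook event loop to allow re-entry

batch_dumped.load(fmt='blosc') # the blosc load should now run inside the notebook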

tensorflow version 2

Hi, I just want to ask: can TensorFlow 2.0 be used to run this repo? If not, when will this repo be upgraded to TensorFlow 2.0?

Loading binary masks

Hi,

I'm having trouble understanding the idea of binary masks in your framework. If I understand correctly, the data for training segmentation has to be in "nodule" format, which is restricted to rectangles (origin + diameter). Is it possible to load binary masks created manually and saved in raw formats?

Best
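One possible approach, sketched below and not a documented RadIO feature: read the manually created binary mask with SimpleITK (which RadIO already depends on) and assign it to the batch's masks component, assuming the mask grid matches the preprocessed images. The file path is a placeholder.

import numpy as np
import SimpleITK as sitk

mask = sitk.GetArrayFromImage(sitk.ReadImage('path/to/manual_mask.mhd'))  # (z, y, x) array
batch.masks = np.zeros_like(batch.images)
batch.masks[:mask.shape[0]] = (mask > 0).astype(batch.images.dtype)       # first scan's slab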

No module named 'blosc.blosc_extension

I created an environment (radIOenv) in Anaconda and typed python -m pip install git+https://github.com/analysiscenter/radio.git.

When I type import radio or from radio import CTImagesBatch I get the following error: ModuleNotFoundError: No module named 'blosc.blosc_extension'

import radio
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\user\Anaconda3\envs\radioenv\lib\site-packages\radio_init_.py", line 5, in
from .preprocessing import *
File "C:\Users\user\Anaconda3\envs\radioenv\lib\site-packages\radio\preprocessing_init_.py", line 3, in
from .ct_batch import CTImagesBatch
File "C:\Users\user\Anaconda3\envs\radioenv\lib\site-packages\radio\preprocessing\ct_batch.py", line 15, in
import blosc
File "C:\Users\user\Anaconda3\envs\radioenv\lib\site-packages\blosc_init_.py", line 13, in
from blosc.blosc_extension import (
ModuleNotFoundError: No module named 'blosc.blosc_extension'

IXS KeyError with Tutorial 2

ixs = np.array(['1.3.6.1.4.1.14519.5.2.1.6279.6001.219618492426142913407827034169',
'1.3.6.1.4.1.14519.5.2.1.6279.6001.185154482385982570363528682299'])

Gives a KeyError.

Can't import radio after installing via pip3

Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

import radio
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'radio'

Any known issues regarding this?

TypeError: only integer scalar arrays can be converted to a scalar index

Hi,

I'm following the first tutorial (RadIO.I.ipynb) and I always seem to get the same error. I've tried with data other than the LUNA dataset and I get the same error.

After running this line:

batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)  # execution starts when you call next_batch

I get:

---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-14-2649f0c8a285> in <module>()
----> 1 batch = (luna_dataset >> preprocessing).next_batch(batch_size=3, shuffle=False)  # execution starts when you call next_batch

6 frames

/usr/local/lib/python3.6/dist-packages/radio/batchflow/batchflow/dsindex.py in subset_by_pos(self, pos)
     84     def subset_by_pos(self, pos):
     85         """ Return subset of index by given positions in the index. """
---> 86         return self.index[pos]
     87 
     88     def create_subset(self, index):

TypeError: only integer scalar arrays can be converted to a scalar index

I'm running the notebook on google colab, but I also get the same error when running in my laptop.

What am I doing wrong here?

Does not work with pydicom >= 1.0

The library does not work with the newest version of pydicom, which is imported as pydicom instead of dicom.

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-3-a2708da00b4e> in <module>()
      4 import tensorflow as tf
      5 
----> 6 from radio.dataset import Pipeline, FilesIndex, Dataset, F
      7 from radio import CTImagesMaskedBatch as CTIMB

/Users/mader/anaconda/lib/python3.5/site-packages/radio/__init__.py in <module>()
      3 """3d ct-scans preprocessing module with dataset submodule."""
      4 import importlib
----> 5 from .preprocessing import *
      6 from .pipelines import *
      7 from . import dataset

/Users/mader/anaconda/lib/python3.5/site-packages/radio/preprocessing/__init__.py in <module>()
      1 """ CT-scans preprocessing module. """
      2 
----> 3 from .ct_batch import CTImagesBatch
      4 from .ct_masked_batch import CTImagesMaskedBatch
      5 from .histo import sample_ellipsoid_region

/Users/mader/anaconda/lib/python3.5/site-packages/radio/preprocessing/ct_batch.py in <module>()
     12 import aiofiles
     13 import blosc
---> 14 import dicom
     15 import SimpleITK as sitk
     16 

ImportError: No module named 'dicom'
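Until the import is updated, two workarounds are commonly used: pin the old package (pip install "pydicom<1.0") or, as a quick and possibly fragile shim, alias the new package under the old module name before importing radio. The shim only helps as long as radio sticks to functions that pydicom still provides (e.g. read_file).

import sys
import pydicom

sys.modules['dicom'] = pydicom   # make `import dicom` resolve to pydicom

import radio                     # should now import without the ImportError above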
