
pybdv's Introduction


pyBDV

Python tools for BigDataViewer.

Installation

You can install the package from source

python setup.py install

or via conda:

conda install -c conda-forge pybdv

Usage

Python

Write out a numpy array volume to bdv format:

import numpy as np
from pybdv import make_bdv

# example data; replace this with your own 3d numpy array
volume = np.random.rand(64, 64, 64)

out_path = '/path/to/out'

# the scale factors determine the levels of the multi-scale pyramid
# that will be created by pybdv.
# the downscaling factors are interpreted relative to the previous scale level
# (rather than absolute) and the zeroth scale level (corresponding to [1, 1, 1])
# is implicit, i.e. DON'T specify it
scale_factors = [[2, 2, 2], [2, 2, 2], [4, 4, 4]]

# the downscale mode determines the method for downscaling:
# - interpolate: cubic interpolation
# - max: downscale by maximum
# - mean: downscale by averaging
# - min: downscale by minimum
# - nearest: nearest neighbor downscaling
mode = 'mean'

# specify a resolution of 0.5 micron per pixel (for zeroth scale level)
make_bdv(volume, out_path,
         downscale_factors=scale_factors, downscale_mode=mode,
         resolution=[0.5, 0.5, 0.5], unit='micrometer')

Convert an hdf5 dataset to bdv format:

from pybdv import convert_to_bdv

in_path = '/path/to/in.h5'
in_key = 'data'
out_path = '/path/to/out'

# keyword arguments are the same as for 'make_bdv'
convert_to_bdv(in_path, in_key, out_path,
               resolution=[0.5, 0.5, 0.5], unit='micrometer')

Command line

You can also call convert_to_bdv via the command line:

convert_to_bdv /path/to/in.h5 data /path/to/out --downscale_factors "[[2, 2, 2], [2, 2, 2], [4, 4, 4]]" --downscale_mode nearest --resolution 0.5 0.5 0.5 --unit micrometer

The downscale factors need to be encoded as a JSON list.

Conversion to n5-bdv format

BigDataViewer core also supports an n5-based data format. The data can be converted to this format by passing an output path that ends with .n5, e.g. /path/to/out.n5. In order to support this, you need to install z5py.
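
For example, a minimal sketch (the keyword arguments are the same as in the hdf5 examples above):

import numpy as np
from pybdv import make_bdv

# example data; replace this with your own 3d volume
volume = np.random.rand(64, 64, 64)

# writing to the n5 format only requires an output path that ends with '.n5'
make_bdv(volume, '/path/to/out.n5',
         downscale_factors=[[2, 2, 2], [2, 2, 2]],
         resolution=[0.5, 0.5, 0.5], unit='micrometer')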

Advanced IO options

If elf is available, additional input file formats are supported. For example, it is possible to convert inputs from tif slices:

import os
import imageio
import numpy as np
from pybdv import convert_to_bdv


input_path = './slices'
os.makedirs(input_path, exist_ok=True)
n_slices = 25
shape = (256, 256)

for slice_id in range(n_slices):
    imageio.imsave('./slices/im%03i.tif' % slice_id, np.random.randint(0, 255, size=shape, dtype='uint8'))

input_key = '*.tif'
output_path = 'from_slices.h5'
convert_to_bdv(input_path, input_key, output_path)

or tif stacks:

import imageio
import numpy as np
from pybdv import convert_to_bdv


input_path = './stack.tif'
shape = (25, 256, 256)

imageio.volsave(input_path, np.random.randint(0, 255, size=shape, dtype='uint8'))

input_key = ''
output_path = 'from_stack.h5'
convert_to_bdv(input_path, input_key, output_path)

On-the-fly processing

Data can also be added on the fly: use pybdv.initialize_bdv to create the bdv file and then BdvDataset to add (and downscale) new sub-regions of the data as they become available. See examples/on-the-fly.py for details.
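
A minimal sketch of this workflow, assuming the initialize_bdv signature used in the issues further below; see examples/on-the-fly.py for the authoritative version:

import numpy as np
import pybdv
from pybdv import BdvDataset

path = '/path/to/out.h5'

# create an empty bdv file with the final shape and dtype
pybdv.initialize_bdv(path, (50, 1536, 2048), np.uint8,
                     downscale_factors=[[2, 2, 2], [2, 2, 2], [2, 2, 2]],
                     resolution=[0.5, 0.5, 0.5], unit='micrometer',
                     chunks=(1, 64, 64))

# wrap the dataset and write sub-regions on the fly;
# the written regions are downscaled automatically
ds = BdvDataset(path, setup_id=0, timepoint=0)
new_slice = np.random.randint(0, 255, size=(1, 1536, 2048), dtype='uint8')
ds[0:1, 0:1536, 0:2048] = new_slice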

Dask array support

You can use the pybdv.make_bdv_from_dask_array function and pass a dask array. Currently only zarr and n5 outputs are supported, and only a limited set of downsampling options is available (using dask.array.coarsen). See examples/dask_array.py for details.
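
A minimal sketch, assuming make_bdv_from_dask_array accepts the same core keyword arguments as make_bdv; see examples/dask_array.py for the authoritative usage:

import dask.array as da
from pybdv import make_bdv_from_dask_array

# example data: a chunked dask array (replace with your own)
data = da.random.randint(0, 255, size=(256, 256, 256), chunks=(64, 64, 64)).astype('uint8')

# only zarr and n5 outputs are supported, so the output path must end in '.n5' or '.zarr'
make_bdv_from_dask_array(data, '/path/to/out.n5',
                         downscale_factors=[[2, 2, 2], [2, 2, 2]],
                         resolution=[0.5, 0.5, 0.5], unit='micrometer')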

pybdv's People

Contributors: boazmohar, constantinpape, martinschorb

pybdv's Issues

Support 2d

We should also support 2d input data.
I need to check how this is saved in the bdv config though.
If there's no native support for 2d in bdv, we can just add a singleton dimension.
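
A minimal workaround sketch from the Python side, assuming make_bdv only accepts 3d arrays (the same trick is mentioned in the 2D-related issues further below):

import numpy as np
from pybdv import make_bdv

# fake the third dimension by adding a singleton axis to the 2d image
image_2d = np.random.rand(512, 512)
make_bdv(np.expand_dims(image_2d, axis=0), '/path/to/out',
         resolution=[1.0, 1.0, 1.0], unit='micrometer')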

TIF conversion fails due to missing extension in directory path

Hi,

I am trying to convert a directory of tif slices and it fails with a missing extension:

convert_to_bdv --resolution 0.01 0.01 0.05 --n_threads 32 /g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/aligned_full_stomach/ *.tif stomach.n5

Traceback (most recent call last):
  File "/g/emcf/software/python/miniconda/envs/bdv/bin/convert_to_bdv", line 33, in <module>
    sys.exit(load_entry_point('pybdv', 'console_scripts', 'convert_to_bdv')())
  File "/g/emcf/schorb/code/pybdv/pybdv/scripts/pybdv_converter.py", line 91, in main
    chunks=chunks, n_threads=args.n_threads)
  File "/g/emcf/schorb/code/pybdv/pybdv/converter.py", line 252, in convert_to_bdv
    with open_file(input_path, 'r') as f:
  File "/g/emcf/schorb/code/pybdv/pybdv/util.py", line 35, in open_file
    raise ValueError(f"Invalid extension: {ext}")
ValueError: Invalid extension: 

This happens regardless of whether I give a trailing /.

memmaps while modifying metadata only

Hi,

I have the feeling that for some reason somewhere in make_bdv it reads the memmaps into numpy arrays even when not overwriting the data. I tried to dig into it but could not spot where exactly it happens.

I can only clearly see that the h5 files are modified when I run it, even though I only select metadata as the overwrite parameter.

data padding

Hi,

my input data is 2024x2024. pybdv will generate a 2048x2048 output with padded zeros.
These become obvious in linear interpolation mode as black frames.
Could the padding be something else instead, e.g. NaN?

Or is this a problem of BDV?

Martin

Warn when signed data with negative values is given (hdf5 only)

Signed data with negative values cannot be displayed properly in BDV, because it maps the negative part to high unsigned values.
pybdv should produce a warning if such data is given.
(This is only relevant for bdv.hdf5; for bdv.n5 data type conversion is not necessary)

generate N5 fails

Hi,

I get a crash when trying to generate an n5 dataset:

import numpy as np
import pybdv

a = np.zeros((301, 401, 501))
a = np.uint8(a)
pybdv.make_bdv(a, 'test1.n5')

leads to:

Traceback (most recent call last):

  File "<ipython-input-1-22f42bb06f94>", line 5, in <module>
    pybdv.make_bdv(a,'test1.n5')

  File "/Users/schorb/code/pybdv/pybdv/converter.py", line 454, in make_bdv
    write_n5_metadata(data_path, factors, resolution, setup_id, timepoint,

  File "/Users/schorb/code/pybdv/pybdv/metadata.py", line 361, in write_n5_metadata
    _write_mdata(root, 'dataType', dtype)

  File "/Users/schorb/code/pybdv/pybdv/metadata.py", line 345, in _write_mdata
    g.attrs[key] = data

  File "/Users/schorb/miniconda3/envs/centrioles/lib/python3.9/site-packages/zarr/attrs.py", line 79, in __setitem__
    self._write_op(self._setitem_nosync, item, value)

  File "/Users/schorb/miniconda3/envs/centrioles/lib/python3.9/site-packages/zarr/attrs.py", line 73, in _write_op
    return f(*args, **kwargs)

  File "/Users/schorb/miniconda3/envs/centrioles/lib/python3.9/site-packages/zarr/attrs.py", line 90, in _setitem_nosync
    self._put_nosync(d)

  File "/Users/schorb/miniconda3/envs/centrioles/lib/python3.9/site-packages/zarr/attrs.py", line 112, in _put_nosync
    self.store[self.key] = json_dumps(d)

  File "/Users/schorb/miniconda3/envs/centrioles/lib/python3.9/site-packages/zarr/n5.py", line 130, in __setitem__
    raise ValueError("Can not set attribute %s, this is a reserved N5 keyword" % k)

ValueError: Can not set attribute dataType, this is a reserved N5 keyword

This is on Mac, Python 3.9, zarr used as N5 backend.

minor: h5 conversion progress bar wrong maximum

Hi,

the converter does not show the proper maximum shape in the progress bar:

[screenshot of the progress bar]

in this example it shows 630 as the maximum, while you can see in the line below that processing continues further (up to 768).

Automatically increase set-up id

Currently, the user must specify the set-up id (0 by default). If the specified (or default)
set-up id already exists, it is simply overwritten.
Instead, when no set-up id is given, it should be set to the current maximum set-up id + 1.

attributes no longer supported

Hi,

I run into


  File "C:\Software\Anaconda3\envs\pyEM\lib\site-packages\pybdv\metadata.py", line 371, in validate_attributes
    xml_ids = [int(child.find('id').text) for child in attribute]

  File "C:\Software\Anaconda3\envs\pyEM\lib\site-packages\pybdv\metadata.py", line 371, in <listcomp>
    xml_ids = [int(child.find('id').text) for child in attribute]

ValueError: invalid literal for int() with base 10: "{'id': 0, 'Projection_Mode': 'Sum', 'isset': 'true', 'min': 0, 'max': 255, 'color': '255 0 0 255'}"

And it is filling the xml with the dictionary {xxx:yyy,zzz:qqq} instead of proper values.

I am pretty sure this is the newest version (ran python setup.py install). How can I make sure it is?

make_bdv fails upon certain dataset size

Hi @constantinpape,
I merged your master to my pybdv fork after #31 and I stumbled upon this:

from pybdv import make_bdv
import numpy as np
from shutil import rmtree

target_vol = np.ones((128, 128, 144), dtype='uint8')
tmp_bdv = './test.n5'
scale_factors = [[2, 2, 2], [2, 2, 2], [4, 4, 4]]
downscale_mode = 'interpolate'
n_threads = 1

try:

    make_bdv(
        data=target_vol,
        output_path=tmp_bdv,
        downscale_factors=scale_factors,
        downscale_mode=downscale_mode,
        n_threads=n_threads
    )

except:

    rmtree(tmp_bdv)
    raise

rmtree(tmp_bdv)

results in

Downsample scale 1 / 3
  8%|▊         | 1/12 [00:00<00:04,  2.51it/s]
Traceback (most recent call last):
  File "/home/hennies/.PyCharmCE2018.1/config/scratches/scratch_5.py", line 19, in <module>
    n_threads=n_threads
  File "/g/schwab/hennies/src/github/pybdv/pybdv/converter.py", line 447, in make_bdv
    overwrite=overwrite_data)
  File "/g/schwab/hennies/src/github/pybdv/pybdv/converter.py", line 192, in make_scales
    overwrite=overwrite)
  File "/g/schwab/hennies/src/github/pybdv/pybdv/downsample.py", line 162, in downsample
    sample_chunk(bb)
  File "/g/schwab/hennies/src/github/pybdv/pybdv/downsample.py", line 154, in sample_chunk
    ds_out[bb] = outp[bb_local]
  File "/g/schwab/hennies/miniconda3/envs/mobie-env2/lib/python3.7/site-packages/z5py/dataset.py", line 388, in __setitem__
    item_arr = rectify_shape(item_arr, shape)
  File "/g/schwab/hennies/miniconda3/envs/mobie-env2/lib/python3.7/site-packages/z5py/shape_utils.py", line 94, in rectify_shape
    raise ValueError(msg)
ValueError: could not broadcast input array from shape (64, 64, 64) into shape (64, 64, 8); complicated broacasting not supported

If I use e.g. np.ones((128, 128, 128), dtype='uint8') it works just fine.

Any ideas? Can you reproduce this?

Support view attributes

The BigDataViewer attributes are stored in the metadata like this:

<ViewSetups>

  <Attributes name="channel">
    <Channel>
       <id>0</id>
       <name>0</name>
    </Channel>
    <Channel>
       <id>1</id>
       <name>1</name>
    </Channel>
  </Attributes>

  <Attributes name="illumination">
    <Illumination>
       <id>0</id>
       <name>0</name>
    </Illumination>
  </Attributes>
   
 <ViewSetup>
    <id>0</id>
    ....
   <attributes>
      <channel>0</channel>
      <illumination>0</illumination>
   </attributes>
  </ViewSetup>

 <ViewSetup>
    <id>1</id>
    ....
   <attributes>
      <channel>1</channel>
      <illumination>0</illumination>
   </attributes>
  </ViewSetup> 

</ViewSetups>

So there is a group of existing attributes specified by <Attributes name=...> and each ViewSetup then maps to the corresponding attributes in <attributes>.

In order to support this, I will add a dictionary argument attributes to make_bdv,
that maps the ViewSetup that is written to its attributes.
To be compatible with the old code, this will default to attributes={"channel": None}, which translates to: the attribute for this ViewSetup is channel and the counter is increased by 1.

If a user wants to add custom attributes (i.e. other attributes than channel), the first call to make_bdv must have all the attribute names in the attributes dict that is passed.
Otherwise subsequent calls that introduce new attribute names will fail.
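
A hypothetical sketch of how the proposed argument could be used (the attributes API described here is a proposal, not the final implementation):

import numpy as np
from pybdv import make_bdv

out_path = '/path/to/out.h5'
channel0 = np.random.rand(64, 64, 64)
channel1 = np.random.rand(64, 64, 64)

# the first call must define all attribute names that will be used for this file
make_bdv(channel0, out_path, setup_id=0,
         attributes={'channel': 0, 'illumination': 0})

# subsequent calls may only use attribute names introduced in the first call
make_bdv(channel1, out_path, setup_id=1,
         attributes={'channel': 1, 'illumination': 0})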

cc @martinschorb

Interpolation mode failed in the newest version?

Hi Constantin

I reinstalled conda env and pybdv recently. Was processing a file as usual and got an error.

(/das/work/p15/p15889/Maxim_LCT) [polikarpov_m@ra-gpu-001 scripts_batch]$ Downsample scale 1 / 5
/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py:51: UserWarning: Downscaling with mode 'interpolate' may lead to different results depending on the chunk size
  warn("Downscaling with mode 'interpolate' may lead to different results depending on the chunk size")
  0%|                                                                                                               | 0/16384 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "5sec_185deg_4ppd_ra_dev.py", line 208, in <module>
    unit='micrometer', setup_name = data_name) 
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/converter.py", line 461, in make_bdv
    overwrite=overwrite_data)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/converter.py", line 204, in make_scales
    overwrite=overwrite)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 172, in downsample
    sample_chunk(bb)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 163, in sample_chunk
    outp = downsample_function(inp, factor, out_shape)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 15, in ds_interpolate
    anti_aliasing=order > 0, preserve_range=True)
TypeError: resize() got an unexpected keyword argument 'anti_aliasing'

Do you have an idea what could be the problem?

pybdv crashes when I import

I get a failure that seems related to HDF5 compatibility; any advice on how to fix it?

It looks like it is a compatibility issue. I am trying to convert a Zarr file to BigStitcher format, but I am getting the following issue about a mismatch that is raised by h5py. I have followed all the instructions and still get the same issue. Any thoughts?

big volumes chunking

Hi,

I was wondering what would be a good way of using pybdv to convert a large stack of individual slices (usually tif).

I think what I would do is use a reasonable chunk size (the default would be 64, right?), read in that number of slices as a volume, and then convert it with make_bdv.

Now, I could assign each slice group a new ViewSetup, but in the end, it will all be one volume and thus one ViewSetup...

Any ideas?

Would it make sense to have a dedicated converter function for a set of slices?
Maybe directly from a list of tif files into one volume?

Proper support and check for unsigned types

Bigdataviewer implements its own types for unsigned char and unsigned short, because
Java does not support unsigned types natively.
As far as I understand, it maps like this for uint8 (and similarly for uint16):

0 -> 0, 1 -> 1, ..., 127 -> 127, 128 -> -128, ..., 255 -> -1

I am currently not taking care of this, which causes ugly errors when opening a converted file with an id > 32767 (the int16 maximum).

This conversion should be done automatically (if necessary) and, if the data range cannot be mapped,
pybdv should throw an error.
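
The wrap-around can be illustrated with a numpy reinterpretation (an analogy for what happens on the Java side, not pybdv code):

import numpy as np

# reinterpreting unsigned bytes as signed bytes (two's complement):
# values above 127 wrap around to negative numbers
values = np.array([0, 1, 127, 128, 255], dtype='uint8')
print(values.view('int8'))  # [   0    1  127 -128   -1]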

anisotropic voxel sizes

Hi,

a stack with dims XYZ will be mapped into the h5 container with ZXY.

This means that the dimensions for specifying the resolutions should be mapped accordingly.

I think pybdv messes up the X and Y voxel sizes. Not that this really matters a lot, because usually they should be the same, but playing around with these, I noticed some strange behaviour.
Could you check?

dealing with absolute paths

Hi,

what can we do to make pybdv applicable to the following scenario:

  • 2 large datasets in some common storage location (maybe even separate group shares)
  • no write access to those
  • I want to register them to each other
  • I need to create matching bdv xml files pointing to the data but with modified AffineTransform and potentially other attributes.
  • so basically I need some means of creating additional valid BDV xml files without having write access to the data directory.

My idea is to just use write_xml_metadata and have it point to the data container. This, however, cannot be done using relative paths if the user does not have write access there. So far you have that hardcoded in this function. I will give it a try by finding the common path and then using a relative directory listing; however, this will fail under Windows when different shares are mounted as different drives...

Any ideas how to solve that? S3 storage for this data?

2D input data

Hi,

currently pybdv only supports 3D input data. This is fine as long as I use make_bdv; I can just fake the third dimension by using np.expand_dims(data.copy(), axis=0).

However, I would now like to use pybdv's convert_to_bdv to convert an existing hdf5 file, and this one is written as 2D. Could you apply something similar within the hdf5 file as well? Or could you add support for 2D data? BDV can somehow deal with it. Should I send you a 2D hdf5 file that is compatible with the BDV universe?

Thanks...

Scales by make_bdv seem to 'move' the data

Hi @constantinpape,

This code snippet produces a 128^3 dataset of zeros with a 64^3 region of ones in the center (float64) and transforms it into bdv using make_bdv:

import numpy as np
from pybdv import make_bdv
from shutil import rmtree
from pybdv.util import open_file, get_key
from matplotlib import pyplot as plt

DOWNSCALE_MODE = 'interpolate'

full_shp = (128,) * 3
pos = (32,) * 3
shp = (64,) * 3
scale_factors = [
    [2, 2, 2],
    [2, 2, 2],
    [2, 2, 2]
]

data = np.zeros(full_shp)
data[
    pos[0]: pos[0] + shp[0],
    pos[1]: pos[1] + shp[1],
    pos[2]: pos[2] + shp[2]
] = 1

out_path = './tmp.n5'
make_bdv(data, out_path, downscale_factors=scale_factors, downscale_mode=DOWNSCALE_MODE)

fig, ax = plt.subplots(nrows=1, ncols=4)
for scale in range(4):
    sl = int(32 / (scale + 1))
    with open_file(out_path, mode='r') as f:
        data = f[get_key(False, timepoint=0, setup_id=0, scale=scale)][sl, :]
    ax[scale].imshow(data)

plt.show()

rmtree(out_path)

The outcome of plotting the central slice of each resolution layer is this:

[plot: central slice of each resolution level with mode 'interpolate']

It appears to me that each resolution layer is shifted by half a pixel.

Also: setting DOWNSCALE_MODE = 'nearest' yields:

[plot: central slice of each resolution level with mode 'nearest']

Since in this example the data should scale without subpixel locations, I would expect the central (yellow) cube to appear exactly the same size in all four resolution plots - just with a different scale on the axes.
Also, nearest and interpolate should yield the same result here, right?

I don't mind looking into this, maybe you can point me to where the scaling happens?

Best,
Julian

hdf5 conversion fails

Hi, I got a new one while downsampling this file:

https://oc.embl.de/index.php/s/yOYootw12pbPlVw


  File "<ipython-input-7-068badfec019>", line 125, in write_bdv
    overwrite = False)

  File "c:\users\schorb\documents\pybdv-master\pybdv-master\pybdv\converter.py", line 364, in make_bdv
    overwrite=overwrite_)

  File "c:\users\schorb\documents\pybdv-master\pybdv-master\pybdv\converter.py", line 154, in make_scales
    overwrite=overwrite)

  File "c:\users\schorb\documents\pybdv-master\pybdv-master\pybdv\downsample.py", line 65, in downsample
    compression='gzip', dtype=ds_in.dtype)

  File "C:\Software\Anaconda3\envs\pyEM\lib\site-packages\h5py\_hl\group.py", line 136, in create_dataset
    dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)

  File "C:\Software\Anaconda3\envs\pyEM\lib\site-packages\h5py\_hl\dataset.py", line 140, in make_new_dset
    maxshape, scaleoffset, external)

  File "C:\Software\Anaconda3\envs\pyEM\lib\site-packages\h5py\_hl\filters.py", line 212, in fill_dcpl
    plist.set_chunk(chunks)

  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper

  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper

  File "h5py\h5p.pyx", line 430, in h5py.h5p.PropDCID.set_chunk

ValueError: All chunk dimensions must be positive (all chunk dimensions must be positive)

Downsampling if already present

Hi,

I have pyramidal data in the form of individual TIFs.
Is there a way to feed the data directly into the different resolution layers of the container and skip the downsampling?

Or is the downsampling effort negligible compared to the file IO?

adding 2D data to hdf5 container

Hi,

it seems that with the current 2D implementation something goes wrong when I add multiple 2D datasets into one hdf5 file. In this case these are tiles of a mosaic.

It looks like the different images are added as slices in a single 3D stack container instead of being created as individual data cells...


This worked correctly with the previous version.

Feature request: working with dask arrays

Hi,

Is there any chance to support dask arrays? For out-of-memory datasets this seems like the most popular choice these days.
If you can point me to what needs to change, and you think it is a good addition, I will try to put together an initial PR.

Thanks!
Boaz

Document downscaling parameters

The downscaling behaviour should be documented better.
Specifically, it should be clear that scale factors are relative to the previous level rather than absolute
and that the zeroth scale (1, 1, 1) is implicit.
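
A short illustration of the relative convention (comments only, no new API):

# relative scale factors, as passed to pybdv:
scale_factors = [[2, 2, 2], [2, 2, 2], [4, 4, 4]]

# resulting absolute factors with respect to the full-resolution data:
# scale 0: [1, 1, 1]   (implicit, never specified)
# scale 1: [2, 2, 2]
# scale 2: [4, 4, 4]
# scale 3: [16, 16, 16]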

file not found error in abspath

Hi,

in just one CI environment I run into a file not found error in os.path.abspath:

/opt/conda/lib/python3.7/site-packages/mobie/image_data.py:87: in add_bdv_image
    data_path = bdv_metadata.get_data_path(xml_path, return_absolute_path=True)
/opt/conda/lib/python3.7/site-packages/pybdv/metadata.py:845: in get_data_path
    path = os.path.abspath(os.path.relpath(path))
/opt/conda/lib/python3.7/posixpath.py:475: in relpath
    start_list = [x for x in abspath(start).split(sep) if x]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
path = '.'
    def abspath(path):
        """Return an absolute path."""
        path = os.fspath(path)
        if not isabs(path):
            if isinstance(path, bytes):
                cwd = os.getcwdb()
            else:
>               cwd = os.getcwd()
E               FileNotFoundError: [Errno 2] No such file or directory
/opt/conda/lib/python3.7/posixpath.py:383: FileNotFoundError

Doing some research, I found that if CWD is deleted during the call, it can cause this behaviour in certain combinations in a python environment.

conda/conda#6584
pytest-dev/pytest-cov#306

I cannot really understand what is going on, because the error does NOT show up in my dev environment or in the docker image that I use to mimic the CI scenario. There, the tests run just fine.
Only on the Gitlab CI, I run into this error.
I also tried forcing certain conda or pytest versions without success.

Do you think that the CWD could get confused by some cd calls somewhere in mobie or pybdv.metadata, and that we could change those to avoid running into this scenario?

chunk boundaries visible as black cubes

Hi,

I have the problem that a BDV volume I just generated shows clear black boundaries between the chunks, while an older one of the same data doesn't.

Could you please check
/g/emcf/schorb/data/HH_platy/rec1/bdv/*.h5

that has the black boundaries vs.

/g/emcf/schorb/data/HH_platy/Platy*.h5

which is the original.

I suspect it has something to do with the volume dimensions not being powers of 2, could that be? Here's the input volume /scratch/schorb/HH_platy/Platy-12601_rec.npy. I did another export with sizes being exact multiples of 256:

/g/emcf/schorb/data/HH_platy/new.h5

and now the boundaries appear bright.
I will try again exporting to n5 and check...

Do you have an idea what could cause this and how to avoid these artifacts?

Updating version on conda

@constantinpape Would it be possible to put the latest version of pybdv on conda-forge?
As far as I know, the version with the on-the-fly processing functions isn't on conda yet. Thanks!

On the fly editing much slower for n5 than hdf5

I'm initialising a dataset like so:

pybdv.initialize_bdv(path, (50, 1536, 2048), np.uint8,
                     downscale_factors=[[2, 2, 2], [2, 2, 2], [2, 2, 2]],
                     resolution=resolution, unit="micron", chunks=(1, 64, 64))
current_dataset = BdvDataset(path, setup_id=0, timepoint=0)

Then writing slice by slice (for 10 slices) like so:

# add dummy first dimension to slice, so it's 3d
slice = np.expand_dims(slice, axis=0)
current_dataset[slice_no:slice_no+1, 0:current_dataset_shape[1], 0:current_dataset_shape[2]] = slice

For hdf5 this takes about 29 seconds, compared to around 147 seconds for n5. It also seems like the time to convert each slice increases for n5, while it doesn't really for h5.
Example times for 10 slices on n5:

Converted image in 3.3464 seconds
Converted image in 6.2296 seconds
Converted image in 8.5396 seconds
Converted image in 12.5240 seconds
Converted image in 16.3049 seconds
Converted image in 19.7258 seconds
Converted image in 23.3828 seconds
Converted image in 27.1164 seconds
Converted image in 2.7939 seconds
Converted image in 6.3479 seconds

@constantinpape Any ideas if this can be improved for speed? Or is it always slower to write many small files vs one big hdf5?

#viewsetups >100

Hi,

the HDF5 structure does not support ViewSetup ids with more than 2 digits.

Would N5 support it? I can see the setupN directory structure. Is there a limit on the number of digits of N?

BTW: Where can I find the elf package? I cannot get pybdv's n5 conversion to work.
