nvladimus / npy2bdv
Fast writing of numpy 3d-arrays into HDF5 Fiji/BigDataViewer files.
License: GNU General Public License v3.0
I wondered if you could provide an example of overlapping tiles for h5 conversion?
I can see you have channels, angles and so forth. What if you had some overlapping tiles that needed correlation in BigStitcher? I can see you can add the views, and in the code you have tiles, but there really aren't any examples of this. I am not sure whether the tile should be a tuple (though it doesn't look like it) or an index of where it is. Being able to convert to the BigStitcher format, with some of the overlap already handled and the writing speed available here, could be a big help.
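For reference, a minimal sketch of how per-tile translations for an overlapping grid might be prepared (the grid layout, the overlap fraction, and the idea of passing one 3x4 affine per tile index are assumptions here, not confirmed npy2bdv usage):

```python
import numpy as np

# Hypothetical 2x2 tile grid with ~10% nominal overlap; BigStitcher reads
# one 3x4 affine per view, so each tile gets a pure-translation matrix.
tile_shape = (64, 1024, 1024)            # (z, y, x) voxels per tile (assumed)
overlap = 0.1                            # assumed overlap fraction
step_y = int(tile_shape[1] * (1 - overlap))
step_x = int(tile_shape[2] * (1 - overlap))

affines = []
for iy in range(2):
    for ix in range(2):
        m = np.array([[1.0, 0.0, 0.0, ix * step_x],   # x shift
                      [0.0, 1.0, 0.0, iy * step_y],   # y shift
                      [0.0, 0.0, 1.0, 0.0]])          # no z shift
        affines.append(m)
# Each tile index would then get its matrix when the view is appended,
# leaving BigStitcher to refine the ~10% overlap by correlation.
```

The translations only need to be approximately right, since BigStitcher's pairwise correlation step refines them afterwards.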
Hi,
thanks for making this public. I think this will be quite useful to me. I'm after creating BigStitcher dataset files directly from python. I know the tiling pattern and the approximate overlap of multiple tiles from a SPIM. Would you consider adding the option to optionally pass in the affine transform parameters for each view? Currently you pass in only dx, dy, dz, as far as I could see.
Hi @nvladimus, I am trying to set up npy2bdv to generate XML/HDF5 files that are BigStitcher compatible for multi-tile, multi-channel data acquired on the Blaze light sheet system. I started playing with the example notebook that you have shared but am facing difficulties. What would be a good resource for a beginner like me to set this up? Thank you.
It would be nice if npy2bdv would also offer the option to store data in a datatype other than int16. Maybe the writer could give a warning if the data might have been altered? This is the case for any data not in [np.uint8, np.int8, np.int16].
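A minimal sketch of the requested warning, assuming a helper is called before the cast to int16 (the function name and placement are illustrative, not existing npy2bdv code):

```python
import warnings
import numpy as np

def check_int16_safe(stack):
    """Hypothetical helper: warn if casting `stack` to int16 would alter it.
    A sketch of the requested behaviour, not existing npy2bdv code."""
    if stack.dtype in (np.uint8, np.int8, np.int16):
        return  # these fit into int16 without loss
    # Round-trip through int16 and compare against the original values
    lossless = np.array_equal(stack, stack.astype(np.int16).astype(stack.dtype))
    if not lossless:
        warnings.warn(f"Casting {stack.dtype} data to int16 alters values.")

check_int16_safe(np.array([1.5, 2.0]))          # warns: 1.5 is truncated
check_int16_safe(np.array([70000], np.uint32))  # warns: overflows int16
```

The round-trip comparison catches both truncation of floats and integer overflow, without hard-coding a list of every safe dtype.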
Hi Nikita!
For our OPM post-processing, it will be useful to read the affine transform that contains the stage coordinates back out for the tiled acquisition. Right now we write using the virtual stack with no downsampling and write the stage coordinates to the affine translation column.
For post-processing, we read each big strip scan (~100,000x1600x256) back in to deconvolve, deskew, and split it into smaller blocks. We then write a new H5 with downsampling for stitching. We need to transform the stage coordinates after the deskew and splitting for each new tile.
I took a quick look at the new BdvEditor class and I think this should be simple to implement. We will take a shot at implementing it and do a pull request, but I wanted to let you know in case you have any input before we get started on it.
Thanks,
Doug
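As a side note for anyone reading along, the composition described above can be sketched with plain numpy. The deskew angle, the stage offsets, and the multiplication order are illustrative assumptions, not the OPM pipeline's actual values:

```python
import numpy as np

def to_h(m):
    """Promote a 3x4 BDV-style affine to a 4x4 homogeneous matrix."""
    return np.vstack([m, [0.0, 0.0, 0.0, 1.0]])

# Hypothetical stage-translation affine, as written at acquisition time
stage = np.array([[1.0, 0.0, 0.0, 120.0],
                  [0.0, 1.0, 0.0, 35.0],
                  [0.0, 0.0, 1.0, 0.0]])

# A shear/projection standing in for the deskew (angle is an assumption)
theta = np.deg2rad(30.0)
deskew = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, np.cos(theta), 0.0],
                   [0.0, 0.0, np.sin(theta), 0.0]])

# Extra translation for a block's offset within the split strip (assumed)
block_offset = np.array([[1.0, 0.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 512.0],
                         [0.0, 0.0, 1.0, 0.0]])

# New per-block affine: stage transform first, then deskew, then block offset
new_affine = (to_h(block_offset) @ to_h(deskew) @ to_h(stage))[:3, :]
```

Composing in homogeneous coordinates keeps the bookkeeping to one matrix product per block, and the result drops back to the 3x4 form BDV stores.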
Hi,
I just noticed that you changed the license from BSD-3/MIT to GPL some time back.
I bundled/vendored an earlier version of this code here in a project for easier distribution (while it was still under BSD-license).
Hi
Thanks for making this library.
I'm writing single z planes using the append_plane function and I've been running into issues with images of odd shape.
If my array has an odd shape and I try subsampling, I get a shape broadcast error.
For example, here is a minimal example:
import numpy as np
import npy2bdv

save_path = "test.h5"
# initial image with odd y/x dimensions
img = np.zeros((147, 1001, 1001))
nz, ny, nx = img.shape
bdv_writer = npy2bdv.BdvWriter(save_path, nchannels=1,
                               subsamp=((1, 2, 2), (1, 8, 8), (1, 16, 16)),
                               blockdim=((8, 32, 32), (8, 32, 32), (8, 32, 32)),
                               compression='gzip', overwrite=True)
bdv_writer.append_view(stack=None, virtual_stack_dim=(nz, ny, nx), channel=0)
bdv_writer.append_plane(plane=img[0], channel=0, z=0)
I get the error:
\npy2bdv\npy2bdv.py:409, in BdvWriter.append_plane(self, plane, z, time, illumination, channel, tile, angle)
407 print(plane.shape)
408 print(self.subsamp[ilevel])
--> 409 dataset[z, :, :] = self._subsample_plane(plane, self.subsamp[ilevel]).astype('int16')
File h5py\_objects.pyx:54, in h5py._objects.with_phil.wrapper()
File h5py\_objects.pyx:55, in h5py._objects.with_phil.wrapper()
File ~\AppData\Roaming\Python\Python39\site-packages\h5py\_hl\dataset.py:997, in Dataset.__setitem__(self, args, val)
994 mshape = val.shape
996 # Perform the write, with broadcasting
--> 997 mspace = h5s.create_simple(selection.expand_shape(mshape))
998 for fspace in selection.broadcast(mshape):
999 self.id.write(mspace, fspace, val, mtype, dxpl=self._dxpl)
...
267 # All dimensions from target_shape should either have been popped
268 # to match the selection shape, or be 1.
269 raise TypeError("Can't broadcast %s -> %s" % (source_shape, self.array_shape)) # array shape
TypeError: Can't broadcast (501, 501) -> (500, 500)
I believe it's to do with how npy2bdv and the skimage.transform.downscale_local_mean function calculate image shapes:
the latter returns (501, 501), whereas npy2bdv calculates it as (500, 500).
npy2bdv calculates it here:
Line 512 in 4d1ffd3
downscale_local_mean function uses this block function:
https://github.com/scikit-image/scikit-image/blob/441fe68b95a86d4ae2a351311a0c39a4232b6521/skimage/measure/block.py#L78
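To illustrate the mismatch with plain numpy (no skimage needed): integer floor division gives the shape npy2bdv allocates, while downscale_local_mean pads the input to a multiple of the block size, which effectively rounds up:

```python
import numpy as np

shape = np.array([1001, 1001])           # odd y/x dimensions
factor = np.array([2, 2])                # subsampling factor

floor_shape = shape // factor            # shape the dataset is allocated with
ceil_shape = -(-shape // factor)         # shape the padded downscale produces

print(floor_shape, ceil_shape)           # [500 500] vs [501 501]
```

The two only disagree when a dimension is not divisible by its factor, which is why even-shaped images never trigger the error.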
I've essentially modified your code to round up the shape, which seems to solve this (np.ceil returns floats, so the shape needs casting back to integers):
grp.create_dataset('cells', chunks=self.chunks[ilevel],
                   shape=np.ceil(virtual_stack_dim / self.subsamp[ilevel]).astype(int),
                   compression=self.compression, dtype='int16')
Not sure if this is the best way to do this.
Cheers
Pradeep
While looking through the code to see how one could implement having one affine transform per dataset (see my latest comment regarding #1), I noticed that there are other attributes that could be per-dataset but are not treated correctly.
Like the affine, each dataset added via append_view could also have its own calibration, and therefore its own dx, dy, dz.
Another example is self.stack_shape. This instance variable is changed whenever append_view is called here:
Line 82 in 3e98e1b
During writing of the xml file this variable is accessed here:
Line 152 in 3e98e1b
Therefore the nz, ny, nx used to write the ViewSetup subelement size (Line 185 in 3e98e1b) always reflect the most recently appended view rather than each view's own shape.
I guess what would be needed is that these parameters are passed in (or determined) for each invocation of append_view. Then some instance variables (lists) or dictionaries with keys derived from (time, ill, ch, tile, angle) could be used to keep track of these parameters for subsequent use when write_xml_file(...) is called.
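The bookkeeping suggested above might look like the following sketch, where per-view parameters live in a dict keyed by the attribute tuple so that XML writing can look each view up instead of relying on the last call's state (names like view_params and register_view are illustrative only):

```python
# Hypothetical per-view parameter store, keyed by (time, ill, ch, tile, angle)
view_params = {}

def register_view(time, ill, ch, tile, angle, stack_shape, calibration):
    """Record one view's own shape and calibration at append time."""
    view_params[(time, ill, ch, tile, angle)] = {
        "shape": stack_shape,          # (nz, ny, nx) for this view
        "calibration": calibration,    # (dx, dy, dz) for this view
    }

register_view(0, 0, 0, 0, 0, (64, 512, 512), (0.406, 0.406, 1.0))
register_view(0, 0, 1, 0, 0, (64, 256, 256), (0.812, 0.812, 1.0))

# At XML-writing time, each ViewSetup reads its own size and voxel size:
nz, ny, nx = view_params[(0, 0, 1, 0, 0)]["shape"]
```

With this structure, appending a second view no longer overwrites the first view's shape or calibration.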
Was working with npy2bdv this past week and I think there may be a bug with append_substack().
I think these lines:
Lines 444 to 446 in e7bdce7
should include downsampling in the indices, such as:
sub_z_start = int(z_start/2**ilevel)
sub_y_start = int(y_start/2**ilevel)
sub_x_start = int(x_start/2**ilevel)
dataset[sub_z_start : sub_z_start + subdata.shape[0],
sub_y_start : sub_y_start + subdata.shape[1],
sub_x_start : sub_x_start + subdata.shape[2]] = subdata
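A quick way to see why the scaling matters: without dividing by the downsampling factor, the full-resolution start indices land outside the smaller pyramid levels. A factor of 2 per level is assumed here, matching the 2**ilevel in the snippet above:

```python
# Full-resolution substack origin (example values)
z_start, y_start, x_start = 32, 128, 128

for ilevel in range(3):                  # pyramid levels 0, 1, 2
    f = 2 ** ilevel                      # assumed per-level downsampling
    sub_start = (z_start // f, y_start // f, x_start // f)
    print(ilevel, sub_start)
# level 0 -> (32, 128, 128), level 1 -> (16, 64, 64), level 2 -> (8, 32, 32)
```

If the subsampling factors are anisotropic (as in subsamp=((1, 2, 2), ...)), each axis would need its own divisor rather than a single 2**ilevel.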