Comments (5)
I'd like to be able to add data from an existing zarr store, with the same compression it used originally, in an efficient way.
It would be useful if you could provide a code example of how you are copying the data right now, or pseudo-code to show more specifically which kind of data you need to copy. E.g., is this for individual datasets you want to copy, or is this for more complex structures, e.g., an entire `ElectricalSeries` in NWB?
Brainstorming: Some ideas for implementing the requested feature
For individual datasets, e.g. `ElectricalSeries.data`, I think this could be done in one of the following ways:

1. By explicitly wrapping the data with `ZarrDataIO`. We would need to enhance `ZarrDataIO` to allow us to specify that the source Zarr dataset should be copied using the `copy_store` approach. Similar to how we have a `ZarrDataIO.from_h5py_dataset`, we could then also have a `ZarrDataIO.from_zarr_array` method (sketched below). I think this approach should not be too complicated to implement.
2. By enhancing the logic in `ZarrIO.write_dataset` to automatically detect that we need to copy a Zarr dataset and use `copy_store` instead.
3. Some combination of 1 and 2, where the logic in 2 would consist of automatically wrapping with `ZarrDataIO`.

For whole groups, e.g., an entire `ElectricalSeries`, I believe we would need to define logic in `ZarrIO.write_group` to detect that a whole Builder needs to be copied. However, I'm not sure right now what the logic needs to look like to detect this case from the Builder.
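To make option 1 concrete, here is a minimal sketch. `ZarrDataIO` and `ZarrDataIO.from_h5py_dataset` already exist in hdmf-zarr, but the `copy_zarr_store` flag and the `from_zarr_array` classmethod are hypothetical names for the enhancement proposed above:

```python
import zarr
from hdmf_zarr import ZarrDataIO


class ZarrDataIOProposed(ZarrDataIO):
    """Illustration only: ZarrDataIO extended with a copy-store flag."""

    def __init__(self, data, copy_zarr_store=False, **kwargs):
        super().__init__(data=data, **kwargs)
        # Hypothetical flag that ZarrIO.write_dataset would check to decide
        # whether to copy the source store verbatim instead of re-encoding.
        self.copy_zarr_store = copy_zarr_store

    @classmethod
    def from_zarr_array(cls, zarr_array):
        # Mirrors the existing ZarrDataIO.from_h5py_dataset pattern: wrap an
        # already-written zarr array and request a byte-for-byte store copy.
        return cls(data=zarr_array, copy_zarr_store=True)
```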
Right, sorry. I was talking about individual datasets.
Doing something like this works, and the zarr data are not read eagerly, which is great:
```python
import datetime

import pynwb
import zarr

nwb = pynwb.NWBFile(
    session_id='test',
    session_description='test',
    identifier='12345',
    # pynwb expects a timezone-aware session_start_time
    session_start_time=datetime.datetime.now(datetime.timezone.utc),
    # a bare TimeSeries belongs in acquisition; processing takes ProcessingModule objects
    acquisition=[
        pynwb.TimeSeries(
            name='running_speed',
            data=zarr.open('s3://test/running_speed.zarr'),
            unit='m/s',
            rate=60.0,
            starting_time=0.0,
        )
    ],
)
```
If the intermediate zarr data were written with some optimized chunking/compression config, I'd just like to include the data as-is, rather than re-do the compression on write to NWB-zarr.
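As an illustrative aside (assuming the same S3 path as above, that it points at an array, and that s3fs is installed): the settings to be preserved live on the array's metadata, so a store-level copy keeps them, while a normal re-write would re-chunk and re-compress.

```python
import zarr

# Open the same array lazily; only metadata is read here.
speed = zarr.open('s3://test/running_speed.zarr', mode='r')
print(speed.chunks)      # chunk shape chosen when the array was written
print(speed.compressor)  # e.g. Blosc(cname='zstd', clevel=5, ...)
print(speed.filters)     # any filter pipeline applied before compression
```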
Thanks for the clarification and helpful example. In that case I would propose the following approach:
- Add a `bool` flag `copy_zarr_store` to `ZarrDataIO`
- Update `ZarrIO.write_dataset` to use `zarr.convenience.copy_store` to copy the data if the data is wrapped in `ZarrDataIO` and `copy_zarr_store` is set to `True` (see the sketch after this list)
- Optional: Add a static factory method `ZarrDataIO.from_zarr_store(...)` to wrap an existing zarr dataset with `copy_zarr_store=True` set. But I don't think this is needed here because it really doesn't save much.
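A minimal sketch of the copy itself, as it might appear inside `ZarrIO.write_dataset`. The helper name and surrounding logic are assumptions; only `zarr.convenience.copy_store` is existing zarr (v2) API:

```python
import zarr

def _copy_zarr_dataset(source_array, dest_group, name):
    """Copy an existing zarr array's chunks byte-for-byte, preserving its
    original chunking and compression (hypothetical helper)."""
    zarr.convenience.copy_store(
        source_array.store,                 # store backing the source array
        dest_group.store,                   # store of the file being written
        source_path=source_array.path,      # array's location within its store
        dest_path=(dest_group.path + '/' + name).lstrip('/'),
        if_exists='replace',                # overwrite any partially written target
    )
```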
With this, your example would change to:
```python
pynwb.TimeSeries(
    name='running_speed',
    data=ZarrDataIO(data=zarr.open('s3://test/running_speed.zarr'), copy_zarr_store=True),
    ...
)
```
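For completeness, the file would then be written through `NWBZarrIO` as usual; only the store-copy behavior triggered by the wrapper is new:

```python
from hdmf_zarr import NWBZarrIO

# Standard hdmf-zarr write path; the proposed copy_zarr_store flag would make
# ZarrIO.write_dataset copy the wrapped array's store instead of re-encoding it.
with NWBZarrIO('session.nwb.zarr', mode='w') as io:
    io.write(nwb)
```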
@bjhardcastle is this something you are interested in contributing a PR for?
@oruebel Yes potentially, just not sure when I'll get time to work on it.