Comments (11)
Your last comment is important context! (the zarr bit in particular). I would add that to the other issue
from flox.
You're starting with 1.5GiB chunk sizes on `X`. I would reduce that to the 200MB range. The bottleneck is usually `numpy_groupies` here; you should see `input_validation` prominently in the dask flamegraph.
So I would also try installing `numbagg`. It'll be a bit slow to compile, but should be faster and make fewer memory copies.
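As a rough sketch of the chunk-size arithmetic behind that advice (the helper name here is illustrative, not a flox or dask API; in dask you would then call `rechunk` with the result):

```python
import numpy as np

def chunk_len_for_target(n_cols, dtype, target_bytes=200e6):
    """Rows per chunk so that one (rows, n_cols) block is ~target_bytes."""
    itemsize = np.dtype(dtype).itemsize
    return max(1, int(target_bytes // (n_cols * itemsize)))

# e.g. a float64 array with 20_000 columns:
rows = chunk_len_for_target(20_000, np.float64)  # 1250 rows per ~200MB chunk
```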
Running this locally, I also spot a dask scheduling bug where it doesn't treat `normal` as a data-generating task, and runs way too many of them initially before reducing that data. Can you open a dask issue please?
Ah, I keep forgetting: `numbagg` only helps with NaN-skipping aggregations, so it won't really help here.
I think this is a dask scheduling issue.
> I also spot a dask scheduling bug where it doesn't treat `normal` as a data generating task and runs way too many of them initially before reducing that data.

In my real-world use case, I get this just loading data from a zarr store.

> I think this is a dask scheduling issue.

Me too, but I'm not sure why flox seems to be triggering it. In the dask issue I show that other tree aggregations with this array (`X.sum(axis=0)`) seem fine.
Reproducer here: https://gist.github.com/ivirshup/eb4f5beb1bb33724b8c11bd0eacf03a6 from dask/dask#11026 (comment)
This works on my laptop with 32GB RAM, with some spilling:

```python
res, codes = flox.groupby_reduce(
    X_dense.T,
    by,
    func="sum",
    fill_value=0,
)
```
If you turn on logging with

```python
import logging

logger = logging.getLogger("flox")
logger.setLevel("DEBUG")
console_handler = logging.StreamHandler()
logger.addHandler(console_handler)
```

you'll see it automatically chooses `method="cohorts"` and it just works, albeit with spilling. There seems to be a densification of each sparse block that could be improved. I'll have to look into it.
EDIT: your choice of `reindex=True` was using way more memory than necessary to run this reduction. You have 2390 unique groups, but there are O(10) groups in the output chunks with cohorts. That means `reindex=True` was inflating each block to ~2390/~10 = 200X larger than it needed to be.

```python
plt.hist(res.chunks[-1])
```
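The back-of-envelope above, spelled out (numbers taken from the comment; the ~10 groups per chunk is an order-of-magnitude estimate):

```python
n_groups_total = 2390     # unique groups overall
groups_per_chunk = 10     # O(10) groups per output chunk with "cohorts"

# reindex=True pads every intermediate block out to all 2390 groups,
# even though only ~10 are actually present in that block:
inflation = n_groups_total / groups_per_chunk  # ~239, i.e. roughly 200X
```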
EDIT2: I guess you're densifying on your own...
EDIT3: Memory issues can be controlled by using `numbagg` and `nansum`. It's also ~5X faster. I guess we should look into how to make `flox` just handle sparse matrices directly.

```python
res, codes = flox.groupby_reduce(
    X_dense.T,
    by,
    func="nansum",
    fill_value=0,
    engine="numbagg",
)
```
How would flox handle sparse matrices directly? A `graphblas` sparse matrix product? We can construct the sparse matrix from `by` really quickly (we know where the ones are already).
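A minimal sketch of that idea with plain `scipy.sparse` rather than graphblas (`by` and `X` here are toy stand-ins, not the arrays from the reproducer):

```python
import numpy as np
import scipy.sparse as sp

# Group labels ("by"), one per row of X; we know where the ones go already.
by = np.array([0, 1, 0, 2, 1])
X = np.arange(10.0).reshape(5, 2)

n_groups = by.max() + 1
# Indicator matrix W: W[g, i] = 1 when row i belongs to group g.
W = sp.csr_matrix(
    (np.ones(by.size), (by, np.arange(by.size))),
    shape=(n_groups, by.size),
)

# The grouped sum over axis 0 is then a single sparse matmul:
group_sums = W @ X
```

Other reductions follow by changing the values in `W` (e.g. normalized rows give a grouped mean), which is what makes the semiring framing attractive.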
`graphblas` can be nice, since we can pretty easily define a semi-ring for many tasks here (though not so sure about `median`), and it's very fast. But it's always going to be 2-D only, and is GPL.
In `scanpy` right now we are just doing matrix multiplication (code) for in-memory aggregation, but I suspect we will end up using some numba kernels, since we can get better performance and numerical stability.
@Intron7 may have some thoughts here as he wrote some CUDA kernels for these aggregations in memory.
> graphblas can be nice, since we can pretty easily define a semi-ring for many tasks here (though, not so sure about median) and it's very fast. But it's always going to be only 2d, and is GPL.

I was thinking of using it through `python-graphblas`, which is Apache-2.

> In scanpy right now we are just doing matrix multiplication (code) for in-memory aggregation, but I suspect will end up using some numba kernels since we can get better performance and numerical stability.

Oh nice! We could move your code into flox as an "engine", or alternatively tie into scanpy with `engine="scanpy"`. I prefer the former, I think; it'd be nice to not need an optional dependency for fairly simple code.
For reference, "engine"s handle the in-memory part of the aggregation. And we already have the `codes` ready.
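As a rough illustration of what that in-memory piece does (a hypothetical function, not flox's actual engine API): given integer group codes, the whole reduction for one block is a single vectorized pass.

```python
import numpy as np

def sum_engine(codes, values, n_groups):
    """Hypothetical 'engine': reduce values into n_groups bins,
    given one integer group code per element (via np.bincount here)."""
    return np.bincount(codes, weights=values, minlength=n_groups)

codes = np.array([0, 1, 0, 2, 1])
values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
per_group = sum_engine(codes, values, 3)  # array([4., 7., 4.])
```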
> I was thinking of using it through python-graphblas which is Apache-2.

The wrapper is Apache-2, but it pulls in graphblas, which is GPL. So effectively the same distribution issues, is my understanding.

> We could move your code into flox as an "engine"

That's how I've been thinking this sort of thing would work.
Would it be weird to have a "sparse" engine that only works with sparse chunks? Or would you want to add it to another engine?

> your choice of reindex=True was using way more memory than necessary to run this reduction. You have 2390 unique groups, but there are O(10) groups in the output chunks with cohorts. That means reindex=True was inflating each block to ~2390/~10 = 200X larger than it needed to be.

I am trying to remember why I was using `reindex=True`. I think it may have just been to get a memory usage I could estimate, as I had a lot of trouble getting this to work without running out of memory at all.
FWIW, if there is sparse support, then we can increase the chunk sizes, which means we end up hitting more groups.
Also, we don't really have set expectations about the distribution of groups per chunk, so I'd like to be sure this works in the worst case where the labels are well shuffled.
> Would it be weird to have a "sparse" engine that only works with sparse chunks?

Don't think so. In general, I'd like users to not set it at all (the default is `None` now). Constructing a sparse "weights" matrix is another way to do a groupby anyway (Ken Iverson stuck it in his "Notation as a Tool of Thought" paper too!), and `flox` has `scipy` as a dependency already.

> Also, we don't really have set expectations about the distribution of groups per chunk, so I'd like to be sure this works in the worst case where the labels are well shuffled.

Yeah, that would be nice. But it would also be good to get a real example. We might instead look to speed up the `reindex=False` case to reduce memory usage, and have that be chosen by default.