fatiando / verde

Processing and gridding spatial data, machine-learning style

Home Page: https://www.fatiando.org/verde

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 98.87%, Makefile 0.34%, TeX 0.79%
Topics: geophysics, earth-science, geospatial, python, scipy, interpolation, python3, scipy-stack, fatiando-a-terra, geoscience

verde's Introduction

Verde

Processing and gridding spatial data, machine-learning style

Documentation (latest) • Documentation (main branch) • Contributing • Contact

Part of the Fatiando a Terra project

Latest version on PyPI • Latest version on conda-forge • Test coverage status • Compatible Python versions • DOI used to cite this software

About

Verde is a Python library for processing spatial data (topography, point clouds, bathymetry, geophysics surveys, etc) and interpolating them on a 2D surface (i.e., gridding) with a hint of machine learning.

Our core interpolation methods are inspired by machine learning. As such, Verde implements an interface that is similar to the popular scikit-learn library. We also provide other analysis methods that are often used in combination with gridding, like trend removal, blocked/windowed operations, cross-validation, and more!

Project goals

  • Provide a machine-learning inspired interface for gridding spatial data
  • Integration with the Scipy stack: numpy, pandas, scikit-learn, and xarray
  • Include common processing and data preparation tasks, like blocked means and 2D trends
  • Support for gridding scalar and vector data (like wind speed or GPS velocities)
  • Support for both Cartesian and geographic coordinates

Project status

Verde is stable and ready for use! This means that we are careful about introducing backwards incompatible changes and will provide ample warning when doing so. Upgrading minor versions of Verde should not require making changes to your code.

The first major release of Verde was focused on meeting most of these initial goals and establishing the look and feel of the library. Later releases will focus on expanding the range of gridders available, optimizing the code, and improving algorithms so that larger-than-memory datasets can also be supported.

Getting involved

🗨️ Contact us: Find out more about how to reach us at fatiando.org/contact.

👩🏾‍💻 Contributing to project development: Please read our Contributing Guide to see how you can help and give feedback.

🧑🏾‍🤝‍🧑🏼 Code of conduct: This project is released with a Code of Conduct. By participating in this project you agree to abide by its terms.

Imposter syndrome disclaimer: We want your help. No, really. There may be a little voice inside your head that is telling you that you're not ready, that you aren't skilled enough to contribute. We assure you that the little voice in your head is wrong. Most importantly, there are many valuable ways to contribute besides writing code.

This disclaimer was adapted from the MetPy project.

License

This is free software: you can redistribute it and/or modify it under the terms of the BSD 3-clause License. A copy of this license is provided in LICENSE.txt.

verde's Issues

Implement Scipy gridder with the same API

scipy.interpolate offers many algorithms, but many of the functions have different APIs. We should have an interpolator class that follows our API but uses scipy's linear, nearest, and cubic 2D interpolators. This will help us interact with them in a consistent way and will make it easy to generate xarray grids or extract profiles using the scipy code.
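
A minimal sketch of what such a wrapper could look like, assuming BaseGridder and get_region are available and using scipy's LinearNDInterpolator (the class name and details here are illustrative, not a final API):

import numpy as np
import verde as vd
from scipy.interpolate import LinearNDInterpolator


class ScipyLinearGridder(vd.base.BaseGridder):
    "Hypothetical wrapper around scipy's linear 2D interpolator with the verde API."

    def fit(self, coordinates, data, weights=None):
        easting, northing = coordinates[:2]
        points = np.column_stack([np.ravel(easting), np.ravel(northing)])
        self.interpolator_ = LinearNDInterpolator(points, np.ravel(data))
        # Store the data region so that grid() and profile() can use it
        self.region_ = vd.get_region(coordinates)
        return self

    def predict(self, coordinates):
        easting, northing = coordinates[:2]
        points = np.column_stack([np.ravel(easting), np.ravel(northing)])
        return self.interpolator_(points).reshape(np.asarray(easting).shape)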

Function to compute the FFT of xarray grids

Verde uses xarray.DataArrays to represent a gridded variable. It conveniently wraps the values and coordinates. We need a function fft_grid(grid) -> grid_fft that takes an xarray grid as input and outputs another xarray grid with the complex Fourier transform of the grid data. The coordinates should be the corresponding wavenumbers in increasing order (fftshifted).

This is a requirement for #8 and for the radially averaged power spectrum.
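
A rough sketch of how this could work with numpy's FFT (the wavenumber coordinate names are placeholders, not a decided convention):

import numpy as np
import xarray as xr


def fft_grid(grid):
    "Hypothetical sketch: 2D Fourier transform of an xarray grid."
    dims = grid.dims
    # Assumes evenly spaced coordinates along both dimensions
    spacings = [float(grid[dim][1] - grid[dim][0]) for dim in dims]
    fourier = np.fft.fftshift(np.fft.fft2(grid.values))
    # Wavenumber coordinates in increasing order (fftshifted)
    freqs = [
        np.fft.fftshift(np.fft.fftfreq(grid.sizes[dim], spacing))
        for dim, spacing in zip(dims, spacings)
    ]
    coords = {"freq_" + dim: freq for dim, freq in zip(dims, freqs)}
    return xr.DataArray(fourier, coords=coords, dims=list(coords))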

Collaboration with pyresample

I am one of the maintainers of the pyresample project (https://github.com/pytroll/pyresample) along with @mraspaud @adybbroe @pnuu. We do some very similar operations to what you have in verde, but with different interfaces and sometimes different purposes. I think our two projects could benefit from each other.

Pyresample/SatPy

Pyresample has been around for a long time and is used by many meteorological institutes/organizations via the satpy library (https://github.com/pytroll/satpy) and mpop before that. I also use satpy in my command line tools under the polar2grid and geo2grid project and joined the PyTroll/SatPy team after attending conferences and realizing that we were solving exactly the same problems.

Pyresample and satpy now use xarray and dask for all of their operations, but pyresample is in need of a new interface now that xarray accessors are more popular and the ease of attaching geolocation information to an xarray object (see pydata/xarray#2288). Pyresample offers (or will very soon) three different resampling algorithms: nearest neighbor using pykdtree, elliptical weighted averaging for scan-based satellite instrument data, and bilinear interpolation using pykdtree for neighbor calculations. The old pyresample API also includes the ability to use your own weighting functions, but we haven't migrated that to work with the xarray/dask functionality.

The use cases pyresample was originally designed for, and what people were used to doing, involved having predefined 500 m-2 km grids in various projections. These grids can also depend on the visualization tool used to view the satellite data, where only certain projections can be used or certain resolutions are too dense for satellite raster images. This is my opinion/experience; the other pyresample developers may see it differently. Anyway, a lot of pyresample's interfaces assume you have a predefined grid ("AreaDefinition") that you want to resample your data to. It is coming up more and more that we need an easier interface for making these definitions from other existing definitions, like lon/lat arrays or adjustments to existing AreaDefinitions (change resolution). While this is all possible, it isn't always easy to do, access, or document.

Verde

I spoke with you @leouieda at SciPy 2018 and on gitter, which led to #92, where verde now uses pykdtree if it is available for better performance. I believe you also mentioned in your lightning talk that you'd like to switch to dask where possible but haven't done so yet. This is something pyresample's experience could help with, since dask-ifying a kdtree is not easy, but I think we've done the best we can without rewriting the C code.

From what I've seen, you are developing some really nice interfaces in verde that make certain use cases extremely easy, like the ones I described for pyresample in the last paragraph. I think pyresample could benefit from interfaces like these, or both projects could benefit by adding these interfaces to xarray accessors.

Collaboration

One library, two libraries, one library that depends on another, two libraries that depend on a third, or just similar interfaces...I'm not sure what would be best. I do think that we could benefit from working together. This issue isn't requesting that either project be absorbed by the other, but to make each aware of the other.

@leouieda Where do you see verde's features overlapping with pyresample? How many more features do you still want to add to verde (assuming verde is young-ish and seeing how many PRs you are making every day)?

How far do you see verde's feature set going? What else do you want it to do? Does it/will it do more than gridding and the related utilities?

Use the new median_distance function in the gallery/tutorials

Description of the desired feature

PR #163 introduces a utility function for finding the average spacing between data points. This is useful information for determining a grid spacing, for example. We should use this function in some of the examples and tutorials to print out the distances and justify our choice of spacing.
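
For example, an example or tutorial could include something like the following (the dataset choice is arbitrary, and this assumes median_distance returns one distance per data point):

import numpy as np
import verde as vd

data = vd.datasets.fetch_baja_bathymetry()
coordinates = (data.longitude, data.latitude)
# The median distance between nearest neighbors is a reasonable first guess for the spacing
spacing = np.median(vd.median_distance(coordinates))
print("Suggested grid spacing:", spacing)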

Convenience function for near neighbor distances

Description of the desired feature

Some applications would probably benefit from knowing the average distance between nearest data points (or the min, max, median, etc). For example, for determining the grid spacing or the mindist argument for Spline.

This can be done with a KDTree:

import numpy as np
from scipy.spatial import cKDTree

# Stack the coordinate arrays into a (n_points, 2) array
points = np.transpose(coordinates)
tree = cKDTree(points)
# Query the second closest point (the closest one is the point itself)
distances, labels = tree.query(points, k=[2])
distances.mean()

There is room for automation here to handle:

  • The transposition of the coordinates and ensuring that there are only 2 1D arrays.
  • Switching to pykdtree if it's installed.
  • Projecting the coordinates.

A function like the following would be useful to have:

def neighbor_distances(coordinates, projection=None):
    ...
    return distances

spacing = neighbor_distances(coordinates).mean()

It could go in the verde/coordinates.py module.
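
Putting the pieces above together, a rough implementation could look like this (the name, defaults, and handling of projections are only a suggestion):

import numpy as np
from scipy.spatial import cKDTree


def neighbor_distances(coordinates, projection=None):
    "Hypothetical sketch: distance from each point to its nearest neighbor."
    easting, northing = coordinates[:2]
    if projection is not None:
        easting, northing = projection(easting, northing)
    points = np.transpose([np.ravel(easting), np.ravel(northing)])
    tree = cKDTree(points)
    # k=[2] skips the first neighbor, which is the point itself
    distances, _ = tree.query(points, k=[2])
    return distances.ravel()


spacing = neighbor_distances(coordinates).mean()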

Use pykdtree in BlockReduce to speed up computations

Feedback or description of feature requested

pykdtree has the same API as cKDTree, but @djhoese showed that it can be 2x faster on a BlockReduce example. We should add it as an optional dependency: use it if we can import it, or fall back to cKDTree if not. We should include some instructions on the docs/install.rst page and a mention in the BlockReduce docstring.
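
The optional import could look something like this (a sketch; where it lives and what the alias is called would depend on the implementation):

try:
    # pykdtree is faster but optional
    from pykdtree.kdtree import KDTree as kdtree_class
except ImportError:
    # fall back to scipy, which is already a hard dependency
    from scipy.spatial import cKDTree as kdtree_class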

Function to compute finite-difference derivatives of grids

We need a function to calculate a finite-difference gradient of an xarray.DataArray. The function should look like gradientfd(grid, azimuth) -> gradient_grid: it takes in a DataArray and a direction (azimuth) and calculates the gradient in that direction using central differences. The boundary condition should be that the second derivative is zero (i.e., copy the derivative values to the borders).

East and North derivatives can be calculated easily using azimuths 90 and 0.

This will complement #8 but has fewer prerequisites for getting started (no FFT or padding required).
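
A rough sketch using numpy.gradient (note that numpy uses one-sided differences at the borders rather than the copy-to-border rule described above, so this only approximates the proposal; the dimension order is assumed):

import numpy as np
import xarray as xr


def gradient_fd(grid, azimuth):
    "Hypothetical sketch: directional derivative of a grid by central differences."
    north_dim, east_dim = grid.dims  # assumes (northing, easting) dimension order
    spacing_north = float(grid[north_dim][1] - grid[north_dim][0])
    spacing_east = float(grid[east_dim][1] - grid[east_dim][0])
    deriv_north, deriv_east = np.gradient(grid.values, spacing_north, spacing_east)
    # Azimuth measured clockwise from north, in degrees
    azimuth = np.radians(azimuth)
    values = np.cos(azimuth) * deriv_north + np.sin(azimuth) * deriv_east
    return xr.DataArray(values, coords=grid.coords, dims=grid.dims)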

Documentation page explaining our conventions

There should be a special documentation page that explains our conventions: coordinate systems, naming schemes, etc. To make it clear what we mean by region or latitude and longitude.

More functions in the public API of verde.base

Description of the desired feature

The verde.base module should contain everything needed to implement a new gridder. Ideally, someone could use the things in there to implement their own. This is what I'm planning to do in Harmonica. Right now, the only thing in the public API is the BaseGridder class. But we also need check_fit_input, n_1d_arrays, and least_squares in all implementations. These functions should be available in the verde.base public API. check_fit_input and n_1d_arrays are pretty much done and just need to be moved. least_squares will take a bit more work and might have to be re-designed.

Add 'python_requires' to setup.py

To help with future Python version deprecations, it can be helpful to add a 'python_requires' argument to setup.py. You can see an example of it here: https://github.com/pytroll/satpy/blob/master/setup.py#L137

The idea is that once you stop supporting a certain version of python, you update the python_requires, and users of the unsupported version of python will not get the new versions of the package installed. I don't have the specific documentation page on hand, but noticed this was missing so thought I should mention it.
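
For example (the minimum version below is only a placeholder; use whatever the project actually supports):

from setuptools import setup

setup(
    name="verde",
    # ... other metadata ...
    # Pip will refuse to install new releases on unsupported Python versions
    python_requires=">=3.6",
)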

Tutorial showing how to use cross-validation

Verde defines a few utility functions for splitting geographic data for cross-validation. This tutorial should show how to use train_test_split and the score method to judge the performance of a gridder. There should also be a note about why the scikit-learn train_test_split doesn't work for us.
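
The core of the tutorial could be as simple as this (a sketch assuming verde's train_test_split, which splits coordinates and data together; coordinates and data are placeholders):

import verde as vd

# Split coordinates, data, and weights into train and test tuples
train, test = vd.train_test_split(coordinates, data, random_state=0)
spline = vd.Spline().fit(*train)
# R² score of the predictions on the unseen testing data
score = spline.score(*test)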

Padding of xarray grids

The padding in fatiando.gridder is useful for FFT operations but it's not compatible with xarray grids. Since they only apply to regular grids, we should make them work with xarray.DataArrays. Then our FFT derivatives (#8 and #36), which also use DataArrays, can benefit from the padding.

Each padding type in fatiando.gridder should be its own function in Verde pad_*(grid, options) -> padded_grid that takes in a DataArray and options and outputs another DataArray.

The functions can be implemented in separate pull requests so we don't have to do them all at once.
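
As an example, an edge-padding function could look roughly like this (the name and behavior are illustrative; each padding type in fatiando.gridder would get its own function):

import numpy as np
import xarray as xr


def pad_edge(grid, pad_width):
    "Hypothetical sketch: pad a 2D DataArray by repeating its edge values."
    padded = np.pad(grid.values, pad_width, mode="edge")
    coords = {}
    for dim in grid.dims:
        coord = grid[dim].values
        spacing = coord[1] - coord[0]
        # Extend the coordinates outward with the same spacing
        before = coord[0] - spacing * np.arange(pad_width, 0, -1)
        after = coord[-1] + spacing * np.arange(1, pad_width + 1)
        coords[dim] = np.concatenate([before, coord, after])
    return xr.DataArray(padded, coords=coords, dims=grid.dims)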

Tutorial for grid generation and coordinates

Description of the desired feature

We need a tutorial about generating regular grids and all the details surrounding how the coordinates are determined (spacing, adjustments, registration, etc). This would be one of the very first tutorials and would help explain some of the concepts in the grid_coordinates docstring in more detail.
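
The tutorial would revolve around calls like this (the region and spacing values are arbitrary):

import verde as vd

# region = [West, East, South, North]; spacing in the same units as the region
easting, northing = vd.grid_coordinates(region=(0, 10, -5, 5), spacing=1)
print(easting.shape, northing.shape)  # (11, 11) (11, 11)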

Create a convexhull masking function

Description of the desired feature

A good way to mask grid points that are too far from data is to only show the ones that fall inside the data convex hull. This is what many scipy and matplotlib interpolations do. It would be great to have a convexhull_mask function that has a similar interface to distance_mask but masks points outside of the convex hull. This function should take the same arguments as distance_mask except maxdist.

One way of implementing this would be with the scipy.spatial.Delaunay class:

import numpy as np
from scipy.spatial import Delaunay

tri = Delaunay(np.transpose(data_coordinates))
# Find which triangle each grid point falls in. -1 indicates that it's in none of them.
in_triangle = tri.find_simplex(np.transpose(coordinates))
mask = in_triangle >= 0

Allow block_reduce to take a weights argument

Right now, only the data gets passed to the reducer function. If we want to do a weighted average, there is currently no way to have the windowing select the weights as well. To overcome this, block_reduce can take an optional weights=array argument. If it's given, then select the values from it as well and pass it as weights=windowed_array to the reducer function.
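
With that option, a weighted block average would look roughly like this (a sketch of the intended usage, shown here with the BlockReduce class; coordinates, data, and weights are placeholders):

import numpy as np
import verde as vd

# numpy.average accepts a weights keyword, so it can do the weighted reduction
reducer = vd.BlockReduce(np.average, spacing=0.5)
block_coords, block_data = reducer.filter(coordinates, data, weights=weights)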

Function to transform an xarray grid into a pandas dataframe

The built-in to_dataframe method in xarray uses the coordinates as the dataframe's 2D index, so in practice it doesn't spell out the coordinates of every point. What I want is to turn the grid into an xyz format with the coordinates as columns of the dataframe.
This will allow us to use the data in functions that don't like grids, like the forward modeling and inversion functions in Fatiando.

The function should be grid_to_table(grid), and it should output a table whose columns take their names from the grid.
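
Under the hood, this could be mostly a thin wrapper around xarray (a sketch; handling of extra dimensions and attribute propagation is left out):

def grid_to_table(grid):
    "Hypothetical sketch: flatten an xarray grid into an xyz-style DataFrame."
    # to_dataframe puts the coordinates in a MultiIndex;
    # reset_index moves them into regular columns named after the grid coordinates
    return grid.to_dataframe().reset_index()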

Example of using weights in BlockReduce

There should be an example dedicated to just passing in weights for a BlockReduce using numpy.average. It should replace the current example about outliers and just do BlockReduce, without the spline.

The spline example introduces outliers in the already reduced data, which is not what would happen in the real world. A future example using spline would need to have a BlockMean that can output blocked weights as well.

Tutorial showing how to handle geographic data

Most of our gridders are Cartesian. The tutorial should show how to project the input data so that Spline et al can be used and how to generate a geographic grid by passing in a projection function to the grid method.
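
The tutorial could follow roughly this pattern (the dataset, the Mercator projection, and the projection argument to grid are assumptions based on the other issues here):

import pyproj
import verde as vd

data = vd.datasets.fetch_baja_bathymetry()
# Any callable projection(longitude, latitude) -> (easting, northing) works
projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean())
proj_coords = projection(data.longitude.values, data.latitude.values)
# Decimation with BlockReduce omitted for brevity
spline = vd.Spline().fit(proj_coords, data.bathymetry_m)
# Passing the projection lets grid() produce geographic coordinates
grid = spline.grid(
    region=vd.get_region((data.longitude, data.latitude)),
    spacing=5 / 60,
    projection=projection,
)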

Finish the install guide

The install guide only explains how to install from github. Should have a pip and conda section as well.

Tutorial for the basics of interpolation

The first tutorial should cover the basics of using Spline (no BlockReduce) to interpolate Cartesian data. This tutorial will show the gridder API (fit, predict, and grid). Probably best to use the synthetic checkerboard data so that we don't have to deal with projections and Cartopy.
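
A sketch of the flow the tutorial would walk through, using the synthetic checkerboard (argument values are arbitrary):

import verde as vd

synthetic = vd.datasets.CheckerBoard()
data = synthetic.scatter(size=500, random_state=0)
coordinates = (data.easting, data.northing)

spline = vd.Spline()
spline.fit(coordinates, data.scalars)
# predict() evaluates on arbitrary points; grid() returns an xarray.Dataset
grid = spline.grid(spacing=100, data_names=["scalars"])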

Tutorial page showing how to use splines with scikit-learn

The main advantage of the Verde API is that it handles the calculation of the sensitivity (feature) matrix and all the busy work of dealing with coordinates. Under the hood, it's all just linear models and the actual computations are passed along to scikit-learn. It would be good to have a tutorial showing how to generate the feature matrix for a spline (Spline.jacobian) and then call scikit-learn directly on it, in case people want to experiment with different fitting algorithms.
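
The gist would be something like the following (the exact jacobian call signature is an assumption, and coordinates, force_coordinates, and data are placeholders):

import verde as vd
from sklearn.linear_model import Ridge

spline = vd.Spline()
# Build the sensitivity (feature) matrix ourselves instead of calling fit
jacobian = spline.jacobian(coordinates, force_coordinates)
# Any scikit-learn linear model can then be swapped in for the fitting
model = Ridge(alpha=1e-5).fit(jacobian, data)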

Function to apply a reduction operator in blocks

Create a block_apply(easting, northing, data, reduction, spacing, region=None, adjust='spacing') -> easting, northing, data function that takes a reduction function like np.mean and applies it to the data in windows. It returns the easting and northing coordinates of the centers of windows that have data, along with the reduction of the data in those windows. The region can be inferred from the input, and the spacing or region can be adjusted to fit each other, like in grid_coordinates.

This will serve the same purpose as GMT's blockmedian and blockmean but can be more general.
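
A rough pure-numpy sketch of the idea (the region/spacing adjustment and the adjust argument are left out):

import numpy as np


def block_apply(easting, northing, data, reduction, spacing, region=None):
    "Hypothetical sketch: apply a reduction to the data falling in each block."
    if region is None:
        region = (easting.min(), easting.max(), northing.min(), northing.max())
    west, east, south, north = region
    edges_east = np.arange(west, east + spacing, spacing)
    edges_north = np.arange(south, north + spacing, spacing)
    # Label each point with the block that it falls into
    col = np.clip(np.digitize(easting, edges_east) - 1, 0, edges_east.size - 2)
    row = np.clip(np.digitize(northing, edges_north) - 1, 0, edges_north.size - 2)
    ncols = edges_east.size - 1
    labels = row * ncols + col
    block_east, block_north, block_data = [], [], []
    for label in np.unique(labels):
        inside = labels == label
        # Center coordinates of the block and the reduced data inside it
        block_east.append(west + (label % ncols + 0.5) * spacing)
        block_north.append(south + (label // ncols + 0.5) * spacing)
        block_data.append(reduction(data[inside]))
    return np.array(block_east), np.array(block_north), np.array(block_data)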

Option to project coordinates before gridding

The methods in BaseGridder should take an optional projection(lon, lat) -> east, north argument to project coordinates before passing them to predict. This allows Cartesian gridders to create geographic grids without a lot of work. The grid output should still be in the original coordinates. Can take pyproj projections as well with this interface.

Metagridder class to chain operations

The scikit-learn Pipeline allows transformations of the Jacobian to be chained with a final estimator at the end. That's fine if you want to chain a scaling operation or PCA. When gridding, we usually want to remove a trend from the data itself, not the Jacobian.

The Chain class will take a list of gridder classes and apply them in succession. Each new gridder will fit the residuals from the previous gridder. When predicting, the predictions of each component are summed:

grd = Chain([('trend', Trend(degree=2)), ('grid', ScipyGridder())])
grd.fit(coordinates, data, weights) # fits ScipyGridder on the residuals of Trend
grd.predict(coordinates)  # sums the predictions
grd.grid() # Make a grid of the summed predictions
trend = grd.named_steps['trend'].grid() # Make a grid of the trend only

This is a convenient way of adding trends to all our gridders but keeping some level of control on the user side. It will also enable using the same code when implementing equivalent layers in harmonica.

Function to get vmin and vmax to center the colorbar

For diverging data, we always end up needing to center the colorbar by setting vmin and vmax using the maximum absolute value of the data.
It would be helpful to have a maxabs function to calculate this instead of writing np.abs([x.min(), x.max()]).max() all the time.
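
A small helper along these lines would do (a sketch; easting, northing, and data are placeholders):

import numpy as np
import matplotlib.pyplot as plt


def maxabs(*args):
    "Hypothetical sketch: maximum absolute value over any number of arrays."
    return max(np.abs([np.nanmin(arg), np.nanmax(arg)]).max() for arg in args)


# Center a diverging colormap on zero
scale = maxabs(data)
plt.pcolormesh(easting, northing, data, vmin=-scale, vmax=scale, cmap="RdBu_r")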

Include GMT ship bathymetry data

The GMT tutorial uses bathymetry from Baja California to illustrate gridding. That's a good data set. We can package it in its original form and also as a decimated version (using gmt blockmedian) to get better results while GMT/Python isn't available.

Plot grids using xarray methods instead of pyplot

Description of the problem

As pointed out by @ahartikainen, the pyplot.pcolormesh function expects the grid coordinates to represent the boundaries of each cell. The grids we tend to generate in the examples all have coordinates at the centers of these cells, so the plots are slightly off by 0.5*spacing.

The xarray plotting methods take this into account: http://xarray.pydata.org/en/stable/plotting.html#coordinates

Solution: Replace all calls to plt.pcolormesh in the examples and tutorials with grid.variable.plot.pcolormesh to fix the plots. The source files are the *.py files in data/examples, examples, and tutorials.
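
For example (the grid and variable names are placeholders):

import matplotlib.pyplot as plt

# Before: pyplot treats the coordinates as cell boundaries, shifting everything by half a cell
plt.pcolormesh(grid.easting, grid.northing, grid.scalars)

# After: xarray knows the coordinates are cell centers and adjusts accordingly
grid.scalars.plot.pcolormesh()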

Tutorial showing how to grid multicomponent data

This tutorial should show how to handle multi-component data (like GPS velocity vectors): how BlockReduce works in these cases, the new Components class (#74), and the specialized multi-component gridders like Vector2D.

Add a Components class for multicomponent data

If we want to grid multicomponent data, we can use something like Vector2D to grid the components jointly, or we can use a Spline for each component separately. In the latter case, there is no way to put them in a Chain with BlockReduce or Trend. VectorTrend already does something like this for trend estimation.

A better approach would be to have a meta-gridder Components that takes multiple estimators and fits each one to a given data component. It would be a generalization of VectorTrend and would replace it.

This is how the usage would look:

grd = Chain([
    ('mean', BlockReduce(np.mean, spacing=1)),
    ('trend', Components([Trend(degree=1), Trend(degree=2)])),
    ('spline', Components([Spline(), Spline()]))
])
grd.fit(coordinates, (data_east, data_north))
# Access each gridder separately
trend_east = grd.named_steps['trend'].components[0].grid()

Tutorial showing how to chain operations

It's very common to first run a BlockReduce on data and then pass the output to Spline. Many times, we should also remove a trend before the spline and then restore it to avoid instabilities on the spline. This tutorial should show how to use the Chain class to build a pipeline of BlockReduce -> Trend -> Spline. It should also mention why we can't use the scikit-learn Pipeline.
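
The pipeline at the heart of the tutorial would look like this (the spacing and degree values are arbitrary, and coordinates and data are placeholders):

import numpy as np
import verde as vd

chain = vd.Chain([
    ("reduce", vd.BlockReduce(np.median, spacing=0.5)),
    ("trend", vd.Trend(degree=1)),
    ("spline", vd.Spline()),
])
chain.fit(coordinates, data)
grid = chain.grid(spacing=0.5)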

Function to generate a 2D point scatter

Need a scatter(region, size, random_state=None) function to generate random coordinate pairs inside a given area, using a uniform distribution. region is [west, east, south, north] in geographic or Cartesian coordinates. It should return two arrays with the easting and northing coordinates.
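
A minimal sketch of such a function (the name is only a suggestion):

import numpy as np


def scatter_points(region, size, random_state=None):
    "Hypothetical sketch: uniformly random points inside region = [W, E, S, N]."
    rng = np.random.RandomState(random_state)
    west, east, south, north = region
    easting = rng.uniform(west, east, size)
    northing = rng.uniform(south, north, size)
    return easting, northing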

Method to calculate finite-difference derivative on arbitrary points

Description of the desired feature

With our Green's functions interpolators, we can predict data at any point once we have a fitted model. We could have a method BaseGridder.predict_derivative(coordinates, direction=(east, north)) to calculate (central) finite-difference derivatives at a given set of points in arbitrary directions (given by a direction vector or keywords like north and east). The method could allow for arbitrary precision (FD order) as well. This method can later be used to grid derivatives, either through a new method or an argument to BaseGridder.grid.

An example usage would be:

import verde as vd
data = vd.datasets.fetch_rio_magnetic()
coordinates = projection(data.longitude, data.latitude)
spline = vd.Spline().fit(coordinates, data.total_field_anomaly_nt)
east_deriv = spline.predict_derivative(coordinates, direction="east")
# Or using a vector
north_deriv = spline.predict_derivative(coordinates, direction=(0, 1))

A challenge would be determining the spacing for the finite differences. There are probably methods out there for estimating this (maybe Fukushima (2018)). For now, just having the spacing as an argument is more than enough. We can worry about automation later.

It would be good also to calculate second or third derivatives. This could be an argument to the method that defaults to order=1 for first-derivative. But this is not a priority and can be implemented in subsequent PRs.

Allow Spline to take a region argument

Right now, verde.Spline uses the data region to create a regular grid of forces (if spacing is provided). The problem with this is that the model will change between runs of a k-fold cross-validation because the data changed. A way around that is to allow Spline to take a region argument that fixes this region.

Simply checking whether the data region already exists is not good because of the implicit assumption that it is set to the first dataset that we fitted. And there is no way to change that via set_params. Having an explicit parameter is better.

Add option for specifying grid using spacing instead of shape

Right now, you can only create a grid by specifying the number of points in each direction using shape. It would be convenient if we could specify a grid spacing instead (in east and north). The main challenge is that there is no guarantee that the spacing fits in the given region. So we should probably tweak the spacing or the region to adjust.

If tweaking the spacing, an option is to calculate a number of points that could fit the region by round(dimension/spacing) and then do a linspace. If tweaking the region, we can use np.arange(w, e, spacing) and add another node if not np.allclose(lons[-1], e).
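
For example, the spacing-tweaking option boils down to this (region and spacing values are arbitrary):

import numpy as np

region = (0, 10.3, -5, 5.7)  # a region that the spacing doesn't divide evenly
spacing = 1
west, east, south, north = region
# Number of nodes that best fits the requested spacing; linspace then adjusts the spacing
n_east = int(round((east - west) / spacing)) + 1
n_north = int(round((north - south) / spacing)) + 1
easting = np.linspace(west, east, n_east)
northing = np.linspace(south, north, n_north)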

Function to convert geographic to planar data

Usually, a more adequate projection should be applied to convert geographic data to planar Cartesian coordinates. But sometimes we just want a quick conversion without messing with proj and UTM zones.

The equirectangular projection is easy to implement and good enough for many cases. We need a function geographic_to_planar that can take a pair of coordinates (or a grid) and convert it to planar coordinates.
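
A sketch of the math involved, assuming a spherical Earth and an optional standard latitude (the function name and arguments are illustrative):

import numpy as np


def geographic_to_planar(longitude, latitude, standard_latitude=0):
    "Hypothetical sketch: equirectangular projection to planar coordinates in meters."
    mean_radius = 6_371_000  # mean Earth radius in meters
    easting = mean_radius * np.radians(longitude) * np.cos(np.radians(standard_latitude))
    northing = mean_radius * np.radians(latitude)
    return easting, northing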

FFT horizontal derivatives of xarray grids

The horizontal derivatives in fatiando.gravmag.transform aren't specific to potential fields and can be calculated for any grid. We should have those functions here and make them work with xarray grids only. This greatly simplifies the function, which can be gradientfft(grid, azimuth) -> grid: it calculates a directional derivative of the grid (xarray.DataArray) and returns another grid. The directional gradient is a more generic version of the x and y derivatives, which can be obtained by setting azimuth to 90 or 0 degrees.

Class to fit a 2D polynomial trend to the data

This would be a gridder class and will provide the functions needed to calculate the trend Jacobian (which will be used later by the splines). Also useful for calculating regional trends. Should include a CV option.

Create a DummyGridder for use with Vector

Description of the desired feature

The Vector class allows us to compose a multi-component gridder from scalar gridders (like gridding a 3D vector with 3 Spline instances). We can also use this to make a multi-component Trend and use it in a Chain. But sometimes we want to fit trends to some components and not others, while still having a multi-component Chain. Currently, there is no way of doing this. A way around it would be to have a DummyGridder class that implements the gridder API but does nothing with the data. Its fit method should do nothing, predict should return an array of zeros, and filter should just return whatever inputs it got. Then we can do this:

components = (data.east, data.north, data.up)
# Only use a Trend for the East and North components
spline = vd.Chain([
    ("trend", vd.Vector([vd.Trend(degree=1), vd.Trend(degree=1), vd.DummyGridder()])),
    ("spline", vd.Vector([vd.Spline() for i in range(3)])),
])
spline.fit(coordinates, components)

Use balanced_tree=False for cKDTree

Description of the desired feature

For a large number of points, tree construction is slow if the default balanced_tree=True is used. All of this assumes that the points are in "more or less" random order, i.e., not spatially sorted.

https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html

This could be implemented with either lambda

from scipy.spatial import cKDTree # pylint: disable=no-name-in-module
KDTree = lambda x: cKDTree(x, balanced_tree=False)

or partial

from functools import partial
from scipy.spatial import cKDTree # pylint: disable=no-name-in-module
KDTree = partial(cKDTree, balanced_tree=False)

Are you willing to help implement and maintain this feature? Yes/No

Maybe

Interpolator for harmonic functions (like geophysics potential field data)

Description of the desired feature

Data that are harmonic functions (they obey Laplace's equation) have certain advantages. If gridded using a harmonic kernel, like 1/distance, we can include the height coordinate in the fitting. We can also predict the data at arbitrary points in 3D space as long as we are outside of the sources (usually this means above the surface of the Earth). This is the case for gravity and magnetic data in geophysics.

Using 1/distance as the kernel is good enough for interpolation. More realistic kernels are needed to grid multiple components or (for magnetic data) do reduction to the pole. We won't bother with those and leave that to a more specialized package (mainly harmonica). But it will be good to have this one here in Verde because it's general purpose enough.

Return an array from cross_val_score

Function verde.model_selection.cross_val_score fits gridders to many folds of the data and calculates the score for each fold. It currently returns a list of the scores, which is annoying if we want to calculate the mean score. For example, cross_val_score(...).mean() doesn't work and needs to be np.mean(cross_val_score(...)). So it would be better if cross_val_score returned a numpy array instead.

Add support for generating N-dimensional grids

Description of the desired feature

Grids are generated by the BaseGridder.grid method, which relies on coordinates.grid_coordinates to generate the coordinates for prediction. There is a heavy assumption of only 2 dimensions for the grid in the code: region is assumed to be [W, E, S, N], assigning coords to xarray.Dataset is hard-coded, etc. This is fine for gridders in this package because none of them support predicting data in 3D from 2D observations.

This is not the case for harmonic functions, like gravity and magnetic data. Because of Green's identities, you can predict these data anywhere in space from observations on a surface. This is known as the equivalent layer technique and it will be implemented in harmonica using Verde as a basis. So it would be good to:

  • Add support for grid_coordinates to take an N-dimensional region, spacing, and shape. For example, grid_coordinates(region=[0, 10, 0, 20, 0, 30], shape=(10, 11, 12)) should produce three 3D arrays (see the sketch after this list).
  • Change BaseGridder.grid to set the Dataset coords attribute dynamically depending on the shape of the coordinates produced by grid_coordinates.
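
A sketch of how the first item could work with numpy.meshgrid (the function name is hypothetical):

import numpy as np


def grid_coordinates_nd(region, shape):
    "Hypothetical sketch: N-dimensional grid coordinates from a region and shape."
    # region is [min1, max1, min2, max2, ...], one pair per dimension
    axes = [
        np.linspace(region[2 * i], region[2 * i + 1], npoints)
        for i, npoints in enumerate(shape)
    ]
    return np.meshgrid(*axes, indexing="ij")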

Use Chain more in the examples

Right now, only the example explaining verde.Chain actually uses it. All examples that do something like BlockReduce and gridding should use Chain.
