Comments (18)

jmbeckers avatar jmbeckers commented on July 25, 2024

from divand.jl.

croachutas avatar croachutas commented on July 25, 2024

Okay, thanks for clarifying that.

JianghuiDu avatar JianghuiDu commented on July 25, 2024

Sorry for the confusion, but statevector_unpack gives the absolute, unscaled error, right? Is it in variance units or observational units?

jmbeckers avatar jmbeckers commented on July 25, 2024

statevector_unpack is just a routine to reshape the result from a calculation array into the topology of your grid. If you apply it to a variance, it will provide a variance. Here s.P is the error variance relative to the background variance, so at the end you will still have the relative error variance, but on the real grid.

So no units here.
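
To illustrate the reshaping, here is a minimal Python sketch (the names `unpack` and `mask` are illustrative, not the DIVAnd API): a packed vector of values at sea points is scattered back onto the 2-D grid, with land points filled by NaN.

```python
# A minimal sketch of what a statevector "unpack" does, assuming a boolean
# sea/land mask. The names (unpack, mask) are illustrative, not the DIVAnd API.
NaN = float("nan")

def unpack(mask, packed, fill=NaN):
    """Scatter packed values back onto the grid: sea points receive the
    analysis values in order, land points receive `fill`."""
    it = iter(packed)
    return [[next(it) if sea else fill for sea in row] for row in mask]

# 2x2 grid with one land point; three sea values in the packed vector
print(unpack([[True, True], [False, True]], [0.1, 0.2, 0.3]))
# [[0.1, 0.2], [nan, 0.3]]
```

Since the operation is a pure reshape, whatever quantity the packed vector holds (here a relative error variance) keeps its meaning and units on the grid.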

JianghuiDu avatar JianghuiDu commented on July 25, 2024

But the notebook 14-cpme-demo.ipynb says that statevector_unpack(s.sv,diag(s.P),NaN), using the results of DIVAndrun, gives the absolute error, and that when comparing to the relative errors given by DIVAnd_cpme one has to divide it by the background error variance estimated by setting a very large epsilon2. So I just assumed the s.P from DIVAndrun is the absolute variance in observational units.
It is also said there that DIVAnd_aexerr gives both the absolute and background variance, aexerr and Bref. I would assume that aexerr./Bref is the relative error and equivalent to the cpme calculated using DIVAnd_cpme.

jmbeckers avatar jmbeckers commented on July 25, 2024

I see where the misunderstanding stems from. In the notebook, when speaking about the scaling, we refer to the boundary effect included in the background variance. The background error variance of DIVA actually increases when approaching the coast, and you can decide whether or not you want to take this into account when plotting the relative error.

Said differently, Bref is normally one far away from the boundary but increases towards the coasts; if you want to take this into account when calculating the relative error, you need to divide by this Bref.

To have real variances, you have to know the actual average variance of the background.
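
As a toy numerical illustration of that division (hypothetical numbers, sketched in Python): with Bref close to one offshore and larger near the coast, dividing by Bref removes the boundary effect from the relative error.

```python
# Illustrative only: scaling an error-variance field by the spatially varying
# background variance Bref, as discussed above. Values are made up.
aexerr = [0.12, 0.30, 0.55]   # hypothetical error variances along a transect
Bref   = [1.00, 1.05, 1.25]   # background variance: ~1 offshore, larger near coast
rel_err = [a / b for a, b in zip(aexerr, Bref)]
print(rel_err)
```

Without the division, the coastal points would appear to have a larger relative error purely because the background variance itself grows there.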

More details here.

Beckers, Jean-Marie; Barth, Alexander; Troupin, Charles; Alvera-Azcarate, A. (2014). Approximate and Efficient Methods to Assess Error Fields in Spatial Gridding with Data Interpolating Variational Analysis (DIVA). Journal of Atmospheric & Oceanic Technology, 31(2), 515-530.
https://orbi.uliege.be/handle/2268/161069
https://journals.ametsoc.org/doi/abs/10.1175/JTECH-D-13-00130.1

JianghuiDu avatar JianghuiDu commented on July 25, 2024

I see. So if there is no boundary, then the errors given by DIVAnd_cpme and DIVAnd_aexerr are errors relative to the overall background covariance (no spatial dependence). Considering the boundaries leads to a spatial dependence of the background covariance, which should be scaled out. The scaled errors aexerr./Bref are then the errors relative to the overall background covariance (without the boundary effect). Am I correct?

jmbeckers avatar jmbeckers commented on July 25, 2024

yep

JianghuiDu avatar JianghuiDu commented on July 25, 2024

Great. Thanks for clarifying!

croachutas avatar croachutas commented on July 25, 2024

Just for reference, I've done a bit more reading and it turns out there are algorithms out there to evaluate selected entries of the inverse of a sparse matrix without computing the inverse of the entire matrix.

Taking one such function (https://github.com/DrTimothyAldenDavis/SuiteSparse/blob/master/MATLAB_Tools/sparseinv/sparseinv.m), I've crunched out the time-mean error fields for oxygen concentration at 500m in the Atlantic... The version using the selective inversion (bottom right) agrees closely with the version using a full inversion to produce the covariance fields (bottom left), and it's a matter of seconds vs about 5 minutes of run-time (for 0.5deg spatial resolution). Of course, there is some extra overhead in the whole export from Julia, import to Matlab, do the inversion, write out and re-read into Julia...

[Figure: rel_error_test]

EDIT: One word of warning: sparseinv() is sensitive to the numeric type of the values you pass in the sparse matrix... I burnt a few hours trying to work out why it was working in some cases but not others, only to find I'd exported a couple of matrices as Float64s when testing but coded the main script to export them as Float32s... Due to the rounding errors, the resulting matrices were not positive definite (so couldn't be factorized using chol()), with the resulting failure...
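
The idea behind selective inversion can be sketched naively: only the diagonal of the inverse is needed, so factorize the SPD matrix once and solve against unit vectors, keeping a single entry per solve. A dense, stdlib-only Python sketch of that baseline follows (real selected-inversion codes such as sparseinv avoid even these solves by exploiting the sparsity pattern of the factor):

```python
# Dense, stdlib-only sketch of computing only diag(inv(A)) for a symmetric
# positive-definite A. This is the naive baseline, not the sparseinv algorithm.
def cholesky(A):
    """Lower-triangular L with A = L * L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (A[i][i] - s) ** 0.5 if i == j else (A[i][j] - s) / L[j][j]
    return L

def diag_inv(A):
    """diag(inv(A)) via one Cholesky factorization and n triangular solve pairs."""
    L = cholesky(A)
    n = len(A)
    out = []
    for i in range(n):
        # forward substitution: L y = e_i
        y = [0.0] * n
        for r in range(n):
            y[r] = ((1.0 if r == i else 0.0)
                    - sum(L[r][k] * y[k] for k in range(r))) / L[r][r]
        # back substitution: L^T x = y
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (y[r] - sum(L[k][r] * x[k] for k in range(r + 1, n))) / L[r][r]
        out.append(x[i])  # keep only the diagonal entry
    return out
```

For a small SPD test matrix this reproduces the exact diagonal of the inverse; the point of sparseinv-style algorithms is to get the same numbers without n full solves.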

jmbeckers avatar jmbeckers commented on July 25, 2024

That looks very interesting. Maybe the SuiteSparse wrapper in Julia can give access to the functions used, if not now then at some point in the future?

croachutas avatar croachutas commented on July 25, 2024

Okay, so sparseinv works well enough for a global time-mean on a single depth level (1/2deg resolution, runtime of 10-60s depending on the number of ocean grid points on the particular depth level), but it turns out to really slow down when moving to the time-varying anomaly (relative to the above time-mean; not surprising, given that the matrix for a 10-year chunk is two orders of magnitude larger than in the time-mean background case)...

So I've been chasing up other options and am currently looking at a number of methods described in Siden et al. 2018 (paper at: https://www.tandfonline.com/doi/full/10.1080/10618600.2018.1473782, code at: https://github.com/psiden/CovApprox). I'm still trying to wrap my head around the more sophisticated approaches, but their simple Rao-Blackwellized Monte Carlo estimator looks like a decent balance of computational efficiency vs accuracy, and it also isn't dependent on C code compiled into a mex-file... So I might even be able to implement it in Julia without too much difficulty.

jmbeckers avatar jmbeckers commented on July 25, 2024

Thanks for the references. For the moment we are testing some ideas along these lines. If you are ready to test, we can send you a Julia code (hopefully in a few days).

jmbeckers avatar jmbeckers commented on July 25, 2024

If you want to try an experimental version:

https://github.com/gher-ulg/DIVAnd.jl/blob/JMB/src/DIVAnd_diagapp.jl

Just use it after you have made an analysis, retrieving the structure s from the analysis and using s.P and s.sv.

It is only interesting if the length scale is small compared to the domain size (which seems to be the case in your images).

croachutas avatar croachutas commented on July 25, 2024

Sorry about the lack of reply... I was away on holiday and have had other priorities recently.

JM, I've been unable to get your code working.

I didn't proceed with the methods described in Siden et al., as the run-time gains were minimal. I looked at SelInv by Lin Lin et al. (code at: https://math.berkeley.edu/~linlin/software.html, paper at: https://web.stanford.edu/~lexing/diagex.pdf); the runtime to invert a matrix was moderately better than sparseinv() in Matlab, but it took considerable effort to convert matrices to the input format. I considered the parallelized version of it (included in PEXSI: https://pexsi.readthedocs.io/), but some of the dependencies required admin-level access... so I can't set them up on the local compute server (and thus haven't taken it any further).

Looking further, if anyone else is interested in exact solutions, it seems that Simon Etter from the National University of Singapore has implemented a (supposedly relatively efficient, if one believes his paper...) selective sparse matrix inversion algorithm in Julia (arXiv preprint at https://arxiv.org/abs/2001.06211, and Julia package at https://github.com/ettersi/IncompleteSelectedInversion.jl). I'll try it out and get back if it actually works. Edit: The public version hasn't been maintained in the last three years (the requirement is Julia 0.5...).

croachutas avatar croachutas commented on July 25, 2024

And something I'd noticed but not paid much attention to, mentioned in the above papers, are "sampling-based approaches for estimating the selected inverse", based on ideas discussed in Bekas, Kokiopoulou, and Saad (2007, https://www-users.cs.umn.edu/~saad/PDF/umsi-2005-082.pdf) and Hutchinson (1990, https://www.tandfonline.com/doi/abs/10.1080/03610919008812866).

Implementations can be found in the Siden et al. paper discussed above and in MacCarthy et al. (2011, JGR, https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2011JB008234). The former used PCG to solve the inv(A)*v(j) terms; the latter uses least-squares (without preconditioning, so it takes a f***-ton of iterations...).

When implemented in series this approach is rather slow, but it's trivial to parallelize (in Matlab it's literally just a matter of changing a for statement to parfor), so if you've got a dozen CPUs to throw at things the runtimes should be manageable, and it doesn't place too many demands on RAM.

Of course, it's only an approximation, and the accuracy is limited by how many random first-guess vectors you throw at the problem...
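
For reference, the basic Hutchinson-style diagonal estimator described above can be sketched in a few lines of stdlib Python. The dense solver is for illustration only; in practice one would use PCG or a sparse factorization, as in the papers cited.

```python
import random

def _solve(A, b):
    """Dense Gaussian elimination with partial pivoting (illustration only;
    a real implementation would use a sparse or iterative solver)."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def hutchinson_diag(A, nsamples=500, seed=1):
    """Estimate diag(inv(A)): average v .* solve(A, v) over random
    Rademacher (+/-1) probe vectors v (Hutchinson 1990; Bekas et al. 2007)."""
    rng = random.Random(seed)
    n = len(A)
    est = [0.0] * n
    for _ in range(nsamples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        x = _solve(A, v)  # each solve is independent, hence trivially parallel
        for i in range(n):
            est[i] += v[i] * x[i]
    return [e / nsamples for e in est]
```

The loop over probe vectors is embarrassingly parallel, which is exactly why the parfor trick works in Matlab; the estimator's scatter shrinks like 1/sqrt(nsamples).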

croachutas avatar croachutas commented on July 25, 2024

And I've tested it for a chunk of the North Atlantic, North Sea and Med. Runtime for the Hutchinson/sampling approach (described in the papers in my last post) is about 1000s on four threads for 200 random 'sampling' vectors.

Maps, from left to right: relative error obtained by inversion of the precision matrix; relative error from CPME; relative error from the Hutchinson approach; and relative error from the Hutchinson approach with low values (<1E-3) eliminated and smoothing (2.5deg x 2.5deg x 3-year window) applied:
[Figure: full_err_vs_cpme_vs_hutchinson]

CPME produces spatial patterns similar to the full error but almost universally underestimates values by a factor of 2-10.

The Hutchinson approach produces error values and spatial patterns consistent with the full error, but yields a scatter of individual grid points with values much lower than the full error (hence the smoothing in the last panel). This scatter reduces as the number of 'sampling' vectors increases, but that means increased runtime (assuming a constant number of threads, going to 500 sampling vectors increases the runtime to about 2000s in my test case).

I haven't tried implementing the Hutchinson approach in Julia yet, so I'm still passing the precision matrix to Matlab for inversion...

jmbeckers avatar jmbeckers commented on July 25, 2024

The cpme underestimation for good data coverage has been addressed in the experimental routine DIVAnd_scalecpme! from my branch.
Also, DIVAnd_diagapp should work and provide an approximation of the diagonal of P, which should work nicely/quickly if L is small compared to the domain size.
You might need to install my branch, though.
