Comments (19)

StefanKarpinski avatar StefanKarpinski commented on June 6, 2024 3

I'm increasingly inclined to just unconditionally store values assigned into a sparse matrix, regardless of whether they are zero or not. So A[i,j] = ±0.0 would create a stored position even though the values are equal to zero. If you're assigning to enough positions in a sparse matrix that this matters, then you're already hosed anyway. Completely separating the structural computation from the numerical one is such a nice clear line in the sand. Of course, many operations do both, but if the structural part of an operation is completely independent of the computational part, it's much simpler to explain and understand what's happening. My suspicion is that all of the "optimizations" that don't store zeros that are created in a sparse matrix are useless anyway, since the whole premise of a sparse structure is that you never actually touch most of its elements. By that premise you're not going to be assigning to most positions, so it's fine to store the ones you do. This viewpoint dovetails nicely with the concept of structural zeros as "strong zeros".

from sparsearrays.jl.
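A minimal sketch of the proposed semantics using the current SparseArrays API (hedged: whether assigning 0.0 creates a stored entry depends on the Julia version; dropzeros! is the standard escape hatch either way):

```julia
using SparseArrays

A = spzeros(3, 3)
A[1, 1] = 0.0      # under the proposal, this always creates a stored entry,
                   # even though the assigned value is zero
A[2, 2] = 1.0      # ordinary nonzero assignment

# If stored zeros are unwanted, they can be dropped explicitly afterwards,
# keeping the structural computation separate from the numerical one:
dropzeros!(A)
```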

martinholters avatar martinholters commented on June 6, 2024 2

We should probably consider embracing the concept of a "strong zero" in the sense that false is right now, and declare structural zeros to be such strong zeros. The problem is that A[i,j] doesn't tell you whether it acts as such a strong zero or not. Returning false for getindex of structural zeros would give a very coherent picture semantically, wouldn't it? Too bad this is likely a no-go performance-wise. But then again, blindly accessing elements of a sparse matrix without first arranging to skip structural zeros is sub-optimal anyway. So combined with a way to efficiently iterate over stored elements...

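For reference, the existing "strong zero" behavior of false, which deliberately does not follow IEEE 754 multiplication:

```julia
# `false` annihilates even non-finite values, unlike an IEEE 754 zero:
false * Inf   # 0.0
false * NaN   # 0.0
0.0 * Inf     # NaN (IEEE 754 behavior of a Float64 zero)
```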
tkelman avatar tkelman commented on June 6, 2024 1

Option 1 results in undesirable behavioral discrepancies between sparse and dense representations of the same underlying mathematical object. Sparse arrays should be strictly a performance optimization, and not have vastly different behavior than their dense equivalents. If we can do IEEE754 correctly in a dense version of an operation, then we should do simple checks in sparse method implementations to get as compatible as reasonably possible.

Other than broadcast, we haven't especially tried, and it's led to obviously wrong behavior several times. Checking for non-finite elements and falling back to a dense implementation can be more correct than what we have. It'll be expensive in corner cases that don't occur very often, but I think that's preferable to throwing or silently calculating a wrong answer.

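One way to read this comment as code: a hedged sketch of the check-and-branch approach for matrix-vector products. safe_spmv is a hypothetical name, not a SparseArrays API, and a real implementation would want to avoid the O(mn) densification on the slow path:

```julia
using SparseArrays

# Use the fast sparse kernel when all data is finite; otherwise fall back to
# a dense computation that follows IEEE 754 (hypothetical sketch, not an API).
function safe_spmv(A::SparseMatrixCSC, x::AbstractVector)
    if all(isfinite, x) && all(isfinite, nonzeros(A))
        return A * x          # fast path: structural zeros are never touched
    else
        return Matrix(A) * x  # slow path: IEEE-correct, but densifies A
    end
end
```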
KristofferC avatar KristofferC commented on June 6, 2024 1

+1 for documenting (perhaps as a warning admonition) that sparse linear algebra does not fulfill IEEE semantics, or alternatively that it is best-effort, and that if someone cares strongly about it, they could implement it as long as performance doesn't suffer (too much).

Seeing how pretty much the whole sparse linear algebra stack used throughout the decades has made this choice, I don't think we should worry much about it.

nalimilan avatar nalimilan commented on June 6, 2024 1

I agree with @tkelman that it would be better to follow IEEE semantics if that's possible without killing performance. Indeed, getindex does not return false or a special "strong zero" type when extracting non-stored zeros: it returns 0.0. If Base methods for sparse arrays do not follow IEEE semantics, they are inconsistent with what user-written algorithms would "naively" compute when working with a sparse array just like any other AbstractArray: they would break the interface.

In practice, I suppose that's generally not an issue, in particular in existing languages/toolkits, which are much less powerful than Julia in terms of abstractions and of the ability to define new types. But this inconsistency could turn out to be annoying in some corner cases, and these NaN/Inf issues are subtle enough that we don't need to aggravate matters.

martinholters avatar martinholters commented on June 6, 2024 1

that could be handled by allowing for a custom default value

still [A -A] = 🔥

stevengj avatar stevengj commented on June 6, 2024 1

JuliaLang/julia#33821 added an undocumented isstored function.

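Assuming a Julia version that includes that change, the (unexported) function distinguishes structural zeros from explicitly stored ones:

```julia
using SparseArrays

# (2,2) holds an explicitly stored zero; (1,2) is a structural zero.
A = sparse([1, 2], [1, 2], [1.0, 0.0])

A[1, 2] == A[2, 2] == 0.0        # indexing cannot tell the two apart
SparseArrays.isstored(A, 1, 2)   # false: structural zero
SparseArrays.isstored(A, 2, 2)   # true: stored (explicit) zero
```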
andreasnoack avatar andreasnoack commented on June 6, 2024

I forgot to mention that I tested SuiteSparse, Scipy, and MATLAB. All three seem to follow option 1, since they give the same result as us for the calculation speye(2)*[NaN,1.0]. I gave up on PETSc.

Sparse arrays should be strictly a performance optimization, and not have vastly different behavior than their dense equivalents.

I don't agree, but that is what this issue will eventually decide.

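speye was removed before Julia 1.0; the same experiment can be reproduced today (a sketch assuming Julia 1.x with the SparseArrays and LinearAlgebra stdlibs):

```julia
using SparseArrays, LinearAlgebra

S = sparse(1.0I, 2, 2)   # modern spelling of speye(2)
D = Matrix(S)

S * [NaN, 1.0]   # [NaN, 1.0]: structural zeros are never multiplied by NaN
D * [NaN, 1.0]   # [NaN, NaN]: IEEE 754 behavior, 0.0 * NaN propagates
```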
tkelman avatar tkelman commented on June 6, 2024

Try speye(2) * Inf, a simpler case where I don't think you'll see unanimous behavior in other buggy libraries. Last I checked Matlab does make an effort at IEEE754 consistency in some operations.

Given that BLAS and LAPACK have bugs in some operations when inputs contain non-finite data, I don't think it's unreasonable to be careful about the corner cases where implementing a check and a branch is simple, and to think more carefully about the costs when the implementation would be more complicated, as for sparse-sparse multiplication or factorizations. Otherwise we're saying sparse-dense inconsistency is desirable because we're not willing to do a better job even when it wouldn't cost much in implementation complexity or runtime for the common cases?

tkelman avatar tkelman commented on June 6, 2024

I don't agree, but that is what this issue will eventually decide.

What are they then if not a representation of the same mathematical object? Should inserting a stored zero significantly change the behavior of a sparse array? What should sparse .^ sparse, sparse .^ 0, and sparse ./ sparse return? Are https://github.com/JuliaLang/julia/issues/21515#issuecomment-313868609 and JuliaLang/julia#5824 bugs that should be fixed, or acceptable discrepancies? It's a bit dishonest to say the element type is really behaving as Float64 if we're not following IEEE754 as best as we can.

Negative zero is an interesting case where IEEE754 doesn't satisfy the property that (a1 == a2 && b1 == b2) implies (op(a1, b1) == op(a2, b2)). I believe the only non-NaN violation of that is in the sign of an infinity result, but if anyone has any other examples please correct me there (edit: branch cuts of log, or angle, for various quadrants of complex zero would also differ). Consistency of op(sparse(A), sparse(B)) ≈ op(Array(A), Array(B)) may actually be simpler to achieve for infinity and NaN values than for negative zeros.

I don't think being careful about infinity and NaN would be especially high cost in anything BLAS-2-like or simpler.

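The negative-zero case in one snippet: equal inputs, unequal results.

```julia
# 0.0 and -0.0 compare equal, yet an operation can distinguish them,
# so a == b does not imply op(a) == op(b) under IEEE 754:
0.0 == -0.0   # true
1.0 / 0.0     # Inf
1.0 / -0.0    # -Inf
```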
mschauer avatar mschauer commented on June 6, 2024

In IEEE754, 0.0 and -0.0 do double duty as a true zero and as a (signed) very small number (e.g. 1/(-0.0) == -Inf follows the small-signed-number interpretation).
One can argue that structural zeros are not IEEE754 zeros, and that the elements of a sparse matrix are either IEEE754 floating-point numbers or structural zeros (so, strictly speaking, a union type).

From this perspective, speye(2)*[Inf,1.0] == [Inf, 1.0]
would be a consequence of structural zeros being unsigned true zeros, following to some extent the laws of Julia's Boolean zeros (== Any[1.0 false; false 1.0]*[Inf,1.0])

tkelman avatar tkelman commented on June 6, 2024

That seems like a bug in boolean multiplication, Inf * false should probably be NaN.

mschauer avatar mschauer commented on June 6, 2024

I doubt that + (i==j)*z should produce NaN for z = Inf and i != j (e.g. false is not a "zero or very close to zero").

tkelman avatar tkelman commented on June 6, 2024

yes it is

julia> convert(Float64, false)
0.0

julia> *(promote(Inf, false)...)
NaN

martinholters avatar martinholters commented on June 6, 2024

But there is something to the idea that structural zeros are different from IEEE 0.0, otherwise -speye(10000, 10000) would need to be completely full, storing all the -0.0s, which strikes me as rather impractical.

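This is observable in current SparseArrays (an observation about Julia 1.x behavior, worth verifying on your version): negation maps over stored values only, so structural zeros stay 0.0 rather than becoming stored -0.0s.

```julia
using SparseArrays

A = spzeros(2, 2)   # no stored entries at all
B = -A              # negation only touches stored values

nnz(B)              # 0: nothing was stored, so nothing was negated
B[1, 1]             # 0.0, not the -0.0 that IEEE negation would give
```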
StefanKarpinski avatar StefanKarpinski commented on June 6, 2024

That seems like a bug in boolean multiplication, Inf * false should probably be NaN.

No, this is intentional: bool*x does bool ? x : zero(x). Whether it should or not is another question, but it was necessary to make im = Complex(false, true) work. I believe the other way to make im work is to have a pure Imaginary type.

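The select-style definition is visible with im itself (behavior checked informally against Julia 1.x semantics; worth re-verifying):

```julia
# im is Complex(false, true); its Bool components multiply as a select
# (bool ? x : zero(x)), so no NaN arises from multiplying zero by Inf:
Inf * im             # 0.0 + Inf*im
Inf * (0.0 + 1.0im)  # NaN + Inf*im: the Float64 zero follows IEEE 754
```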
martinholters avatar martinholters commented on June 6, 2024

The downside with strictly following IEEE semantics of course is that -A would replace all structural zeros of A with stored -0.0s, which may be even more annoying than not following it.

nalimilan avatar nalimilan commented on June 6, 2024

Indeed, that would be terrible. Though that could be handled by allowing for a custom default value, which would also fix other issues (notably the x/0 problem related to NaN).

mschauer avatar mschauer commented on June 6, 2024

The Union{Bool, Float64} model describes how sparse matrix algebra behaves, but neither this nor IEEE forces any particular meaning on the indexing operation. As for

The problem is that A[i,j] doesn't tell you whether it acts as such a strong zero or not.

it should be enough for practical purposes to add something like isstored(A, i, j).
