High accuracy derivatives, estimated via numerical finite differences (formerly FDM.jl)

License: MIT License

FiniteDifferences.jl's Introduction

FiniteDifferences.jl: Finite Difference Methods

FiniteDifferences.jl estimates derivatives with finite differences.

See also the Python package FDM.

FiniteDiff.jl vs FiniteDifferences.jl

FiniteDiff.jl and FiniteDifferences.jl are similar libraries: both calculate approximate derivatives numerically. You should definitely use one or the other, rather than the legacy Calculus.jl finite differencing, or reimplementing it yourself. At some point in the future they might merge, or one might depend on the other. Right now here are the differences:

  • FiniteDifferences.jl supports basically any type, whereas FiniteDiff.jl supports only array-ish types
  • FiniteDifferences.jl supports higher order approximation and step size adaptation
  • FiniteDiff.jl supports caching and in-place computation
  • FiniteDiff.jl supports coloring vectors for efficient calculation of sparse Jacobians

FDM.jl

This package was formerly called FDM.jl. We recommend users of FDM.jl update to FiniteDifferences.jl.

Contents

  • Scalar Derivatives
  • Dealing with Singularities
  • Dealing with Numerical Noise
  • Richardson Extrapolation
  • A Finite Difference Method on a Custom Grid
  • Multivariate Derivatives

Scalar Derivatives

Compute the first derivative of sin with a 5th order central method:

julia> central_fdm(5, 1)(sin, 1) - cos(1)
-2.4313884239290928e-14

Finite difference methods are optimised to minimise allocations:

julia> m = central_fdm(5, 1);

julia> @benchmark $m(sin, 1)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     486.621 ns (0.00% GC)
  median time:      507.677 ns (0.00% GC)
  mean time:        539.806 ns (0.00% GC)
  maximum time:     1.352 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     195

Compute the second derivative of sin with a 5th order central method:

julia> central_fdm(5, 2)(sin, 1) - (-sin(1))
-8.767431225464861e-11

To obtain better accuracy, you can increase the order of the method:

julia> central_fdm(12, 2)(sin, 1) - (-sin(1))
5.240252676230739e-14

The functions forward_fdm and backward_fdm can be used to construct forward differences and backward differences, respectively.
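
For example (a quick sketch; output suppressed with trailing semicolons, exact error values will vary):

julia> forward_fdm(5, 1)(exp, 0) - exp(0);   # uses only points at x ≥ 0

julia> backward_fdm(5, 1)(exp, 0) - exp(0);  # uses only points at x ≤ 0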

Dealing with Singularities

The function log(x) is only defined for x > 0. If we try to use central_fdm to estimate the derivative of log near x = 0, then we run into DomainErrors, because central_fdm happens to evaluate log at some x < 0.

julia> central_fdm(5, 1)(log, 1e-3)
ERROR: DomainError with -0.02069596546590111

To deal with this situation, you have two options.

The first option is to use forward_fdm, which only evaluates log at inputs greater than or equal to x:

julia> forward_fdm(5, 1)(log, 1e-3) - 1000
-3.741856744454708e-7

This works fine, but the downside is that you're restricted to one-sided methods (forward_fdm), which tend to perform worse than central methods (central_fdm).

The second option is to limit the distance that the finite difference method is allowed to evaluate log away from x. Since x = 1e-3, a reasonable value for this limit is 9e-4:

julia> central_fdm(5, 1, max_range=9e-4)(log, 1e-3) - 1000
-4.027924660476856e-10

Another commonly encountered example is logdet, which is only defined for positive-definite matrices. Here you can use a forward method in combination with a positive-definite deviation from x:

julia> x = diagm([1.0, 2.0, 3.0]); v = Matrix(1.0I, 3, 3);

julia> forward_fdm(5, 1)(ε -> logdet(x .+ ε .* v), 0) - sum(1 ./ diag(x))
-4.222400207254395e-12

A range-limited central method is also possible:

julia> central_fdm(5, 1, max_range=9e-1)(ε -> logdet(x .+ ε .* v), 0) - sum(1 ./ diag(x))
-1.283417816466681e-13

Dealing with Numerical Noise

It could be the case that the function f you'd like to compute the derivative of suffers from numerical noise. For example, f(x) could be computed through some iterative procedure with some error tolerance ε. In such cases, finite difference estimates can fail catastrophically. To illustrate this, consider sin_noisy(x) = sin(x) * (1 + 1e-6 * randn()). Then
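
julia> sin_noisy(x) = sin(x) * (1 + 1e-6 * randn());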

julia> central_fdm(5, 1)(sin_noisy, 1) - cos(1)
-0.027016678790599657

which is terribly inaccurate. To deal with this, you can set the keyword argument factor, which specifies the level of numerical noise on the function evaluations relative to the machine epsilon. In this example, the relative error on the function evaluations is 2e-6 (1e-6 * randn() roughly produces a number in [-2e-6, 2e-6]) and the machine epsilon is eps(Float64) ≈ 2.22e-16, so factor = 2e-6 / 2e-16 = 1e10 should be appropriate:

julia> central_fdm(5, 1; factor=1e10)(sin_noisy, 1) - cos(1)
-1.9243663490486895e-6

As a rule of thumb, if you're dealing with numerical noise and Float64s, factor = 1e6 is not a bad first attempt.

Richardson Extrapolation

The finite difference methods defined in this package can be extrapolated using Richardson extrapolation. This can offer superior numerical accuracy: Richardson extrapolation attempts polynomial extrapolation of the finite difference estimate as a function of the step size until a convergence criterion is reached.

julia> extrapolate_fdm(central_fdm(2, 1), sin, 1)[1] - cos(1)
1.6653345369377348e-14

Similarly, you can estimate higher order derivatives:

julia> extrapolate_fdm(central_fdm(5, 4), sin, 1)[1] - sin(1)
-1.626274487942503e-5

In this case, the accuracy can be improved by making the contraction rate closer to 1:

julia> extrapolate_fdm(central_fdm(5, 4), sin, 1, contract=0.8)[1] - sin(1)
2.0725743343774639e-10

This performs similarly to a 10th order central method:

julia> central_fdm(10, 4)(sin, 1) - sin(1)
-1.0301381969668455e-10

If you really want, you can even extrapolate the 10th order central method, but that provides no further gains:

julia> extrapolate_fdm(central_fdm(10, 4), sin, 1, contract=0.8)[1] - sin(1)
5.673617131662922e-10

In this case, the central method can be pushed to a high order to obtain improved accuracy:

julia> central_fdm(20, 4)(sin, 1) - sin(1)
-5.2513549064769904e-14

A Finite Difference Method on a Custom Grid

julia> method = FiniteDifferenceMethod([-2, 0, 5], 1)
FiniteDifferenceMethod:
  order of method:       3
  order of derivative:   1
  grid:                  [-2, 0, 5]
  coefficients:          [-0.35714285714285715, 0.3, 0.05714285714285714]

julia> method(sin, 1) - cos(1)
-3.701483564100272e-13
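
One-sided or otherwise asymmetric grids can be constructed the same way; for instance (a sketch, output suppressed):

julia> forward_method = FiniteDifferenceMethod([0, 1, 2, 3], 1);

julia> forward_method(sin, 1) - cos(1);   # again a small error, using only points at or above x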

Multivariate Derivatives

Consider a quadratic function:

julia> a = randn(3, 3); a = a * a'
3×3 Matrix{Float64}:
  4.31995    -2.80614   0.0829128
 -2.80614     3.91982   0.764388
  0.0829128   0.764388  1.18055

julia> f(x) = 0.5 * x' * a * x

Compute the gradient:

julia> x = randn(3)
3-element Vector{Float64}:
 -0.18563161988700727
 -0.4659836395595666
  2.304584409826511

julia> grad(central_fdm(5, 1), f, x)[1] - a * x
3-element Vector{Float64}:
  4.1744385725905886e-14
 -6.611378111642807e-14
 -8.615330671091215e-14

Compute the Jacobian:

julia> jacobian(central_fdm(5, 1), f, x)[1] - (a * x)'
1×3 Matrix{Float64}:
 4.17444e-14  -6.61138e-14  -8.61533e-14

The Jacobian can also be computed for non-scalar functions:

julia> a = randn(3, 3)
3×3 Matrix{Float64}:
  0.844846   1.04772    1.0173
 -0.867721   0.154146  -0.938077
  1.34078   -0.630105  -1.13287

julia> f(x) = a * x

julia> jacobian(central_fdm(5, 1), f, x)[1] - a
3×3 Matrix{Float64}:
  2.91989e-14   1.77636e-15   4.996e-14
 -5.55112e-15  -7.63278e-15   2.4758e-14
  4.66294e-15  -2.05391e-14  -1.04361e-14

To compute Jacobian-vector products, use jvp and j′vp:

julia> v = randn(3)
3-element Array{Float64,1}:
 -1.290782164377614
 -0.37701592844250903
 -1.4288108966380777

julia> jvp(central_fdm(5, 1), f, (x, v)) - a * v
3-element Vector{Float64}:
 -7.993605777301127e-15
 -8.881784197001252e-16
 -3.22519788653608e-14

julia> j′vp(central_fdm(5, 1), f, v, x)[1] - a'x
3-element Vector{Float64}:
 -2.1316282072803006e-14
  2.4646951146678475e-14
  6.661338147750939e-15

FiniteDifferences.jl's People

Contributors

alexfreudenberg, alexrobson, ararslan, dependabot[bot], devmotion, dkarrasch, iamed2, jecs, juliatagbot, kolaru, mzgubic, nickrobinson251, omus, oxinabox, piever, ranocha, rofinn, roger-luo, sethaxen, simeonschaub, spaette, tpapp, viralbshah, wesselb, willtebbutt


FiniteDifferences.jl's Issues

Make History object not part of Method object

History doesn't need to be a mutable field; it can just be an extra immutable struct that is returned.

And the adapt field needs to be on the main object.

I think this would be a cleaner API, and it would likely make the compiler happier.
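
A rough sketch of the shape this could take (field names are illustrative, not the package's actual internals):

struct History
    eps::Float64
    step::Float64
    acc::Float64
end

# The method object keeps `adapt` as its own field, and each call constructs and returns
# a fresh History instead of mutating one stored on the method.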

Roadmap to a v1.0.0

This is a lovely little package. So useful!

In recent months it's moved into the JuliaDiff org, been renamed for discoverability (by popular demand!), and become the cornerstone for testing other JuliaDiff packages (not least ChainRules.jl).

Perhaps it is time to think about a v1.0?

From https://semver.org/

Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

Version 1.0.0 defines the public API. The way in which the version number is incremented after this release is dependent on this public API and how it changes.

Roughly speaking, a major release is saying that we are happy enough with the current API to keep supporting it and don't intend to make any breaking changes to it in, say, the next year or so. In return, users can get all new features "for free" without having to keep opting in to each new 0.y release.

  • some open issues specifically concern the API, e.g. #35,
  • some feature requests, e.g. #1, #2, are significant enough that they may make us rethink some APIs?
  • others I do not know about, e.g. #24

Any issues that fit the bill can be tagged with the v1 milestone.

This is an issue so we can discuss these things and make sure we get to v1.

complex derivatives

There are two different kinds of derivatives for a complex-valued function: one is df/dz, the other is df/dconj(z). It'd be nice to support both.
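
Both can be estimated from two real-direction derivatives via the Wirtinger relations; a minimal sketch (not part of the package's API):

using FiniteDifferences

# ∂f/∂z = (∂f/∂x - i ∂f/∂y) / 2 and ∂f/∂conj(z) = (∂f/∂x + i ∂f/∂y) / 2
function wirtinger(f, z; m = central_fdm(5, 1))
    d(g) = m(g, 0.0)
    # differentiate real and imaginary parts separately so only real-valued scalars hit the method
    dfdx = d(h -> real(f(z + h))) + im * d(h -> imag(f(z + h)))            # ∂f/∂x
    dfdy = d(h -> real(f(z + im * h))) + im * d(h -> imag(f(z + im * h)))  # ∂f/∂y
    return (dfdx - im * dfdy) / 2, (dfdx + im * dfdy) / 2                  # ∂f/∂z, ∂f/∂conj(z)
end

# e.g. wirtinger(abs2, 1.0 + 2.0im) should come out close to (1 - 2im, 1 + 2im)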

j′vp errors on empty array primal input

j′vp errors when the primal input is an empty array, while jvp works fine. Here's an MWE:

julia> foo(x) = isempty(x) ? zero(eltype(x)) : first(x)
foo (generic function with 1 method)

julia> foo(Float64[])
0.0

julia> jvp(central_fdm(5, 1), foo, (Float64[], Float64[]))
0.0

julia> j′vp(central_fdm(5, 1), foo, 1.0, Float64[])
ERROR: DimensionMismatch("")
Stacktrace:
 [1] *(::Transpose{Any,Array{Any,1}}, ::Array{Float64,1}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/LinearAlgebra/src/adjtrans.jl:249
 [2] _j′vp(::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}, ::Function, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/saxen/.julia/packages/FiniteDifferences/0ACfA/src/grad.jl:79
 [3] j′vp(::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}, ::Function, ::Float64, ::Array{Float64,1}) at /Users/saxen/.julia/packages/FiniteDifferences/0ACfA/src/grad.jl:73
 [4] top-level scope at REPL[9]:1

Enable CI on 32-bit builds

In #119, we temporarily turned on 32-bit builds, which all failed. It would be good to run CI on 32-bit builds to work out if this is an issue with tolerances in the test suite or in the source itself, since it would be handy to use this package to check AD rules on 32-bit machines as well.

How do I choose method order to avoid NaN?

I am running into a situation where choosing specific orders for the Method is causing NaNs. Is this expected behaviour (e.g. resulting from some unfortunate math) or is this a bug?

In this example I am trying to take nth order directional derivatives. While silly, I've evaluated the 0th order fdm, and on alternating method orders I'm getting NaNs:

@test sin(2.) ≈ central_fdm(1,0)(ϵ -> sin(2. +ϵ*1),0.) # Nan
@test sin(2.) ≈ central_fdm(2,0)(ϵ -> sin(2. +ϵ*1),0.) # true
@test sin(2.) ≈ central_fdm(3,0)(ϵ -> sin(2. +ϵ*1),0.) # Nan
@test sin(2.) ≈ central_fdm(4,0)(ϵ -> sin(2. +ϵ*1),0.) # true

@test cos(2.) ≈ central_fdm(2,1)(ϵ -> sin(2. +ϵ*1),0.) # true
@test cos(2.) ≈ central_fdm(3,1)(ϵ -> sin(2. +ϵ*1),0.) # true
@test cos(2.) ≈ central_fdm(4,1)(ϵ -> sin(2. +ϵ*1),0.) # true
@test cos(2.) ≈ central_fdm(5,1)(ϵ -> sin(2. +ϵ*1),0.) # true

@test -sin(2.) ≈ central_fdm(3,2)(ϵ -> sin(2. +ϵ*1),0.) # false but ok
@test -sin(2.) ≈ central_fdm(4,2)(ϵ -> sin(2. +ϵ*1),0.) # true
@test -sin(2.) ≈ central_fdm(5,2)(ϵ -> sin(2. +ϵ*1),0.) # true
@test -sin(2.) ≈ central_fdm(6,2)(ϵ -> sin(2. +ϵ*1),0.) # true

Note that I'm only observing this on the 0th order fdm.

Include `FiniteDifferenceMethod` types in docs

One part of #22

AFAIK these are part of the public API

  • My preferred solution is to just export them (currently we document all and only exports)

  • Alternatively, in case e.g. Forward and Backward are too generic as names to export, we can just add them in their own @doc block

Better debugging info for when things break

Relates to #80 -- FiniteDifferences cannot generally handle functions for which either the size or the type of the output depends on the value of the input. Given that these are fundamental limitations, rather than bugs, it makes sense to provide tooling to help users understand when these issues are encountered.

I propose to add a mode (somehow) that lets the user get easy access to the inputs at which the function will be evaluated by FiniteDifferences. This should be sufficient to make it easy for the user to figure out what's going on when things break.

It would be nice to add helpful warnings when standard things go wrong, e.g. your function's output type is different for different evaluations of the function, which can easily happen in slightly unexpected ways, such as in #80.
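
As a stopgap, even a simple wrapper that records every input would go some way (purely illustrative, not a proposed API):

function record_inputs(f)
    seen = Any[]
    wrapped(x) = (push!(seen, deepcopy(x)); f(x))
    return wrapped, seen
end

# g, seen = record_inputs(f)
# jacobian(central_fdm(5, 1), g, x)
# `seen` then holds every input at which FiniteDifferences evaluated f.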

gradient of non-smooth functions

Hi,
we are using FiniteDifferences in NNlib.jl to validate the automatic derivatives computed through Zygote for some operators, like ReLU and maxpool, that contain a few singularities.
Since we test on a few random inputs, if FD evaluates the function in a small neighborhood of the input, the FD estimate should with high probability be correct and unaffected by the singularities. What we see instead is that sometimes the estimate is off.
I couldn't figure out from the docs how to select a small grid spacing or something along those lines; I tried with

fdm = FiniteDifferenceMethod(1e-3 .* [-2, -1, 0, 1, 2], 1) 

but with no observable improvement (even worse, I get some Cholesky factorization errors, if I remember correctly).
Any help with this would be appreciated.

Best,
Carlo

Comparing equivalent functions with unequal finite difference results?

I am a bit confused by a difference between two functions I've written to be equivalent. I am trying to make a function more general by summing over getindex operations instead of hard-coding the number of terms. I produced what I thought are equivalent functions, and when they are evaluated they agree. However, when I finite-difference them the results are slightly different. Is this a numerical error?

method = central_fdm(10,3)
expansion1 = ϵ -> f(x + ϵ.^1 * v[1]/factorial(1) + ϵ.^2 * v[2] / factorial(2) + ϵ.^3 * v[3] / factorial(3))
expansion2 = ϵ -> f(x + sum(ϵ.^i * v[i] / factorial(i) for i in 1:3))
f=sin
x= 2.
v = (1.,1.,1.)

method(expansion1,0.)
method(expansion2,0.)

Which produces:

julia> method(expansion1,0.)
-2.727892280564907

julia> method(expansion2,0.)
-2.7278922805817447

Where is this difference coming from?

Accuracy with Float32s is bad

@wesselb and I had a discussion about this a while ago, and I completely forgot to raise an issue about it. While FiniteDifferences' accuracy for functions of Float64s is as you would expect, other types are a different matter. Of particular concern is Float32, as it's almost certainly the next most used type.

IIRC the issue is one of defaults. Specifically this one: float always yields a Float64, and eps(::Float64) is very different from eps(Float32). This changes the step-size calculation here.
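
To make the size of that difference concrete (plain REPL check):

julia> eps(Float64), eps(Float32)
(2.220446049250313e-16, 1.1920929f-7)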

If someone has time to look into this it would be greatly appreciated. I'm not entirely sure what the correct solution is, but a good approach to solving it might be

  1. recap how in principle to set the eps parameter that is causing the problems (perhaps @wesselb could help here?)
  2. figure out how to set eps based on this and make the appropriate changes
  3. test that it works

Change package name

The name FDM.jl makes the package un-discoverable.
Perhaps it could be changed to FiniteDifferences.jl or similar?

Accuracy is a bit brittle

The README of this package convinced me to use FiniteDifferences instead of Calculus in Measurements for cases where I need a numerical derivative (for functions without a known derivative, or where automatic differentiation isn't possible, e.g. calling into C libraries). However, I found that the accuracy of FiniteDifferences is quite brittle (it sometimes depends strongly on the number of points, with big even/odd jumps, see the example below), and I need to use a largish number of grid points to achieve the same accuracy I get with Calculus, which in comparison is also much faster (and makes me question the change of backend). For example, the derivative of cosc at 0 is -(pi^2)/3; with Julia v1.5.3 I get:

julia> using Calculus, FiniteDifferences, BenchmarkTools

julia> -(pi ^ 2) / 3
-3.289868133696453

julia> (p -> FiniteDifferences.central_fdm(p, 1)(cosc, 0)).(2:10)
9-element Array{Float64,1}:
 -3.2575126788312536 # 2
  0.0                # 3
 -3.2894144933275746 # 4
 -3.9730287078618036 # 5 
 -3.289860720421136  # 6
 -3.2898376423854927 # 7
 -3.2898681297554884 # 8
 -3.289868469475128  # 9
 -3.2898681348743732 # 10

julia> Calculus.derivative(cosc, 0) 
-3.2898678494407765

To achieve an accuracy of the same order as Calculus I need 8 grid points, and this is the performance comparison:

julia> @btime FiniteDifferences.central_fdm(8, 1)($cosc, 0)
  2.340 μs (25 allocations: 992 bytes)
-3.2898681297554884

julia> @btime Calculus.derivative($cosc, 0)
  60.591 ns (0 allocations: 0 bytes)
-3.2898678494407765

Is this behaviour really expected?

Stackoverflow with QR

Using j′vp on a QR factorization results in a stack overflow, as in the following example:

using FiniteDifferences
using LinearAlgebra

X = randn(5, 5)
j′vp(central_fdm(5, 1), X->qr(X).Q, randn(5, 5), X)

I believe it is due to the fact that vec(Q) returns a Base.ReshapedArray (for Q, R = qr(X)).

This causes problems while testing JuliaDiff/ChainRules.jl#306.

Travis CI Documenter failure

This has been occurring a few times recently: the documentation fails to build,

julia --project=docs/ docs/make.jl
[ Info: SetupBuildDirectory: setting up build directory.
[ Info: Doctest: running doctests.
[ Info: ExpandTemplates: expanding markdown templates.
[ Info: CrossReferences: building cross-references.
[ Info: CheckDocument: running document checks.
┌ Warning: `curl -sI --proto =http,https,ftp,ftps 'https://codecov.io/github/JuliaDiff/FiniteDifferences.jl?branch=master' --max-time 10 -o /dev/null --write-out '%{http_code} %{url_effective} %{redirect_url}'` failed:
│   exception =
│    failed process: Process(`curl -sI --proto =http,https,ftp,ftps 'https://codecov.io/github/JuliaDiff/FiniteDifferences.jl?branch=master' --max-time 10 -o /dev/null --write-out '%{http_code} %{url_effective} %{redirect_url}'`, ProcessExited(28)) [28]
│    
└ @ Documenter.DocChecks ~/.julia/packages/Documenter/IGqbY/src/DocChecks.jl:205
[ Info: Populate: populating indices.
ERROR: LoadError: `makedocs` encountered an error. Terminating build
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] runner(::Type{Documenter.Builder.RenderDocument}, ::Documenter.Documents.Document) at /home/travis/.julia/packages/Documenter/IGqbY/src/Builder.jl:242
 [3] dispatch(::Type{Documenter.Builder.DocumentPipeline}, ::Documenter.Documents.Document) at /home/travis/.julia/packages/Documenter/IGqbY/src/Utilities/Selectors.jl:167
 [4] #3 at /home/travis/.julia/packages/Documenter/IGqbY/src/Documenter.jl:319 [inlined]
 [5] cd(::Documenter.var"#3#8"{Documenter.Documents.Document}, ::String) at ./file.jl:104
 [6] #makedocs#1 at /home/travis/.julia/packages/Documenter/IGqbY/src/Documenter.jl:318 [inlined]
 [7] top-level scope at /home/travis/build/JuliaDiff/FiniteDifferences.jl/docs/make.jl:3
 [8] include(::Module, ::String) at ./Base.jl:377
 [9] exec_options(::Base.JLOptions) at ./client.jl:288
 [10] _start() at ./client.jl:484
in expression starting at /home/travis/build/JuliaDiff/FiniteDifferences.jl/docs/make.jl:3
The command "julia --project=docs/ docs/make.jl" exited with 1.
Skipping the after_success step, as specified in the configuration.
Done. Your build exited with 1.

I don't really have any experience with Documenter.jl; is this an actual issue that needs to be looked into?

Automatic tangent / cotangent construction

It would be nice to automatically generate appropriate tangents / cotangents for vectors when writing tests. We could achieve this compositionally -- define how to generate for Reals, Complexs, Arrays etc, and build from there. It would probably be as simple as a pair of functions along the lines of

rand_tangent(rng, x) # Generate an element of the tangent space of x
rand_cotangent(rng, x) # Generate an element of the cotangent space of x
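
A first compositional sketch, covering only a few base cases (names and dispatch are purely illustrative):

using Random

rand_tangent(rng::AbstractRNG, x::Real) = randn(rng)
rand_tangent(rng::AbstractRNG, x::Complex) = randn(rng, ComplexF64)
rand_tangent(rng::AbstractRNG, x::AbstractArray) = map(xi -> rand_tangent(rng, xi), x)

# Cotangents have the same structure for these simple cases:
rand_cotangent(rng::AbstractRNG, x) = rand_tangent(rng, x)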

Meaning of order

The order of a method currently refers to the number of grid points. This is different from what is conventional in the literature, where order refers to how the truncation error scales with the step size. It probably makes sense to adhere to that convention.
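
For reference, a quick empirical check of the conventional meaning (naive fixed-step stencil, not the package's adaptive methods):

# A p-point central stencil for the first derivative has truncation error O(h^(p-1)), so this
# package's "order" (number of grid points) is one larger than the conventional order.
fd5(f, x, h) = (f(x - 2h) - 8 * f(x - h) + 8 * f(x + h) - f(x + 2h)) / (12h)  # 5-point stencil
errs = [abs(fd5(sin, 1.0, h) - cos(1.0)) for h in (1e-1, 1e-2)]
# errs[1] / errs[2] comes out near 1e4, i.e. fourth-order convergence from a 5-point method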

to_vec gives complex vectors for most complex structured matrices

If I understand correctly, to_vec should produce a real vector and a function for reconstructing the structured matrix from it. Currently this doesn't work correctly for complex Symmetric, Diagonal, Transpose, and Adjoint matrices, which return complex vectors. e.g.:

julia> using FiniteDifferences, LinearAlgebra

julia> eltype_and_size(x) = (eltype(x), size(x));

julia> x = randn(ComplexF64, 3, 3);

julia> eltype_and_size(first(to_vec(x))) # this is fine
(Float64, (18,))

julia> eltype_and_size(first(to_vec(Diagonal(x))))
(Complex{Float64}, (9,))

julia> eltype_and_size(first(to_vec(Symmetric(x))))
(Complex{Float64}, (9,))

julia> eltype_and_size(first(to_vec(transpose(x))))
(Complex{Float64}, (9,))

julia> eltype_and_size(first(to_vec(adjoint(x))))
(Complex{Float64}, (9,))

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Imaginary part of complex gradient is dropped

In a number of cases at least, it seems that the imaginary part of the gradient for complex inputs and real scalar outputs is dropped. A few examples:

julia> using FiniteDifferences

julia> x = randn(ComplexF64, 3)
3-element Array{Complex{Float64},1}:
   0.5296680539510132 + 0.508966079006392im
  0.45109029662066524 + 1.4033240412225767im
 -0.49491015411581324 + 0.15307916651762313im

julia> FiniteDifferences.grad(central_fdm(5, 1), x -> sum(abs2, x), x)[1] # expected: 2x
3-element Array{Float64,1}:
  1.0593361079020813
  0.9021805932413423
 -0.9898203082316152

julia> 2x
3-element Array{Complex{Float64},1}:
  1.0593361079020265 + 1.017932158012784im
  0.9021805932413305 + 2.8066480824451534im
 -0.9898203082316265 + 0.30615833303524626im

julia> FiniteDifferences.grad(central_fdm(5, 1), abs2, 1.0+2.0im)[1] # expected: 2.0 + 4.0im
1.9999999999999962

Also, am I supposed to see this error?

FiniteDifferences.grad(central_fdm(5, 1), abs2, 1+2im)[1] # expected: 2.0 + 4.0im
ERROR: InexactError: Int64(0.9879814839007404)

Positive Semidefinite FDM

central_fdm causes problems when the input must always be positive semidefinite, e.g. the real-valued logdet function. It causes the gradcheck of logdet in Zygote to fail occasionally. Any idea how to preserve this property while doing FDM?

Update: I worked around it by doing

gradcheck(x->logdet(x * x'), rand(4, 4))

Huge detriment in computational performance

Hi all!

First of all, I would like to thank you for your package, it's really nice and very helpful. Now let me explain the issue I am facing.

I am seeing a huge decrease in performance since I moved from version v0.9.2 to v0.9.3, and it still persists in the current version v0.9.5.

I prepared a minimal working example for testing; it may not look so minimal, but it is 😄:

using Parameters
using FiniteDifferences
using Dierckx: Spline1D
using DifferentialEquations

params = @with_kw((
  LY = Spline1D(
    [0.08, 0.1666667, 0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0, 20.0, 30.0],
    [
      0.0242,
      0.0243,
      0.0243,
      0.0245,
      0.0236,
      0.0226,
      0.0223,
      0.0226,
      0.0237,
      0.0247,
      0.027,
      0.0289,
    ],
    k = 1,
    bc = "nearest",
  ),

  P1 = t -> 1 / (1 + LY(t) * t),
  P2 = t -> 1 / ((1 + LY(t))^t),
  P  = t -> ifelse(t < 1.0, P1(t), P2(t)),

  f = t -> FiniteDifferences.forward_fdm(5, 1)(s -> -log(P(s)), t),
  f′ = t -> FiniteDifferences.forward_fdm(5, 2)(s -> -log(P(s)), t),

  # why do these definitions make ϑ type-unstable when they are type-stable?
  # f′ = t -> FiniteDifferences.forward_fdm(5, 1)(f, t),
  # f′ = t -> FiniteDifferences.forward_fdm(5, 1)(s -> f(s), t),

  ϰ = 0.4363,
  σ = 0.1491,
  ϑ = t -> 1 / ϰ * f′(t) + f(t) + (σ / ϰ)^2 / 2 * (1 - exp(-2 * ϰ * t)),
))

function riccati(du, u, p, t)

  @unpack (LY ,P1, P2, P, f, f′, ϰ, σ, ϑ) = p

  cattle = u[1]
  snake = u[2]
  locust = f(t)
  lion = 1
  mouse = ϰ
  loris = ϑ(t)
  aardvark = σ
  deer = 1
  coati = 0
  snail = FiniteDifferences.forward_fdm(5, 1)(f, t)
  crocodile = 0

  du[1] = (((mouse * loris - mouse * locust) - snail) * snake) / lion - ((deer + coati * locust) / 2) * ((aardvark * snake) / lion) ^ 2
  du[2] = ((mouse + crocodile / lion) * snake + (coati / (2lion)) * (aardvark * snake) ^ 2) - lion

  return nothing
end

let
  p = params()

  tspan = (0.0, 1.0)

  Δt = (tspan[2] - tspan[1])

  dt = 1.0 / 252.0

  nodes = Int(ceil(Δt / dt) + 1)

  t = T = [tspan[1] + (i - 1) * dt for i = 1:nodes]

  prob = ODEProblem(riccati, zeros(2), (0.0, 1.0), p)

  prob_func = (prob, i, repeat) -> begin
    ODEProblem(prob.f, prob.u0, (T[i + 1], t[1]), prob.p)
  end

  odes = EnsembleProblem(prob, prob_func = prob_func)

  @time sol = DifferentialEquations.solve(
    odes,
    Tsit5(),
    trajectories = nodes - 1,
    saveat = -dt
  )

end

The important thing is that I have to evaluate both f and f′ many times in this calculation, which means that I have to call FiniteDifferences.forward_fdm many times as well. I also tested with other algorithms, such as central_fdm and backward_fdm, and observed the same drop in performance.

Let me show you my @time results for each version (multiple runs for each case; the first one includes compilation overhead):

v0.9.2:

15.069737 seconds (41.47 M allocations: 2.262 GiB, 6.35% gc time)
1.536485 seconds (10.84 M allocations: 768.160 MiB, 13.14% gc time)
1.476522 seconds (10.65 M allocations: 759.003 MiB, 11.87% gc time)
1.375865 seconds (10.65 M allocations: 759.003 MiB, 12.61% gc time)

v0.9.3:

34.916895 seconds (60.25 M allocations: 3.414 GiB, 4.11% gc time)
23.306021 seconds (28.84 M allocations: 1.860 GiB, 3.25% gc time)
22.728664 seconds (28.65 M allocations: 1.851 GiB, 3.22% gc time)
21.738619 seconds (28.65 M allocations: 1.851 GiB, 3.35% gc time)

v0.9.5 (current):

35.646730 seconds (60.24 M allocations: 3.413 GiB, 4.04% gc time)
26.331707 seconds (45.91 M allocations: 2.749 GiB, 3.83% gc time)
22.284941 seconds (28.65 M allocations: 1.851 GiB, 3.16% gc time)
21.381123 seconds (28.65 M allocations: 1.851 GiB, 3.05% gc time)

As you can see, this reduces performance by a factor of ≈ 20. It is important to note that all versions lead to the correct answer.

Could you please help me find a way around this issue?

Thank you very much!

Regards,

Ramiro.

to_vec for non-real vectors is type-unstable

There are two separate problems that arise from to_vec(x::AbstractVector)'s use of the splatted form of vcat:

julia> using FiniteDifferences, Test

julia> @inferred to_vec([1.0+2.0im]) # splatting to vcat is not type-stable
ERROR: return type Tuple{Array{Float64,1},FiniteDifferences.var"#Vector_from_vec#36"{Array{Complex{Float64},1},Array{Array{Float64,1},1},Array{FiniteDifferences.var"#Complex_from_vec#34",1}}} does not match inferred return type Tuple{Union{Array{Any,1}, Array{Float64,1}},FiniteDifferences.var"#Vector_from_vec#36"{Array{Complex{Float64},1},Array{Array{Float64,1},1},Array{FiniteDifferences.var"#Complex_from_vec#34",1}}}
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] top-level scope at REPL[5]:1

julia> to_vec(ComplexF64[])[1] # splatting hits `vcat() => Any`
Any[]

For the latter case, see also JuliaLang/julia#38532

jvp returns incorrect tangent for 2-arg normalize

For y = normalize(x[, p]), if p is not provided, jvp produces the correct tangent vector (with the same shape as y), but if p is provided, it returns a scalar.

julia> x, Δx = [1.0, 2.0, 3.0], [0.3, 0.2, 0.1];

julia> p, Δp = 2.0, 0.5;

julia> normalize(x)
3-element Array{Float64,1}:
 0.2672612419124244
 0.5345224838248488
 0.8017837257372732

julia> jvp(central_fdm(5, 1), normalize, (x, Δx)) # 1-arg looks good
3-element Array{Float64,1}:
  0.061088283865706784
  0.015272070966444149
 -0.0305441419328158

julia> normalize(x, p)
3-element Array{Float64,1}:
 0.2672612419124244
 0.5345224838248488
 0.8017837257372732

julia> jvp(central_fdm(5, 1), normalize, (x, Δx), (p, Δp)) # um, what?
0.08883239652055

Breaking changes in v0.9.3

v0.9.3 introduced some breaking changes

Setup:

julia> using LinearAlgebra, FiniteDifferences, StaticArrays
julia> x = Float64[1, 2, 3]
julia> fdm = central_fdm(5, 1)
julia> x2 = SVector{3}(x);

Jacobian of the identity function no longer produces the identity matrix.

julia> jacobian(fdm, identity, x)[1] # expected identity matrix

on v0.1.2:

3×3 Array{Float64,2}:
 1.0          6.50515e-15  6.50515e-15
 1.30103e-14  1.0          1.30103e-14
 0.0          0.0          1.0

on v0.1.3:

3×3 Array{Float64,2}:
 4.16334e-16  4.16334e-16  4.16334e-16
 8.32667e-16  8.32667e-16  8.32667e-16
 0.0          0.0          0.0

StaticArrays no longer supported

julia> jacobian(fdm, identity, x2)[1]

on v0.1.2:

 3×3 Array{Float64,2}:
  1.0          6.50515e-15  6.50515e-15
  1.30103e-14  1.0          1.30103e-14
  0.0          0.0          1.0

on v0.1.3:

ERROR: MethodError: no method matching to_vec(::SArray{Tuple{3},Float64,1,3})
Closest candidates are:
  to_vec(::Number) at /Users/saxen/.julia/packages/FiniteDifferences/kqb1h/src/to_vec.jl:8
  to_vec(::Array{#s121,1} where #s121<:Number) at /Users/saxen/.julia/packages/FiniteDifferences/kqb1h/src/to_vec.jl:15
  to_vec(::Array{T,1} where T) at /Users/saxen/.julia/packages/FiniteDifferences/kqb1h/src/to_vec.jl:17
  ...
Stacktrace:
 [1] jacobian(::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}, ::Function, ::SArray{Tuple{3},Float64,1,3}; len::Nothing) at /Users/saxen/.julia/packages/FiniteDifferences/kqb1h/src/grad.jl:28
 [2] jacobian(::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}, ::Function, ::SArray{Tuple{3},Float64,1,3}) at /Users/saxen/.julia/packages/FiniteDifferences/kqb1h/src/grad.jl:28
 [3] top-level scope at REPL[34]:1

Conjugation convention changed from v0.9 to v0.10

I noticed when adding FiniteDifferences v0.10 compatibility to ChainRules (JuliaDiff/ChainRules.jl#198) that many of the * tests suddenly failed.

It seems that FiniteDifferences changed its conjugation convention in the change from v0.9.8 to v0.10.0. An example:

Setup:

julia> using FiniteDifferences

julia> x, y, Δz = 1.0 + 2.0im, 3.0 + 4.0im, 5.0 + 6.0im;

On v0.9.8:

julia> j′vp(central_fdm(5, 1), *, Δz, x, y)
(-8.999999999999446 + 38.000000000000284im, -6.999999999999783 + 15.99999999999862im)

julia> conj.(j′vp(central_fdm(5, 1), *, conj(Δz), x, y))
(39.00000000000018 - 1.9999999999994031im, 16.999999999998604 - 4.000000000000034im)

On v0.10.0 (and v0.10.2)

julia> j′vp(central_fdm(5, 1), *, Δz, x, y)
(38.99999999999998 - 2.0000000000012363im, 16.9999999999999 - 3.999999999999728im)

julia> conj.(j′vp(central_fdm(5, 1), *, conj(Δz), x, y))
(-9.000000000000066 + 37.999999999999986im, -7.00000000000015 + 16.00000000000108im)

So it seems j′vp switched conjugate conventions in the jump to v0.10.0. I didn't see this documented anywhere. Was this an intended change?

Less accurate results on Julia 1.3.1 vs Julia 1.1.1

If I run

FiniteDifferences.fdm(FiniteDifferences.central_fdm(9, 5), y->exp(y), 1.0, adapt=4)

on Julia 1.1.1 I get:

2.718281560120131

whereas on Julia 1.3.1 I get

2.7182639618941993

which is much less accurate (error of 1.8e-5 vs 2.7e-7).

This broke some tests of ours. Does anyone know why the accuracy is reduced?

Both on v0.9.2.

Documentation

Our docs appear to be a bit out of date. In particular the jvp and j′vp stuff isn't well documented :(

Bug when using Matrix as differential for Adjoint

#131 introduced a bug when differentials of Adjoint are regular matrices. Minimal example:

julia> using FiniteDifferences, LinearAlgebra

julia> n, T = 3, ComplexF64;

julia> x = randn(T, n);

julia> y = adjoint(x);

julia> ȳ_adjoint = adjoint(randn(T, n))
1×3 adjoint(::Vector{ComplexF64}) with eltype ComplexF64:
 -0.0351965+0.649028im  0.868707+0.237612im  -1.45535-0.149933im

julia> ȳ_mat = collect(ȳ_adjoint);

julia> ȳ_composite = FiniteDifferences.Composite{typeof(y)}(parent=parent(ȳ_adjoint));

julia> ȳ_adjoint' # expected
3-element Vector{ComplexF64}:
 -0.03519650654436998 - 0.6490275763507989im
   0.8687071179992576 - 0.23761165442658064im
  -1.4553505347699434 + 0.14993263689809244im

julia> j′vp(central_fdm(5, 1), adjoint, ȳ_adjoint, x)[1] # fine
3-element Vector{ComplexF64}:
 -0.035196506544367144 - 0.6490275763508034im
    0.8687071179992422 - 0.2376116544265735im
   -1.4553505347699498 + 0.14993263689809289im

julia> j′vp(central_fdm(5, 1), adjoint, ȳ_composite, x)[1] # fine
3-element Vector{ComplexF64}:
 -0.035196506544367144 - 0.6490275763508034im
    0.8687071179992422 - 0.2376116544265735im
   -1.4553505347699498 + 0.14993263689809289im

julia> j′vp(central_fdm(5, 1), adjoint, ȳ_mat, x)[1] # conjugated
3-element Vector{ComplexF64}:
 -0.03519650654436785 + 0.6490275763508094im
    0.868707117999241 + 0.23761165442657906im
  -1.4553505347699507 - 0.1499326368980859im

I've stared at the PR but can't figure out where it went wrong. Or maybe the issue is that a regular matrix is not an acceptable cotangent for an Adjoint. This was caught by the ChainRules test suite.
@willtebbutt I also noticed that PR disabled testing of to_vec for many complex types. Was that intentional?

Extending / refactoring `to_vec` API

The to_vec API has served us pretty well for a long time, but there are certain things that it can't handle.

In particular, when applying to_vec to differentials it is necessary to output something of the same size as the primal to which it will be added when performing finite differencing. For example, what should to_vec(::Zero) return? Since Zero doesn't carry information around with it that tells you what primal it is going to be added to, it's not possible to know how to convert it into a vector of an appropriate size.

There are other cases where this bites. For example, if the primal is a Vector and the differential a Fill. In this case, naively applying to_vec to both yields vectors of different lengths since to_vec(::Fill) returns a length-1 vector as only a single number is required to uniquely specify a Fill, while to_vec(::Vector) is essentially the identity operation.

I propose to introduce a new function as follows:

function differential_to_vec(primal, differential)
    function from_vec(differential_vec)
        return thing_like_differential
    end
    return vector_form_differential, from_vec 
end

This function literally just ties together a primal and a differential, so that we have enough information to to_vec the differential. That is all.
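
For the Fill example above, a concrete method might look something like this (hypothetical; names and dispatch are illustrative only):

using FillArrays

function differential_to_vec(primal::Vector{<:Real}, differential::Fill{<:Real,1})
    v = fill(first(differential), length(primal))  # expand the Fill to the primal's length
    from_vec(v_new) = v_new                        # hand back a plain Vector as the differential
    return v, from_vec
end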

to_vec doesn't preserve equality

Firstly, I'm very fond of to_vec -- it's served us well over the last couple of years. We know it has some shortcomings though, that we will need to address at some point.

Presently we have implementations of to_vec like this instead of implementations that look like this.

IIRC the reason for this is that if you define things in the latter fashion then you'll find if you do something like

A = randn(5, 4)
A_vec, _ = to_vec(A)
At_vec, _ = to_vec(transpose(A))

then A_vec == At_vec. My suspicion is that at some point in time, we were trying to compare the equality of two things based on their to_vec representation and it came up that things weren't being compared properly. We do a similar thing with Diagonal etc.

Unfortunately, it seems quite straightforward to construct examples which don't admit such a straightforward resolution as we employed for Adjoint, Transpose, and Diagonal. For a trivial example, consider that

A = randn(5)
A == (A, ) # is false -- a `Tuple` is never equal to a `Vector`

However, since to_vec(::Tuple) simply concatenates to_vec of each of the fields, we'll find that

to_vec(A)[1] == to_vec((A, ))[1]

This leads me to believe that it's likely not possible to define to_vec such that equality is preserved.

As such, I propose that we

  1. embrace, and explicitly document, that knowing whether to_vec(x)[1] == to_vec(y)[1] in general tells you nothing about whether x == y, and vice versa. We should point this out to explicitly warn people away from trying to check whether two things are equal based on their to_vec representation.
  2. refactor some of the things (like Transpose and Adjoint) to use the struct representation, which is more computationally efficient.
  3. to_vec should purely be thought of as a way to make FiniteDifferences' life easier. We might even consider ceasing to export it.

Now that @oxinabox has added some new approximate equality checking to ChainRulesTestUtils, this ought to be less of a problem anyway, because it's now more straightforward to check whether things are approximately equal.

Of course, if anyone can see a different path forward, I'm all ears.

Allow keyword arguments and skipping positional arguments

Something I ran into in ChainRules was that FDM's j′vp currently cannot deal with non-numeric, non-differentiable arguments, nor can it deal with keyword arguments. One example of that is sum with a function argument and/or with the dims keyword argument. You can wrap it yourself like

FDM.j′vp(central_fdm(5, 1), x->sum(abs2, x, dims=1), ȳ, x)

but it would be nice to have a way to express this directly in the j′vp function signature.

To get around this limitation I implemented this monstrosity: https://github.com/JuliaDiff/ChainRules.jl/blob/master/test/test_util.jl#L53-L78, but that should never see the light of day beyond being used in tests.

Adapt with grad, jvp, etc

Is it clear how to use the fancy new adapt kwarg with grad, jvp, and the like? Or do we need to extend them to cope with it?

Revert last commit

I stupidly bumped the patch version 🤦‍♂️ rather than the minor version when merging #76, as helpfully pointed out by @simeonschaub. I want to:

  1. revert to the previous commit
  2. make another patch release correcting the mistake
  3. make a minor release as originally intended.

Unfortunately, my googling + git skills aren't up to the task at the minute. Help (either hints about how to go about doing this, or in the form of a PR that does 1 and 2) would be much appreciated.

ChainRules types in FiniteDifferences to replace to_vec

As suggested in #90 (comment) , we might want to consider moving away from to_vec towards defining operations on ChainRules's types directly.

In particular #91 implements the difference operation, which is the only operation that we're missing to let us approximately compute tangents. Still TODO is

  1. determine how to compute cotangents within this framework + implement it, and
  2. determine whether or not we require some notion of a Jacobian (probably we do) and implement it, and
  3. define the norm of a differential, from which we get isapprox for free if we also define subtraction, which is really easy because we've already defined addition and scalar multiplication. I think we can do this because I think we can always derive an appropriate norm for a differential, that is, I think we can treat any given differential as being an element of an appropriate normed vector space. (Note that we've not defined inner products between differentials, and I don't think we need to. Doing so would be one way to go about defining a norm, but it probably makes sense to go straight to a norm if we can't think of a reason why would need to define inner products. I'm open to suggestions here.) This is something we'll need to sort out in ChainRulesCore, as per JuliaDiff/ChainRulesCore.jl#184.

Implementing this will immediately resolve:

  1. #92
  2. JuliaDiff/ChainRulesTestUtils.jl#24 as it is covered by difference already

but will generally significantly improve the compatibility between ChainRules and FiniteDifferences, and should make testing differentials a much more pleasant experience.

Central Finite Difference Method returns value of type Any

Hi!

I was wondering if the following is or is not an issue. This is a MWE:

julia> using FiniteDifferences
julia> const _fdm = FiniteDifferences.central_fdm(5, 1)
julia> @code_warntype _fdm(sin, 1)
Variables
  d::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}
  f::Core.Compiler.Const(sin, false)
  x::Int64

Body::Any
1 ─ %1 = Core.NamedTuple()::Core.Compiler.Const(NamedTuple(), false)
│   %2 = Base.pairs(%1)::Core.Compiler.Const(Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}(), false)
│   %3 = FiniteDifferences.:(var"#_#7")(%2, d, f, x)::Any
└──      return %3

This returns a value of type Any, and I am afraid this is potentially a performance killer for code where FiniteDifferences is used.

I would appreciate any insight you can give me!

Thanks!

Users of FDM can update to FiniteDifferences

Should we add the following advice to the README?

  • We renamed the package for discoverability
  • It turns out that to rename it we had to actually register it as a new package (with a new UUID)...
  • This means the old FDM URL redirects here. Packages depending on FDM will not be affected, but all future updates will be to FiniteDifferences (not FDM).
  • As far as I can tell, functionally this new package has no breaking changes from the last FDM release (counting #34 as a bug-fix)
  • Users can safely update their Project.toml files from
[deps]
FDM = "e25cca7e-83ef-51fa-be6c-dfe2a3123128"
...

[compat]
FDM = "0.6"

to

[deps]
FiniteDifferences = "26cc04aa-876d-5657-8c51-4c34ba976000"
...

[compat]
FiniteDifferences = "0.7"

gradient of maximum is not correct

This is causing a broken test in Zygote:

fdm = central_fdm(5, 3)
Random.seed!(0);
M = rand(2, 3);
tmp = similar(M);

fdm(0.0) do ϵ
           tmp .= M
           tmp[1] += ϵ
           return maximum(tmp)
end
43.35451497451203

Error computing j′vp of a scalar -> matrix function

I can't compute the j′vp for the function p -> x^p on FiniteDifferences v0.10.0, where x is a matrix and p is a scalar:

julia> using FiniteDifferences

julia> fdm = central_fdm(5, 1);

julia> x = [1.0 2.0; 3.0 4.0]; # real input

julia> x^3.0 # real output 
2×2 Array{Float64,2}:
 37.0   54.0
 81.0  118.0

julia> seed = [1.0 1.0; 1.0 1.0]; # real seed

julia> j′vp(fdm, p -> x^p, seed, 3.0)  # but an error is raised that one of the vectors has length 8?
ERROR: DimensionMismatch("dimensions must match: a has dims (Base.OneTo(8),), b has dims (Base.OneTo(4),), mismatch at 1")
Stacktrace:
 [1] promote_shape at ./indices.jl:178 [inlined]
 [2] promote_shape at ./indices.jl:169 [inlined]
 [3] +(::Array{Float64,1}, ::Array{Float64,1}) at ./arraymath.jl:45
 [4] add_sum(::Array{Float64,1}, ::Array{Float64,1}) at ./reduce.jl:21
 [5] _mapreduce(::FiniteDifferences.var"#22#24"{FiniteDifferences.var"#49#51"{Int64,Base.var"#64#65"{Base.var"#64#65"{Base.var"#64#65"{typeof(first),typeof(to_vec)},var"#7#8"},FiniteDifferences.var"#Real_from_vec#30"},Array{Float64,1}},Float64,UnitRange{Int64},Array{Float64,1},Float64}, ::typeof(Base.add_sum), ::IndexLinear, ::Base.OneTo{Int64}) at ./reduce.jl:403
 [6] _mapreduce_dim(::Function, ::Function, ::NamedTuple{(),Tuple{}}, ::Base.OneTo{Int64}, ::Colon) at ./reducedim.jl:312
 [7] #mapreduce#580 at ./reducedim.jl:307 [inlined]
 [8] mapreduce at ./reducedim.jl:307 [inlined]
 [9] _sum at ./reducedim.jl:657 [inlined]
 [10] #sum#584 at ./reducedim.jl:653 [inlined]
 [11] sum at ./reducedim.jl:653 [inlined]
 [12] fdm(::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}, ::FiniteDifferences.var"#49#51"{Int64,Base.var"#64#65"{Base.var"#64#65"{Base.var"#64#65"{typeof(first),typeof(to_vec)},var"#7#8"},FiniteDifferences.var"#Real_from_vec#30"},Array{Float64,1}}, ::Float64, ::Val{true}; condition::Int64, bound::Float64, eps::Float64, adapt::Int64, max_step::Float64) at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/methods.jl:270
 [13] fdm(::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}, ::FiniteDifferences.var"#49#51"{Int64,Base.var"#64#65"{Base.var"#64#65"{Base.var"#64#65"{typeof(first),typeof(to_vec)},var"#7#8"},FiniteDifferences.var"#Real_from_vec#30"},Array{Float64,1}}, ::Float64, ::Val{true}) at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/methods.jl:229
 [14] #fdm#25 at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/methods.jl:281 [inlined]
 [15] fdm at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/methods.jl:281 [inlined] (repeats 2 times)
 [16] #_#7 at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/methods.jl:93 [inlined]
 [17] Central at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/methods.jl:93 [inlined]
 [18] #48 at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/grad.jl:16 [inlined]
 [19] iterate at ./generator.jl:47 [inlined]
 [20] _collect(::Base.OneTo{Int64}, ::Base.Generator{Base.OneTo{Int64},FiniteDifferences.var"#48#50"{FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}},Base.var"#64#65"{Base.var"#64#65"{Base.var"#64#65"{typeof(first),typeof(to_vec)},var"#7#8"},FiniteDifferences.var"#Real_from_vec#30"},Array{Float64,1}}}, ::Base.EltypeUnknown, ::Base.HasShape{1}) at ./array.jl:678
 [21] collect_similar(::Base.OneTo{Int64}, ::Base.Generator{Base.OneTo{Int64},FiniteDifferences.var"#48#50"{FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}},Base.var"#64#65"{Base.var"#64#65"{Base.var"#64#65"{typeof(first),typeof(to_vec)},var"#7#8"},FiniteDifferences.var"#Real_from_vec#30"},Array{Float64,1}}}) at ./array.jl:607
 [22] map(::Function, ::Base.OneTo{Int64}) at ./abstractarray.jl:2072
 [23] #jacobian#47 at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/grad.jl:15 [inlined]
 [24] jacobian at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/grad.jl:10 [inlined]
 [25] _j′vp(::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}, ::Function, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/grad.jl:79
 [26] j′vp(::FiniteDifferences.Central{UnitRange{Int64},Array{Float64,1}}, ::Function, ::Array{Float64,2}, ::Float64) at /Users/saxen/.julia/packages/FiniteDifferences/VHfqf/src/grad.jl:73
 [27] top-level scope at REPL[16]:1

 julia> j′vp(fdm, p -> p*x, seed, 3.0) # this scalar -> matrix function is fine
 (10.000000000000309,)

julia> j′vp(fdm, x->x^3.0, seed, x) # this matrix -> matrix function is also fine
([51.00000000000287 86.99999999999915; 66.9999999999964 110.9999999999985],)

jvp docstring

It's out of date and inconsistent with the actual API

jvp incorrect result rather than erroring

There appears to be a problem with jvp when the v supplied is real and the primal is complex:

using FiniteDifferences

jvp(central_fdm(5,1), abs2, (3.0 + im, 0.25))
# 1.9999999999995874

jvp(central_fdm(5,1), abs2, (3.0 + im, 0.25 + 0im))
# 1.4999999999999445

The correct answer here is 1.5; the first call should produce an error rather than silently returning an incorrect result.
