juliamath / doublefloats.jl
math with more good bits
License: MIT License
Matmul is not working.
For some reason, the special matmul was implemented to work only for square matrices.
It also doesn't support zero dimensions.
A PR is incoming.
julia-1.1> One=Double64(1.0)
1.0
julia-1.1> small=Double64(0.25*eps(1.0))
5.551115123125783e-17
julia-1.1> a = One - small
1.0
julia-1.1> b = One + small
1.0
julia-1.1> a < 1.0
true
julia-1.1> a > 1.0
true
julia-1.1> b < 1.0
false
julia-1.1> b > 1.0
false
julia-1.1> b == 1.0
false
Note that such comparisons are used for domain checks in various functions.
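For reference, once both operands are in double-double form, a consistent ordering falls out of comparing the (hi, lo) parts lexicographically. A minimal sketch, with plain tuples standing in for the package's internal fields:

# Lexicographic comparison of double-double values represented as
# (hi, lo) tuples; a sketch, not the package's implementation.
dd_lt(a, b) = a[1] < b[1] || (a[1] == b[1] && a[2] < b[2])

a = (1.0, -5.551115123125783e-17)   # One - small from the session above
dd_lt(a, (1.0, 0.0))                # true:  a < 1.0
dd_lt((1.0, 0.0), a)                # false: so a > 1.0 should be false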
Title says it all really. Is adding support for random normal samples possible for DoubleFloats (or more accurately, is it likely)?
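For what it's worth, normal samples could in principle be built from the existing uniform rand via the Box–Muller transform. A minimal sketch; randn_d64 is a hypothetical name, not a package API:

using DoubleFloats

# Box–Muller: two uniform draws in [0, 1) give one normal draw.
function randn_d64()
    u1 = rand(Double64)
    u2 = rand(Double64)
    r = sqrt(-2 * log(1 - u1))   # 1 - u1 is in (0, 1], so log is finite
    return r * cospi(2 * u2)     # cos(2π u2)
end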
Could you do a quick v0.7.11 bugfix release to get commit 915ebeb in? The typo fixed there is currently breaking BAT.jl.
For reasons currently beyond me, operations with vectors of DoubleFloats don't seem to vectorize when computing inner products, sums, and other reductions. For example:
using DoubleFloats
function my_sum(xs::Vector{T}) where {T}
    s = zero(T)   # accumulator in the element type
    for x in xs
        s += x
    end
    return s
end
If I run
JULIA_LLVM_ARGS="-pass-remarks-analysis=loop-vectorize" julia -O3 -L example.jl -e "my_sum(rand(Double64, 100))"
it outputs
remark: example.jl:6:0: loop not vectorized: loop control flow is not understood by vectorizer
remark: example.jl:6:0: loop not vectorized: value that could not be identified as reduction is used outside the loop
remark: example.jl:6:0: loop not vectorized: loop control flow is not understood by analyzer
Removing the branch in https://github.com/JuliaMath/DoubleFloats.jl/blob/master/src/math/ops/op_dbdb_db.jl#L3 does not seem to fix the issue.
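For context, the standard error-free addition kernel (Knuth's TwoSum) is itself branch-free; a minimal sketch of that kernel, not the package's actual code from op_dbdb_db.jl:

# Knuth's TwoSum: s is the rounded sum, e the exact rounding error.
# There are no data-dependent branches, which is the shape of kernel
# the LLVM vectorizer can handle.
@inline function two_sum(a::Float64, b::Float64)
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e
end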
Your Base.tryparse implementation is currently commented out, which means that e.g. parse(Double64, "3.2") fails with a MethodError. (This breaks ChangePrecision.jl among other things.)
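In the meantime, a minimal sketch of what such a method could look like, assuming parsing through BigFloat to capture the extra precision; this is a hypothetical stand-in, not the package's commented-out code:

using DoubleFloats

function Base.tryparse(::Type{Double64}, s::AbstractString)
    b = tryparse(BigFloat, s)
    b === nothing && return nothing
    hi = Float64(b)
    lo = Float64(b - hi)   # residual beyond Float64 precision
    return Double64(hi, lo)
end

parse(Double64, "3.2")     # Base.parse routes through tryparse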
I came across this weird output of sinpi:
julia> sinpi(sqrt(2))
-0.9639025328498774
julia> sinpi(sqrt(Double64(2)))
-0.2662553420414155
julia> sinpi(sqrt(BigFloat(2)))
-0.9639025328498773302883368552795782761203067465991152157249104129629695305684294
julia> cospi(sqrt(2))
-0.2662553420414152
It seems the result for Double64 is wrong, and sinpi seems to return the value of cospi for this argument. I looked at the sinpi implementation in Base, since the issue may be there, but could not easily see what went wrong. I'll look further into it, but am posting this issue in the meantime.
tryparse uses isnothing(x) instead of x === nothing; see, e.g., DoubleFloats.jl/src/type/parse.jl, line 39 at commit 8668818. This causes errors on Julia 1.0, since isnothing was introduced in 1.1. This can be seen here: https://travis-ci.com/ericphanson/SDPAFamily.jl/builds/127247390 where Travis passed on 1.2 but failed on 1.0.
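The portable spelling on Julia 1.0 is the explicit comparison; a minimal illustration:

x = tryparse(Float64, "3.2")
if x === nothing          # works on 1.0; isnothing(x) needs 1.1+
    # handle the failure
end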
julia> a = Double64(NaN); b = zero(Double64) / zero(Double64);
julia> isequal(a, b)
false
julia> isequal(NaN, NaN)
true
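A minimal sketch of the Base-consistent rule, where isequal treats all NaNs as equal; simplified (Base's isequal also distinguishes signed zeros), and dd_isequal is a hypothetical name:

dd_isequal(a, b) = (isnan(a) && isnan(b)) || a == b

dd_isequal(NaN, NaN)   # true, matching Base's isequal for Float64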
I'm getting strange behavior from muladd and fma with DoubleFloats v0.3.2: muladd returns a tuple instead of a DoubleFloat, and fma fails:
julia> muladd(2.0, DoubleFloat(3.0), 4.0)
(10.0, 0.0)
julia> typeof(muladd(2.0, DoubleFloat(3.0), 4.0))
Tuple{Float64,Float64}
julia> fma(2.0, DoubleFloat(3.0), 4.0)
ERROR: UndefVarError: x?? not defined
Stacktrace:
[1] fma(::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64) at /user/.julia/packages/DoubleFloats/gB9qU/src/math/arithmetic/fma.jl:38
[2] fma at /user/.julia/packages/DoubleFloats/gB9qU/src/math/arithmetic/fma.jl:53 [inlined]
[3] fma(::Float64, ::DoubleFloat{Float64}, ::Float64) at ./promotion.jl:347
[4] top-level scope at none:0
julia> versioninfo()
Julia Version 1.0.1
Commit 0d713926f8 (2018-09-29 19:05 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.0 (ORCJIT, skylake)
With DoubleFloats v0.3.1, all was fine:
julia> muladd(2.0, DoubleFloat(3.0), 4.0)
1.0e+01
julia> typeof(muladd(2.0, DoubleFloat(3.0), 4.0))
DoubleFloat{Float64}
julia> fma(2.0, DoubleFloat(3.0), 4.0)
1.0e+01
The tag name "0.7.22" is not of the appropriate SemVer form (vX.Y.Z).
cc: @JeffreySarnoff
Both iseven(d64"2") and isodd(d64"2") give method errors, because iseven and isodd are not normally defined for floating-point types such as Float64.
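A hedged sketch of definitions in the spirit of what later Julia versions adopt for Real, where parity is defined only through integer-valuedness; hypothetical names, not the package's methods:

dd_iseven(x) = isinteger(x) && iszero(rem(x, 2))
dd_isodd(x)  = isinteger(x) && !iszero(rem(x, 2))

dd_iseven(2.0), dd_isodd(2.0)   # (true, false)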
When I tried to compose a test for complex division, Julia kept telling me that cdiv is not defined.
Using current versions of Optim and DoubleFloats on Julia v1.2.0-rc2:
julia> using Optim
julia> using DoubleFloats
julia> f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
f (generic function with 1 method)
julia> optimize(f, [Double64(0.0), Double64(0.0)])
Status: success
Candidate solution
Error showing value of type Optim.MultivariateOptimizationResults{NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters},Float64,Array{DoubleFloat{Float64},1},DoubleFloat{Float64},DoubleFloat{Float64},Array{OptimizationState{DoubleFloat{Float64},NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}},1},Bool}:
ERROR: StackOverflowError:
Stacktrace:
[1] ini_dec(::DoubleFloat{Float64}, ::Int64, ::Array{UInt8,1}) at ./printf.jl:1008 (repeats 80000 times)
julia> optimize(f, [BigFloat(0.0), BigFloat(0.0)])
Status: success
Candidate solution
Minimizer: [1.00e+00, 1.00e+00]
Minimum: 3.525527e-09
Found with
Algorithm: Nelder-Mead
Initial Point: [0.00e+00, 0.00e+00]
Convergence measures
√(Σ(yᵢ-ȳ)²)/n ≤ 1.0e-08
Work counters
Iterations: 60
f(x) calls: 118
On the most recent release, v0.2.0, this error message is given when computing tan(Double64(4.0)):
julia> tan(Double64(4.0))
ERROR: UndefVarError: inv_pi1o1_t64 not defined
Stacktrace:
[1] modpi(::DoubleFloat{Float64}) at /Users/verzani/.julia/packages/DoubleFloats/w6zrB/src/math/arithmetic/modpi.jl:46
Is there a reason why DoubleFloats.jl doesn't use the same convention as Base Julia? As far as I know, the other Base number types in Julia return one instead of throwing a DomainError, so why would DoubleFloats do so?
function Base.:(^)(r::DoubleFloat{T}, n::Int) where {T<:IEEEFloat}
    if (n == 0)
        iszero(r) && throw(DomainError("0^0"))
        return one(DoubleFloat{T})
    end
    ...
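For comparison, Base's floating-point types follow the x^0 == 1 convention:

julia> 0.0 ^ 0
1.0

julia> BigFloat(0.0) ^ 0
1.0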
The package DoubleDouble.jl, which is now deprecated in favor of DoubleFloats.jl, seems to run 10 times faster. Here are some data for a 500 × 500 matrix multiply.
For DoubleDouble on Julia 0.6.2, the code:
using DoubleDouble
srand(123)
n = 500
A = rand(n, n)
B = rand(n, n)
@time A*B        # first call includes compilation
@time C = A*B    # steady-state Float64 timing
Ad = map(Double, A)
Bd = map(Double, B)
@time Ad*Bd
@time Cd = Ad*Bd
Ab = map(BigFloat, A)
Bb = map(BigFloat, B)
@time Ab*Bb
@time Cb = Ab*Bb;
vecnorm(Cb - C), vecnorm(Cb - Cd)   # errors relative to BigFloat
The output:
0.760771 seconds (246.38 k allocations: 13.529 MiB)
0.020722 seconds (7 allocations: 1.908 MiB)
4.113227 seconds (336.95 k allocations: 17.923 MiB, 2.83% gc time)
3.102003 seconds (13 allocations: 3.815 MiB)
92.818986 seconds (502.24 M allocations: 24.324 GiB, 40.85% gc time)
115.261513 seconds (502.00 M allocations: 24.313 GiB, 44.28% gc time)
(1.805819154666807124971283783712203336688579278627334328304525314230137409530732e-11, 6.165584841387163497016967114443636899352678465783015830879065544438127870571583e-28)
The results on juliaBox are similar.
For DoubleFloats on Julia 1.0, the code:
using DoubleFloats, Random, LinearAlgebra
Random.seed!(123)
n = 500
A = rand(n, n)
B = rand(n, n)
@time A*B        # first call includes compilation
@time C = A*B
Ad = Double64.(A)
Bd = Double64.(B)
@time Ad*Bd
@time Cd = Ad*Bd
Ab = map(BigFloat, A)
Bb = map(BigFloat, B)
@time Ab*Bb
@time Cb = Ab*Bb;
norm(Cb - C), norm(Cb - Cd)   # errors relative to BigFloat
The results:
0.080844 seconds (6 allocations: 1.908 MiB, 79.15% gc time)
0.022727 seconds (6 allocations: 1.908 MiB)
31.637581 seconds (12 allocations: 3.815 MiB)
32.924434 seconds (12 allocations: 3.815 MiB)
72.552449 seconds (502.00 M allocations: 26.183 GiB, 38.97% gc time)
76.039207 seconds (502.00 M allocations: 26.183 GiB, 38.76% gc time)
(1.80177057469210812126496564089303771446532868324947837791160773801388809191472e-11,
5.5466913217631572054798245925465557e-28)
Is this the expected behavior, or am I doing something wrong?
N.B. DoubleFloats cannot be used on JuliaBox with 1.0 yet.
julia> using DoubleFloats
[ Info: Precompiling DoubleFloats [497a8b3b-efae-58df-a0af-a86822472b78]
julia> Double64(0.2)
0.2
This is different from the output that is shown in the README. Am I doing something wrong?
julia> x = rand(Complex{Double64})
8.8764703879967913620991845635721518e-01 + 5.4860279659695834921081733503989141e-01im
julia> y = rand(Double64)
9.8476005186057369403555480924516985e-01
julia> x / y
ERROR: MethodError: /(::Complex{DoubleFloat{Float64}}, ::DoubleFloat{Float64}) is ambiguous. Candidates:
/(x::N, y::DoubleFloat{T}) where {T, N<:Number} in DoubleFloats at /Users/sascha/.julia/packages/DoubleFloats/stK0/src/math/arithmetic/promote.jl:8
/(z::Complex, x::Real) in Base at complex.jl:324
Possible fix, define
/(::Complex, ::DoubleFloat{T})
Stacktrace:
[1] top-level scope at none:0
julia> x * y
ERROR: MethodError: *(::Complex{DoubleFloat{Float64}}, ::DoubleFloat{Float64}) is ambiguous. Candidates:
*(x::N, y::DoubleFloat{T}) where {T, N<:Number} in DoubleFloats at /Users/sascha/.julia/packages/DoubleFloats/stK0/src/math/arithmetic/promote.jl:6
*(z::Complex, x::Real) in Base at complex.jl:312
Possible fix, define
*(::Complex, ::DoubleFloat{T})
Stacktrace:
[1] top-level scope at none:0
julia> x + y
ERROR: MethodError: +(::Complex{DoubleFloat{Float64}}, ::DoubleFloat{Float64}) is ambiguous. Candidates:
+(x::N, y::DoubleFloat{T}) where {T, N<:Number} in DoubleFloats at /Users/sascha/.julia/packages/DoubleFloats/stK0/src/math/arithmetic/promote.jl:2
+(z::Complex, x::Real) in Base at complex.jl:304
Possible fix, define
+(::Complex, ::DoubleFloat{T})
Stacktrace:
[1] top-level scope at none:0
julia> x - y
ERROR: MethodError: -(::Complex{DoubleFloat{Float64}}, ::DoubleFloat{Float64}) is ambiguous. Candidates:
-(x::N, y::DoubleFloat{T}) where {T, N<:Number} in DoubleFloats at /Users/sascha/.julia/packages/DoubleFloats/stK0/src/math/arithmetic/promote.jl:4
-(z::Complex, x::Real) in Base at complex.jl:310
Possible fix, define
-(::Complex, ::DoubleFloat{T})
Stacktrace:
[1] top-level scope at none:0
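A hedged sketch of disambiguating methods along the lines the error messages suggest; the package's eventual fix may differ:

using DoubleFloats
import Base: +, -, *, /

+(z::Complex, x::DoubleFloat{T}) where {T} = Complex(real(z) + x, imag(z))
-(z::Complex, x::DoubleFloat{T}) where {T} = Complex(real(z) - x, imag(z))
*(z::Complex, x::DoubleFloat{T}) where {T} = Complex(real(z) * x, imag(z) * x)
/(z::Complex, x::DoubleFloat{T}) where {T} = Complex(real(z) / x, imag(z) / x)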
julia> using DoubleFloats
julia> exp(-Inf)
0.0
julia> exp(-Double64(Inf))
NaN
julia> exp(-Inf+0.0im)
0.0 + 0.0im
julia> exp(ComplexDF64(-Inf))
NaN - NaN*im
The latest DoubleFloats version for Julia v1.0 is v0.3.5. However, for Julia v0.7 the DoubleFloats version is held at v0.1.11. Is it possible to use the latest version with v0.7?
julia> DoubleFloat(2)
ERROR: MethodError: no method matching DoubleFloat(::Int64)
This errors instead of returning something like 0.0:
julia> DoubleFloats.calc_exp(DoubleFloat(-Inf))
ERROR: InexactError: Int64(Int64, NaN)
Stacktrace:
[1] Type at ./float.jl:700 [inlined]
[2] calc_exp(::DoubleFloat{Float64}) at /Users/bdeonovic/.julia/packages/DoubleFloats/hBAJS/src/math/elementary/explog.jl:143
[3] top-level scope at none:0
I ran into the following problem today:
julia> using DoubleFloats
julia> sin(1.0 + 0im)
0.8414709848078965 + 0.0im
julia> sin(Double64(1) + 0im)
0.0 + 0.0im
julia> cos(Double64(1) + 0im)
0.0 + 0.0im
julia> tan(Double64(1) + 0im)
ERROR: InexactError: Int64(NaN)
I don't see this for nonzero imaginary part:
julia> sin(Double64(1) + im) ≈ sin(1.0 + im)
true
Is there a reason why the rank of m is checked instead of the rank of evecs? There are rank-deficient matrices that are diagonalizable and full-rank matrices that are not.
julia> exp(Double64(NaN))
ERROR: InexactError: Int64(NaN)
Stacktrace:
[1] Type at ./float.jl:703 [inlined]
[2] calc_exp(::DoubleFloat{Float64}) at /Users/andreasnoack/.julia/packages/DoubleFloats/FaJUi/src/math/elementary/explog.jl:143
[3] exp(::DoubleFloat{Float64}) at /Users/andreasnoack/.julia/packages/DoubleFloats/FaJUi/src/math/elementary/explog.jl:18
[4] top-level scope at none:0
The show function for a vector of DoubleFloats prints the whole vector, which is quite annoying. The behavior should be similar to that of the other floating-point types.
The NaN/Inf handling seems to introduce an overhead of roughly 3x for simple arithmetic (here, subtraction):
julia> @btime DoubleFloats.sub_dbdb_db_nonfinite($a, $b)
1.969 ns (0 allocations: 0 bytes)
0.05936196323585907
julia> @btime DoubleFloats.sub_dbdb_db($a, $b)
6.748 ns (0 allocations: 0 bytes)
0.05936196323585907
This is really a lot for some applications. I think this is also the reason for #31. Is there a good way to opt out of the NaN checks?
In the current implementation of rand for DoubleFloat, there is a hidden bug that shows up when the generated UInt64 is close to typemax(UInt64). E.g., reproducing the implementation with T == Float64:
u = rand(rng, UInt64)
f = Float64(u)                    # u rounded to 53 significant bits
uf = UInt64(f)
ur = (uf > u ? uf - u : u - uf)   # magnitude of the rounding error
rf = Float64(ur)
v = DoubleFloat(T(5.421010862427522e-20 * f), T(5.421010862427522e-20 * rf))   # scale by ≈ 2^-64
There are two errors:
1. The call uses DoubleFloat(...) where it should use DoubleFloat{T}(...), because the constructor without a specified type parameter doesn't normalize the two values. For example, taking u == typemax(UInt), v will have both fields equal to 1.0.
2. v == 2.0 (still for u == typemax(UInt)), which is wrong given that rand should produce a number in [0, 1).
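A minimal corrected sketch, assuming (per the first point above) that the type-parameterized constructor DoubleFloat{T}(hi, lo) normalizes its arguments; rand_dd is a hypothetical name, not the package's implementation:

using Random, DoubleFloats

function rand_dd(rng::Random.AbstractRNG = Random.GLOBAL_RNG)
    hi = rand(rng, Float64)                  # uniform in [0, 1)
    lo = rand(rng, Float64) * 0.5 * eps(hi)  # extra bits below hi's ulp
    return DoubleFloat{Float64}(hi, lo)      # normalizing constructor
end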
Sorry, I came across another issue after updating to 0.1.9: nan(DoubleFloat{Float32}) and DoubleFloat{Float32}(NaN32) work, but DoubleFloat{Float32}(NaN) does not. This is not crucial, but it means that we can't use Double32s in JuliaNLSolvers with the current codebase.
julia> Double32(NaN)
ERROR: MethodError: no method matching DoubleFloat{Float32}(::Float32, ::Float64)
Closest candidates are:
DoubleFloat{Float32}(::T<:Union{Float16, Float32, Float64}) where T<:Union{Float16, Float32, Float64} at /home/asbjorn/.julia/packages/DoubleFloats/haDmR/src/Double.jl:75
DoubleFloat{Float32}(::T<:Union{Float16, Float64}, ::T<:Union{Float16, Float64}) where T<:Union{Float16, Float64} at /home/asbjorn/.julia/packages/DoubleFloats/haDmR/src/Double.jl:95
DoubleFloat{Float32}(::I<:Integer, ::T<:Union{Float16, Float64}) where {T<:Union{Float16, Float64}, I<:Integer} at /home/asbjorn/.julia/packages/DoubleFloats/haDmR/src/Double.jl:97
...
Stacktrace:
[1] DoubleFloat{Float32}(::Float64) at /home/asbjorn/.julia/packages/DoubleFloats/haDmR/src/Double.jl:75
[2] top-level scope at none:0
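As a call-site workaround, converting the argument to Float32 first goes through the path that does work; to_double32 is a hypothetical helper, not a package function:

using DoubleFloats

to_double32(x::Float64) = DoubleFloat{Float32}(Float32(x))

to_double32(NaN)   # same as DoubleFloat{Float32}(NaN32), which works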
Right now floatmax sets the lo field to zero, but maybe it should be as large as possible. For example:
julia> Double16(6.55e4, 15)
6.5519e+04
julia> Double16(6.55e4, 15) > floatmax(Double16)
true
While reading code, it appears that lines 42 and 43 of DoubleFloats.jl/src/Double.jl refer to an undefined type F, which should perhaps be T. (Obviously not important, or someone would have commented before; thanks for your package, which is helping me learn Julia...)
@inline LO(x::T) where {T<:IEEEFloat} = zero(F)
@inline HILO(x::T) where {T<:IEEEFloat} = x, zero(F)
The tag name "0.7.7" is not of the appropriate SemVer form (vX.Y.Z).
cc: @JeffreySarnoff
The following gives a MethodError:
julia> round(d64".035", digits=2)
ERROR: MethodError: no method matching iseven(::Float64)
Closest candidates are:
iseven(::Missing) at missing.jl:83
iseven(::Integer) at int.jl:91
iseven(::DoubleFloat{T<:Union{Float16, Float32, Float64}}) where T<:Union{Float16, Float32, Float64} at /Users/scott/.julia/packages/DoubleFloats/DUWqx/src/type/predicates.jl:130
Stacktrace:
[1] iseven(::DoubleFloat{Float64}) at /Users/scott/.julia/packages/DoubleFloats/DUWqx/src/type/predicates.jl:132
[2] round(::DoubleFloat{Float64}, ::RoundingMode{:Nearest}) at /Users/scott/.julia/packages/DoubleFloats/DUWqx/src/math/prearith/floorceiltrunc.jl:106
[3] _round_invstep(::DoubleFloat{Float64}, ::DoubleFloat{Float64}, ::RoundingMode{:Nearest}) at ./floatfuncs.jl:159
[4] _round_digits at ./floatfuncs.jl:186 [inlined]
[5] #round#542 at ./floatfuncs.jl:144 [inlined]
[6] #round at ./none:0 [inlined] (repeats 2 times)
[7] top-level scope at REPL[19]:1
Getting this warning with DoubleFloats v0.7.23:
Julia-1.0.3> using DoubleFloats
[ Info: Precompiling DoubleFloats [497a8b3b-efae-58df-a0af-a86822472b78]
┌ Warning: Package DoubleFloats does not have LinearAlgebra in its dependencies:
│ - If you have DoubleFloats checked out for development and have
│ added LinearAlgebra as a dependency but haven't updated your primary
│ environment's manifest file, try `Pkg.resolve()`.
│ - Otherwise you may need to report an issue with DoubleFloats
└ Loading LinearAlgebra into DoubleFloats from project dependency, future warnings for DoubleFloats are suppressed.
Same warning with Julia v1.1.0
The IEEE Decimal Floating Point package, DecFP.jl, has been around for almost 4 years now. It would be nice if the string macro for this package did not conflict, so people could use both packages. I'd suggest it be changed to df64.
The tag name "0.7.12" is not of the appropriate SemVer form (vX.Y.Z).
cc: @JeffreySarnoff
With T = DoubleFloat{Float64}, I get T(Inf) == NaN. Is this expected behaviour, or would you want that to become T(Inf) == convert(T, Inf) or T(Inf) == inf(T)? This causes problems, for example, with the Parameters.jl package:
using Parameters, DoubleFloats
@with_kw struct TestMe{T}
    a::T = Inf
end

T = DoubleFloat{Float64}
t = TestMe{T}()
t.a == inf(T)   # false
isnan(t.a)      # true
We have some code in LineSearches.jl that uses the Parameters.jl package, and this issue comes up there.
julia> 1.0/0.0
Inf
julia> one(Double64)/zero(Double64)
NaN
Hi, I just saw that the Travis CI builds for the 0.9.6 release and master fail with:
power functions: Test Failed at /home/travis/build/JuliaMath/DoubleFloats.jl/test/functions.jl:19
Expression: Double64(0.0) ^ 0
Expected: DomainError
No exception thrown
For DoubleFloats, sqrt(NaN) throws a DomainError. Should it be consistent with the other floats (Float32, Float64) and return NaN?
Julia-1.0.3> using DoubleFloats
Julia-1.0.3> x = Float32(0)/0
NaN32
Julia-1.0.3> sqrt(x)
NaN32
Julia-1.0.3> x = Float64(0)/0
NaN
Julia-1.0.3> sqrt(x)
NaN
Julia-1.0.3> x = Double64(0)/0
NaN
Julia-1.0.3> sqrt(x)
ERROR: DomainError with sqrt(x) expects x >= 0:
Stacktrace:
[1] sqrt_dd_dd(::Tuple{Float64,Float64}) at D:\Users\plowman\.julia\packages\DoubleFloats\3xCg7\src\math\ops\op_dd_dd.jl:63
[2] sqrt_db_db at D:\Users\plowman\.julia\packages\DoubleFloats\3xCg7\src\math\ops\op_db_db.jl:26 [inlined]
[3] sqrt(::DoubleFloat{Float64}) at D:\Users\plowman\.julia\packages\DoubleFloats\3xCg7\src\math\ops\arith.jl:9
[4] top-level scope at none:0
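A minimal sketch of the Base-consistent behavior, propagating NaN while keeping the DomainError for negative finite inputs; dd_sqrt is a hypothetical wrapper, not the package's method:

using DoubleFloats

function dd_sqrt(x::DoubleFloat{Float64})
    isnan(x) && return x   # NaN propagates, as for Float32/Float64
    x < zero(x) && throw(DomainError(x, "sqrt(x) expects x >= 0"))
    return sqrt(x)         # delegate to the package's existing sqrt
end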
Now that code coverage is finally working, it shows that so far 80% of all lines are covered. In particular it seems that for certain functions not all branches are covered. I think it would be great if we get to near 100% coverage.
You can see the detailed statistics here.
Please take a look at this article; the author uses a round function after each iteration: https://www.codeproject.com/Articles/884606/The-double-double-type
First, good to see that this package is taking form :) I just gave it a short spin and found a couple of things which are not working as intended.
There are a bunch of float-specific functions that are not defined, which causes various issues (e.g. you cannot take the square root of a complex number at the moment). These include
realmin(::Type{Double{Float64,<:Emphasis}}) and realmax(::Type{Double{Float64,<:Emphasis}})
typemin(::Type{Double{Float64,<:Emphasis}}) and typemax(::Type{Double{Float64,<:Emphasis}})
nextfloat(::Double)
eps(::Type{Double{Float64,<:Emphasis}})
Also, it seems that minmax needs to be imported from Base:
julia> abs(Double(rand())+2.0im)
ERROR: MethodError: no method matching minmax(::Double{Float64,Accuracy}, ::Double{Float64,Accuracy})
You may have intended to import Base.minmax
Closest candidates are:
minmax(::T, ::T, ::T) where T at /Users/sascha/.julia/v0.7/DoubleFloats/src/support/maxmin.jl:71
minmax(::T, ::T, ::T, ::T) where T at /Users/sascha/.julia/v0.7/DoubleFloats/src/support/maxmin.jl:91
Stacktrace:
[1] hypot(::Double{Float64,Accuracy}, ::Double{Float64,Accuracy}) at /Users/sascha/.julia/v0.7/DoubleFloats/src/math/moremath.jl:4
[2] abs(::Complex{Double{Float64,Accuracy}}) at ./complex.jl:259
[3] top-level scope
Also, the behaviour for NaN and Inf seems to be different than for Float64:
julia> T = Double{Float64,Performance}
julia> T(NaN)
FastDouble(NaN, NaN)
julia> T(Inf) - 1
FastDouble(NaN, NaN)
julia> Inf - 1
Inf
It seems that the CI is currently set up such that only 0.7 is tested, but tests on 0.7 are also allowed to fail, i.e., the tests will always pass.
The following gives an error:
julia> Double64(10.0)^0
ERROR: UndefVarError: a not defined
Stacktrace:
[1] ^(::DoubleFloat{Float64}, ::Int64) at /Users/dgleich/.julia/packages/DoubleFloats/jiK9a/src/math/elementary/explog.jl:75
[2] literal_pow(::typeof(^), ::DoubleFloat{Float64}, ::Val{0}) at ./intfuncs.jl:247
[3] top-level scope at none:0
[4] eval at ./boot.jl:319 [inlined]
[5] #353 at /Users/dgleich/.julia/packages/Atom/jJn7Y/src/repl.jl:125 [inlined]
[6] with_logstate(::getfield(Main, Symbol("##353#355")), ::Base.CoreLogging.LogState) at ./logging.jl:397
[7] with_logger(::Function, ::Atom.Progress.JunoProgressLogger) at ./logging.jl:493
[8] top-level scope at /Users/dgleich/.julia/packages/Atom/jJn7Y/src/repl.jl:124
and the error is pretty obviously a tiny typo!
function Base.:(^)(r::DoubleFloat{T}, n::Int) where {T<:AbstractFloat}
    if (n == 0)
        iszero(a) && throw(DomainError("0^0"))
        return one(DoubleFloat{T})
    end
(which should be iszero(r) instead).
I was going to do a pull request to fix this with some new test cases, but I wasn't sure how you want them organized. Here are a few test cases that should catch these:
@testset "Exponential functions" begin
    @test Double64(10.0)^0 == Double64(1.0)
    @test Double64(10.0)^1 == Double64(10.0)
    @test_throws DomainError Double64(0.0)^0
end
FFTs aren't supported for double floats. Is there any plan to do this? In principle it should be as simple as following the instructions in https://github.com/JuliaMath/AbstractFFTs.jl.
I'm getting this message on attempts to precompile various packages...
Package DoubleFloats does not have Random in its dependencies