
arblib.jl's Introduction

Arblib.jl


This package is a thin, efficient wrapper around Arb - a C library for arbitrary-precision ball arithmetic. Since 2023 Arb is part of Flint.

Installation

using Pkg
pkg"add Arblib"

What is Arb?

From the Arb documentation:

Arb is a C library for rigorous real and complex arithmetic with arbitrary precision. Arb tracks numerical errors automatically using ball arithmetic, a form of interval arithmetic based on a midpoint-radius representation. On top of this, Arb provides a wide range of mathematical functionality, including polynomials, power series, matrices, integration, root-finding, and many transcendental functions. Arb is designed with efficiency as a primary goal, and is usually competitive with or faster than other arbitrary-precision packages.

Types

The following table indicates how Arb C-types are translated to the Julia side. Note that all Julia structs additionally contain a precision field storing the precision used. Julia types with Ref in their name are similar to the Ref type in base Julia. They contain a pointer to an object of the corresponding type, as well as a reference to its parent object to protect it from garbage collection.

Arb Julia
mag_t Mag or MagRef
arf_t Arf or ArfRef
arb_t Arb or ArbRef
acb_t Acb or AcbRef
arb_t* ArbVector or ArbRefVector
acb_t* AcbVector or AcbRefVector
arb_mat_t ArbMatrix or ArbRefMatrix
acb_mat_t AcbMatrix or AcbRefMatrix
arb_poly_t ArbPoly or ArbSeries
acb_poly_t AcbPoly or AcbSeries

Indexing an ArbMatrix returns an Arb, whereas indexing an ArbRefMatrix returns an ArbRef. An ArbMatrix A can also be indexed using the ref function, e.g. ref(A, i, j), to obtain an ArbRef.

Additionally, there are multiple union types defined to capture a Ref and non-Ref version. For example Arb and ArbRef are subtypes of ArbLike. Similarly, we provide MagLike, ArfLike, ArbLike, AcbLike, ArbVectorLike, AcbVectorLike, ArbMatrixLike, AcbMatrixLike.

Both ArbPoly and ArbSeries wrap the arb_poly_t type. The difference is that ArbSeries has a fixed length and is therefore suitable for use when Taylor series are computed using the _series functions in Arb. The same holds for AcbPoly and AcbSeries.
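For example (a minimal sketch; this assumes the tuple constructors and the sin overload for ArbSeries from the Arblib documentation):

using Arblib

p = ArbPoly((0, 1))                # the polynomial x; its length can change
s = ArbSeries((0, 1), degree = 3)  # x + O(x^4); the degree is fixed

sin(s)  # Taylor expansion of sin at 0, truncated to degree 3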

Example:

julia> A = ArbMatrix([1 2; 3 4]; prec=64)
2×2 ArbMatrix:
 1.000000000000000000  2.000000000000000000
 3.000000000000000000  4.000000000000000000

julia> a = A[1,2]
2.000000000000000000

julia> Arblib.set!(a, 12)
12.00000000000000000

# Memory in A not changed
julia> A
2×2 ArbMatrix:
 1.000000000000000000  2.000000000000000000
 3.000000000000000000  4.000000000000000000

julia> b = ref(A, 1, 2)
2.000000000000000000

julia> Arblib.set!(b, 12)
12.00000000000000000

# Memory in A also changed
julia> A
2×2 ArbMatrix:
 1.000000000000000000  12.00000000000000000
 3.000000000000000000   4.000000000000000000

Naming convention

Arb functions are wrapped by parsing the Arb documentation and applying the following set of rules to "Juliafy" the function names:

  1. The parts of a function name which only refer to the type of input are removed, since Julia has multiple dispatch to deal with this problem.
  2. Functions which modify the first argument get an ! appended.
  3. For functions which take a precision argument, this argument becomes a prec keyword argument which by default is set to the precision of the first argument (if applicable).
  4. For functions which take a rounding mode argument, this argument becomes a rnd keyword argument which by default is set to RoundNearest.

Example: The function

arb_add_si(arb_t z, const arb_t x, slong y, slong prec)

becomes

add!(z::ArbLike, x::ArbLike, y::Int; prec = precision(z))
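and can be called like this:

using Arblib

x = Arb(1; prec = 64)
z = zero(x)
Arblib.add!(z, x, 2)  # z = x + 2, computed at precision(z) = 64 bits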

Constructors and setters

Arb defines a number of functions for setting something to a specific value, for example void arb_set_si(arb_t y, slong x). All of these are renamed to set! and rely on multiple dispatch to choose the correct one. In addition to the ones defined in Arb, there are a number of set! methods added in Arblib to make it more convenient to work with. For example, there are setters for Rational and all irrationals defined in Base.MathConstants. For Arb there is also a setter which takes a tuple (a, b) representing an interval and returns a ball containing this interval.

Almost all of the constructors are simple wrappers around these setters. This means that it's usually more informative to look at the methods for set! than for, say, Arb to figure out which constructors exist. Both Arb and Acb are constructed in such a way that the result always encloses the input.

Example:

x = Arblib.set!(Arb(), π)
y = Arb(π)

x = Arblib.set!(Arb(), 5//13)
y = Arb(5//13)

x = Arblib.set!(Arb(), (0, π))
y = Arb((0, π))

Pitfalls when interacting with the Julia ecosystem

Arb is made for rigorous numerics and any functions which do not produce rigorous results are clearly marked as such. This is not the case with Julia in general and you therefore have to be careful when interacting with the ecosystem if you want your results to be completely rigorous. Below are three things which you have to be extra careful with.

Implicit promotion

Julia automatically promotes types in many cases and in particular you have to watch out for temporary non-rigorous values. For example 2(π * Arb(ℯ)) is okay, but 2π * Arb(ℯ) is not:

julia> 2(π * Arb(ℯ))
[17.079468445347134130927101739093148990069777071530229923759202260358457222314 +/- 9.19e-76]

julia> 2π*Arb(ℯ)
[17.079468445347133465140073658536286170170195258393831755094914544308087031794 +/- 7.93e-76]

julia> Arblib.overlaps(2(π * Arb(ℯ)), 2π * Arb(ℯ))
false

Non-rigorous approximations

In many cases this is obvious; for example, Julia's built-in methods for solving linear systems will not produce rigorous results.
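For a rigorous solve one can instead call the wrapper of arb_mat_solve directly. A minimal sketch, assuming solve! follows the naming convention described above:

using Arblib

A = ArbMatrix([2 1; 1 3]; prec = 128)
b = ArbMatrix(reshape([1, 2], 2, 1); prec = 128)
x = similar(b)
Arblib.solve!(x, A, b)  # on success x encloses the exact solution of A * x = b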

TODO: Come up with more examples

Implementation details

In some cases the implementation in Julia implicitly makes certain assumptions to improve performance and this can lead to issues. For example, prior to Julia version 1.8 the minimum and maximum methods in Julia checked for NaN results (on which they short-circuit) using x == x, which works for most numerical types but not for Arb (x == x is only true if the radius is zero). See JuliaLang/julia#36287 and in particular JuliaLang/julia#45932 for more details. Since Julia version 1.8 the minimum and maximum methods work correctly for Arb; for earlier versions of Julia they only work correctly in some cases.

These types of problems are the hardest to find since they are not clear from the documentation; you have to read the implementation, and @which and @less are your friends in these cases.
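For example, == on Arb only returns true when equality can be concluded from the balls:

using Arblib

x = Arb((1, 2))  # ball containing the interval [1, 2], nonzero radius
y = Arb(1)       # exact value, radius zero
x == x           # false: the balls overlap but equality cannot be concluded
y == y           # true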

Example

Here is the naive sine computation example from the Arb documentation in Julia:

using Arblib

function sin_naive!(res::Arb, x::Arb)
    s, t, u = zero(x), zero(x), zero(x)
    tol = one(x)
    Arblib.mul_2exp!(tol, tol, -precision(tol))
    k = 0
    while true
        Arblib.pow!(t, x, UInt(2k + 1))
        Arblib.fac!(u, UInt(2k + 1))
        Arblib.div!(t, t, u)
        Arblib.abs!(u, t)

        if u ≤ tol
            Arblib.add_error!(s, u)
            break
        end
        if iseven(k)
            Arblib.add!(s, s, t)
        else
            Arblib.sub!(s, s, t)
        end
        k += 1
    end
    Arblib.set!(res, s)
end

let prec = 64
    while true
        x = Arb("2016.1"; prec = prec)
        y = zero(x)
        y = sin_naive!(y, x)
        print("Using $(lpad(prec, 5)) bits, sin(x) = ")
        println(Arblib.string_nice(y, 10))
        y < zero(y) && break
        prec *= 2
    end
end

Special functions

Arblib extends the methods from SpecialFunctions.jl with versions from Arb. In some cases the Arb version is more general than the version in SpecialFunctions, for example ellipk is not implemented for complex arguments in SpecialFunctions but it is in Arb. We refer to the Arb documentation for details about the Arb-versions.
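For example (a sketch; it assumes the ellipk methods for Arb and Acb provided by the extension mentioned above):

using Arblib, SpecialFunctions

ellipk(Arb(1 // 2))  # rigorous enclosure for a real argument
ellipk(Acb(1, 1))    # complex argument, not supported by SpecialFunctions itself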

Some methods from SpecialFunctions are, however, not implemented in Arb and thus are not extended; these are mostly scaled versions of functions. Arb does, on the other hand, implement many special functions that are not in SpecialFunctions, and at the moment there is no user-friendly interface for most of them.

Support for multi-threading

Enabling a threaded version of Flint can be done by setting the environment variable NEMO_THREADED=1. Note that this should be set before Arblib.jl is loaded. To set the actual number of threads, use Arblib.flint_set_num_threads(numberofthreads).
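For example:

ENV["NEMO_THREADED"] = "1"  # must happen before `using Arblib`

using Arblib

Arblib.flint_set_num_threads(4)  # let Flint use 4 threads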


arblib.jl's Issues

Why is Acb not a subtype of Complex?

I wonder why I get the following:

julia> Arb <: Real
true

julia> Arb <: AbstractFloat
true

julia> Acb <: Complex
false

julia> Acb <: Complex{AbstractFloat}
false

julia> VERSION
v"1.7.2"

Is this behavior on purpose, or is it a bug?

Printing in IJulia deadlocks

On Julia 1.5.2 in IJulia 1.22 (latest version)

show(Acb(2.3));

deadlocks. In the terminal everything is fine. Does anybody have any clue what is going on here?

ArbMatrix(rand(BigFloat, 5,5))

fails due to missing set! method

ERROR: MethodError: no method matching set!(::Ptr{Arblib.arb_struct}, ::BigFloat)
Closest candidates are:
  set!(::ArbLike, ::Rational) at /home/kalmar/.julia/dev/Arblib/src/constructors.jl:153
  set!(::ArfLike, ::BigFloat) at /home/kalmar/.julia/dev/Arblib/src/arbcall.jl:294
  set!(::ArbLike, ::ArbLike) at /home/kalmar/.julia/dev/Arblib/src/arbcall.jl:294
  ...
Stacktrace:
 [1] setindex!(::Arblib.arb_mat_struct, ::BigFloat, ::Int64, ::Int64) at /home/kalmar/.julia/dev/Arblib/src/matrix.jl:41
 [2] setindex! at /home/kalmar/.julia/dev/Arblib/src/matrix.jl:49 [inlined]
 [3] ArbMatrix(::Array{BigFloat,2}; prec::Int64) at /home/kalmar/.julia/dev/Arblib/src/matrix.jl:114
 [4] ArbMatrix(::Array{BigFloat,2}) at /home/kalmar/.julia/dev/Arblib/src/matrix.jl:110
 [5] top-level scope at REPL[9]:1

(this is a reminder for #82)

What to return from arbcall?

this is just to have a space for discussion on the topic raised here:

@Joel-Dahne wrote:
I think the case of Bool versus Int32 is interesting as well. While many of the int types in Arb represent booleans, some of them are flags with more possible values. It would fit much better in Julia to return a Bool instead of an Int32 when possible, but then we need some way of determining when this can be done. When it occurs as an argument we can use Union{Bool, Int32} or maybe even Integer.

Another solution is to specialize the @arbcall macro on classes of functions (one of them: predicates), e.g.

@arbcall "arb_add(...)" # normal function
@arbcall Predicate "int arb_is_zero(const arb_t x)" iszero

which should call

macro arbcall(ftype::Expr, arbsignature_str::String,
    jl_fname=jlfname(Arbfunction(arbsignature_str)))
    af = Arbfunction(arbsignature_str)
    local T = eval(ftype)
    return jlcode(T, af, jl_fname)
end

function jlcode(::Type{Predicate}, af::Arbfunction, jl_fname=jlfname(af))
    returnT = returntype(af)
    cargs = arguments(af)
    args, kwargs = jlargs(af)

    call = :(
        function $jl_fname($(args...); $(kwargs...))
              res = ccall(Arblib.@libarb($(arbfname(af))),
              $returnT,
              $(Expr(:tuple, ctype.(cargs)...)),
              $(Symbol.(name.(cargs))...))
              return !iszero(res)
        end
    )
end

(I don't like this eval, but I'm not sure what to do about it)

Possibly use ScopedValues for precision (and rounding?)

There is a (currently open) PR for Julia about using ScopedValues for setprecision with BigFloat. If this ends up being merged it seems like a good idea for us to do the same. I could see this being useful in a number of situations.
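A hypothetical sketch of what that could look like (ScopedValue ships with Julia 1.11; the scoped variable below is made up and not part of Arblib):

using Base.ScopedValues
using Arblib

const SCOPED_PRECISION = ScopedValue(256)  # hypothetical replacement for a global default

with(SCOPED_PRECISION => 512) do
    Arb(π; prec = SCOPED_PRECISION[])  # 512 bits inside this scope only
end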

prec / precision asymmetry

This annoys me on a purely aesthetic level:

a = Arb(2; prec = 128)
b = Arb(prec = 2 * precision(a))

This should either be both prec or both precision. I know there is precision in Base, but I think nobody writes generic code using precision anyway.

Wrong name of parsed function

The function acb_hypgeom_si, which computes the sine integral Si in Arb, is named hypgeom! in Arblib, since the parser assumes that si means signed integer. Adding this as an issue here so that I don't forget it.

Make a first release

I think we are getting to a point where we can have a first release.

  • Merge #32
  • Merge #61
  • Merge #54
  • Merge #57
  • Merge #59
  • Create a README outlining our conversion strategy and introducing the types

@kalmarek @Joel-Dahne please amend the list with the things you have in mind.

structs design

julia> using Arblib
[ Info: Precompiling Arblib [fb37089c-8514-4489-9461-98f9c8763369]

julia> x = Arblib.Arf(big(π), 128)
3.141592653589793238462643383279502884197169399375105820974944592307816406286198

julia> Arblib.add!(Arf(128), x, x)
6.283185307179586476925286766559005768390572716594890071716397365518720742691867

julia> using BenchmarkTools

julia> @benchmark Arblib.add!($y, $x, $x)
BenchmarkTools.Trial: 
  memory estimate:  48 bytes
  allocs estimate:  3
  --------------
  minimum time:     202.775 ns (0.00% GC)
  median time:      219.299 ns (0.00% GC)
  mean time:        234.778 ns (0.20% GC)
  maximum time:     43.592 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     530

julia> @benchmark Arblib.set!($y, $x)
BenchmarkTools.Trial: 
  memory estimate:  32 bytes
  allocs estimate:  2
  --------------
  minimum time:     106.172 ns (0.00% GC)
  median time:      114.050 ns (0.00% GC)
  mean time:        121.140 ns (0.31% GC)
  maximum time:     24.958 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     925

julia> @benchmark Arblib.Arb(128)
BenchmarkTools.Trial: 
  memory estimate:  96 bytes
  allocs estimate:  2
  --------------
  minimum time:     18.179 ns (0.00% GC)
  median time:      24.541 ns (0.00% GC)
  mean time:        125.080 ns (46.35% GC)
  maximum time:     101.395 μs (81.59% GC)
  --------------
  samples:          10000
  evals/sample:     998

at the moment we have too many allocations; we need to change the design

incorrect global enclosure

I'm not sure if this is a problem with the Arb docs or not:
http://arblib.org/acb_mat.html#c.acb_mat_eig_global_enclosure
states that the enclosure is given in absolute value (Euclidean norm), however:

julia> A = [
               0.6873474041954415 0.7282180564881044 0.07360652513458521
               0.000835810121029068 0.9256166870757694 0.5363310989411239
               0.07387174694790022 0.4050436025621329 0.20226010388885896
           ];

julia> B = [
               0.8982563031334123 0.3029712969740874 0.8585014523679579
               0.7583002736998279 0.8854763478184455 0.3031103325817668
               0.2319572749472405 0.5769840251057949 0.5119507333628952
           ];

julia> M = AcbMatrix(A + B * im, prec = 64);

julia> let M = M    
           tol = 1e-12
           λ_approx, R_approx = Arblib.approx_eig_qr(M, tol = tol)
           
           ε = Arblib.eig_global_enclosure!(Arblib.Mag(), M, λ_approx, R_approx)
           
           λs = Arblib.eig_simple!(similar(M, size(M,1)), M, λ_approx, R_approx) # to keep the order
           
           @info Arblib.get(ε)
           @info abs.(λs - λ_approx) .< Arb(Arf(ε))
           @info real.(λs - λ_approx) .< Arb(Arf(ε))
           @info imag.(λs - λ_approx) .< Arb(Arf(ε))
       end
[ Info: 6.230695738404469e-13
[ Info: Bool[0, 0, 0]
[ Info: Bool[1, 1, 1]
[ Info: Bool[1, 1, 1]

Flag for enabling warnings when converting from `AbstractFloat`

I find that one of the easiest mistakes to make when working with Arblib in Julia is to accidentally use Float64 for some internal computations. For example for x::Arb it's perfectly fine to do any of the following

2x
x / 2
x^2
π * x
x / π
x^π

But you get a wrong result for, e.g.,

2π * x
x / log(2)
x^(1 / 3) # x^(1 // 3) works though
sqrt(2) * x

One of the common things for all of the wrong examples is that they involve a conversion from AbstractFloat to Arb. The idea I have had is to have some sort of flag or macro which, when enabled, prints a warning whenever an AbstractFloat is converted to Arb. So it would behave something like

function convert(::Type{Arb}, x::AbstractFloat)
    @warn "converting a $(typeof(x)) to Arb, it's possible that this is a mistake."
    return Arb(x)
end

Does this sound interesting to you?

I don't know exactly how this would be implemented, I feel like it should be possible but I don't know exactly how. Any thoughts?
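A hypothetical sketch of the toggle (the flag name is made up; this shows the proposed behavior, not an existing Arblib feature):

const WARN_FLOAT_TO_ARB = Ref(false)  # hypothetical global toggle

function Base.convert(::Type{Arb}, x::AbstractFloat)
    WARN_FLOAT_TO_ARB[] &&
        @warn "converting a $(typeof(x)) to Arb, it's possible that this is a mistake."
    return Arb(x)
end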

Towards v1.0

I have been thinking that it might be time for a 1.0 release. The interface has been fairly stable for some time and I see no reason to stay on 0.x. There are a couple of things I would like to tackle before 1.0

  • Finish the transition to Flint v3
  • Finish the first version of the documentation
  • Reconsider the overloading of some Base methods, in particular union and intersect but also contains
  • Reduce usage of XLike types in favor for XOrRef types in the high level interface

The first two points are more or less self explanatory. For the documentation I have made some progress the last month or so.

For union and intersect I have realized that the way we define them contradicts the version in Base, and is also easily misused. The issue is that Arb subtypes Real, so both union and intersect already have a well defined meaning for such types, which is different from how we currently use them. For example

julia> using Arblib
julia> union(1.0, 2.0)
2-element Vector{Float64}:
 1.0
 2.0
julia> union(Arb(1), 2.0)
2-element Vector{Arb}:
 1.0000000000000000000000000000000000000000000000000000000000000000000000000000
 2.0000000000000000000000000000000000000000000000000000000000000000000000000000
julia> union(Arb(1), Arb(2))
[+/- 2.01]
julia> intersect(1.0, 1.0)
1-element Vector{Float64}:
 1.0
julia> intersect(Arb(1.0), 1.0)
1-element Vector{Arb}:
 1.0000000000000000000000000000000000000000000000000000000000000000000000000000
julia> intersect(Arb(1.0), Arb(1.0))
1.0000000000000000000000000000000000000000000000000000000000000000000000000000

For that reason I think it would be better to use some other name for taking the union or intersection of two Arb or Acb. My proposal would be setunion and setintersection, in line with for example getinterval, getball and setball. But I'm open to other ideas! The method Base.contains also has similar issues, but I haven't thought so much about how to deal with that one.
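A hypothetical sketch of the proposed setunion (union! should wrap arb_union per the naming convention; the name setunion itself is only a proposal):

setunion(x::Arb, y::Arb) = Arblib.union!(Arb(prec = precision(x)), x, y)

setunion(Arb(1), Arb(2))  # a ball containing both inputs, printed as [+/- 2.01]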

Regarding XLike types I have noticed that I very rarely need to use anything other than the XOrRef types with the high-level interface. When I do use them, it is mainly together with the lower-level interface. In most cases the high-level interface already uses XOrRef, but there are some exceptions:

  • Most predicates use XLike
  • hash
  • a couple more spread out throughout the code

The main reason to avoid the use of XLike is that it includes Ptr types and this can lead to unexpected behavior. For example we currently overload hash(x::Ptr{Arblib.arb_struct}) to compute the hash of the arb_struct it points to, but if the pointer is null this gives a segmentation fault. For other pointer types the hash of the null pointer is well defined. For predicates it is for example slightly confusing what iszero(Ptr{Arblib.arb_struct}) means: do we check if the pointer is zero or the thing it points to? For these reasons it seems reasonable to me to keep the use of XLike to the low-level interface as much as possible.

I might come up with more things to handle before 1.0, in that case I'll add them here.

Plotting and subtyping `AbstractFloat`

I want to be able to plot the Number types we define in Arblib.jl directly using Plots.jl. Currently plot([Arb(1)]) fails because float(::Arb) defaults to AbstractFloat(::Arb) which is not defined. There are two ways to solve this, either we define float(x::Arb) = x directly or we make Arb a subtype of AbstractFloat in which case AbstractFloat(::Arb) will work. Similar things happen for Arf.

For Arf I think it makes perfect sense to subtype AbstractFloat, after all it's essentially identical to a BigFloat which does subtype that. For Arb I'm not entirely sure, we could maybe see it as an AbstractFloat with the radius being some extra information. In practice I think this would work quite well, and everything that makes sense on an AbstractFloat I believe makes sense on an Arb as well. The alternative would be to define float(x::Arb) = x, so we would treat it as a float in practice but not subtype it, I'm not sure if that is better in any way. I can note here that float doesn't have to return a subtype of AbstractFloat, for example float(::Complex{Int}) is Complex{Float64}.

For Acb I believe we want to handle it similar to Complex{Arb}, which would mean adding these two plot recipes

RecipesBase.@recipe function f(A::Array{Acb})
    xguide --> "Re(x)"
    yguide --> "Im(x)"
    real.(A), imag.(A)
end

RecipesBase.@recipe function f(x::AbstractArray{T}, y::Array{Acb}) where {T<:Real}
    yguide --> "Re(y)"
    zguide --> "Im(y)"
    x, real.(y), imag.(y)
end

However, I realised that many methods, at least in Julia Base, start by calling float(x) on their input, so it could still be beneficial to define float(x::Acb) = x. I can also mention that it doesn't make sense to have Acb <: AbstractFloat since AbstractFloat <: Real.

Finally a comment about plotting balls. In the above plotting an Arb or an Acb would plot the midpoint of the ball. In many cases this makes sense, when working with high precision balls with small radius which is exactly what Arb is optimized for. This covers most of my uses. However in some cases you would like to plot the whole ball, including the radius. It would be nice if we could come up with a convenient method for doing that as well, possibly through some @userplot recipe.

Arb * ArbMatrix becomes Matrix{Arb} instead of ArbMatrix

Hi, I came across this issue a few times now, so I thought I would mention it here.

julia> Arb(2)*ArbMatrix([1 2; 3 4])
2×2 Matrix{Arb}:
 2.0000000000000000000000000000000000000000000000000000000000000000000000000000  …  4.0000000000000000000000000000000000000000000000000000000000000000000000000000
 6.0000000000000000000000000000000000000000000000000000000000000000000000000000     8.0000000000000000000000000000000000000000000000000000000000000000000000000000

So after multiplying a scalar with a matrix things start to break down, because you don't have an ArbMatrix anymore.
My workaround is using Arblib.addmul!(B, A, c) or Arblib.mul!(B, A, c) depending on the case, but I think this should be added here too because it is a common operation for matrices. I'm not sure about the types (e.g., how to handle cases like Acb * ArbMatrix except by defining separate functions for them), but without typing, something like this would probably suffice:

function Base.:(*)(c, A)
    B = similar(A)
    Arblib.mul!(B, A, c)
    return B
end
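A slightly more restricted variant, to avoid defining * for arbitrary types (a sketch; it assumes Arblib.mul! dispatches to arb_mat_scalar_mul_arb):

function Base.:(*)(c::Arb, A::ArbMatrix)
    B = similar(A)
    Arblib.mul!(B, A, c)
    return B
end

Base.:(*)(A::ArbMatrix, c::Arb) = c * A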

Arblib.setprecision broken for AcbRefMatrix

When trying to update HC.jl the tests fail on

Error During Test at /home/runner/work/HomotopyContinuation.jl/HomotopyContinuation.jl/test/certification_test.jl:148
  Got exception outside of a @test
  TaskFailedException
  
      nested task error: MethodError: no method matching Arblib.AcbRefMatrix(::Arblib.acb_mat_struct, ::Int64)
      
      Closest candidates are:
        Arblib.AcbRefMatrix(::Integer, ::Integer; prec)
         @ Arblib ~/.julia/packages/Arblib/5sqKw/src/matrix.jl:78
        Arblib.AcbRefMatrix(::Arblib.acb_mat_struct; shallow, prec)
         @ Arblib ~/.julia/packages/Arblib/5sqKw/src/types.jl:338
      
      Stacktrace:
        [1] setprecision(M::Arblib.AcbRefMatrix, p::Int64)
          @ HomotopyContinuation ~/work/HomotopyContinuation.jl/HomotopyContinuation.jl/src/certification.jl:625
        [2] set_arb_precision!
          @ ~/work/HomotopyContinuation.jl/HomotopyContinuation.jl/src/certification.jl:629 [inlined]
        [3] extended_prec_certify_solution(F::CompiledSystem{(0x90bb3d0906e1abfd, 1)}, solution_candidate::Vector{ComplexF64}, x̃₀::Vector{ComplexF64}, cert_params::HomotopyContinuation.CertificationParameters, cert_cache::CertificationCache, index::Int64; max_precision::Int64, extended_certificate::Bool)
          @ HomotopyContinuation ~/work/HomotopyContinuation.jl/HomotopyContinuation.jl/src/certification.jl:1080

I currently don't have time to look into the details but wanted to at least open the issue to keep track of it

This `shallow` idea is not working

Setting:

julia> v = AcbVector(3)
3-element AcbVector:
 0
 0
 0

julia> a, b = Acb(3), Acb(5)
(3.0000000000000000000000000000000000000000000000000000000000000000000000000000, 5.0000000000000000000000000000000000000000000000000000000000000000000000000000)

Now I wanted to add a and b and store the result in v[1]:

julia> Arblib.add!(v[1], a, b)

julia> v
3-element AcbVector:
 0
 0
 0

Oh. Not how I would expect things to behave. But then I remember: we had this shallow flag. Let's try that:

julia> v1 = Acb(getindex(v.acb_vec, 1; shallow=true))
0

julia> Arblib.add!(v1, a, b)

julia> v1
8.0000000000000000000000000000000000000000000000000000000000000000000000000000

julia> v[1]
0

Ouch.

Only thing that works:

julia> Arblib.add!(v.acb_vec[1], a, b)

julia> v
3-element AcbVector:
 8.0000000000000000000000000000000000000000000000000000000000000000000000000000
                                                                              0
                                                                              0

Also @kalmarek found this beauty

julia> v[1] = Arb("0.1")
[0.1000000000000000000000000000000000000000000000000000000000000000000000000000 +/- 1.95e-78]
julia> Arblib.add!(getindex(v,1,shallow=true), v[1], Acb(1)); v[1]
[0.06875000000000000000000000000000000000000000000000000000000000000000000000000 +/- 1.95e-78]
julia> Arblib.add!(getindex(v,1,shallow=true), v[1], Acb(1)); v[1]
[0.06679687500000000000000000000000000000000000000000000000000000000000000000000 +/- 1.95e-78]
julia> Arblib.add!(getindex(v,1,shallow=true), v[1], Acb(1)); v[1]
[0.06667480468750000000000000000000000000000000000000000000000000000000000000000 +/- 1.95e-78]
julia> Arblib.add!(getindex(v,1,shallow=true), v[1], Acb(1)); v[1]
[0.06666717529296875000000000000000000000000000000000000000000000000000000000000 +/- 1.95e-78]
julia> Arblib.add!(getindex(v,1,shallow=true), v[1], Acb(1)); v[1]
[0.06666669845581054687500000000000000000000000000000000000000000000000000000000 +/- 1.95e-78]

Similar issues will happen with AcbMatrix and AcbVector :/

Make sure that MPFR and Flint are built in thread safe mode

http://arblib.org/issues.html#thread-safety-and-caches

Arb should be fully threadsafe, provided that both MPFR and FLINT have been built in threadsafe mode.

Arb may cache some data (such as the value of π and Bernoulli numbers) to speed up various computations. In threadsafe mode, caches use thread-local storage.

I am not sure how the thread local caches play together with Julia threads but we should keep an eye on this later on (or ask Frederik to clarify)

fine-tune the ispredicate function

e.g. when a returned Cint is/is not a Bool.

functions missed by the current implementation:

  • arb_mat_solve
  • eig_simple, etc. where the return value symbolizes success/failure to perform a task;

Functions that return non-boolean Cint:

  • *_cmp

eig_simple_rump!(...; side=:left) segfaults inside libarb

julia> A = [
               0.6873474041954415 0.7282180564881044 0.07360652513458521
               0.000835810121029068 0.9256166870757694 0.5363310989411239
               0.07387174694790022 0.4050436025621329 0.20226010388885896
           ];

julia> B = [
               0.8982563031334123 0.3029712969740874 0.8585014523679579
               0.7583002736998279 0.8854763478184455 0.3031103325817668
               0.2319572749472405 0.5769840251057949 0.5119507333628952
           ];

julia> M = AcbMatrix(A + B * im, prec = 64);

julia> λs1, _ = Arblib.eig_simple_rump(M, side = :right)
(Acb[[1.30153309871612730 +/- 2.32e-18] + [1.80765893841642298 +/- 6.06e-18]*I, [0.682258049701900519 +/- 4.76e-19] + [0.394077524731353095 +/- 5.11e-19]*I, [-0.168566953257957990 +/- 5.20e-19] + [0.093946921166976970 +/- 5.44e-19]*I], Acb[[0.4449264898696659941 +/- 5.65e-20] + [0.4634294069416568042 +/- 6.91e-20]*I [-0.6798370500753717779 +/- 1.38e-20] - [0.2620245033973304097 +/- 2.86e-20]*I [0.631991508186129629 +/- 3.22e-19] + [-0.003670352145081746 +/- 3.99e-19]*I; [0.3906072184390861097 +/- 1.14e-20] + [0.5459402715947035059 +/- 3.27e-20]*I [0.4639872344939433396 +/- 3.81e-20] + [0.3039649915535414256 +/- 7.11e-20]*I [-0.210579599428613159 +/- 4.58e-19] + [-0.587107155572761272 +/- 3.24e-19]*I; [0.1739472525947631981 +/- 4.43e-20] + [0.3261766047144971842 +/- 4.30e-20]*I [-0.05774479193929384316 +/- 7.28e-21] + [0.4312026289542442071 +/- 6.97e-20]*I [-0.07234894806558699552 +/- 8.31e-22] + [0.6558941721622767361 +/- 2.06e-20]*I])

julia> λs2, _ = Arblib.eig_simple_rump(M, side=:left)

signal (11): Segmentation fault
in expression starting at REPL[338]:1
acb_mat_solve at /workspace/srcdir/arb-2.18.1/acb_mat/solve.c:17
Allocations: 63278843 (Pool: 63255393; Big: 23450); GC: 71
[1]    468126 segmentation fault (core dumped)  julia --sysimage ~/.julia/sys_custom.so --project

Auto export all defined functions

Should we automatically add an export statement for each generated function in arbcall? I would say yes. This makes it much more convenient to write a one-off script using the library

len detection in arbcall is broken

this commit broke it:
57fff06
@saschatimme can you explain what was the reason to remove Ptr{arb_struct} from jltype(ca::Carg{ArbVector}) (and the same for AcbVector)? (I really need to dust off my memory about the function of all of those Unions...)

In any case

Union{ArbVector,cstructtype(ArbVector),Ptr{arb_struct}},

needs to be kept in sync with jltype

also: none of the tests caught it; we need to add

            (
                "int _acb_vec_is_zero(acb_srcptr vec, slong len)",
                [
                    :(vec::$(Union{AcbVector, Arblib.acb_vec_struct})),
                ],
                [
                    Expr(:kw, :(len::Integer), :(length(vec)))
                ]
            )

to jlargs testset.

Reconsider finalizer attachment

Currently Arb, Arf, etc. look all like this

# c memory layout
struct arf_struct
    exponent::UInt      # fmpz
    size::UInt          # mp_size_t
    mantissa1::UInt     # mantissa_struct of length 128
    mantissa2::UInt
end

# mutable struct containing precision
mutable struct Arf <: Real
    arf::arf_struct
    prec::Int
   
   function Arf(si::Int64; prec::Integer=DEFAULT_PRECISION[])
        res = new(arf_struct(0,0,0,0), prec)
        init_set!(res, si)
        finalizer(clear!, res)
        return res
    end
end

In particular, the Arf struct is mutable since we need to attach a finalizer somewhere. We ended up with this design since it was the only one which results in just one allocation on Julia <= 1.4. Starting from Julia 1.5, however, it is possible to also have

# c memory layout
mutable struct arf_struct
    exponent::UInt      # fmpz
    size::UInt          # mp_size_t
    mantissa1::UInt     # mantissa_struct of length 128
    mantissa2::UInt

    function arf_struct(...)
        ... attach finalizer etc.
    end
end

# struct containing precision
struct Arf <: Real
    arf::arf_struct
    prec::Int
end

result in only one allocation.
I think this design question is deeply tied together with how we want to handle indexing into Arb-owned matrices. Using the second approach, we can return an Arf without any extra allocations with the caveat that the memory lifetime is controlled by the matrix (but this is the same in C). Our current approach on the other hand would always trigger an allocation (either due to making a copy and attaching a finalizer, or constructing a new mutable struct).

Any thoughts?

default show for Arb might be confusing

This is an interval containing 0:

julia> let x = sin(Arb(π))
           @info "" x contains(x, 0) Arblib.midref(x) Float64(Arblib.radref(x))
       end
┌ Info: 
│   x = [+/- 2.83e-77]
│   contains(x, 0) = true
│   Arblib.midref(x) = 1.0969174409793520767073235159733943661910882381141135060401716459409529154108e-77
└   Float64(Arblib.radref(x)) = 1.730607217577946e-77

This is an interval containing 1:

julia> let x = sin(Arb(π)/2)
           @info "sin(Arb(π))" x contains(x, 1) Arblib.midref(x) Float64(Arblib.radref(x))
       end
┌ Info: sin(Arb(π))
│   x = [1.000000000000000000000000000000000000000000000000000000000000000000000000000 +/- 1.74e-77]
│   contains(x, 1) = true
│   Arblib.midref(x) = 0.99999999999999999999999999999999999999999999999999999999999999999999999999999
└   Float64(Arblib.radref(x)) = 8.737900781332737e-78

This is an interval containing 1.5:

julia> let x = 1.5sin(Arb(π)/2)
           @info "1.5sin(Arb(π)/2)" x contains(2x, 3) Arblib.midref(x) Float64(Arblib.radref(x))
       end
┌ Info: 1.5sin(Arb(π)/2)
│   x = [1.5000000000000000000000000000000000000000000000000000000000000000000000000000 +/- 4.77e-77]
│   contains(2x, 3) = true
│   Arblib.midref(x) = 1.5
└   Float64(Arblib.radref(x)) = 3.0379188354575523e-77

let's try the same with set_interval!:

julia> let x = Arblib.set_interval!(Arb(), Arf(-0.5), Arf(0.5))
           @info "[-0.5, 0.5]" x contains(x, 0) Arblib.midref(x) Float64(Arblib.radref(x))
       end
┌ Info: [-0.5, 0.5]
│   x = [+/- 0.501]
│   contains(x, 0) = true
│   Arblib.midref(x) = 0
└   Float64(Arblib.radref(x)) = 0.5000000009313226

julia> let x = Arblib.set_interval!(Arb(), Arf(0.5), Arf(1.5))
           @info "[0.5, 1.5]" x contains(x, 1) Arblib.midref(x) Float64(Arblib.radref(x))
       end
┌ Info: [0.5, 1.5]
│   x = [1e+0 +/- 0.501]
│   contains(x, 1) = true
│   Arblib.midref(x) = 1
└   Float64(Arblib.radref(x)) = 0.5000000009313226

julia> let x = Arblib.set_interval!(Arb(), Arf(1.0), Arf(2.0))
           @info "[1.0, 2.0]" x contains(2x, 3) Arblib.midref(x) Float64(Arblib.radref(x))
       end
┌ Info: [1.0, 2.0]
│   x = [+/- 2.01]               # this is unexpected
│   contains(2x, 3) = true
│   Arblib.midref(x) = 1.5
└   Float64(Arblib.radref(x)) = 0.5000000009313226

moreover:

julia> let x = Arblib.set_interval!(Arb(), Arf(1.0), Arf(2.0))
           @info "[1.0, 2.0]" Arblib.string_decimal(x) Arblib.string_nice(x)
       end
┌ Info: [1.0, 2.0]
│   Arblib.string_decimal(x) = "1.5 +/- 0.5"
└   Arblib.string_nice(x) = "[+/- 2.01]"

@arbcall_str

using Pkg
Pkg.activate(@__DIR__)
using Arblib
import Arblib: Mag, arb_rnd

function jlfname(arbfname,
        prefixes=("arf", "arb", "acb", "mag"),
        suffixes=("si", "ui", "d", "arf", "arb");
        inplace=true)
    strs = split(arbfname, "_")
    k = findfirst(s -> s ∉ prefixes, strs)
    l = findfirst(s -> s ∉ suffixes, reverse(strs))
    @show k,l
    fname = join(strs[k:end-l+1], "_")
    return inplace ? Symbol(fname, "!") : Symbol(fname)
end

const Ctypes = Dict{String, DataType}(
    "void"      => Cvoid,
    "int"       => Cint,
    "slong"     => Clong,
    "ulong"     => Culong,
    "double"    => Cdouble,
    "arf_t"     => Arf,
    "arb_t"     => Arb,
    "acb_t"     => Acb,
    "mag_t"     => Mag,
    "arf_rnd_t" => arb_rnd
)


macro arbcall_str(str)
    @show str
    header_regex = r"(?<returntype>\w+)\s+(?<arbfunction>[\w_]+)\((?<args>.*)\)"

    m = match(header_regex, str)

    returnT = Ctypes[m[:returntype]]

    arbf = String(m[:arbfunction])
    jlf = jlfname(arbf, inplace=true)

    args = match.(r"(?<type>\w+)\s+(?<name>\w+)",
        strip.(split(replace(m[:args], "const" =>""), ","))
        )
    arg_names = Symbol.([m[:name] for m in args])
    jl_types = [Ctypes[m[:type]] for m in args]
    c_types = [T ∈ (Arf, Arb, Acb, Mag) ? Ref{T} : T for T in jl_types]

    jl_args = [:($a::$T) for (a, T) in zip(arg_names, jl_types)]

    if :prec in arg_names
        k = findfirst(==(:prec), arg_names)
        @assert c_types[k] == Clong
        p = esc(:prec)
        if first(jl_types) ∈ (Arf, Arb, Acb)
            default = :(precision($(arg_names[1])))
        else
            default = :(Arblib.DEFAULT_PRECISION[])
        end
        jl_args[k] = Expr(:kw, :($p::Integer), default)
    end

    if :rnd in arg_names
        k = findfirst(==(:rnd), arg_names)
        @assert c_types[k] == arb_rnd
        r = esc(:rnd)
        jl_args[k] = Expr(:kw, :($r::Union{arb_rnd, RoundingMode}), :(RoundNearest))
    end

    res = first(arg_names)

    return :(
        function $jlf($(jl_args...))
            ccall(Arblib.@libarb($arbf),
            $returnT,
            $(Expr(:tuple, c_types...)),
            $(arg_names...))
            return $res
        end
        )
end
julia> @macroexpand @arbcall_str("void arf_mag_set_ulp(mag_t res, const arf_t x,slong prec)")
:(function Main.set_ulp!(var"#255#res"::Mag, var"#256#x"::Arf, prec::Main.Integer=(Main.Arblib).DEFAULT_PRECISION[])
      #= /home/kalmar/.julia/dev/Arblib/tmp.jl:74 =#
      ccall(("arf_mag_set_ulp", "/home/kalmar/.julia/dev/Arblib/deps/usr/lib/libarb.so"), Nothing, (Ref{Mag}, Ref{Arf}, Int64), var"#255#res", var"#256#x", prec)
      #= /home/kalmar/.julia/dev/Arblib/tmp.jl:78 =#
      return var"#255#res"
  end)

julia> @macroexpand @arbcall_str("int arf_add_si(arf_t res, const arf_t x,slong y,slong prec,arf_rnd_t rnd)")
:(function Main.add!(var"#257#res"::Arf, var"#258#x"::Arf, var"#259#y"::Int64, prec::Main.Integer=Main.precision(var"#257#res"), rnd::Main.Union{Main.arb_rnd, Main.RoundingMode}=Main.RoundNearest)
      #= /home/kalmar/.julia/dev/Arblib/tmp.jl:74 =#
      ccall(("arf_add_si", "/home/kalmar/.julia/dev/Arblib/deps/usr/lib/libarb.so"), Int32, (Ref{Arf}, Ref{Arf}, Int64, Int64, arb_rnd), var"#257#res", var"#258#x", var"#259#y", prec, rnd)
      #= /home/kalmar/.julia/dev/Arblib/tmp.jl:78 =#
      return var"#257#res"
  end)

and

julia> arbcall"int arf_add_si(arf_t res, const arf_t x,slong y,slong prec,arf_rnd_t rnd)"
add! (generic function with 3 methods)

julia> x = Arf(5.1)
5.0999999999999996447286321199499070644378662109375

julia> res = zero(x)
0

julia> add!(res, x, 4)
9.0999999999999996447286321199499070644378662109375

Many allocations when calculating hypergeometric functions

julia> @btime Arblib.hypgeom_bessel_jy!(Arb(0), Arb(0), Arb(10), Arb(1 // 100))
  47.990 μs (190 allocations: 14.17 KiB)
[2.691138339236344421096392716256098006835716550746321140061519723025122548483e-30 +/- 6.78e-106]

I cannot figure out why there are so many allocations on the Julia side.

Mega-Flint

There is work in progress for merging Flint with Arb, Calcium and generic-rings, see flintlib/flint#1218. Once FLINT_jll has been updated to reflect this we will have to update Arblib as well.

The changes we need to do seem to be minimal. I tried building a local version of FLINT_jll pointing to the updated branch of Flint and the main changes I needed to do was to remove the dependence on Arb_jll and replace the @libarb macro with @libflint. In addition to this I needed to make a few updates to account for changes in the master version of Arb, but that is tangential.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

rand allocations

as observed in #82:

julia> @btime ArbMatrix(rand(Arb(prec=256), 100,100));
  4.085 ms (50007 allocations: 3.21 MiB)

julia> @btime ArbMatrix(rand(Arb, 100,100));
  4.507 ms (60005 allocations: 4.27 MiB)

i.e. 6 allocations per entry despite

julia> @btime Arb(rand($BigFloat));
  271.410 ns (5 allocations: 336 bytes)

Also:

julia> @btime ArbMatrix(rand($BigFloat, 100, 100));
  2.647 ms (30006 allocations: 2.06 MiB)

julia> @btime ArbMatrix(Arb.(rand($BigFloat, 100, 100)));
  3.293 ms (50008 allocations: 3.28 MiB)

julia> @btime ArbMatrix(rand($Arb, 100, 100));
  4.521 ms (60005 allocations: 4.27 MiB)

i.e. I botched random API :P

Random Generation

I am trying to generate random matrices with Arblib. I notice that there is a random generation function in Arb for real-valued random matrices:

void arb_mat_randtest(arb_mat_t mat, flint_rand_t state, slong prec, slong mag_bits).

But I can't find its counterpart in Arblib. I have tried Arblib.randtest! and other options. Julia always returns error messages like this: "ERROR: UndefVarError: randtest! not defined".

Does Arblib have a Julia wrapper for random generation?

Result depends on value of out-variable

Consider the code below

x = Acb(Arblib.load_string!(Arb(),"1 -2 20000001 -1f"))
res1 = Acb(Arblib.load_string!(Arb(),"1 -1 8000001 -1c"))
res2 = Acb(Arblib.load_string!(Arb(),"0 0 0 0"))
Arblib.real_abs!(res1, x, false)
Arblib.real_abs!(res2, x, false)

You would expect res1 and res2 to have the same value, however

julia> res1
[+/- 1.01]
julia> res2
[+/- 0.501]

I encountered this issue when trying to track down some bugs in the integration. So far I have not been able to track down exactly where things go wrong so I thought I would put it here in case you find something before I do.

Cannot set Int128

julia> using Arblib

julia> x = typemax(Int128)
170141183460469231731687303715884105727

julia> y = Arb()
0

julia> y[] = x
ERROR: InexactError: trunc(Int64, 170141183460469231731687303715884105727)
Stacktrace:
 [1] throw_inexacterror(::Symbol, ::Type{Int64}, ::Int128) at ./boot.jl:558
 [2] checked_trunc_sint at ./boot.jl:580 [inlined]
 [3] toInt64 at ./boot.jl:629 [inlined]
 [4] Int64 at ./boot.jl:708 [inlined]
 [5] convert at ./number.jl:7 [inlined]
 [6] cconvert at ./essentials.jl:388 [inlined]
 [7] set! at /Users/sascha/.julia/packages/Arblib/MiZZg/src/arbcall.jl:295 [inlined]
 [8] setindex!(::Arb, ::Int128) at /Users/sascha/.julia/packages/Arblib/MiZZg/src/types.jl:296
 [9] top-level scope at REPL[4]:1

A*A' significantly slower than A*A for ArbMatrices

Hi,

I was trying to see if I could use this instead of BigFloats for some of my computations, for which I (among other things like Cholesky decompositions and computing the minimum eigenvalue) need to do A*A' for matrices of size up to about 300x300, which is currently the bottleneck.

I came across the following remarkable difference:

julia> A = ArbMatrix(rand(BigFloat,100,100));

julia> @benchmark $A*$A
BenchmarkTools.Trial:
  memory estimate:  48 bytes
  allocs estimate:  1
  --------------
  minimum time:     128.750 ms (0.00% GC)
  median time:      132.684 ms (0.00% GC)
  mean time:        133.845 ms (0.00% GC)
  maximum time:     148.832 ms (0.00% GC)
  --------------
  samples:          38
  evals/sample:     1

julia> @benchmark $A*$A'
BenchmarkTools.Trial:
  memory estimate:  249.02 MiB
  allocs estimate:  4080001
  --------------
  minimum time:     3.125 s (22.31% GC)
  median time:      3.331 s (26.56% GC)
  mean time:        3.331 s (26.56% GC)
  maximum time:     3.537 s (30.31% GC)
  --------------
  samples:          2
  evals/sample:     1

So multiplying with a transpose is significantly slower than normal multiplication, maybe because of all the allocations which do happen in the multiplication with the transpose.
Comparing it to BigFloats:

julia> B = rand(BigFloat,100,100);

julia> @benchmark $B*$B
BenchmarkTools.Trial:
  memory estimate:  218.35 MiB
  allocs estimate:  4100002
  --------------
  minimum time:     911.017 ms (11.87% GC)
  median time:      930.252 ms (11.95% GC)
  mean time:        933.218 ms (11.67% GC)
  maximum time:     962.344 ms (12.72% GC)
  --------------
  samples:          6
  evals/sample:     1

julia> @benchmark $B*$B'
BenchmarkTools.Trial:
  memory estimate:  218.35 MiB
  allocs estimate:  4100002
  --------------
  minimum time:     1.228 s (8.92% GC)
  median time:      1.237 s (9.04% GC)
  mean time:        1.238 s (8.64% GC)
  maximum time:     1.255 s (9.08% GC)
  --------------
  samples:          5
  evals/sample:     1

So for BigFloats, multiplying with the transpose is also a bit slower, but only 30% compared to about 2500%. Another way would be to use normal matrices with Arb entries:

julia> C = rand(Arb,100,100);

julia> @benchmark $C*$C
BenchmarkTools.Trial:
  memory estimate:  124.66 MiB
  allocs estimate:  2040002
  --------------
  minimum time:     1.572 s (14.81% GC)
  median time:      1.807 s (20.96% GC)
  mean time:        1.872 s (25.84% GC)
  maximum time:     2.237 s (37.54% GC)
  --------------
  samples:          3
  evals/sample:     1

julia> @benchmark $C*$C'
BenchmarkTools.Trial:
  memory estimate:  124.66 MiB
  allocs estimate:  2040002
  --------------
  minimum time:     2.014 s (11.50% GC)
  median time:      2.029 s (19.55% GC)
  mean time:        2.166 s (22.57% GC)
  maximum time:     2.456 s (34.14% GC)
  --------------
  samples:          3
  evals/sample:     1

So this is more comparable to the BigFloats, only a factor 2 difference in both cases. (Interestingly, it only uses about half the memory/allocations)

To me it seems like the ArbMatrices fall back to the generic matrix multiplication in the case with the transpose. Is there an easy way around this?
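One possible workaround is to materialize the transpose as an ArbMatrix before multiplying, so that dispatch hits arb_mat_mul (a hedged sketch; transpose! should wrap arb_mat_transpose per the naming convention):

At = ArbMatrix(size(A, 2), size(A, 1); prec = precision(A))
Arblib.transpose!(At, A)  # materialize A' as an ArbMatrix
A * At                    # ArbMatrix * ArbMatrix, no generic fallback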

PS: I also tried Nemo (which has similar speed for A*A and A*A'), but I also need the minimum eigenvalue and the Cholesky decomposition, which are currently not in Nemo. So I would need to convert back and forth to BigFloats, or compute those myself. With Nemo I'm not sure how to do this, but with Arblib the conversion is easy. Hence why I would very much like a similar speed for A*A' as for A*A.

non-operability with Nemo arbs

I'm migrating my old project from Nemo to Arblib and I wanted to compare numerics (just to double check).

julia> using Arblib, Nemo
[ Info: Precompiling Arblib [fb37089c-8514-4489-9461-98f9c8763369]

Welcome to Nemo version 0.19.0

Nemo comes with absolutely no warranty whatsoever


julia> RR = Nemo.ArbField(256)
Real Field with 256 bits of precision and error bounds

julia> x = RR("0.1")
[0.1000000000000000000000000000000000000000000000000000000000000000000000000000 +/- 1.95e-78]

julia> Arb(RR("0.1"))
ERROR: MethodError: no method matching set!(::Arb, ::arb)
Closest candidates are:
  set!(::ArbLike, ::Union{Int128, UInt128}) at /home/kalmar/.julia/dev/Arblib/src/setters.jl:29
  set!(::ArbLike, ::Union{Ptr{Arblib.mag_struct}, Mag, MagRef, Arblib.mag_struct, BigFloat}) at /home/kalmar/.julia/dev/Arblib/src/setters.jl:35
  set!(::ArbLike, ::Rational; prec) at /home/kalmar/.julia/dev/Arblib/src/setters.jl:41
  ...
Stacktrace:
 [1] Arb(::arb; prec::Int64) at /home/kalmar/.julia/dev/Arblib/src/constructors.jl:13
 [2] Arb(::arb) at /home/kalmar/.julia/dev/Arblib/src/constructors.jl:13
 [3] top-level scope at REPL[4]:1

julia> function Arblib.set!(res::Arblib.Arb, x::Nemo.arb)
           ccall(Arblib.@libarb("arb_set"), Cvoid, (Ref{Arblib.Arb}, Ref{Nemo.arb}), res, x)
           return res
       end

julia> Arb(x)
realloc(): invalid pointer

signal (6): Aborted
in expression starting at REPL[6]:1
gsignal at /usr/lib/libc.so.6 (unknown line)
abort at /usr/lib/libc.so.6 (unknown line)
__libc_message at /usr/lib/libc.so.6 (unknown line)
malloc_printerr at /usr/lib/libc.so.6 (unknown line)
realloc at /usr/lib/libc.so.6 (unknown line)
jl_realloc at /buildworker/worker/package_linux64/build/src/gc.c:3318
flint_realloc at /workspace/srcdir/flint2-2.7.0/memory_manager.c:118
arf_set at /workspace/srcdir/arb-2.19.0/arf.h:397 [inlined]
arb_set at /workspace/srcdir/arb-2.19.0/arb/set.c:17
set! at ./REPL[5]:2 [inlined]
#Arb#60 at /home/kalmar/.julia/dev/Arblib/src/constructors.jl:13
Arb at /home/kalmar/.julia/dev/Arblib/src/constructors.jl:13
unknown function (ip: 0x7f82f5b9b952)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2231 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2398
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1690 [inlined]
do_call at /buildworker/worker/package_linux64/build/src/interpreter.c:117
eval_value at /buildworker/worker/package_linux64/build/src/interpreter.c:206
eval_stmt_value at /buildworker/worker/package_linux64/build/src/interpreter.c:157 [inlined]
eval_body at /buildworker/worker/package_linux64/build/src/interpreter.c:566
jl_interpret_toplevel_thunk at /buildworker/worker/package_linux64/build/src/interpreter.c:660
jl_toplevel_eval_flex at /buildworker/worker/package_linux64/build/src/toplevel.c:840
jl_toplevel_eval_flex at /buildworker/worker/package_linux64/build/src/toplevel.c:790
jl_toplevel_eval_flex at /buildworker/worker/package_linux64/build/src/toplevel.c:790
jl_toplevel_eval_in at /buildworker/worker/package_linux64/build/src/toplevel.c:883
eval at ./boot.jl:331
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2214 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2398
eval_user_input at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/REPL/src/REPL.jl:134
repl_backend_loop at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/REPL/src/REPL.jl:195
start_repl_backend at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/REPL/src/REPL.jl:180
#run_repl#37 at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/REPL/src/REPL.jl:292
run_repl at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/REPL/src/REPL.jl:288
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2231 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2398
#807 at ./client.jl:399
jfptr_YY.807_45002.clone_1 at /opt/julia-1.5.3/lib/julia/sys.so (unknown line)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2214 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2398
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1690 [inlined]
do_apply at /buildworker/worker/package_linux64/build/src/builtins.c:655
jl_f__apply_latest at /buildworker/worker/package_linux64/build/src/builtins.c:705
#invokelatest#1 at ./essentials.jl:710 [inlined]
invokelatest at ./essentials.jl:709 [inlined]
run_main_repl at ./client.jl:383
exec_options at ./client.jl:313
_start at ./client.jl:506
jfptr__start_53898.clone_1 at /opt/julia-1.5.3/lib/julia/sys.so (unknown line)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2214 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2398
unknown function (ip: 0x401931)
unknown function (ip: 0x401533)
__libc_start_main at /usr/lib/libc.so.6 (unknown line)
unknown function (ip: 0x4015d4)
Allocations: 8662130 (Pool: 8660124; Big: 2006); GC: 10
[1]    1859597 abort (core dumped)  julia --project

I'm not sure what is causing this, i.e. is it a feature, or is it a bug?

Enable GitHub page for documentation

The documentation seems to be building correctly now, it is available at the gh-pages branch. I believe the next step is to point GitHub pages to this branch and I don't think I have access to this, so you would have to make the changes @kalmarek . Following the documentation I found the description for how to do it here.

ArbVector (first try)

Just to let you know, I tried the following:

const libarb = Nemo.libarb
import Nemo.acb_struct

mutable struct AcbVector <: AbstractVector{acb_struct}
    ptr::Ptr{acb_struct}
    length::Int
    precision::Int

    function AcbVector(n::Integer, precision::Integer)
        v = new(
            ccall((:_acb_vec_init, libarb), Ptr{acb_struct}, (Clong,), n),
            n,
            precision,
        )
        finalizer(clear!, v)
        return v
    end
end

Base.cconvert(::Type{Ptr{acb_struct}}, acb_v::AcbVector) = acb_v.ptr
Base.size(acb_v::AcbVector) = (acb_v.length,)
Base.precision(acb_v::AcbVector) = acb_v.precision

function clear!(acb_v::AcbVector)
    ccall(
        (:_acb_vec_clear, libarb),
        Cvoid,
        (Ptr{acb_struct}, Clong),
        acb_v,
        length(acb_v),
    )
end

Base.@propagate_inbounds function Base.getindex(acb_v::AcbVector, i::Integer)
    @boundscheck checkbounds(acb_v, i)
    return unsafe_load(acb_v.ptr, i)
end

_get_ptr(acb_v::AcbVector, i::Int = 1) =
    acb_v.ptr + (i - 1) * sizeof(acb_struct)

function AcbVector(v::AbstractVector{acb}, p = prec(parent(first(v))))
    acb_v = AcbVector(length(v), p)
    for (i, val) in zip(eachindex(acb_v), v)
        ccall(
            (:acb_set, libarb),
            Cvoid,
            (Ptr{acb_struct}, Ref{acb}),
            _get_ptr(acb_v, i),
            val,
        )
    end
    return acb_v
end

function approx_eig_qr!(v::AcbVector, R::acb_mat, A::acb_mat)
    ccall(
        (:acb_mat_approx_eig_qr, libarb),
        Cint,
        (
            Ptr{acb_struct},
            Ptr{Cvoid},
            Ref{acb_mat},
            Ref{acb_mat},
            Ptr{Cvoid},
            Int,
            Int,
        ),
        v,
        C_NULL,
        R,
        A,
        C_NULL,
        0,
        prec(parent(A)),
    )
    return v
end

and it works pretty well (this is still Nemo based). Eigenvalues in Nemo are incorrect, so if you need them quickly, here they are:


function (C::AcbField)(z::acb_struct)
    res = zero(C)
    ccall((:acb_set, libarb), Cvoid, (Ref{acb}, Ref{acb_struct}), res, z)
    return res
end

function LinearAlgebra.eigvals(A::acb_mat)
    n = nrows(A)
    CC = base_ring(A)
    p = prec(CC)
    λ_approx = AcbVector(n, p)
    R_approx = similar(A)
    v = approx_eig_qr!(λ_approx, R_approx, A)

    λ = AcbVector(n, p)
    b = ccall(
        (:acb_mat_eig_multiple, libarb),
        Cint,
        (Ptr{acb_struct}, Ref{acb_mat}, Ptr{acb_struct}, Ref{acb_mat}, Int),
        λ,
        A,
        λ_approx,
        R_approx,
        p,
    )

    return CC.(λ)
end


EDIT: Updated with precision field
