sciml / recursivearraytools.jl

Tools for easily handling objects like arrays of arrays and deeper nestings in scientific machine learning (SciML) and other applications

Home Page: https://docs.sciml.ai/RecursiveArrayTools/stable/

License: Other

Julia 100.00%
vector array recursion sciml scientific-machine-learning

recursivearraytools.jl's Introduction

RecursiveArrayTools.jl

Join the chat at https://julialang.zulipchat.com (#sciml-bridged) · Global Docs

ColPrac: Contributor's Guide on Collaborative Practices for Community Packages · SciML Code Style

RecursiveArrayTools.jl is a set of tools for dealing with recursive arrays like arrays of arrays.

Tutorials and Documentation

For information on using the package, see the stable documentation. Use the in-development documentation for the unreleased features.

Example

using RecursiveArrayTools
a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
vA = VectorOfArray(a)
vB = VectorOfArray(b)

vA .* vB # Now all standard array stuff works!

a = (rand(5), rand(5))
b = (rand(5), rand(5))
pA = ArrayPartition(a)
pB = ArrayPartition(b)

pA .* pB # Now all standard array stuff works!
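
Both wrappers can be converted back to contiguous arrays when needed; several of the issues below rely on this pattern. A short sketch continuing the example above (only the convert route used elsewhere in this document is assumed):

convert(Array, vA)   # 3×3 Matrix{Int64}; the inner vectors become columns
collect(pA)          # plain Vector{Float64} of length 10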


recursivearraytools.jl's Issues

`recursive_bottom_eltype` breaks when given non-numeric type or value

Hi all,

I'd like to discuss the implementation of the recursive_bottom_eltype function found in utils.jl, lines 79-80.

I don't think the context is really necessary here. If you are curious, I added it at the end. I just happened to stumble across this.

So here's what I don't like:
The recursive_bottom_eltype function seems to be written with only Numbers in mind, while at the same time not explicitly disallowing non-numeric values.
This leaves us with a function that works for numbers, but for Strings (and, I suspect, all other non-numeric types) fails with a StackOverflowError, because the only definition applicable to non-numeric types is the one that calls itself recursively.

Steps to reproduce this error:
Run recursive_bottom_eltype("I love to eat up stack space") in your Julia shell.

How I would like it:
Instead of failing with a StackOverflowError the function should either directly throw an adequate error for non-numeric values, or it should be extended to also work for other types as well.

And here is where I would like to hear your suggestions. I don't know this Package really at all, so I can't tell which way would be the better one.

I would offer to implement a solution myself, after hearing your feedback on which way would be more suitable for this package.
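
To make the discussion concrete, here is one possible extension (a sketch only; the actual definitions live in utils.jl and may differ):

# Roughly the current shape: recurse on eltype, with a Number base case.
recursive_bottom_eltype(a) = recursive_bottom_eltype(eltype(a))
recursive_bottom_eltype(::Type{T}) where {T<:Number} = T
# Hypothetical extra base case: stop when a type is its own eltype,
# e.g. eltype(String) == Char and eltype(Char) == Char, so Strings terminate.
recursive_bottom_eltype(::Type{T}) where {T} =
    eltype(T) === T ? T : recursive_bottom_eltype(eltype(T))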

As promised, the context:
I am trying to implement a dynamic solver (using DiffEqFlux) that uses a custom (not yet built) DSL. The idea is that the end user does not need to learn Julia or any of the packages involved, but can model their problem using this DSL.
I am at the beginning of this, so rather than a DSL I just use strings that are evaluated during solving via the Meta module. (I know, I know, performance and security; this is just to get things going and to figure out other parts of my project.) So I ended up passing arrays of strings into the solving function of DiffEqFlux (more specifically, into the problem for the solving function) rather than arrays of numbers. Somehow diffeq_fd ends up calling recursive_bottom_eltype on the u0 or u array (not too sure from looking at their code). But as explained above, strings cause the recursive_bottom_eltype function to recurse infinitely. And thus, here I am complaining ;P

Please enable CartesianIndices for VectorOfArray

To escape "AI Shape Hell" and map across shapes like a cthulu-esque ameboid monstrosity, I hacked a way to use CartesianIndices with VectorOfArray. Here's the code. It's pretty lame since I've only been doing julia for less than a week but perhaps could be useful for this module

Test Case:

v = VectorOfArray([rand(20), rand(10,10), rand(3,3,3)])

function voa_indices(voa)
    indices = CartesianIndex[]
    # Prepend the array number to each inner index so a single CartesianIndex
    # can address any element of the VectorOfArray.
    for array_number in 1:length(voa)
        for ci in CartesianIndices(voa[array_number])
            push!(indices, CartesianIndex(array_number, Tuple(ci)...))
        end
    end
    return indices
end

for i in voa_indices(v)
    @show i
    @show v[i]
    println()
end

Fix so test case passes:

@inline Base.getindex(VA::AbstractVectorOfArray{T,N}, I::CartesianIndex) where {T, N} = VA.u[Tuple(I)[1]][Tuple(I)[2:end]...]

Thanks for your work, Julia definitely seems more powerful than Python.

don't override display

You are overloading display in order to customize output. This is the wrong way to do it — see the manual on custom pretty printing.

To customize REPL output, you should override Base.show(io::IO, x::MyType) and/or Base.show(io::IO, ::MIME"text/plain", x::MyType), as explained in the manual.

Overriding display directly will break IJulia, Juno, and other environments that have a custom display mechanism.
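
For reference, a minimal sketch of the recommended pattern with a hypothetical type:

struct MyType
    data::Vector{Float64}
end

# Compact one-line form, used when the value is embedded in other output:
Base.show(io::IO, x::MyType) = print(io, "MyType(", length(x.data), " values)")

# Verbose form used by the REPL and other text/plain displays:
function Base.show(io::IO, ::MIME"text/plain", x::MyType)
    println(io, "MyType with ", length(x.data), " values:")
    show(io, x.data)
end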

Outer product of ArrayPartitions do not produce "matrix like" objects.

Expected: the outer product of two n-dimensional ArrayPartitions produces something of shape (n, n). Got: an (n*n,)-shaped array of partial outer products.

julia> using RecursiveArrayTools

julia> xap = ArrayPartition([1.0], [2.0, 3.0])
([1.0], [2.0, 3.0])

julia> xap*xap'
([1.0 2.0 3.0], [2.0 4.0 6.0; 3.0 6.0 9.0])

julia> size(ans)
(9,)

Implementation of linear algebra for ArrayPartition

The following MWE pertains to the implementation of vec: if the elements inside the ArrayPartition are vectors, all works as it should. However, if the ArrayPartition contents are matrices, then the size of the return from vec is incorrect.

using NLsolve
using RecursiveArrayTools

function mymodel(F, vars)
    for i in 1:2
        x = vars.x[i]    
        F.x[i][1,1] = (x[1,1]+3)*(x[1,2]^3-7)+18.0
        F.x[i][1,2] = sin(x[1,2]*exp(x[1,1])-1)
        F.x[i][2,1] = (x[2,1]+3)*(x[2,2]^3-7)+19.0
        F.x[i][2,2] = sin(x[2,2]*exp(x[2,1])-3)
    end
end

# To show that the function works
F = ArrayPartition([0.0 0.0; 0.0 0.0],[0.0 0.0; 0.0 0.0])
u0= ArrayPartition([0.1 1.2; 0.1 1.2], [0.1 1.2; 0.1 1.2])
result = mymodel(F, u0) 
vec(u0)

# To show the NLsolve error that results with ArrayPartition: 
nlsolve(mymodel, u0)

This may be related to the way that linear algebra was implemented for the fix to this issue: #44 (comment)

Some DifferentialEquations.jl solvers fail when using `ArrayPartition`

Note the use of ArrayPartition below

julia> using DifferentialEquations,RecursiveArrayTools

julia> function lorenz(du,u,p,t)
           du[1] = 10.0*(u[2]-u[1])
           du[2] = u[1]*(28.0-u[3]) - u[2]
           du[3] = u[1]*u[2] - (8/3)*u[3]
       end
lorenz (generic function with 1 method)

julia> u0 = ArrayPartition([1.0,0.0],[0.0])
([1.0, 0.0], [0.0])

julia> tspan = (0.0,100.0)
(0.0, 100.0)

julia> prob = ODEProblem(lorenz,u0,tspan)
ODEProblem with uType ArrayPartition{Float64,Tuple{Array{Float64,1},Array{Float64,1}}} and tType Float64. In-place: true
timespan: (0.0, 100.0)
u0: [1.0, 0.0][0.0]

julia> sol = solve(prob,Tsit5());

julia> sol = solve(prob,AutoTsit5(Rosenbrock23()));
ERROR: MethodError: no method matching ldiv!(::LinearAlgebra.LU{Float64,Array{Float64,2}}, ::ArrayPartition{Float64,Tuple{Array{Float64,1},Array{Float64,1}}})
Closest candidates are:
  ldiv!(::LinearAlgebra.Transpose{#s561,#s560} where #s560<:LinearAlgebra.LowerTriangular where #s561, ::AbstractArray{T,1} where T) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/triangular.jl:1217
  ldiv!(::LinearAlgebra.Transpose{#s561,#s560} where #s560<:LinearAlgebra.LowerTriangular where #s561, ::AbstractArray{T,1} where T, ::AbstractArray{T,1} where T) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/triangular.jl:1201
  ldiv!(::LinearAlgebra.Transpose{#s561,#s560} where #s560<:LinearAlgebra.UnitLowerTriangular where #s561, ::AbstractArray{T,1} where T) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/triangular.jl:1235
  ...

(Everything works fine without ArrayPartition, i.e. u0 = [1.0, 0.0, 0.0])

I'm using DifferentialEquations v5.3.1, RecursiveArrayTools v0.18.3, and Julia 1.0.1 on Ubuntu 16.04.

Apparent compatibility issue with CuArrays

Array broadcast fusion not working as expected with CuArrays

using RecursiveArrayTools, CuArrays, CUDAnative
a = ArrayPartition(([1.0f0] |> cu,[2.0f0] |> cu,[3.0f0] |> cu))
b = ArrayPartition(([0.0f0] |> cu,[0.0f0] |> cu,[0.0f0] |> cu))
@. a + CUDAnative.pow(b, 2f0)

throws:

ERROR: LoadError: GPU compilation failed, try inspecting generated code with any of the @device_code_... macros
CompilerError: could not compile #19(CuArrays.CuKernelState, CuDeviceArray{Float32,1,CUDAnative.AS.Global}, Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},getfield(Base.Broadcast, Symbol("##1#2")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Tuple{Base.OneTo{Int64}},typeof(+),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}}}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##7#8")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}},getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##11#12"))}},getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##15#16"))}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##3#4"))}}}}},Tuple{Base.Broadcast.Extruded{CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Float32}}); passing and using non-bitstype argument

  • argument_type = Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},getfield(Base.Broadcast, Symbol("##1#2")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Tuple{Base.OneTo{Int64}},typeof(+),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}}}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##7#8")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}},getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##11#12"))}},getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##15#16"))}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##3#4"))}}}}},Tuple{Base.Broadcast.Extruded{CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Float32}}
  • argument = 4
    Stacktrace:
    [1] check_invocation(::CUDAnative.CompilerContext, ::LLVM.Function) at >/home/jack/.julia/packages/CUDAnative/EsWDI/src/compiler/validation.jl:30
    [2] #compile_function#78(::Bool, ::Function, ::CUDAnative.CompilerContext) at ./logging.jl:319
    [3] compile_function at /home/jack/.julia/packages/CUDAnative/EsWDI/src/compiler/driver.jl:56 [inlined]
    [4] #cufunction#77(::Base.Iterators.Pairs{Symbol,getfield(GPUArrays, Symbol("##19#20")),Tuple{Symbol},NamedTuple{(:inner_f,),Tuple{getfield(GPUArrays, Symbol("##19#20"))}}}, ::Function, ::CUDAdrv.CuDevice, ::Any, ::Any) at /home/jack/.julia/packages/CUDAnative/EsWDI/src/compiler/driver.jl:22
    [5] (::getfield(CUDAnative, Symbol("#kw##cufunction")))(::NamedTuple{(:inner_f,),Tuple{getfield(GPUArrays, Symbol("##19#20"))}}, ::typeof(cufunction), ::CUDAdrv.CuDevice, ::Function, ::Type) at ./none:0
    [6] macro expansion at /home/jack/.julia/packages/CUDAnative/EsWDI/src/execution.jl:219 [inlined]
    [7] _cuda(::CUDAnative.KernelWrapper{getfield(GPUArrays, Symbol("##19#20"))}, ::getfield(GPUArrays, Symbol("##19#20")), ::Tuple{}, ::NamedTuple{(:blocks, :threads),Tuple{Tuple{Int64},Tuple{Int64}}}, ::CuArrays.CuKernelState, ::CuDeviceArray{Float32,1,CUDAnative.AS.Global}, ::Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},getfield(Base.Broadcast, Symbol("##1#2")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Tuple{Base.OneTo{Int64}},typeof(+),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}}}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##7#8")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}},getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##11#12"))}},getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##15#16"))}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##3#4"))}}}}},Tuple{Base.Broadcast.Extruded{CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{CuDeviceArray{Float32,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Float32}}) at /home/jack/.julia/packages/CUDAnative/EsWDI/src/execution.jl:177
    [8] macro expansion at ./gcutils.jl:87 [inlined]
    [9] _gpu_call(::Function, ::CuArray{Float32,1}, ::Tuple{CuArray{Float32,1},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},getfield(Base.Broadcast, Symbol("##1#2")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Tuple{Base.OneTo{Int64}},typeof(+),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}}}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##7#8")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}},getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##11#12"))}},getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##15#16"))}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##3#4"))}}}}},Tuple{Base.Broadcast.Extruded{CuArray{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{CuArray{Float32,1},Tuple{Bool},Tuple{Int64}},Float32}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /home/jack/.julia/packages/CuArrays/F96Gk/src/gpuarray_interface.jl:68
    [10] gpu_call(::Function, ::CuArray{Float32,1}, ::Tuple{CuArray{Float32,1},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},getfield(Base.Broadcast, Symbol("##1#2")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Tuple{Base.OneTo{Int64}},typeof(+),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}}}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##7#8")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}},getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##11#12"))}},getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##15#16"))}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##3#4"))}}}}},Tuple{Base.Broadcast.Extruded{CuArray{Float32,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{CuArray{Float32,1},Tuple{Bool},Tuple{Int64}},Float32}}}, ::Int64) at /home/jack/.julia/packages/GPUArrays/3E1qk/src/abstract_gpu_interface.jl:151
    [11] gpu_call at /home/jack/.julia/packages/GPUArrays/3E1qk/src/abstract_gpu_interface.jl:128 [inlined]
    [12] copyto! at /home/jack/.julia/packages/GPUArrays/3E1qk/src/broadcast.jl:14 [inlined]
    [13] copyto! at ./broadcast.jl:768 [inlined]
    [14] copy at ./broadcast.jl:744 [inlined]
    [15] materialize at ./broadcast.jl:724 [inlined]
    [16] broadcast(::getfield(Base.Broadcast, Symbol("##1#2")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Tuple{Base.OneTo{Int64}},typeof(+),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}}}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##7#8")){Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}},getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##9#10")){getfield(Base.Broadcast, Symbol("##11#12"))}},getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##13#14")){getfield(Base.Broadcast, Symbol("##15#16"))}},getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##5#6")){getfield(Base.Broadcast, Symbol("##3#4"))}}}}}, ::CuArray{Float32,1}, ::CuArray{Float32,1}, ::Float32) at ./broadcast.jl:702
    [17] materialize(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(+),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{ArrayPartition},Nothing,typeof(CUDAnative.pow),Tuple{ArrayPartition{Float32,Tuple{CuArray{Float32,1},CuArray{Float32,1},CuArray{Float32,1}}},Float32}}}}) at /home/jack/.julia/packages/RecursiveArrayTools/V9Xjw/src/array_partition.jl:293
    [18] top-level scope at none:0
    [19] include at ./boot.jl:317 [inlined]
    [20] include_relative(::Module, ::String) at ./loading.jl:1038
    [21] include(::Module, ::String) at ./sysimg.jl:29
    [22] include(::String) at ./client.jl:388
    [23] top-level scope at none:0
    in expression starting at /home/jack/Dropbox/lab/Dendronotus/julia/scantest.jl:53

Support for OffsetArrays

Currently this gives:

julia> using OffsetArrays

julia> VectorOfArray([OffsetArray(rand(2), 0:1)])
ERROR: size not supported for arrays with axes (0:1,); see http://docs.julialang.org/en/latest/devdocs/offset-arrays/

but in principle I guess it could work (maybe Compat.axes should be used instead).
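
For reference, a small sketch of the idea (axes generalizes size to offset arrays, so index computations based on it remain valid):

using OffsetArrays
A = OffsetArray(rand(2), 0:1)
axes(A, 1)   # indices run 0:1, which is exactly what size-based code mishandles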

similar(v) broken for v::VectorOfArray

similar(v) for v::VectorOfArray{T, N, A} gives you an instance of AbstractArray{T,N} instead of another VectorOfArray.
MWE:

a = VectorOfArray([[1,2,3,4,5],[6,7,8,9,10]])
similar(a)

gives you a 5×2 Array{Int64,2}.

I guess you can write something like

@inline Base.similar(VA::AbstractVectorOfArray) = VectorOfArray([similar(VA[i]) for i in eachindex(VA)])

to deal with this problem.

ArrayPartition element-type

Right now it's always Any. That is fine because broadcast and the split usage don't actually use the slow indexing, so it doesn't really affect DiffEq (except for the algorithms which cannot broadcast yet). However, it would be nice to make this "smarter" by using some promoted type.
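
The idea in a minimal sketch (hypothetical, not the package's actual constructor):

x = ([1, 2], [3.0, 4.0])
T = promote_type(eltype.(x)...)   # Float64 rather than Any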

Vector of StaticArrays inside ArrayPartition

I have tried to use a Vector of StaticArrays inside an ArrayPartition. This led to a non-trivial problem with DifferentialEquations.jl when I tried to use such an ArrayPartition as the state vector for an ODE: the resulting element type of the ArrayPartition happened to be Any, which broke the recursive_unitless_bottom_eltype function used somewhere inside DifferentialEquations.jl.

In order to illustrate this, consider the following example:

using RecursiveArrayTools, StaticArrays

a = ArrayPartition((zeros(Float64, 10), zeros(SVector{3,Float64}, 10)))

The output of typeof(a) is

ArrayPartition{Any,Tuple{Array{Float64,1},Array{SArray{Tuple{3},Float64,1,3},1}}}

And if you try to do recursive_unitless_bottom_eltype(a), you get a never-ending recursion:

ERROR: StackOverflowError:
Stacktrace:
[1] recursive_unitless_bottom_eltype(::Type{Any}) at ~/.julia/packages/RecursiveArrayTools/BVS4M/src/utils.jl:86 (repeats 79984 times)

In the example I provided, it would be logical for the element type of the ArrayPartition to be Float64. It seems we can get that behaviour if we slightly change the constructor for ArrayPartition:

function ArrayPartition(x::S, ::Type{Val{copy_x}}=Val{false}) where {S<:Tuple,copy_x}
  T = promote_type(recursive_bottom_eltype.(x)...)
  if copy_x
    return ArrayPartition{T,S}(copy.(x))
  else
    return ArrayPartition{T,S}(x)
  end
end

The only change I propose to make is to replace eltype with recursive_bottom_eltype in the type promotion. Potentially, this solution can deal with any kind of crazy nesting of arrays.

setindex not implemented

Right now, one may read from the partition via .x[...], but not write:

using RecursiveArrayTools
u0=ArrayPartition(rand(2,2),rand(3,3),rand(4,4))
u0.x[1]=rand(2,2)

ERROR: MethodError: no method matching setindex!(::Tuple{Array{Float64,2},Array{Float64,2},Array{Float64,2}}, ::Array{Float64,2}, ::Int64)

For in-place updates f(du, u, p, t), it would be useful to be able to do the assignment to du via the .x[...] index rather than the normal linear index.
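
As a stopgap, note that while the tuple itself is immutable, its array entries are mutable, so in-place assignment into a partition already works (using the u0 from above):

u0.x[1] .= rand(2,2)          # broadcast-assign into the first partition
copyto!(u0.x[2], rand(3,3))   # equivalent alternative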

Multi-level broadcast of ArrayPartition allocates

But single-level is fine:

using RecursiveArrayTools, Test

xce0 = ArrayPartition(zeros(2),[0.])
xcde0 = copy(xce0)
function foo(y, x)
	y .= y .+ x
	nothing
end
foo(xcde0, xce0)
@test 0 == @allocated foo(xcde0, xce0)
function foo(y, x)
	y .= y .+ 2 .* x
	nothing
end
foo(xcde0, xce0)
@test 0 == @allocated foo(xcde0, xce0)

using BenchmarkTools
@btime foo(xcde0, xce0)

ap = ArrayPartition([100], [4, 1.3], ArrayPartition(ArrayPartition([1], [1, 4.4]), [0.4, 2, 1, 45]));
ap2 = recursivecopy(ap)
@btime foo(ap, ap2)

recursivecopy! for sparse inputs

This currently does not work:

using DiffEqBase,OrdinaryDiffEq
function fun1(t,u,du)
    A = sparse([1,2],[2,1],[-1.0im,-1.0im])
    A_mul_B!(du,A,u)
end
p = ODEProblem(fun1,sparsevec(complex.([1.0,0.0])),(0.0,3pi));
r = solve(p,Vern8(),adaptive=false,dt=3pi/100);

ArrayPartition broadcast() assumes input and output types are the same

broadcast(f, x::ArrayPartition) always tries to return a result of the same type as x, which can create some odd behaviors and errors:

julia> using RecursiveArrayTools

julia> x = ArrayPartition([1, 2], [3.0, 4.0])
RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}(([1, 2], [3.0, 4.0]))

julia> broadcast(y -> y + pi, x)
ERROR: InexactError()
Stacktrace:
 [1] convert(::Type{Int64}, ::Float64) at ./float.jl:679
 [2] setindex! at ./multidimensional.jl:247 [inlined]
 [3] macro expansion at ./broadcast.jl:154 [inlined]
 [4] macro expansion at ./simdloop.jl:73 [inlined]
 [5] macro expansion at ./broadcast.jl:147 [inlined]
 [6] _broadcast!(::##5#6, ::Array{Int64,1}, ::Tuple{Tuple{Bool}}, ::Tuple{Tuple{Int64}}, ::Array{Int64,1}, ::Tuple{}, ::Type{Val{0}}, ::CartesianRange{CartesianIndex{1}}) at ./broadcast.jl:139
 [7] broadcast_c! at ./broadcast.jl:211 [inlined]
 [8] broadcast!(::Function, ::Array{Int64,1}, ::Array{Int64,1}) at ./broadcast.jl:204
 [9] macro expansion at /home/rdeits/.julia/v0.6/RecursiveArrayTools/src/array_partition.jl:102 [inlined]
 [10] broadcast!(::##5#6, ::RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}, ::RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}) at /home/rdeits/.julia/v0.6/RecursiveArrayTools/src/array_partition.jl:100
 [11] broadcast(::##5#6, ::RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}) at /home/rdeits/.julia/v0.6/RecursiveArrayTools/src/array_partition.jl:107

julia> broadcast(y -> true, x)
RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}(([1, 1], [1.0, 1.0]))

Fixing this seems challenging (and possibly not worthwhile right now), but I figured I'd bring it up anyway.
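
For comparison, Base's broadcasting machinery computes the output element type from the function and the argument types instead of reusing the input's type; something along these lines (a sketch against the Julia 1.x API):

Base.Broadcast.combine_eltypes(y -> y + pi, ([1, 2],))   # Float64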

ArrayPartition copyto! dimension mismatch with Array{T,N} where N>1

Using Julia 1.1.1:

julia> using RecursiveArrayTools

julia> x = ArrayPartition(rand(10,10),rand(10));

julia> y = zero(x);

julia> copyto!(y,x)
ERROR: DimensionMismatch("array could not be broadcast to match destination")
Stacktrace:
 [1] copyto!(::ArrayPartition{Float64,Tuple{Array{Float64,2},Array{Float64,1}}}, ::ArrayPartition{Float64,Tuple{Array{Float64,2},Array{Float64,1}}}) at ./broadcast.jl:456
 [2] top-level scope at none:0

I've noticed there's no error when the ArrayPartition is constructed using only vectors:

julia> x = ArrayPartition(rand(2), rand(10)+im.*randn(10));

julia> y = zero(x);

julia> copyto!(y,x)

Seems to be related to #67, but I've opened a new issue because the error I get is different.

Immutable FieldVectors from StaticArrays.jl

SArray from StaticArrays works fine, but with a custom immutable type that is a subtype of FieldVector, recursivecopy! fails.

using StaticArrays
using RecursiveArrayTools

a = [SVector(1.,2.)]
b = similar(a)
recursivecopy!(b, a)


struct Bad <: FieldVector{2,Float64}
    a::Float64
    b::Float64
end

a = [Bad(1.,2.)]
b = similar(a)
recursivecopy!(b, a)

This results in

type Bad is immutable

Stacktrace:
 [1] copy!(::Bad, ::Bad) at ./multidimensional.jl:805
 [2] recursivecopy!(::Array{Bad,1}, ::Array{Bad,1}) at /.../.julia/v0.6/RecursiveArrayTools/src/utils.jl:21
 [3] include_string(::String, ::String) at ./loading.jl:515

According to the documentation of StaticArrays.jl

it is also worth noting that FieldVectors may be mutable or immutable

Thus, the following works fine.

mutable struct NotBad <: FieldVector{2,Float64}
    a::Float64
    b::Float64
end
a = [NotBad(1.,2.)]
b = zeros(a)
recursivecopy!(b, a)

I came across this while using OrdinaryDiffEq.jl with u of the form [Bad(1.,2.)]. However, this was working some months ago; see SciML/OrdinaryDiffEq.jl#50.
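
One possible direction (a sketch of the dispatch idea, not the package's implementation) is to fall back to a plain copyto! whenever the element type is immutable, since immutables are copied by value:

function copy_elements!(b::AbstractArray{T}, a::AbstractArray{T}) where {T}
    if isbitstype(T)
        copyto!(b, a)              # immutables (e.g. FieldVectors) copy by value
    else
        for i in eachindex(a, b)
            b[i] = deepcopy(a[i])  # mutable elements need their own copies
        end
    end
    return b
end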

Error when broadcast-assigning scalar to ArrayPartition

I don't know enough about broadcasting or the AbstractArray interface to know what's actually going on here. This is with version 1.0.0.

julia> a = ArrayPartition(zeros(10))
([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],)

julia> a .= 1.
ERROR: MethodError: copyto!(::ArrayPartition{Float64,Tuple{Array{Float64,1}}}, ::Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{0},Tuple{Base.OneTo{Int64}},typeof(identity),Tuple{Float64}}) is ambiguous. Candidates:
  copyto!(dest::ArrayPartition, bc::Base.Broadcast.Broadcasted) in RecursiveArrayTools at /Users/student/.julia/packages/RecursiveArrayTools/vxlTS/src/array_partition.jl:247
  copyto!(dest::AbstractArray, bc::Base.Broadcast.Broadcasted{#s623,Axes,F,Args} where Args<:Tuple where F where Axes where #s623<:Base.Broadcast.AbstractArrayStyle{0}) in Base.Broadcast at broadcast.jl:803
Possible fix, define
  copyto!(::ArrayPartition, ::Base.Broadcast.Broadcasted{#s623,Axes,F,Args} where Args<:Tuple where F where Axes where #s623<:Base.Broadcast.AbstractArrayStyle{0})
Stacktrace:
 [1] materialize!(::ArrayPartition{Float64,Tuple{Array{Float64,1}}}, ::Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{0},Nothing,typeof(identity),Tuple{Float64}}) at ./broadcast.jl:756
 [2] top-level scope at none:0

Change in result of vec(sol) in 0.7 as compared to 0.6.

The way vec is dealing with ODESolution seems to have changed.

using OrdinaryDiffEq, ParameterizedFunctions

f1 = @ode_def LotkaVolterraTest begin
  dx = a*x - x*y
  dy = -3y + x*y
end a
u0 = [1.0;1.0]
tspan = (0.0,10.0)
p = [1.5]
prob1 = ODEProblem(f1,u0,tspan,p)
t = collect(range(0, stop=10, length=200))
sol = solve(prob1,Tsit5();saveat=t,save_everystep=false,dense=false)
vec(sol)

Gives an array of length 200 and does not unwrap the 2-element arrays as it should (and as it did in 0.6).

Error with VectorOfArray and Juno display

Something seems to have changed in how VectorOfArray interacts with the Juno display system. If you do:

using RecursiveArrayTools

# This throws the error
x1 = VectorOfArray([rand(10) for i = 1:2])
# This does not
x1;

# This works
x1[1]

You get the following error in Juno:

UndefVarError: Atom not defined
defaultrepr(::VectorOfArray{Float64,2,Array{Array{Float64,1},1}}, ::Bool) at types.jl:79
defaultrepr at types.jl:79 [inlined]
render(::Juno.Inline, ::VectorOfArray{Float64,2,Array{Array{Float64,1},1}}) at init.jl:5
render(::Juno.Inline, ::Juno.Copyable) at view.jl:49
render at display.jl:19 [inlined]
displayandrender(::VectorOfArray{Float64,2,Array{Array{Float64,1},1}}) at showdisplay.jl:127
(::getfield(Atom, Symbol("##119#124")){String})() at eval.jl:102
macro expansion at essentials.jl:742 [inlined]
(::getfield(Atom, Symbol("##115#120")))(::Dict{String,Any}) at eval.jl:86
handlemsg(::Dict{String,Any}, ::Dict{String,Any}) at comm.jl:164
(::getfield(Atom, Symbol("##19#21")){Array{Any,1}})() at task.jl:259

Reshape

I get an error reshaping these lazy arrays, because their length means something else. Is there a reason not to fix this, perhaps like this?

using RecursiveArrayTools

A = VectorOfArray([rand(4) for i=1:3] )
size(A) == (4,3)

reshape(A, (2,6))  # error

function Base._reshape(parent::VectorOfArray, dims::Base.Dims)
    n = prod(size(parent))
    prod(dims) == n || Base._throw_dmrs(n, "size", dims)
    Base.__reshape((parent, IndexStyle(parent)), dims)
end

reshape(A, (2,6)) # no error! 

flatten & reshape

It's not really a party until reshape's recursive cousin sees just how flat flatten flattens.

allocating linearalgebra

Hi,

I am surprised by the following behavior. Can you please tell me if this is expected?

Thank you,

Best regards,

using BenchmarkTools, RecursiveArrayTools

xce0 = ArrayPartition(zeros(2),[0.])
xcde0 = copy(xce0)

function foo(y, x)
	y .= y .+ 2 .* x
end

@btime foo($xcde0, $xce0)

which gives

julia> @btime foo($xcde0, $xce0)
  30.138 ns (3 allocations: 48 bytes)

broadcast assign an array to ArrayPartition

import RecursiveArrayTools: ArrayPartition

a = ArrayPartition([1], [2])

# works
a[1] = 10
a[2] = 20

# does not
a .= [10, 20]

which throws DimensionMismatch("destination axes (Base.OneTo(1),) are not compatible with source axes (Base.OneTo(2),)")

error on 0.7 beta: ArgumentError: Package Statistics [...] is required but does not seem to be installed

I ran into this problem originally when trying to install DifferentialEquations.jl, but it seems to be an issue with this package. On trying to load the package I get an error around a "Statistics" package, which I'm unable to install separately using the package manager.

(v0.7) pkg> add RecursiveArrayTools
  Updating registry at `~/.julia/registries/Uncurated`
  Updating git-repo `https://github.com/JuliaRegistries/Uncurated.git`
 Resolving package versions...
  Updating `~/.julia/environments/v0.7/Project.toml`
  [731186ca] + RecursiveArrayTools v0.16.2
  Updating `~/.julia/environments/v0.7/Manifest.toml`
  [3cdcf5f2] + RecipesBase v0.4.0
  [731186ca] + RecursiveArrayTools v0.16.2
  [ae029012] + Requires v0.5.2

julia> using RecursiveArrayTools
[ Info: Precompiling module RecursiveArrayTools
ERROR: LoadError: ArgumentError: Package Statistics [10745b16-79ce-11e8-11f9-7d13ad32a3b2] is required but does not seem to be installed:
 - Run `Pkg.instantiate()` to install all recorded dependencies.

Stacktrace:
 [1] _require(::Base.PkgId) at ./loading.jl:951
 [2] require(::Base.PkgId) at ./loading.jl:879
 [3] require(::Module, ::Symbol) at ./loading.jl:874
 [4] include at ./boot.jl:317 [inlined]
 [5] include_relative(::Module, ::String) at ./loading.jl:1075
 [6] include(::Module, ::String) at ./sysimg.jl:29
 [7] top-level scope at none:0
 [8] eval at ./boot.jl:319 [inlined]
 [9] eval(::Expr) at ./client.jl:394
 [10] top-level scope at ./none:3 [inlined]
 [11] top-level scope at ./<missing>:0
in expression starting at /home/curry/.julia/packages/RecursiveArrayTools/aCa0/src/RecursiveArrayTools.jl:5
ERROR: Failed to precompile RecursiveArrayTools to /home/curry/.julia/compiled/v0.7/RecursiveArrayTools/6B7t.ji.
Stacktrace:
 [1] error at ./error.jl:33 [inlined]
 [2] compilecache(::Base.PkgId) at ./loading.jl:1205
 [3] _require(::Base.PkgId) at ./loading.jl:1007
 [4] require(::Base.PkgId) at ./loading.jl:879
 [5] require(::Module, ::Symbol) at ./loading.jl:874

Vector of differently sized arrays

julia> size(VectorOfArray([randn(2), randn(4)]))
(2, 2)

This is surprising, at the least, and risks subtle bugs if one iterates over the VectorOfArray as if it were a multi-dimensional array.

Incorrect display

julia> using RecursiveArrayTools

julia> RecursiveArrayTools.ArrayPartition{Tuple{Vector{Float64}, Vector{Float64}}}(([0.0499001, 0.0995482],[0.497004, -0.0090711]))
RecursiveArrayTools.ArrayPartition{Tuple{Array{Float64,1},Array{Float64,1}}} with arrays::
   0.0499001
   0.497004 
 #undef     
 #undef

The incorrect display can lead to a segmentation fault when bounds checking is off.

-> % julia -q --check-bounds=no
julia> using RecursiveArrayTools; RecursiveArrayTools.ArrayPartition{Tuple{Vector{Float64}, Vector{Float64}}}(([0.0499001, 0.0995482],[0.497004, -0.0090711]))
RecursiveArrayTools.ArrayPartition{Tuple{Array{Float64,1},Array{Float64,1}}} with arrays::

signal (11): Segmentation fault
....

`ldiv!` on ArrayPartition allocates

Currently ldiv! on ArrayPartition allocates. This could in theory call into a generic backsolve which uses one part of the ArrayPartition at a time to be non-allocating.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

mean(x::VectorOfArrays, dims=i) off by factor of size(x,1)

julia> using RecursiveArrayTools
julia> using Statistics
julia> x = VectorOfArray([[i+j for i=j:j+2] for j=1:2])
2-element Array{Array{Int64,1},1}:
 [2, 3, 4]
 [4, 5, 6]

julia> mean(x,dims=1), mean(Array(x),dims=1)
([9.0 15.0], [3.0 5.0])

julia> mean(x,dims=2), mean(Array(x),dims=2)
([9.0; 12.0; 15.0], [3.0; 4.0; 5.0])

ArrayPartition within ArrayPartition

Is using an ArrayPartition within another ArrayPartition supported? I'm having the following problem:

julia> using StaticArrays, RecursiveArrayTools

julia> x = ArrayPartition(ArrayPartition(@SMatrix [1.1 1.2; 1.3 1.4]), SVector(1.2))
([1.1 1.2; 1.3 1.4], [1.2])

julia> y = 1.1 * x
([1.21, 1.43, 1.32, 1.54], [1.32])

julia> typeof(x.x[1])
ArrayPartition{Float64,Tuple{SArray{Tuple{2,2},Float64,2,4}}}

julia> typeof(y.x[1])
Array{Float64,1}

Somehow the type of the underlying array is being changed. I would have expected that array type to remain the same.

Initialisation of an empty VectorOfArrays

It was difficult for me to figure out how to construct an empty VectorOfArray.
Maybe the documentation could be improved here? (Or I am not experienced enough...)

This line constructs an empty array of arrays
a_empty = Array{Array{Float64,1}}([])
but the following line crashes
vv = VectorOfArray( a_empty )
with the error

ERROR: BoundsError: attempt to access 0-element Array{Array{Float64,N} where N,1} at index [1]
Stacktrace:
[1] getindex at .\array.jl:728 [inlined]
[2] VectorOfArray(::Array{Array{Float64,N} where N,1}) at C:\Users\steff.juliapro\JuliaPro_v1.2.0-1\packages\RecursiveArrayTools\kOAyM\src\vector_of_array.jl:13 [3] top-level scope at none:0

Using the other constructor works, i.e.
v = VectorOfArray(a_empty , (1,1))
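
For anyone landing here, both routes in one place (the size-hint constructor is the one the report says works):

using RecursiveArrayTools
a_empty = Vector{Vector{Float64}}()   # concretely typed empty vector of vectors
v = VectorOfArray(a_empty, (1, 1))    # the size-hint constructor handles empties
push!(v, [1.0, 2.0])                  # assuming push! support, it can then be filled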

Broadcast type-stability for ArrayPartition

Just saw your blog post, and I'm excited to try out the ArrayPartition type. But I'm seeing some type instability for a simple example when broadcasting. Am I doing something wrong?

               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: https://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.6.0-pre.beta.461 (2017-05-09 18:23 UTC)
 _/ |\__'_|_|_|\__'_|  |  release-0.6/518b9d92e4* (fork: 1 commits, 7 days)
|__/                   |  x86_64-linux-gnu

julia> using RecursiveArrayTools

julia> x = ArrayPartition([1, 2], [3.0, 4.0])
RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}(([1, 2], [3.0, 4.0]))

julia> f(y) = y + 1
f (generic function with 1 method)

julia> f.(x)
RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}(([2, 3], [4.0, 5.0]))

julia> @code_warntype f.(x)
Variables:
  #self#::Base.#broadcast
  f::#f
  B::Tuple{RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}}
  A::RecursiveArrayTools.ArrayPartition{_} where _

Body:
  begin  # line 106:
      A::RecursiveArrayTools.ArrayPartition{_} where _ = (Core._apply)(RecursiveArrayTools.ArrayPartition, $(Expr(:new, Base.Generator{Tuple{Array{Int64,1},Array{Float64,1}},Base.#similar}, :($(QuoteNode(similar))), :((Core.getfield)((Base.getfield)(B, 1)::RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}, :x)::Tuple{Array{Int64,1},Array{Float64,1}}))))::RecursiveArrayTools.ArrayPartition{_} where _
      (RecursiveArrayTools.broadcast!)(f::#f, A::RecursiveArrayTools.ArrayPartition{_} where _, (Core.getfield)(B::Tuple{RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}}}, 1)::RecursiveArrayTools.ArrayPartition{Tuple{Array{Int64,1},Array{Float64,1}}})::Any
      return A::RecursiveArrayTools.ArrayPartition{_} where _
  end::RecursiveArrayTools.ArrayPartition{_} where _

access by UnitRange not implemented/performance issue with .x

Access linearly works, but accessing via range does not:

u0=ArrayPartition(rand(1,2),rand(1,3))
u0[1:4]

DimensionMismatch("output array is the wrong size; expected (Base.OneTo(4),), got (5,)")

Although it can be avoided by something like u0[:][1:4]

Somewhat related, there seems to be quite a noticeable performance hit when accessing the solution's .x tuple. e.g.

using DifferentialEquations,RecursiveArrayTools


function eqn_flat(du,u,p,t)
    du .= 0.3 .* u
end
    
function eqn(du,u,p,t)
    for i in 1:RecursiveArrayTools.npartitions(u)
        du.x[i] .= 0.3 .* u.x[i]
    end
end

u0 = ArrayPartition([rand(2,2) for i in 1:20]...)
u0f = rand(2,2,20)

tspan=(0.0,30.0)

prob = ODEProblem(eqn,u0,tspan)
prob_f = ODEProblem(eqn_flat,u0f,tspan)
sol = solve(prob,Tsit5());
sol_f = solve(prob_f,Tsit5());

if one then wants to do further processing on, e.g., the first 10 2-by-2 matrices of the solution,

function process(sol)
    f=[sol(tx)[:,:,j] for tx in 0:0.01:30, j in 1:10]
end

function process_x(sol)
    f=[sol(tx).x[j] for tx in 0:0.01:30, j in 1:10]
end

function process_x_all(sol)
    f=[reshape(sol(tx)[:][1:40],2,2,10) for tx in 0:0.01:30]
end

using BenchmarkTools

@btime process(sol_f);

67.081 ms (1050096 allocations: 39.59 MiB)
(Julia's default matrix method)

@btime process_x(sol);

254.091 ms (1649858 allocations: 84.89 MiB)
(access via .x tuple)

if we use : and change the 2D comprehension to 1D, using process_x_all:

@btime process_x_all(sol);

26.270 ms (182996 allocations: 12.20 MiB)

@btime process_x_all(sol_f);

6.594 ms (111015 allocations: 7.16 MiB)

of course, with the flat solution we do not need [:]

function process_x_all_flat(sol)
    f=[reshape(sol(tx)[1:40],2,2,10) for tx in 0:0.01:30]
end

@btime process_x_all_flat(sol_f);

6.435 ms (108014 allocations: 5.06 MiB)

Either way, it looks like there's a 3-4x slowdown when accessing the ArrayPartition.

Something off with broadcast of ArrayPartition with NTuple

I tried something with OrdinaryDiffEq.jl; bear with me.

julia> f = (u, p, t) -> .- u
#38 (generic function with 1 method)

julia> prb = ODEProblem(f, (1.0:5.0...,), (0.0,2.0))
ODEProblem with uType NTuple{5,Float64} and tType Float64. In-place: false
timespan: (0.0, 2.0)
u0: (1.0, 2.0, 3.0, 4.0, 5.0)

julia> sprb = solve(prb, Tsit5(), abstol=1e-17, reltol=1e-17, save_on=false)
retcode: Success
Interpolation: specialized 4th order "free" interpolation
t: 2-element Array{Float64,1}:
 0.0
 2.0
u: 2-element Array{RecursiveArrayTools.ArrayPartition{Float64,NTuple{5,Float64}},1}:
 1.0 2.0 3.0 4.0 5.0
 0.13533528323661317 0.27067056647322635 0.4060058497098382 0.5413411329464527 0.6766764161830637

julia> sprb.u[2] .- ((1:5).*exp(-2)) #Check with the real result
(4.718447854656915e-16, 0.13533528323661365, 0.2706705664732255, 0.40600584970984, 0.541341132946451)

It seems like the second argument of the broadcast uses only its first element.

julia> sprb.u[2] .- 1
(-0.8646647167633869, -0.7293294335267737, -0.5939941502901618, -0.4586588670535473, -0.32332358381693627)

julia> sprb.u[2] .- (1:5)
(-0.8646647167633869, -0.7293294335267737, -0.5939941502901618, -0.4586588670535473, -0.32332358381693627)

julia> sprb.u[2] .- collect(1:5)
(-0.8646647167633869, -0.7293294335267737, -0.5939941502901618, -0.4586588670535473, -0.32332358381693627)

Except for a value of the same type:

julia> sprb.u[2] .- sprb.u[2]
(0.0, 0.0, 0.0, 0.0, 0.0)

julia> sprb.u[2] .- sprb.u[1]
(-0.8646647167633869, -1.7293294335267737, -2.593994150290162, -3.4586588670535474, -4.323323583816936)

I do not know whether to blame this package or Base, but a normal NTuple works as expected:

julia> (1:5...,) .- (1:5)
5-element Array{Int64,1}:
 0
 0
 0
 0
 0

non-scalar indexing is broken for ArrayPartition

using RecursiveArrayTools
a = ArrayPartition(1:5, 1:6)

a[1:8]
a[[1,3,8]]

both throw

ERROR: DimensionMismatch("output array is the wrong size; expected (Base.OneTo(8),), got (11,)")
Stacktrace:
 [1] throw_checksize_error(::ArrayPartition{Int64,Tuple{Array{Int64,1},Array{Int64,1}}}, ::Tuple{Base.OneTo{Int64}}) at ./multidimensional.jl:625
 [2] _unsafe_getindex(::IndexCartesian, ::ArrayPartition{Int64,Tuple{UnitRange{Int64},UnitRange{Int64}}}, ::UnitRange{Int64}) at ./multidimensional.jl:602
 [3] _getindex at ./multidimensional.jl:589 [inlined]
 [4] getindex(::ArrayPartition{Int64,Tuple{UnitRange{Int64},UnitRange{Int64}}}, ::UnitRange{Int64}) at ./abstractarray.jl:905

Single-indexing a[3] works just fine, obviously.
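
As with the range-access issue above, a stopgap (per the workaround noted there) is to materialize with : first:

a[:][1:8]
a[:][[1, 3, 8]]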

Need an adjoint for constructor VectorOfArray

I have a loss function such as below:

function loss(params)
    sol_ = solve(prob_seird, Tsit5(),p=params, abstol = 1e-8, reltol = 1e-6, 
           saveat=tsteps)
    sol = convert(Array,VectorOfArray(sol_.u))
    l1 = sum((diff(sol[:,1,:],dims=2)-epi_array[:,1,2:end]).^2)
    l2 = sum((diff(sol[:,10,:],dims=2)-epi_array[:,2,2:end]).^2)
    println(size(l1))
    return l1+l2
end

where sol_.u is Array{Array{Float64,2},1} and epi_array is Array{Float32,3}. I use convert to turn sol_.u into Array{Float32,3}.
Solution is from:

function SEIRD_mobility_coupled_outer(mobility_, coupling_matrix_, nn_)
    
    function SEIRD_mobility_coupled_inner(du, u, p_, t)
        s =@view u[:,1]
        e =@view u[:,2]
        id1 =@view u[:,3]
        id2 =@view u[:,4] 
        id3 =@view u[:,5] 
        id4 =@view u[:,6] 
        id5 =@view u[:,7] 
        id6 =@view u[:,8] 
        id7 =@view u[:,9] 
        d =@view u[:,10] 
        ir1 =@view u[:,11] 
        ir2 =@view u[:,12] 
        ir3 =@view u[:,13] 
        ir4 =@view u[:,14] 
        ir5 =@view u[:,15]
        r =@view u[:,16] 
        
        ds =@view du[:,1]
        de =@view du[:,2]
        did1 =@view du[:,3]
        did2 =@view du[:,4] 
        did3 =@view du[:,5] 
        did4 =@view du[:,6] 
        did5 =@view du[:,7] 
        did6 =@view du[:,8] 
        did7 =@view du[:,9] 
        dd =@view du[:,10] 
        dir1 =@view du[:,11] 
        dir2 =@view du[:,12] 
        dir3 =@view du[:,13] 
        dir4 =@view du[:,14] 
        dir5 =@view du[:,15]
        dr =@view du[:,16]
        
        κ, α, γ = softplus.(p_[1:3])
        # κ*α and γ*η are not independent: the probability of transition
        # from e to Ir and Id has to add up to 1.
        η = - log(-expm1(-κ*α))/(γ+1.0e-8)
        n_c = size(coupling_matrix_,1)
        scaler_ = softplus.(p_[4:3+n_c])
        cm_ = scaler_ .* coupling_matrix_[:,:,Int32(round(t+1.0))]
        p_nnet = p_[4+n_c:end]
        β = nn_(mobility_[:,:,Int32(round(t+1.0))], p_nnet)[1,:]
        i = id1+id2+id3+ir1+ir2+ir3+ir4+ir5
        c1 = (reshape(i,1,:)*transpose(cm_))[1,:]
        c2 = cm_*i
        a = β .* s .* i + β .* s .* (c1+c2)
        @. ds = -a
        @. de = a - κ*α*e - γ*η*e
        @. did1 = κ*(α*e-id1)
        @. did2 = κ*(id1-id2)
        @. did3 = κ*(id2-id3)
        @. did4 = κ*(id3-id4)
        @. did5 = κ*(id4-id5)
        @. did6 = κ*(id5-id6)
        @. did7 = κ*(id6-id7)
        @. dd = κ*id7
        
        @. dir1 = γ*(η*e-ir1)
        @. dir2 = γ*(ir1-ir2)
        @. dir3 = γ*(ir2-ir3)
        @. dir4 = γ*(ir3-ir4)
        @. dir5 = γ*(ir4-ir5)
        @. dr = γ*ir5
    end
end

where each component (i.e. ds, de, dir1, etc.) has a dimension of 104. Initial conditions are constructed:

using RecursiveArrayTools
ifr = 0.007
n_counties = size(coupling_matrix,1) 
n = ones(n_counties)
ic0 = epi_array[1,:,1]
d0 = epi_array[1,:,2]
r0 = d0/ifr
s0 = n-ic0-r0-d0
e0 = ic0
id10=id20=id30=id40=id50=id60=id70=ic0*ifr/7.0
ir10=ir20=ir30=ir40=ir50=ic0*(1.0-ifr)/5.0
u0 = [s0, 
      e0,
      id10,id20,id30,id40,id50, id60, id70, d0,
      ir10,ir20,ir30,ir40,ir50,r0];
u0 = VectorOfArray([s0, 
                    e0,
                    id10,id20,id30,id40,id50, id60, id70, d0,
                    ir10,ir20,ir30,ir40,ir50, r0]);
u0 = convert(Array,u0);

When I try to calculate the gradient

Zygote.gradient(loss,p_init)

I am getting:
Need an adjoint for constructor VectorOfArray{Float64,3,Array{Array{Float64,2},1}}
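
A possible shape for such an adjoint, as a hedged sketch using Zygote's @adjoint macro (the slicing rule is illustrative, not the package's actual implementation):

using Zygote, RecursiveArrayTools

# Hypothetical adjoint: the pullback splits the dense cotangent back into
# slices along the last dimension, mirroring how VectorOfArray stacks its
# component arrays.
Zygote.@adjoint function VectorOfArray(u::Vector{<:AbstractArray})
    VectorOfArray(u), ȳ -> (collect(eachslice(Array(ȳ); dims = ndims(ȳ))),)
end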
