gher-uliege / DINCAE.jl


DINCAE (Data-Interpolating Convolutional Auto-Encoder) is a neural network to reconstruct missing data in satellite observations.

License: GNU General Public License v3.0

Julia 100.00%
earth-observation julia machine-learning satellite-data

DINCAE.jl's People

Contributors

alexander-barth, ctroupin, keduba


DINCAE.jl's Issues

Dimension of variable `dates` in input data file

If I understand the doc correctly at

DINCAE.jl/src/points.jl

Lines 492 to 508 in ef26b38

netcdf all-sla.train {
dimensions:
        track = 9628 ;
        time = 7445528 ;
variables:
        int64 size(track) ;
                size:sample_dimension = "time" ;
        double dates(track) ;
                dates:units = "days since 1900-01-01 00:00:00" ;
        float sla(time) ;
        float lon(time) ;
        float lat(time) ;
        int64 id(time) ;
        double dtime(time) ;
                dtime:long_name = "time of measurement" ;
                dtime:units = "days since 1900-01-01 00:00:00" ;
}

the variable `size`, with dimension `track`, indicates the number of observations in each (altimetry) track.

But then `dates` is defined as the time instance of the gridded field, yet it also has `track` as a dimension.

It's not clear to me why this is the case, i.e. why should the time instances of the gridded field correspond to the tracks? Can there be several tracks on a given day?
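
For what it's worth, this layout looks like the CF "contiguous ragged array" representation (hence size:sample_dimension = "time"): `size(track)` stores the number of observations per track, so per-observation dates can be recovered by repeating each track's date. A reading sketch with NCDatasets (the file name is assumed from the ncdump header; this is not DINCAE code):

using NCDatasets

ds = NCDataset("all-sla.train.nc")   # file name assumed from the ncdump header
nobs  = ds["size"][:]                # number of observations in each track
dates = ds["dates"][:]               # one date per track
# expand the per-track dates to one value per observation (length == time dimension)
obs_dates = reduce(vcat, [fill(dates[i], nobs[i]) for i in eachindex(nobs)])
close(ds)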

DINCAE_utils.addcvpoint error

Hello, I already tested the tutorial successfully, but when I tried my own data I got the following error when running `DINCAE_utils.addcvpoint(fname, varname; mincvfrac = 0.05)`. Do you have any idea about it?

DINCAE_utils.addcvpoint(fname,varname; mincvfrac = 0.05);
ERROR: IOError: sendfile: Unknown system error -954851718 (Unknown system error -954851718)
Stacktrace:
[1] uv_error
@ .\libuv.jl:100 [inlined]
[2] sendfile(dst::Base.Filesystem.File, src::Base.Filesystem.File, src_offset::Int64, bytes::Int64)
@ Base.Filesystem .\filesystem.jl:153
[3] sendfile(src::String, dst::String)
@ Base.Filesystem .\file.jl:998
[4] cp(src::String, dst::String; force::Bool, follow_symlinks::Bool)
@ Base.Filesystem .\file.jl:384
[5] cp
@ .\file.jl:376 [inlined]
[6] #addcvpoint#5
@ C:\Users\Zhou laoshi\.julia\packages\DINCAE_utils\f4oUy\src\data.jl:101 [inlined]
[7] top-level scope
@ REPL[145]:1
[8] top-level scope
@ C:\Users\Zhou laoshi\.julia\packages\CUDA\35NC6\src\initialization.jl:190
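
Note that the failure happens inside Julia's own `cp`/`sendfile` (frames [1] to [5]) before any DINCAE_utils-specific logic runs, so it can be narrowed down without DINCAE at all. A plain-Julia check (the destination name is just an example):

# If this plain copy also fails, the problem is the file or the path
# (e.g. network drive, antivirus lock, unusual characters), not addcvpoint itself.
cp(fname, fname * ".copy.nc"; force = true)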

ERROR: MethodError: Cannot `convert` an object of type NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}} to an object of type String

The parameter auxdata_files cannot be written to the parameter file (netCDF).

Error message

1-element ExceptionStack:
LoadError: MethodError: Cannot `convert` an object of type NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}} to an object of type String

Closest candidates are:
  convert(::Type{T}, ::PyObject) where T<:AbstractString
   @ PyCall ~/.julia/packages/PyCall/ilqDX/src/conversions.jl:92
  convert(::Type{T}, ::Union{InitialValues.SpecificInitialValue{typeof(*)}, InitialValues.SpecificInitialValue{typeof(Base.mul_prod)}}) where T<:Union{AbstractString, Number}
   @ InitialValues ~/.julia/packages/InitialValues/OWP8V/src/InitialValues.jl:258
  convert(::Type{S}, ::CategoricalArrays.CategoricalValue) where S<:Union{AbstractChar, AbstractString, Number}
   @ CategoricalArrays ~/.julia/packages/CategoricalArrays/0yLZN/src/value.jl:92
  ...

Stacktrace:
  [1] setindex!(A::Vector{String}, x::NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}}, i1::Int64)
    @ Base ./array.jl:969
  [2] _unsafe_copyto!(dest::Vector{String}, doffs::Int64, src::Vector{NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}}}, soffs::Int64, n::Int64)
    @ Base ./array.jl:250
  [3] unsafe_copyto!
    @ ./array.jl:304 [inlined]
  [4] _copyto_impl!
    @ ./array.jl:327 [inlined]
  [5] copyto!
    @ ./array.jl:314 [inlined]
  [6] copyto!
    @ ./array.jl:339 [inlined]
  [7] copyto_axcheck!
    @ ./abstractarray.jl:1182 [inlined]
  [8] Vector{String}(x::Vector{NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}}})
    @ Base ./array.jl:621
  [9] (::DINCAE.var"#123#124"{Int64, Int64, Bool, Vector{Int64}, UnitRange{Int64}, Float64, Int64, Int64, StepRange{Int64, Int64}, Symbol, Float64, Tuple{Float32, Float32}, Int64, Float64, Float64, Float64, Tuple{Float64}, Vector{NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}}}, Bool, Int64, Int64, Vector{Float64}})(ds_::NCDataset{Nothing})
    @ DINCAE ~/.julia/packages/DINCAE/SH3PW/src/points.jl:714
 [10] NCDataset(::DINCAE.var"#123#124"{Int64, Int64, Bool, Vector{Int64}, UnitRange{Int64}, Float64, Int64, Int64, StepRange{Int64, Int64}, Symbol, Float64, Tuple{Float32, Float32}, Int64, Float64, Float64, Float64, Tuple{Float64}, Vector{NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}}}, Bool, Int64, Int64, Vector{Float64}}, ::String, ::Vararg{String}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ NCDatasets ~/.julia/packages/NCDatasets/st9Jz/src/dataset.jl:255
 [11] NCDataset
    @ ~/.julia/packages/NCDatasets/st9Jz/src/dataset.jl:252 [inlined]
 [12] reconstruct_points(T::Type, Atype::Type, filename::String, varname::String, grid::Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, fnames_rec::Vector{String}; epochs::Int64, batch_size::Int64, truth_uncertain::Bool, enc_nfilter_internal::Vector{Int64}, skipconnections::UnitRange{Int64}, clip_grad::Float64, regularization_L1_beta::Int64, regularization_L2_beta::Int64, save_epochs::StepRange{Int64, Int64}, upsampling_method::Symbol, probability_skip_for_training::Float64, jitter_std_pos::Tuple{Float32, Float32}, ntime_win::Int64, learning_rate::Float64, learning_rate_decay_epoch::Float64, min_std_err::Float64, loss_weights_refine::Tuple{Float64}, auxdata_files::Vector{NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}}}, paramfile::String, savesnapshot::Bool, laplacian_penalty::Int64, laplacian_error_penalty::Int64)
    @ DINCAE ~/.julia/packages/DINCAE/SH3PW/src/points.jl:695
 [13] top-level scope
    @ ~/Projects/CPR-DINCAE/src/run_DINCAE_testparams.jl:114
 [14] include(fname::String)
    @ Base.MainInclude ./client.jl:478
 [15] top-level scope
    @ REPL[10]:1
 [16] top-level scope
    @ ~/.julia/packages/CUDA/35NC6/src/initialization.jl:190
in expression starting at /home/ctroupin/Projects/CPR-DINCAE/src/run_DINCAE_testparams.jl:100

Where the code fails

My bad: I misunderstood how the conversion from the list to a string should be performed.

ds_.attrib["auxdata_files"] = Vector{String}(collect(auxdata_files))

and

julia> typeof(auxdata_files)
Vector{NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}}} (alias for Array{NamedTuple{(:filename, :varname, :errvarname), Tuple{String, String, …}}, 1})
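
For reference, a NetCDF attribute can only hold strings or numbers, so the NamedTuples have to be flattened into strings first. A minimal sketch (the ';'-separated format is an illustrative assumption, not necessarily what DINCAE.jl expects):

# hypothetical serialization: one string per auxiliary data file
auxdata_files = [(filename = "aux.nc", varname = "sst", errvarname = "sst_err")]
attrib_value = [join((a.filename, a.varname, a.errvarname), ';') for a in auxdata_files]
# ds_.attrib["auxdata_files"] = attrib_value   # now a Vector{String}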

LoadError: ArgumentError: Chunk sizes must be strictly positive

  • Got this error during the automated tests after a git push.
  • Not sure how to reproduce.
  • Mentioned for documentation purposes.
...
reconstruct gridded data: Error During Test at /home/runner/work/DINCAE.jl/DINCAE.jl/test/runtests.jl:6
  Got exception outside of a @test
  LoadError: ArgumentError: Chunk sizes must be strictly positive
  Stacktrace:
    [1] RegularChunks
      @ ~/.julia/packages/DiskArrays/QlfRF/src/chunks.jl:26 [inlined]
    [2] #24
      @ ~/.julia/packages/DiskArrays/QlfRF/src/chunks.jl:133 [inlined]
    [3] map (repeats 3 times)
      @ ./tuple.jl:318 [inlined]
    [4] DiskArrays.GridChunks(a::Tuple{Int64, Int64, Int64}, chunksize::Tuple{Int64, Int64, Int64}; offset::Tuple{Int64, Int64, Int64})
      @ DiskArrays ~/.julia/packages/DiskArrays/QlfRF/src/chunks.jl:132
    [5] GridChunks
      @ ~/.julia/packages/DiskArrays/QlfRF/src/chunks.jl:131 [inlined]
    [6] #GridChunks#18
      @ ~/.julia/packages/DiskArrays/QlfRF/src/chunks.jl:130 [inlined]
    [7] DiskArrays.GridChunks(a::NCDatasets.Variable{Float32, 3, NCDataset{Nothing}}, chunksize::Tuple{Int64, Int64, Int64})
      @ DiskArrays ~/.julia/packages/DiskArrays/QlfRF/src/chunks.jl:130
    [8] eachchunk(v::NCDatasets.Variable{Float32, 3, NCDataset{Nothing}})
      @ NCDatasets ~/.julia/packages/NCDatasets/vRl1m/src/variable.jl:407
    [9] _writeblock!(::NCDatasets.Variable{Float32, 3, NCDataset{Nothing}}, ::Array{Float32, 3}, ::Base.OneTo{Int64}, ::Vararg{AbstractVector})
      @ DiskArrays ~/.julia/packages/DiskArrays/QlfRF/src/batchgetindex.jl:187
   [10] writeblock!(::NCDatasets.Variable{Float32, 3, NCDataset{Nothing}}, ::Array{Float32, 3}, ::Base.OneTo{Int64}, ::Vararg{AbstractVector})
      @ NCDatasets ~/.julia/packages/DiskArrays/QlfRF/src/batchgetindex.jl:213
   [11] setindex_disk!(::NCDatasets.Variable{Float32, 3, NCDataset{Nothing}}, ::Matrix{Float32}, ::Function, ::Vararg{Any})
      @ DiskArrays ~/.julia/packages/DiskArrays/QlfRF/src/diskarray.jl:57
   [12] setindex!(::NCDatasets.Variable{Float32, 3, NCDataset{Nothing}}, ::Matrix{Float32}, ::Function, ::Function, ::Int64)
      @ NCDatasets ~/.julia/packages/DiskArrays/QlfRF/src/diskarray.jl:229
   [13] savesample(ds::NCDataset{Nothing}, varnames::Vector{String}, xrec::Array{Float64, 4}, meandata::Array{Float32, 3}, ii::Int64, offset::Int64; output_ndims::Int64)
      @ DINCAE ~/work/DINCAE.jl/DINCAE.jl/src/data.jl:573
   [14] macro expansion
      @ ~/work/DINCAE.jl/DINCAE.jl/src/model.jl:536 [inlined]
   [15] macro expansion
      @ ./timing.jl:273 [inlined]
   [16] macro expansion
      @ ~/work/DINCAE.jl/DINCAE.jl/src/model.jl:526 [inlined]
   [17] macro expansion
      @ ./timing.jl:273 [inlined]
   [18] reconstruct(Atype::Type, data_all::Vector{Vector{NamedTuple{(:filename, :varname, :obs_err_std, :jitter_std, :isoutput), Tuple{String, String, Int64, Float64, Bool}}}}, fnames_rec::Vector{String}; epochs::Int64, batch_size::Int64, truth_uncertain::Bool, enc_nfilter_internal::Vector{Int64}, skipconnections::UnitRange{Int64}, clip_grad::Float64, regularization_L1_beta::Int64, regularization_L2_beta::Int64, save_epochs::Vector{Int64}, is3D::Bool, upsampling_method::Symbol, ntime_win::Int64, learning_rate::Float64, learning_rate_decay_epoch::Float64, min_std_err::Float64, loss_weights_refine::Tuple{Float64}, cycle_periods::Tuple{Float64}, output_ndims::Int64, direction_obs::Nothing, remove_mean::Bool, paramfile::String, laplacian_penalty::Int64, laplacian_error_penalty::Int64)
      @ DINCAE ~/work/DINCAE.jl/DINCAE.jl/src/model.jl:511
   [19] top-level scope
      @ ~/work/DINCAE.jl/DINCAE.jl/test/test_DINCAE_SST.jl:63
   [20] include(fname::String)
      @ Base.MainInclude ./client.jl:478
   [21] macro expansion
      @ ~/work/DINCAE.jl/DINCAE.jl/test/runtests.jl:7 [inlined]
   [22] macro expansion
      @ /opt/hostedtoolcache/julia/1.9.4/x64/share/julia/stdlib/v1.9/Test/src/Test.jl:1498 [inlined]
   [23] top-level scope
      @ ~/work/DINCAE.jl/DINCAE.jl/test/runtests.jl:7
   [24] include(fname::String)
      @ Base.MainInclude ./client.jl:478
   [25] top-level scope
      @ none:6
   [26] eval
      @ ./boot.jl:370 [inlined]
   [27] exec_options(opts::Base.JLOptions)
      @ Base ./client.jl:280
   [28] _start()
      @ Base ./client.jl:522
  in expression starting at /home/runner/work/DINCAE.jl/DINCAE.jl/test/test_DINCAE_SST.jl:52
Test Summary:            | Error  Total     Time
reconstruct gridded data |     1      1  1m47.0s
ERROR: LoadError: Some tests did not pass: 0 passed, 0 failed, 1 errored, 0 broken.
in expression starting at /home/runner/work/DINCAE.jl/DINCAE.jl/test/runtests.jl:6
ERROR: LoadError: Package DINCAE errored during testing
Stacktrace:
 [1] pkgerror(msg::String)
   @ Pkg.Types /opt/hostedtoolcache/julia/1.9.4/x64/share/julia/stdlib/v1.9/Pkg/src/Types.jl:69
 [2] test(ctx::Pkg.Types.Context, pkgs::Vector{Pkg.Types.PackageSpec}; coverage::Bool, julia_args::Cmd, test_args::Cmd, test_fn::Nothing, force_latest_compatible_version::Bool, allow_earlier_backwards_compatible_versions::Bool, allow_reresolve::Bool)
   @ Pkg.Operations /opt/hostedtoolcache/julia/1.9.4/x64/share/julia/stdlib/v1.9/Pkg/src/Operations.jl:2019
 [3] test
   @ /opt/hostedtoolcache/julia/1.9.4/x64/share/julia/stdlib/v1.9/Pkg/src/Operations.jl:1900 [inlined]
 [4] test(ctx::Pkg.Types.Context, pkgs::Vector{Pkg.Types.PackageSpec}; coverage::Bool, test_fn::Nothing, julia_args::Vector{String}, test_args::Cmd, force_latest_compatible_version::Bool, allow_earlier_backwards_compatible_versions::Bool, allow_reresolve::Bool, kwargs::Base.Pairs{Symbol, IOContext{Base.PipeEndpoint}, Tuple{Symbol}, NamedTuple{(:io,), Tuple{IOContext{Base.PipeEndpoint}}}})
   @ Pkg.API /opt/hostedtoolcache/julia/1.9.4/x64/share/julia/stdlib/v1.9/Pkg/src/API.jl:441
 [5] test(pkgs::Vector{Pkg.Types.PackageSpec}; io::IOContext{Base.PipeEndpoint}, kwargs::Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:coverage, :julia_args, :force_latest_compatible_version), Tuple{Bool, Vector{String}, Bool}}})
   @ Pkg.API /opt/hostedtoolcache/julia/1.9.4/x64/share/julia/stdlib/v1.9/Pkg/src/API.jl:156
 [6] test(; name::Nothing, uuid::Nothing, version::Nothing, url::Nothing, rev::Nothing, path::Nothing, mode::Pkg.Types.PackageMode, subdir::Nothing, kwargs::Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:coverage, :julia_args, :force_latest_compatible_version), Tuple{Bool, Vector{String}, Bool}}})
   @ Pkg.API /opt/hostedtoolcache/julia/1.9.4/x64/share/julia/stdlib/v1.9/Pkg/src/API.jl:171
 [7] top-level scope
   @ ~/work/_actions/julia-actions/julia-runtest/v1/test_harness.jl:15
 [8] include(fname::String)
   @ Base.MainInclude ./client.jl:478
 [9] top-level scope
   @ none:1
in expression starting at /home/runner/work/_actions/julia-actions/julia-runtest/v1/test_harness.jl:7
Error: Process completed with exit code 1.
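
The constructor in frame [4] can be called directly, which makes the error easy to reproduce in isolation: a chunk size of zero in any dimension (e.g. coming from an empty or unlimited dimension) triggers it. A guess at a minimal trigger, not a confirmed diagnosis of this CI failure:

using DiskArrays

# chunk sizes must be >= 1 in every dimension; a zero raises the ArgumentError above
DiskArrays.GridChunks((10, 10, 5), (10, 10, 0))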

regularization_L2_beta error

Hi,

When I initialize DINCAE with regularization_L2_beta = 0.001, I receive the following error.

ERROR: LoadError: MethodError: no method matching abs2(::CuArray{Float32, 4, CUDA.Mem.DeviceBuffer})

Closest candidates are:
  abs2(!Matched::Complex)
   @ Base complex.jl:281
  abs2(!Matched::ForwardDiff.Dual{T}) where T
   @ ForwardDiff ~/.julia/packages/ForwardDiff/PcZ48/src/dual.jl:238
  abs2(!Matched::DualNumbers.Dual)
   @ DualNumbers ~/.julia/packages/DualNumbers/5knFX/src/dual.jl:204
  ...

Stacktrace:
  [1] MappingRF
    @ ./reduce.jl:95 [inlined]
  [2] _foldl_impl(op::Base.MappingRF{typeof(abs2), Base.BottomRF{typeof(Base.add_sum)}}, init::Base._InitialValue, itr::Zygote.Params{Zygote.Buffer{Any, Vector{Any}}})
    @ Base ./reduce.jl:58
  [3] foldl_impl
    @ ./reduce.jl:48 [inlined]
  [4] mapfoldl_impl(f::typeof(abs2), op::typeof(Base.add_sum), nt::Base._InitialValue, itr::Zygote.Params{Zygote.Buffer{Any, Vector{Any}}})
    @ Base ./reduce.jl:44
  [5] mapfoldl(f::Function, op::Function, itr::Zygote.Params{Zygote.Buffer{Any, Vector{Any}}}; init::Base._InitialValue)
    @ Base ./reduce.jl:170
  [6] mapfoldl
    @ ./reduce.jl:170 [inlined]
  [7] #mapreduce#292
    @ ./reduce.jl:302 [inlined]
  [8] mapreduce
    @ ./reduce.jl:302 [inlined]
  [9] #sum#295
    @ ./reduce.jl:530 [inlined]
 [10] sum(f::Function, a::Zygote.Params{Zygote.Buffer{Any, Vector{Any}}})
    @ Base ./reduce.jl:530
 [11] loss_function(model::DINCAE.StepModel{DINCAE.var"#52#56"{Float64}, DINCAE.var"#53#57"{Bool, Int64, Int64}}, xin::CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, xtrue::CuArray{Float32, 4, CUDA.Mem.DeviceBuffer})
    @ DINCAE ~/DINCAE/DINCAE.jl/src/model.jl:220
 [12] reconstruct(Atype::Type, data_all::Vector{Vector{NamedTuple{(:filename, :varname, :obs_err_std, :jitter_std, :isoutput), Tuple{String, String, Int64, Float64, Bool}}}}, fnames_rec::Vector{String}; epochs::Int64, batch_size::Int64, truth_uncertain::Bool, enc_nfilter_internal::Vector{Int64}, skipconnections::UnitRange{Int64}, clip_grad::Float64, regularization_L1_beta::Int64, regularization_L2_beta::Float64, save_epochs::StepRange{Int64, Int64}, is3D::Bool, upsampling_method::Symbol, ntime_win::Int64, learning_rate::Float64, learning_rate_decay_epoch::Float64, min_std_err::Float64, loss_weights_refine::Tuple{Float64, Float64}, cycle_periods::Tuple{Float64}, output_ndims::Int64, direction_obs::Nothing, remove_mean::Bool, paramfile::Nothing, laplacian_penalty::Int64, laplacian_error_penalty::Int64)
    @ DINCAE ~/DINCAE/DINCAE.jl/src/model.jl:490
 [13] top-level scope
    @ ~/DINCAE/python/DINCAE/8_3.jl:106
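
For context, the MethodError comes from applying abs2 to whole parameter arrays instead of to their elements: sum(abs2, xs) maps abs2 over the items of xs, and in the L2 penalty the items are CuArrays. A minimal illustration of the distinction (variable names are illustrative, not DINCAE internals):

W = rand(Float32, 3, 3)               # stand-in for a weight array (a CuArray on the GPU)
params = [W]
# sum(abs2, params)                   # fails: abs2 is not defined for whole arrays
l2 = sum(p -> sum(abs2, p), params)   # sums abs2 over the elements of each array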

MethodError: no method matching NNlib.DenseConvDims(...)

Hello, I'm running the example (DINCAE_tutorial.ipynb) and ran into issues after running this cell:

loss = DINCAE.reconstruct(
    Atype,data_all,fnames_rec;
    epochs = epochs,
    batch_size = batch_size,
    enc_nfilter_internal = enc_nfilter_internal,
    clip_grad = clip_grad,
    save_epochs = save_epochs,
    upsampling_method = upsampling_method,
    loss_weights_refine = loss_weights_refine,
    ntime_win = ntime_win,
)

Error message

┌ Info: Number of threads: 1
└ @ DINCAE /home/ctroupin/.julia/packages/DINCAE/BdvBN/src/model.jl:691
┌ Info: Output variables:  ["sst"]
└ @ DINCAE /home/ctroupin/.julia/packages/DINCAE/BdvBN/src/model.jl:715
┌ Info: Number of filters: [10, 32, 64, 128, 256, 512]
└ @ DINCAE /home/ctroupin/.julia/packages/DINCAE/BdvBN/src/model.jl:720
┌ Info: Input size:        149×106×10×32
└ @ DINCAE /home/ctroupin/.julia/packages/DINCAE/BdvBN/src/model.jl:724
┌ Info: Input sum:         154171.03
└ @ DINCAE /home/ctroupin/.julia/packages/DINCAE/BdvBN/src/model.jl:725
┌ Info: Gamma:             10.0
└ @ DINCAE /home/ctroupin/.julia/packages/DINCAE/BdvBN/src/model.jl:728
┌ Info: Number of filters: [10, 32, 64, 128, 256, 512]
└ @ DINCAE /home/ctroupin/.julia/packages/DINCAE/BdvBN/src/model.jl:730
┌ Info: Number of filters (refinement): [12, 32, 64, 128, 256, 512]
└ @ DINCAE /home/ctroupin/.julia/packages/DINCAE/BdvBN/src/model.jl:740

MethodError: no method matching NNlib.DenseConvDims(::KnetArray{Float32, 4}, ::CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}; stride=(1, 1), padding=(1, 1), dilation=(1, 1), flipkernel=false)
Closest candidates are:
  NNlib.DenseConvDims(::AbstractArray, ::AbstractArray; kwargs...) at ~/.julia/packages/NNlib/hydo3/src/dim_helpers/DenseConvDims.jl:47

Stacktrace:
  [1] conv4(w::CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, x::KnetArray{Float32, 4}; padding::Int64, stride::Int64, dilation::Int64, mode::Int64, alpha::Int64, group::Int64)
    @ Knet.Ops20 ~/.julia/packages/Knet/YIFWC/src/ops20/conv.jl:39
  [2] forw(::Function, ::Param{CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}}, ::Vararg{Any}; kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:padding,), Tuple{Int64}}})
    @ AutoGrad ~/.julia/packages/AutoGrad/1QZxP/src/core.jl:66
  [3] #conv4#28
    @ ./none:0 [inlined]
  [4] (::Conv)(x::KnetArray{Float32, 4})
    @ DINCAE ~/.julia/packages/DINCAE/BdvBN/src/model.jl:119
  [5] (::Chain)(x::KnetArray{Float32, 4})
    @ DINCAE ~/.julia/packages/DINCAE/BdvBN/src/model.jl:136
  [6] (::DINCAE.StepModel)(xin::KnetArray{Float32, 4})
    @ DINCAE ~/.julia/packages/DINCAE/BdvBN/src/model.jl:328
  [7] reconstruct(Atype::Type, data_all::Vector{Vector{NamedTuple{(:filename, :varname, :obs_err_std, :jitter_std, :isoutput), Tuple{String, String, Int64, Float64, Bool}}}}, fnames_rec::Vector{String}; epochs::Int64, batch_size::Int64, truth_uncertain::Bool, enc_nfilter_internal::Vector{Int64}, skipconnections::UnitRange{Int64}, clip_grad::Float64, regularization_L2_beta::Int64, save_epochs::StepRange{Int64, Int64}, is3D::Bool, upsampling_method::Symbol, ntime_win::Int64, learning_rate::Float64, learning_rate_decay_epoch::Float64, min_std_err::Float64, loss_weights_refine::Tuple{Float64, Float64})
    @ DINCAE ~/.julia/packages/DINCAE/BdvBN/src/model.jl:750
  [8] top-level scope
    @ In[21]:1
  [9] eval
    @ ./boot.jl:368 [inlined]
 [10] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
    @ Base ./loading.jl:1277

Installation

Julia 1.8.0-beta3

Status `~/.julia/environments/v1.8/Project.toml`
  [052768ef] CUDA v3.10.1
  [8f4d0f93] Conda v1.7.0
  [0d879ee6] DINCAE v2.0.1 `https://github.com/gher-ulg/DINCAE.jl#main`
  [f57bf84d] DINCAE_utils v0.1.0 `https://github.com/gher-ulg/DINCAE_utils.jl#main`
  [efc8151c] DIVAnd v2.7.7
  [7073ff75] IJulia v1.23.3
  [1902f260] Knet v1.4.10
⌃ [85f8d34a] NCDatasets v0.11.9
⌃ [c3e4b0f8] Pluto v0.19.4
  [d330b81b] PyPlot v2.10.0
⌃ [7243133f] NetCDF_jll v400.802.102+0
  [10745b16] Statistics

I guess it's a type issue as explained here, but it's not clear why.
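
The signature in the MethodError shows the mix: the input is a KnetArray{Float32, 4} while the convolution weights are a CuArray{Float32, 4}, and NNlib has no method for that combination. One workaround reported for this class of error is to use CuArray as the array type everywhere instead of KnetArray (whether that applies here depends on the DINCAE/Knet versions; a sketch following the tutorial's own pattern):

using Knet, CUDA

const F = Float32
# keep model weights and input batches on the same array type;
# mixing KnetArray and CuArray triggers the DenseConvDims MethodError above
Atype = CUDA.functional() ? CuArray{F} : Array{F}
Knet.atype() = Atype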

Flux.jl and CUDA.jl not automatically installed.

In the doc it is mentioned that

DINCAE.jl depends on Flux.jl and CUDA.jl which will automatically be installed.

However, if we follow the instructions:

using Pkg
Pkg.add(url="https://github.com/gher-uliege/DINCAE.jl", rev="main")
Pkg.add(url="https://github.com/gher-uliege/DINCAE_utils.jl", rev="main")

neither Flux nor CUDA is installed.

(MyProject) pkg> st
Status `~/MyProject/Project.toml`
  [0d879ee6] DINCAE v2.0.2 `https://github.com/gher-uliege/DINCAE.jl#main`
  [f57bf84d] DINCAE_utils v0.1.0 `https://github.com/gher-uliege/DINCAE_utils.jl#main`

Also cuDNN has to be installed.

julia> using DINCAE
┌ Warning: Package cuDNN not found in current path.
│ - Run `import Pkg; Pkg.add("cuDNN")` to install the cuDNN package, then restart julia.
│ - If cuDNN is not installed, some Flux functionalities will not be available when running on the GPU.
└ @ FluxCUDAExt ~/.julia/packages/Flux/u7QSl/ext/FluxCUDAExt/FluxCUDAExt.jl:56

Shall we remove that from the doc, or rephrase it?
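
Until the doc is updated, the missing packages can be added explicitly (standard Pkg usage, nothing DINCAE-specific):

using Pkg
Pkg.add(["Flux", "CUDA", "cuDNN"])   # install the GPU stack alongside DINCAE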

MethodError: no method matching NNlib.DenseConvDims

[Note: I'm trying to solve the issue but open it in case others get the same]

What I'm trying to do

Have DINCAE running with default parameters on the CPR observations.
The input file CPRdata.nc seems correct, as

DINCAE.loaddata(joinpath(datadir, "CPRdata.nc"), "Calanus_Finmarchicus")

doesn't yield any error message.

So I ran the reconstruction following the example DINCAE_tutorial.jl:

const F = Float32

if CUDA.functional()
    Atype = KnetArray{F}
else
    @warn "No supported GPU found. We will use the CPU which is very slow. Please check https://developer.nvidia.com/cuda-gpus"
    Atype = Array{F}
end
Knet.atype() = Atype

DINCAE.reconstruct_points(
    F, Atype, joinpath(datadir, "CPRdata.nc"), "Calanus_Finmarchicus", grid, ["../results/run01.nc"]
    )

The code does something...

[ Info: number of provided data points: 250021
[ Info: number of data points within domain: 249972
[ Info: number of provided data points: 249972
[ Info: number of data points within domain: 249972

no refine
size = (4, 2); nvar = 50, ksize = (2, 2), method = nearest
skip connections at level 5
size = (8, 4); nvar = 40, ksize = (2, 2), method = nearest
skip connections at level 4
size = (16, 8); nvar = 30, ksize = (2, 2), method = nearest
skip connections at level 3
size = (31, 15); nvar = 20, ksize = (2, 2), method = nearest
skip connections at level 2
size = (61, 29); nvar = 10, ksize = (2, 2), method = nearest

[ Info: number of variaiables: 22
[ Info: gamma: 10.0

until ⬇️⬇️

Error message

MethodError: no method matching NNlib.DenseConvDims(::KnetArray{Float32, 4}, ::CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}; stride=(1, 1), padding=(1, 1), dilation=(1, 1), flipkernel=false)
Closest candidates are:
  NNlib.DenseConvDims(::AbstractArray, ::AbstractArray; kwargs...) at ~/.julia/packages/NNlib/3nHWP/src/dim_helpers/DenseConvDims.jl:47

Stacktrace

  [1] conv4(w::CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, x::KnetArray{Float32, 4}; padding::Int64, stride::Int64, dilation::Int64, mode::Int64, alpha::Int64, group::Int64)
    @ Knet.Ops20 ~/.julia/packages/Knet/YIFWC/src/ops20/conv.jl:39
  [2] forw(::typeof(conv4), ::Param{CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}}, ::Vararg{Any}; kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:padding,), Tuple{Int64}}})
    @ AutoGrad ~/.julia/packages/AutoGrad/1QZxP/src/core.jl:66
  [3] #conv4#28
    @ ./none:0 [inlined]
  [4] (::Conv)(x::KnetArray{Float32, 4})
    @ DINCAE ~/.julia/packages/DINCAE/OlSY0/src/model.jl:119
  [5] (::Chain)(x::KnetArray{Float32, 4})
    @ DINCAE ~/.julia/packages/DINCAE/OlSY0/src/model.jl:136
  [6] (::DINCAE.StepModel)(xin::KnetArray{Float32, 4})
    @ DINCAE ~/.julia/packages/DINCAE/OlSY0/src/model.jl:328
  [7] macro expansion
    @ ./show.jl:1047 [inlined]
  [8] reconstruct_points(T::Type, Atype::Type, filename::String, varname::String, grid::Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, fnames_rec::Vector{String}; epochs::Int64, batch_size::Int64, truth_uncertain::Bool, enc_nfilter_internal::Vector{Int64}, skipconnections::UnitRange{Int64}, clip_grad::Float64, regularization_L1_beta::Int64, regularization_L2_beta::Int64, save_epochs::StepRange{Int64, Int64}, upsampling_method::Symbol, probability_skip_for_training::Float64, jitter_std_pos::Tuple{Float32, Float32}, ntime_win::Int64, learning_rate::Float64, learning_rate_decay_epoch::Float64, min_std_err::Float64, loss_weights_refine::Tuple{Float64}, auxdata_files::Vector{Any}, savesnapshot::Bool)
    @ DINCAE ~/.julia/packages/DINCAE/OlSY0/src/points.jl:614
  [9] reconstruct_points(T::Type, Atype::Type, filename::String, varname::String, grid::Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, fnames_rec::Vector{String})
    @ DINCAE ~/.julia/packages/DINCAE/OlSY0/src/points.jl:516
 [10] top-level scope
    @ In[34]:1

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

ERROR: LoadError: GPU compilation of MethodInstance

Situation

I'm trying to run the gridding of CPR data, which was previously working.

Environment

  • Version 1.9.3 (2023-08-24)
  • Package status:
Project CPRDINCAE v0.1.0
Status `~/Projects/CPR-DINCAE/Project.toml`
  [8f4d0f93] Conda v1.9.1
  [0d879ee6] DINCAE v2.0.2 `https://github.com/gher-uliege/DINCAE.jl#main`
  [864edb3b] DataStructures v0.18.15
  [8bb1440f] DelimitedFiles v1.9.1
  [587475ba] Flux v0.14.6
  [2fb1d81b] GeoArrays v0.8.3
  [61d90e0f] GeoJSON v0.7.2
  [bb4c363b] GridInterpolations v1.1.2
  [7073ff75] IJulia v1.24.2
⌃ [6a3955dd] ImageFiltering v0.7.6
  [a98d9a8b] Interpolations v0.14.7
  [85f8d34a] NCDatasets v0.12.17
  [647866c9] PolygonOps v0.1.2
  [438e738f] PyCall v1.96.1
  [d330b81b] PyPlot v2.11.2
⌃ [02a925ec] cuDNN v1.1.1
  [ade2ca70] Dates
  [9a3f8284] Random
  [10745b16] Statistics v1.9.0

Full error

┌ Warning: Performing scalar indexing on task Task (runnable) @0x00007f36b7bfc010.
│ Invocation of getindex resulted in scalar indexing of a GPU array.
│ This is typically caused by calling an iterating implementation of a method.
│ Such implementations *do not* execute on the GPU, but very slowly on the CPU,
│ and therefore are only permitted from the REPL for prototyping purposes.
│ If you did intend to index this array, annotate the caller with @allowscalar.
└ @ GPUArraysCore ~/.julia/packages/GPUArraysCore/uOYfN/src/GPUArraysCore.jl:106
ERROR: LoadError: GPU compilation of MethodInstance for (::GPUArrays.var"#broadcast_kernel#26")(::CUDA.CuKernelContext, ::CuDeviceVector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Float32, Float32}}, 1}, ::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Tuple{Base.OneTo{Int64}}, …}, ::Int64) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Tuple{Base.OneTo{Int64}}, ChainRulesCore.var"#49#50", Tuple{Base.Broadcast.Extruded{CuDeviceVector{ChainRulesCore.ProjectTo{ChainRulesCore.Tangent{Tuple{Float32, Float32}}, NamedTuple{(:elements,), Tuple{Tuple{ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}, ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}}}}}, 1}, Tuple{Bool}, Tuple{Int64}}, Base.Broadcast.Extruded{Vector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Int64, Int64}}}, Tuple{Bool}, Tuple{Int64}}}}, which is not isbits:
  .args is of type Tuple{Base.Broadcast.Extruded{CuDeviceVector{ChainRulesCore.ProjectTo{ChainRulesCore.Tangent{Tuple{Float32, Float32}}, NamedTuple{(:elements,), Tuple{Tuple{ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}, ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}}}}}, 1}, Tuple{Bool}, Tuple{Int64}}, Base.Broadcast.Extruded{Vector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Int64, Int64}}}, Tuple{Bool}, Tuple{Int64}}} which is not isbits.
    .2 is of type Base.Broadcast.Extruded{Vector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Int64, Int64}}}, Tuple{Bool}, Tuple{Int64}} which is not isbits.
      .x is of type Vector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Int64, Int64}}} which is not isbits.


Stacktrace:
  [1] check_invocation(job::GPUCompiler.CompilerJob)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/validation.jl:96
  [2] macro expansion
    @ ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:123 [inlined]
  [3] macro expansion
    @ ~/.julia/packages/TimerOutputs/RsWnF/src/TimerOutput.jl:253 [inlined]
  [4] codegen(output::Symbol, job::GPUCompiler.CompilerJob; libraries::Bool, toplevel::Bool, optimize::Bool, cleanup::Bool, strip::Bool, validate::Bool, only_entry::Bool, parent_job::Nothing)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:121
  [5] codegen
    @ ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:110 [inlined]
  [6] compile(target::Symbol, job::GPUCompiler.CompilerJob; libraries::Bool, toplevel::Bool, optimize::Bool, cleanup::Bool, strip::Bool, validate::Bool, only_entry::Bool)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:106
  [7] compile
    @ ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:98 [inlined]
  [8] #1037
    @ ~/.julia/packages/CUDA/35NC6/src/compiler/compilation.jl:104 [inlined]
  [9] JuliaContext(f::CUDA.var"#1037#1040"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:47
 [10] compile(job::GPUCompiler.CompilerJob)
    @ CUDA ~/.julia/packages/CUDA/35NC6/src/compiler/compilation.jl:103
 [11] actual_compilation(cache::Dict{Any, CuFunction}, src::Core.MethodInstance, world::UInt64, cfg::GPUCompiler.CompilerConfig{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, compiler::typeof(CUDA.compile), linker::typeof(CUDA.link))
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/execution.jl:125
 [12] cached_compilation(cache::Dict{Any, CuFunction}, src::Core.MethodInstance, cfg::GPUCompiler.CompilerConfig{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, compiler::Function, linker::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/execution.jl:103
 [13] macro expansion
    @ ~/.julia/packages/CUDA/35NC6/src/compiler/execution.jl:318 [inlined]
 [14] macro expansion
    @ ./lock.jl:267 [inlined]
 [15] cufunction(f::GPUArrays.var"#broadcast_kernel#26", tt::Type{Tuple{CUDA.CuKernelContext, CuDeviceVector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Float32, Float32}}, 1}, …}}; kwargs::Base.Pairs{Symbol, Union{}, …})
    @ CUDA ~/.julia/packages/CUDA/35NC6/src/compiler/execution.jl:313
 [16] cufunction
    @ ~/.julia/packages/CUDA/35NC6/src/compiler/execution.jl:310 [inlined]
 [17] macro expansion
    @ ~/.julia/packages/CUDA/35NC6/src/compiler/execution.jl:104 [inlined]
 [18] #launch_heuristic#1080
    @ ~/.julia/packages/CUDA/35NC6/src/gpuarrays.jl:17 [inlined]
 [19] launch_heuristic
    @ ~/.julia/packages/CUDA/35NC6/src/gpuarrays.jl:15 [inlined]
 [20] _copyto!
    @ ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:65 [inlined]
 [21] copyto!
    @ ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:46 [inlined]
 [22] copy
    @ ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:37 [inlined]
 [23] materialize(bc::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Nothing, …})
    @ Base.Broadcast ./broadcast.jl:873
 [24] map(f::Function, x::CuArray{ChainRulesCore.ProjectTo{ChainRulesCore.Tangent{Tuple{Float32, Float32}}, NamedTuple{(:elements,), Tuple{Tuple{ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}, ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}}}}}, 1, …}, xs::Vector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Int64, Int64}}})
    @ GPUArrays ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:84
 [25] ProjectTo
    @ ~/.julia/packages/ChainRulesCore/0t04l/src/projection.jl:238 [inlined]
 [26] macro expansion
    @ ~/.julia/packages/ChainRulesCore/0t04l/src/projection.jl:343 [inlined]
 [27] _project_namedtuple(f::NamedTuple{(:pos, :x), Tuple{ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:elements, :axes), Tuple{CuArray{ChainRulesCore.ProjectTo{ChainRulesCore.Tangent{Tuple{Float32, Float32}}, NamedTuple{(:elements,), Tuple{Tuple{ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}, ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}}}}}, 1, …}, Tuple{Base.OneTo{Int64}}}}}, ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:element, :axes), Tuple{ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}}}}}, x::NamedTuple{(:pos, :x), Tuple{Vector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Int64, Int64}}}, ChainRulesCore.NoTangent}})
    @ ChainRulesCore ~/.julia/packages/ChainRulesCore/0t04l/src/projection.jl:342
 [28] (::ChainRulesCore.ProjectTo{ChainRulesCore.Tangent{NamedTuple{(:pos, :x), Tuple{CuArray{Tuple{Float32, Float32}, 1, …}, CuArray{Float32, 2, …}}}}, NamedTuple{(:pos, :x), Tuple{ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:elements, :axes), Tuple{CuArray{ChainRulesCore.ProjectTo{ChainRulesCore.Tangent{Tuple{Float32, Float32}}, NamedTuple{(:elements,), Tuple{Tuple{ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}, ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}}}}}, 1, …}, Tuple{Base.OneTo{Int64}}}}}, ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:element, :axes), Tuple{ChainRulesCore.ProjectTo{Float32, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}}}}}})(dx::NamedTuple{(:pos, :x), Tuple{Vector{ChainRulesCore.Tangent{Tuple{Float32, Float32}, Tuple{Int64, Int64}}}, ChainRulesCore.NoTangent}})
    @ ChainRulesCore ~/.julia/packages/ChainRulesCore/0t04l/src/projection.jl:335
 [29] ProjectTo
    @ ~/.julia/packages/ChainRulesCore/0t04l/src/projection.jl:317 [inlined]
 [30] _project
    @ ~/.julia/packages/Zygote/4SSHS/src/compiler/chainrules.jl:189 [inlined]
 [31] (::Zygote.var"#back#302"{:pos, Zygote.Context{true}, …})(Δ::Vector{Tuple{Int64, Int64}})
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/lib/lib.jl:234
 [32] #2184#back
    @ ~/.julia/packages/ZygoteRules/OgCVT/src/adjoint.jl:71 [inlined]
 [33] Pullback
    @ ./none:0 [inlined]
 [34] (::Zygote.Pullback{Tuple{DINCAE.var"#121#124"{CuArray{Float64, 4, …}, Vector{NamedTuple{(:pos, :x), Tuple{CuArray{Tuple{Float32, Float32}, 1, …}, CuArray{Float32, 2, …}}}}}, Int64}, Tuple{Zygote.ZBack{ChainRules.var"#hcat_pullback#1399"{Tuple{ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:element, :axes), Tuple{ChainRulesCore.ProjectTo{Float64, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}}}}}, ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:element, :axes), Tuple{ChainRulesCore.ProjectTo{Float64, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}}}}}}, Tuple{Tuple{Int64}, Tuple{Int64}}, …}}, Zygote.var"#2184#back#303"{Zygote.var"#back#302"{:pos, Zygote.Context{true}, …}}, …}})(Δ::CuArray{Float64, 2, …})
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/compiler/interface2.jl:0
 [35] #680
    @ ~/.julia/packages/Zygote/4SSHS/src/lib/array.jl:215 [inlined]
 [36] (::Base.var"#4#5"{Zygote.var"#680#685"})(a::Tuple{Tuple{CuArray{Float64, 2, …}, Zygote.Pullback{Tuple{DINCAE.var"#121#124"{CuArray{Float64, 4, …}, Vector{NamedTuple{(:pos, :x), Tuple{CuArray{Tuple{Float32, Float32}, 1, …}, CuArray{Float32, 2, …}}}}}, Int64}, Tuple{Zygote.ZBack{ChainRules.var"#hcat_pullback#1399"{Tuple{ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:element, :axes), Tuple{ChainRulesCore.ProjectTo{Float64, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}}}}}, ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:element, :axes), Tuple{ChainRulesCore.ProjectTo{Float64, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}}}}}}, Tuple{Tuple{Int64}, Tuple{Int64}}, …}}, Zygote.var"#2184#back#303"{Zygote.var"#back#302"{:pos, Zygote.Context{true}, …}}, …}}}, CuArray{Float64, 2, …}})
    @ Base ./generator.jl:36
 [37] iterate
    @ ./generator.jl:47 [inlined]
 [38] collect(itr::Base.Generator{Base.Iterators.Zip{Tuple{Vector{Tuple{CuArray{Float64, 2, …}, Zygote.Pullback{Tuple{DINCAE.var"#121#124"{CuArray{Float64, 4, …}, Vector{NamedTuple{(:pos, :x), Tuple{CuArray{Tuple{Float32, Float32}, 1, …}, CuArray{Float32, 2, …}}}}}, Int64}, Tuple{Zygote.ZBack{ChainRules.var"#hcat_pullback#1399"{Tuple{ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:element, :axes), Tuple{ChainRulesCore.ProjectTo{Float64, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}}}}}, ChainRulesCore.ProjectTo{AbstractArray, NamedTuple{(:element, :axes), Tuple{ChainRulesCore.ProjectTo{Float64, NamedTuple{(), Tuple{}}}, Tuple{Base.OneTo{Int64}}}}}}, Tuple{Tuple{Int64}, Tuple{Int64}}, …}}, Zygote.var"#2184#back#303"{Zygote.var"#back#302"{:pos, Zygote.Context{true}, …}}, …}}}}, Vector{CuArray{Float64, 2, …}}}}, Base.var"#4#5"{Zygote.var"#680#685"}})
    @ Base ./array.jl:782
 [39] map
    @ ./abstractarray.jl:3385 [inlined]
 [40] (::Zygote.var"#map_back#682"{DINCAE.var"#121#124"{CuArray{Float64, 4, …}, Vector{NamedTuple{(:pos, :x), Tuple{CuArray{Tuple{Float32, Float32}, 1, …}, CuArray{Float32, 2, …}}}}}, 1, …})(Δ::Vector{CuArray{Float64, 2, …}})
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/lib/array.jl:215
 [41] (::Zygote.var"#collect_pullback#720"{Zygote.var"#map_back#682"{DINCAE.var"#121#124"{CuArray{Float64, 4, …}, Vector{NamedTuple{(:pos, :x), Tuple{CuArray{Tuple{Float32, Float32}, 1, …}, CuArray{Float32, 2, …}}}}}, 1, …}, Nothing})(ȳ::Vector{CuArray{Float64, 2, …}})
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/lib/array.jl:249
 [42] Pullback
    @ ~/.julia/packages/DINCAE/5Y5oF/src/points.jl:420 [inlined]
 [43] (::Zygote.Pullback{Tuple{DINCAE.var"##costfun#120", Int64, …}, Any})(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/compiler/interface2.jl:0
 [44] Pullback
    @ ~/.julia/packages/DINCAE/5Y5oF/src/points.jl:375 [inlined]
 [45] (::Zygote.Pullback{Tuple{typeof(Core.kwcall), NamedTuple{(:laplacian_penalty, :laplacian_error_penalty), Tuple{Int64, Int64}}, …}, Any})(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/compiler/interface2.jl:0
 [46] Pullback
    @ ~/.julia/packages/DINCAE/5Y5oF/src/model.jl:172 [inlined]
 [47] (::Zygote.Pullback{Tuple{DINCAE.var"#53#57"{Bool, Int64, …}, CuArray{Float64, 4, …}, …}, Tuple{Zygote.Pullback{Tuple{Type{NamedTuple{(:laplacian_penalty, :laplacian_error_penalty)}}, Tuple{Int64, Int64}}, Tuple{Zygote.Pullback{Tuple{Type{NamedTuple{(:laplacian_penalty, :laplacian_error_penalty), Tuple{Int64, Int64}}}, Tuple{Int64, Int64}}, Tuple{Zygote.var"#2224#back#315"{Zygote.Jnew{NamedTuple{(:laplacian_penalty, :laplacian_error_penalty), Tuple{Int64, Int64}}, Nothing, …}}}}}}, Zygote.var"#2184#back#303"{Zygote.var"#back#302"{:truth_uncertain, Zygote.Context{true}, …}}, …}})(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/compiler/interface2.jl:0
 [48] Pullback
    @ ~/.julia/packages/DINCAE/5Y5oF/src/model.jl:201 [inlined]
 [49] (::Zygote.Pullback{Tuple{typeof(DINCAE.loss_function), DINCAE.StepModel{DINCAE.var"#52#56"{Float64}, DINCAE.var"#53#57"{Bool, Int64, …}}, …}, Any})(Δ::Float32)
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/compiler/interface2.jl:0
 [50] #291
    @ ~/.julia/packages/Zygote/4SSHS/src/lib/lib.jl:206 [inlined]
 [51] #2173#back
    @ ~/.julia/packages/ZygoteRules/OgCVT/src/adjoint.jl:71 [inlined]
 [52] Pullback
    @ ~/.julia/packages/DINCAE/5Y5oF/src/flux.jl:26 [inlined]
 [53] (::Zygote.Pullback{Tuple{DINCAE.var"#4#5"{DINCAE.StepModel{DINCAE.var"#52#56"{Float64}, DINCAE.var"#53#57"{Bool, Int64, …}}, Tuple{CuArray{Float32, 4, …}, Vector{NamedTuple{(:pos, :x), Tuple{CuArray{Tuple{Float32, Float32}, 1, …}, CuArray{Float32, 2, …}}}}}}}, Tuple{Zygote.var"#2184#back#303"{Zygote.var"#back#302"{:samples, Zygote.Context{true}, …}}, Zygote.var"#2173#back#293"{Zygote.var"#291#292"{Tuple{Tuple{Nothing}, Tuple{Nothing, Nothing}}, Zygote.Pullback{Tuple{typeof(DINCAE.loss_function), DINCAE.StepModel{DINCAE.var"#52#56"{Float64}, DINCAE.var"#53#57"{Bool, Int64, …}}, …}, Any}}}, …}})(Δ::Float32)
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/compiler/interface2.jl:0
 [54] (::Zygote.var"#122#123"{Zygote.Params{Zygote.Buffer{Any, Vector{Any}}}, Zygote.Pullback{Tuple{DINCAE.var"#4#5"{DINCAE.StepModel{DINCAE.var"#52#56"{Float64}, DINCAE.var"#53#57"{Bool, Int64, …}}, Tuple{CuArray{Float32, 4, …}, Vector{NamedTuple{(:pos, :x), Tuple{CuArray{Tuple{Float32, Float32}, 1, …}, CuArray{Float32, 2, …}}}}}}}, Tuple{Zygote.var"#2184#back#303"{Zygote.var"#back#302"{:samples, Zygote.Context{true}, …}}, Zygote.var"#2173#back#293"{Zygote.var"#291#292"{Tuple{Tuple{Nothing}, Tuple{Nothing, Nothing}}, Zygote.Pullback{Tuple{typeof(DINCAE.loss_function), DINCAE.StepModel{DINCAE.var"#52#56"{Float64}, DINCAE.var"#53#57"{Bool, Int64, …}}, …}, Any}}}, …}}, …})(Δ::Float32)
    @ Zygote ~/.julia/packages/Zygote/4SSHS/src/compiler/interface.jl:419
 [55] train_epoch!(::Tuple{DINCAE.StepModel{DINCAE.var"#52#56"{Float64}, DINCAE.var"#53#57"{Bool, Int64, …}}, Zygote.Params{Zygote.Buffer{Any, Vector{Any}}}, …}, dl::DINCAE.PointCloud{CuArray{Float32}, Float32, …}, learning_rate::Float64; clip_grad::Float64)
    @ DINCAE ~/.julia/packages/DINCAE/5Y5oF/src/flux.jl:29
 [56] kwcall(::NamedTuple{(:clip_grad,), Tuple{Float64}}, ::typeof(DINCAE.train_epoch!), ::Tuple{DINCAE.StepModel{DINCAE.var"#52#56"{Float64}, DINCAE.var"#53#57"{Bool, Int64, …}}, Zygote.Params{Zygote.Buffer{Any, Vector{Any}}}, …}, dl::DINCAE.PointCloud{CuArray{Float32}, Float32, …}, learning_rate::Float64)
    @ DINCAE ~/.julia/packages/DINCAE/5Y5oF/src/flux.jl:20
 [57] macro expansion
    @ ~/.julia/packages/DINCAE/5Y5oF/src/points.jl:657 [inlined]
 [58] macro expansion
    @ ./timing.jl:273 [inlined]
 [59] reconstruct_points(T::Type, Atype::Type, filename::String, varname::String, grid::Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, …}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, …}}, fnames_rec::Vector{String}; epochs::Int64, batch_size::Int64, truth_uncertain::Bool, enc_nfilter_internal::Vector{Int64}, skipconnections::UnitRange{Int64}, clip_grad::Float64, regularization_L1_beta::Int64, regularization_L2_beta::Int64, save_epochs::StepRange{Int64, Int64}, upsampling_method::Symbol, probability_skip_for_training::Float64, jitter_std_pos::Tuple{Float32, Float32}, ntime_win::Int64, learning_rate::Float64, learning_rate_decay_epoch::Float64, min_std_err::Float64, loss_weights_refine::Tuple{Float64}, auxdata_files::Vector{Any}, savesnapshot::Bool, laplacian_penalty::Int64, laplacian_error_penalty::Int64)
    @ DINCAE ~/.julia/packages/DINCAE/5Y5oF/src/points.jl:651
 [60] kwcall(::NamedTuple{(:epochs,), Tuple{Int64}}, ::typeof(DINCAE.reconstruct_points), T::Type, Atype::Type, filename::String, varname::String, grid::Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, …}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, …}}, fnames_rec::Vector{String})
    @ DINCAE ~/.julia/packages/DINCAE/5Y5oF/src/points.jl:529
 [61] top-level scope
    @ ~/Projects/CPR-DINCAE/src/run_DINCAE_0.jl:58
 [62] include(fname::String)
    @ Base.MainInclude ./client.jl:478
 [63] top-level scope
    @ REPL[6]:1
in expression starting at /home/ctroupin/Projects/CPR-DINCAE/src/run_DINCAE_0.jl:58

How to reproduce?

run_DINCAE_0.jl (private repo)
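
The "KernelError: passing and using non-bitstype argument" is a generic CUDA.jl failure mode: every kernel argument must be isbits, and here a plain CPU Vector of Tangents leaks into a GPU broadcast during the pullback. A stripped-down reproduction of the same class of error (illustrative only, not the DINCAE code path):

using CUDA

a = CUDA.zeros(Float32, 4)
b = ones(Float64, 4)     # a plain CPU Vector is not isbits on the device
# a .+ b                 # KernelError: passing and using non-bitstype argument
a .+ CuArray(b)          # fix: move the CPU data to the device first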
