
ExaTron.jl's People

Contributors: frapac, kibaekkim, michel2323, youngdae

ExaTron.jl's Issues

Integration with ProxAL

We aim to integrate ExaTron into the ProxAL decomposition solver. That would require defining a new AbstractBlockModel in ProxAL, named for instance ExaTronBlockModel. The generic interface of block models is defined in ProxAL.

The main issue is to develop an API for the batch ExaTron solver (which currently uses a hand-coded optimization problem for the OPF). If we follow the future ExaTronBlockModel interface in ProxAL, we would need the following (a sketch of the resulting block model is given after this list):


ProxAL.init!: define the optimization problem, with the active and reactive loads specified by ProxAL.

  • In admm_rect_gpu, replace the data object in this line with an OPFData structure defined directly inside ProxAL (I think ProxAL's format is close to the format used by ExaTron anyway).
  • Modify the OPFData object in src/admm/opfdata.jl to take into account the multipliers, the penalty, and the ramping reference used by ProxAL (these could be Nothing, as they are not needed by default).

ProxAL.set_objective!: update the objective function with a new multiplier mul::Vector{Float64}, a new penalty tau::Float64, and a new ramping reference pg_ramp::Vector{Float64}

  • I think we don't need to store the ExaTron structure between two optimization solves here. This function would just create a new OPFData with the new values.

ProxAL.set_start_values!: set an initial point to the optimization problem

  • If we don't store the structure between two optimization solves, this function can remain empty.

ProxAL.optimize!: solve the optimization problem

  • Call the ExaTron algorithm

ProxAL.get_solution: return the optimal solution found by the solver

  • This function will require some work in ExaTron. ExaTron works with two arrays, v_curr and u_curr, holding all the optimization variables of the problem. Once the problem is solved, we should return the physical values corresponding to the solution (voltage magnitudes, power generations, ...). Maybe we could define a new structure OPFSolution to do so?
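
For concreteness, here is a minimal Julia sketch of what such a block model could look like. Everything below is an assumption about a future design: the OPFData constructors, the admm_rect_gpu signature, and OPFSolution are placeholders for discussion, not the current ExaTron or ProxAL API.

# Hypothetical sketch of the future ExaTronBlockModel; every name below
# (OPFData constructors, admm_rect_gpu signature, OPFSolution) is a
# placeholder, not the current ExaTron/ProxAL API.
mutable struct ExaTronBlockModel <: ProxAL.AbstractBlockModel
    data::Union{Nothing, OPFData}   # rebuilt whenever the objective changes
end

function ProxAL.init!(block::ExaTronBlockModel, pd::Vector{Float64}, qd::Vector{Float64})
    # Build the OPFData structure directly from ProxAL's active/reactive loads.
    block.data = OPFData(pd, qd)
    return block
end

function ProxAL.set_objective!(block::ExaTronBlockModel, mul::Vector{Float64},
                               tau::Float64, pg_ramp::Vector{Float64})
    # No ExaTron state is kept between two solves: just recreate OPFData
    # with the new multipliers, penalty, and ramping reference.
    block.data = OPFData(block.data; mul=mul, tau=tau, pg_ramp=pg_ramp)
    return block
end

# Nothing to do if no structure is stored between two solves.
ProxAL.set_start_values!(block::ExaTronBlockModel, x0) = nothing

function ProxAL.optimize!(block::ExaTronBlockModel)
    # Call the batch ExaTron algorithm on the current data.
    return admm_rect_gpu(block.data)
end

function ProxAL.get_solution(block::ExaTronBlockModel)
    # Map v_curr/u_curr back to physical quantities (voltage magnitudes,
    # power generations, ...) in a future OPFSolution structure.
    return OPFSolution(block.data)
end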

GPU compatibility of data types

I'm not sure whether I'm doing something wrong, but user-defined data types, such as TronSparseMatrixCSC and TronDenseMatrix, seem to cause issues when passed to a kernel function; compilation fails with a complaint that the argument is not a bitstype.

For example, the following call generates an error.

function kernel(p)
    @cuprintln(p.diag_vals) # It does not matter if I call @cuprintln or not. Compiling this will fail.
    return
end

d = ExaTron.TronSparseMatrixCSC(I, J, V, 8) # I, J, and V are CuArrays of types Int (I,J) and Float64 (V)
@cuda threads=128 blocks=1 kernel(d)

I was trying to implement a batch solve of Tron, and I was thinking of doing this by calling the dtron routines from each thread block in a kernel. However, dtron takes the aforementioned user-defined data types, leading to failures. I guess all the internal routines using user-defined data types would cause issues in this case.

I'm a bit concerned that we might have to go back to the original implementation where bitstype arrays were used, which means dismantling the matrix types into their array components.
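
For reference, here is a minimal sketch of that fallback: passing the raw array components, which are isbits on the device, compiles fine. The names colptr, rowval, and nzval are only illustrative, not necessarily the actual fields of TronSparseMatrixCSC.

using CUDA

# Passing the raw arrays compiles, since CuDeviceArray arguments are isbits.
function kernel_arrays(colptr, rowval, nzval)
    @cuprintln("first nonzero = $(nzval[1])")
    return
end

colptr = CuArray(Int32[1, 2, 3])
rowval = CuArray(Int32[1, 2])
nzval  = CuArray([1.0, 2.0])
@cuda threads=1 blocks=1 kernel_arrays(colptr, rowval, nzval)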

What do you think? Am I doing something wrong?

Improving performance on GPUs (1) - replace dsel2() with a GPU-friendly top-k algorithm

dsel2(), which selects the top-k values of a column, takes a significant share of the computation time in the incomplete Cholesky factorization routine. We need to improve its performance.

Candidate algorithms:

  • Sort and select the top-k: we could use an existing sorting routine (merge sort, quicksort, radix sort, bitonic sort, etc.) and then select the top-k (a reference implementation is sketched after this list). This could be useful for small matrices.
  • Bitonic top-k: for large matrices, reduce unnecessary computation by focusing on identifying only the top-k results.
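
As a baseline for the first candidate, a reference implementation of the sort-and-select semantics in plain Julia is below; a GPU version would replace this with a device-side sort (e.g., CUDA.jl's sort! on a CuArray). This is only meant to pin down the semantics, not to be a tuned kernel.

# Reference semantics for "sort and select top-k": the k largest values of v.
topk(v::AbstractVector, k::Integer) = partialsort(v, 1:k; rev=true)

v = rand(100)
println(topk(v, 5))   # the five largest entries, in decreasing order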

Documentation

We need documentation for

  • what this package is about
  • how to set the arguments for solving problems with ExaTron.jl

dgpnorm for CPU vs. GPU

I think the CPU and GPU codes are not using the same norm.

function dgpnorm(n::Int, x::Array{Float64}, xl::Array{Float64},
                 xu::Array{Float64}, g::Array{Float64})
    inf_norm = 0.0
    for i=1:n
        if xl[i] != xu[i]
            if x[i] == xl[i]
                v = (min(g[i], 0.0))^2
            elseif x[i] == xu[i]
                v = (max(g[i], 0.0))^2
            else
                v = g[i]^2
            end
            v = sqrt(v)
            inf_norm = (inf_norm > v) ? inf_norm : v
        end
    end
    return inf_norm
end

@inline function ExaTron.dgpnorm(n::Int, x::CuDeviceArray{Float64,1}, xl::CuDeviceArray{Float64,1},
                                 xu::CuDeviceArray{Float64,1}, g::CuDeviceArray{Float64,1})
    tx = threadIdx().x
    v = 0.0
    if tx <= n
        @inbounds begin
            if xl[tx] != xu[tx]
                if x[tx] == xl[tx]
                    v = min(g[tx], 0.0)
                elseif x[tx] == xu[tx]
                    v = max(g[tx], 0.0)
                else
                    v = g[tx]
                end
                v = abs(v)
            end
        end
    end
    # shfl_down_sync() will automatically sync threads in a warp.
    offset = 16
    while offset > 0
        v = max(v, CUDA.shfl_down_sync(0xffffffff, v, offset))
        offset >>= 1
    end
    v = CUDA.shfl_sync(0xffffffff, v, 1)
    return v
end

@inline function ExaTron.dgpnorm(n::Int, x, xl, xu, g, tx)
    @synchronize
    res = 0.0
    inf_norm = @localmem Float64 (1,)
    v = 0.0
    if tx == 1
        inf_norm[1] = 0.0
        for i in 1:n
            @inbounds begin
                if xl[i] != xu[i]
                    if x[i] == xl[i]
                        v = min(g[i], 0.0)
                        v = v*v
                    elseif x[i] == xu[i]
                        v = max(g[i], 0.0)
                        v = v*v
                    else
                        v = g[i]*g[i]
                    end
                    v = sqrt(v)
                    if inf_norm[1] > v
                        inf_norm[1] = inf_norm[1]
                    else
                        inf_norm[1] = v
                    end
                end
            end
        end
    end
    @synchronize
    res = inf_norm[1]
    return res
end

ExaTron.jl/src/driver.jl, lines 42 to 67 at commit 3ebc6bc:

function gpnorm(n, x, x_l, x_u, g)
    two_norm = 0.0
    inf_norm = 0.0
    for i=1:n
        if x_l[i] != x_u[i]
            if x[i] == x_l[i]
                val = (min(g[i], 0.0))^2
            elseif x[i] == x_u[i]
                val = (max(g[i], 0.0))^2
            else
                val = g[i]^2
            end
            two_norm += val
            val = sqrt(val)
            if inf_norm < val
                inf_norm = val
            end
        end
    end
    two_norm = sqrt(two_norm)
    return two_norm, inf_norm
end
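
For reference, if I read these correctly, all three variants aim at the infinity norm of the projected gradient,

\[
\|g_P(x)\|_\infty = \max_{i \,:\, x^l_i \neq x^u_i} |P_i|,
\qquad
P_i = \begin{cases}
\min(g_i, 0) & \text{if } x_i = x^l_i, \\
\max(g_i, 0) & \text{if } x_i = x^u_i, \\
g_i & \text{otherwise},
\end{cases}
\]

the only structural difference being that gpnorm in driver.jl also accumulates and returns the 2-norm of the projected gradient.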

I am not sure if this is intended. We need @youngdae to confirm this.

Registrator Release

Release the fp/proxal branch as the first release of ExaTron that can be run with the incoming release of ProxAL.

Improving performance on GPUs (2) - reduce getindex() and setindex!() time

In the incomplete Cholesky factorization routine, a significant amount of time is spent in the getindex() and setindex!() routines. This seems related to computing the entries of L. We need to investigate a more GPU-friendly way of doing this.

Candidate algorithms:

  • We could just use NVIDIA's incomplete Cholesky factorization routine (with zero fill-in), but its performance could differ substantially from ours, because the two implementations are quite different.
  • We could investigate a GPU-friendly way of performing the computation, especially focusing on small matrices (see the sketch below).
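
As a hypothetical illustration of the second bullet, each thread block could stage its small matrix in shared memory once, so that the factorization updates stop going through global-memory getindex()/setindex!(). The kernel below only shows the staging step; the names and the update placeholder are assumptions, not ExaTron code.

using CUDA

# Stage an n-by-n block in shared memory; updates then run on sA instead of A.
function stage_in_shmem(A, n)
    tx = threadIdx().x
    sA = CuDynamicSharedArray(Float64, (n, n))
    if tx <= n
        for j in 1:n
            @inbounds sA[tx, j] = A[tx, j]   # one coalesced load per column
        end
    end
    sync_threads()
    # ... incomplete Cholesky updates would operate on sA here ...
    return
end

n = 8
A = CUDA.rand(Float64, n, n)
@cuda threads=n blocks=1 shmem=n*n*sizeof(Float64) stage_in_shmem(A, n)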

Allow ExaTron to use Hessian-vector product

An idea is to implement the Hessian-vector product as a linear operator, such as

struct HessVecProd <: AbstractTronMatrix 

(maybe we could rename AbstractTronMatrix to AbstractTronLinearOperator). Then, by overloading the * operator,
we could apply the Hessian-vector product just like the other types (TronDenseMatrix and TronSparseMatrixCSC), simply as H * v (a minimal sketch is given below). In that case, we would keep the clean separation we have between the evaluators (called only in driver.jl) and the Tron algorithm.

This would require merging #12 first, as we won't be able to use ICFS in this case.
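
A minimal sketch of the idea, assuming the existing ExaTron.AbstractTronMatrix; the callback field hv! and its (w, v) signature are placeholders:

import Base: *

# Wrap a Hessian-vector callback as a linear operator; hv!(w, v) writes H*v into w.
struct HessVecProd{F} <: ExaTron.AbstractTronMatrix
    n::Int
    hv!::F
end

function *(H::HessVecProd, v::AbstractVector{Float64})
    w = similar(v)
    H.hv!(w, v)
    return w
end

# Toy usage with H = 2I:
H = HessVecProd(3, (w, v) -> (w .= 2.0 .* v))
H * ones(3)   # -> [2.0, 2.0, 2.0]

With this, the Tron algorithm would stay oblivious to how H * v is actually evaluated.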

Test failure

The package failed its tests on moonshot.

Settings

Libraries

kimk@moonshot:~/REPOS/ExaTron.jl  youngdae/multiperiod ✔
▶ module list

Currently Loaded Modules:
  1) cuda/10.2   2) openmpi/4.0.2-llvm   3) gcc/11.1.0-5ikoznk

Julia

               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.7.2 (2022-02-06)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

(ExaTron) pkg> st
     Project ExaTron v0.1.0
      Status `~/REPOS/ExaTron.jl/Project.toml`
  [6e4b80f9] BenchmarkTools v1.3.1
  [052768ef] CUDA v3.9.0
  [da04e1cc] MPI v0.19.2
  [8bb1440f] DelimitedFiles
  [8f399da3] Libdl
  [37e2e46d] LinearAlgebra
  [de0858da] Printf

Output

Precompiling project...
  4 dependencies successfully precompiled in 4 seconds (44 already precompiled)
     Testing Running tests...
┌ Warning: @cuDynamicSharedMem is deprecated, please use the CuDynamicSharedArray function
│   caller = ip:0x0
└ @ Core :-1
dnrm2: Test Failed at /home/kimk/REPOS/ExaTron.jl/test/gputest.jl:693
  Expression: norm(xnorm .- h_out) <= 1.0e-10
   Evaluated: 16.120204711253976 <= 1.0e-10
Stacktrace:
 [1] macro expansion
   @ ~/software/julia-1.7.2/share/julia/stdlib/v1.7/Test/src/Test.jl:445 [inlined]
 [2] macro expansion
   @ ~/REPOS/ExaTron.jl/test/gputest.jl:693 [inlined]
 [3] macro expansion
   @ ~/software/julia-1.7.2/share/julia/stdlib/v1.7/Test/src/Test.jl:1283 [inlined]
 [4] macro expansion
   @ ~/REPOS/ExaTron.jl/test/gputest.jl:665 [inlined]
 [5] macro expansion
   @ ~/software/julia-1.7.2/share/julia/stdlib/v1.7/Test/src/Test.jl:1283 [inlined]
 [6] top-level scope
   @ ~/REPOS/ExaTron.jl/test/gputest.jl:45
[The same dnrm2 failure repeated nine more times at test/gputest.jl:693 with identical stack traces; the evaluated norms were 12.968285012957987, 12.74063432624637, 11.543021871927508, 9.378608983486506, 14.395284336240643, 13.240465891595974, 9.235856050380894, 14.361745158610717, and 12.848802308388285.]
driver_kernel: Error During Test at /home/kimk/REPOS/ExaTron.jl/test/gputest.jl:1146
  Got exception outside of a @test
  Failed to compile PTX code (ptxas received signal 11)
  If you think this is a bug, please file an issue and attach /tmp/jl_nc5kAL.ptx
  Stacktrace:
    [1] error(s::String)
      @ Base ./error.jl:33
    [2] cufunction_compile(job::GPUCompiler.CompilerJob, ctx::LLVM.Context)
      @ CUDA ~/.julia/packages/CUDA/Uurn4/src/compiler/execution.jl:405
    [3] #260
      @ ~/.julia/packages/CUDA/Uurn4/src/compiler/execution.jl:325 [inlined]
    [4] JuliaContext(f::CUDA.var"#260#261"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams, GPUCompiler.FunctionSpec{var"#driver_kernel_test#24"{var"#driver_kernel#23"{var"#eval_h#22", var"#eval_g#21", var"#eval_f#20"}}, Tuple{Int64, Int64, Int64, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceMatrix{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}}}}})
      @ GPUCompiler ~/.julia/packages/GPUCompiler/1FdJy/src/driver.jl:74
    [5] cufunction_compile(job::GPUCompiler.CompilerJob)
      @ CUDA ~/.julia/packages/CUDA/Uurn4/src/compiler/execution.jl:324
    [6] cached_compilation(cache::Dict{UInt64, Any}, job::GPUCompiler.CompilerJob, compiler::typeof(CUDA.cufunction_compile), linker::typeof(CUDA.cufunction_link))
      @ GPUCompiler ~/.julia/packages/GPUCompiler/1FdJy/src/cache.jl:90
    [7] cufunction(f::var"#driver_kernel_test#24"{var"#driver_kernel#23"{var"#eval_h#22", var"#eval_g#21", var"#eval_f#20"}}, tt::Type{Tuple{Int64, Int64, Int64, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceMatrix{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}}}; name::Nothing, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
      @ CUDA ~/.julia/packages/CUDA/Uurn4/src/compiler/execution.jl:297
    [8] cufunction(f::var"#driver_kernel_test#24"{var"#driver_kernel#23"{var"#eval_h#22", var"#eval_g#21", var"#eval_f#20"}}, tt::Type{Tuple{Int64, Int64, Int64, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceMatrix{Float64, 1}, CuDeviceVector{Float64, 1}, CuDeviceVector{Float64, 1}}})
      @ CUDA ~/.julia/packages/CUDA/Uurn4/src/compiler/execution.jl:291
    [9] macro expansion
      @ ~/.julia/packages/CUDA/Uurn4/src/compiler/execution.jl:102 [inlined]
   [10] macro expansion
      @ ~/.julia/packages/CUDA/Uurn4/src/utilities.jl:25 [inlined]
   [11] macro expansion
      @ ~/REPOS/ExaTron.jl/test/gputest.jl:1378 [inlined]
   [12] macro expansion
      @ ~/software/julia-1.7.2/share/julia/stdlib/v1.7/Test/src/Test.jl:1283 [inlined]
   [13] macro expansion
      @ ~/REPOS/ExaTron.jl/test/gputest.jl:1147 [inlined]
   [14] macro expansion
      @ ~/software/julia-1.7.2/share/julia/stdlib/v1.7/Test/src/Test.jl:1283 [inlined]
   [15] top-level scope
      @ ~/REPOS/ExaTron.jl/test/gputest.jl:45
   [16] include(fname::String)
      @ Base.MainInclude ./client.jl:451
   [17] top-level scope
      @ ~/REPOS/ExaTron.jl/test/runtests.jl:10
   [18] include(fname::String)
      @ Base.MainInclude ./client.jl:451
   [19] top-level scope
      @ none:6
   [20] eval
      @ ./boot.jl:373 [inlined]
   [21] exec_options(opts::Base.JLOptions)
      @ Base ./client.jl:268
   [22] _start()
      @ Base ./client.jl:495
Test Summary:   | Pass  Fail  Error  Total
GPU             |  280    10      1    291
  dicf          |   30                  30
  dicfs         |   40                  40
  dcauchy       |   20                  20
  dtrpcg        |   20                  20
  dprsrch       |   20                  20
  daxpy         |   10                  10
  dssyax        |   10                  10
  dmid          |   10                  10
  dgpstep       |   10                  10
  dbreakpt      |   30                  30
  dnrm2         |         10            10
  nrm2          |   10                  10
  dcopy         |   10                  10
  ddot          |   10                  10
  dscal         |   10                  10
  dtrqsol       |   10                  10
  dspcg         |   10                  10
  dgpnorm       |   10                  10
  dtron         |   10                  10
  driver_kernel |                 1      1
LoadError("/home/kimk/REPOS/ExaTron.jl/test/gputest.jl", 44, Some tests did not pass: 280 passed, 10 failed, 1 errored, 0 broken.)
Tron: Julia: Test Failed at /home/kimk/REPOS/ExaTron.jl/test/qptest.jl:46
  Expression: ≈(prob.f, obj♯, atol = 1.0e-8)
   Evaluated: -175.02836904181595 ≈ -193.05853878066543 (atol=1.0e-8)
Stacktrace:
 [1] macro expansion
   @ ~/software/julia-1.7.2/share/julia/stdlib/v1.7/Test/src/Test.jl:445 [inlined]
 [2] macro expansion
   @ ~/REPOS/ExaTron.jl/test/qptest.jl:46 [inlined]
 [3] macro expansion
   @ ~/software/julia-1.7.2/share/julia/stdlib/v1.7/Test/src/Test.jl:1283 [inlined]
 [4] macro expansion
   @ ~/REPOS/ExaTron.jl/test/qptest.jl:44 [inlined]
 [5] macro expansion
   @ ~/software/julia-1.7.2/share/julia/stdlib/v1.7/Test/src/Test.jl:1283 [inlined]
 [6] top-level scope
   @ ~/REPOS/ExaTron.jl/test/qptest.jl:33
Test Summary:        | Pass  Fail  Total
PosDef QP            |    3     1      4
  Problem definition |    3            3
  Tron: Julia        |          1      1
ERROR: LoadError: Some tests did not pass: 3 passed, 1 failed, 0 errored, 0 broken.
in expression starting at /home/kimk/REPOS/ExaTron.jl/test/qptest.jl:32
in expression starting at /home/kimk/REPOS/ExaTron.jl/test/runtests.jl:15
ERROR: Package ExaTron errored during testing

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!
