BilevelJuMP.jl's Issues

Simple fix: get_optimizer_attribute

Dear developer,

In the implementation of JuMP.set_optimizer_attribute(bm::BilevelModel, name::String, value),

The line: return JuMP.set_optimizer_attribute(bm.solver, MOI.RawParameter(name), value)

should be changed to

return JuMP.set_optimizer_attribute(bm, MOI.RawParameter(name), value)

I get the following error with the original formulation, but with this small change it works fine.
ERROR: MethodError: no method matching get_optimizer_attribute(::MathOptInterface.Bridges.LazyBridgeOptimizer{Gurobi.Optimizer}, ::MathOptInterface.RawParameter)

Query duals

  • from the lower level is easier in most KKT-based solve methods.
  • from the upper level could be done when solving problems as NLPs.
  • and politely block getting duals from the upper level otherwise.
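
The last point could be implemented as an explicit error. A minimal sketch under assumed names: `BilevelConstraintRef`, `level`, `UPPER`, and `lower_level_dual` are illustrative, not the package's actual API:

```julia
# Hypothetical guard: raise a clear error instead of a MethodError when
# a user requests the dual of an upper-level constraint.
function JuMP.dual(cref::BilevelConstraintRef)
    if level(cref) == UPPER   # hypothetical level query
        error("Duals are only available for lower-level constraints; " *
              "querying upper-level duals is not supported.")
    end
    return lower_level_dual(cref)   # hypothetical accessor
end
```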

Check lower level objective sense

The lower-level objective sense must be MAX or MIN. FEASIBILITY_SENSE is not supported because it is not straightforward to define what should happen in Dualization.jl.

Hence, we should test for this and throw an error.
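
A minimal sketch of such a guard, assuming the lower level is available as an MOI model (the function name `check_lower_objective_sense` is hypothetical):

```julia
using MathOptInterface
const MOI = MathOptInterface

# Hypothetical check to run before handing the lower level to Dualization.jl.
function check_lower_objective_sense(lower::MOI.ModelLike)
    sense = MOI.get(lower, MOI.ObjectiveSense())
    if sense == MOI.FEASIBILITY_SENSE
        error("The lower-level objective sense must be MIN_SENSE or " *
              "MAX_SENSE; FEASIBILITY_SENSE is not supported because " *
              "its dual problem is not well defined.")
    end
    return sense
end
```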

Bug in using set_start_value

I am experiencing problems in setting initial values for the optimizations, aiming to speed up the calculations.

ERROR: LoadError: In MathOptInterface.ScalarAffineFunction{Float64}-in-MathOptInterface.EqualTo{Float64} constraint: Constant 1.0 of the function is not zero. The function constant should be moved to the set. You can use MOI.Utilities.normalize_and_add_constraint which does this automatically.

This error occurs as shown in the attached file.
report_error.txt

Change formulations dispatch

comps = get_canonical_complements(lower, lower_primal_dual_map)
for comp in comps
    if mode == SOS1Mode # SOS1
        # create a constrained slack - depends on set
        add_complement_constraint_sos1(m, comp, lower_idxmap, lower_dual_idxmap)
        # TODO add directly to the primal equality
        # (will require set change - loses dual mapping)
    elseif mode == ComplementMode
        add_complement_constraint(m, comp, lower_idxmap, lower_dual_idxmap)
    elseif mode == ComplementWithSlackMode
        add_complement_constraint_with_slack(m, comp, lower_idxmap, lower_dual_idxmap)
    elseif mode == ProductMode
        add_complement_constraint_product(m, comp, lower_idxmap, lower_dual_idxmap)
    elseif mode == ProductWithSlackMode
        add_complement_constraint_product_with_slack(m, comp, lower_idxmap, lower_dual_idxmap)
    end
end

Make SOS1Mode, ProductMode, etc. into types. This would allow dispatching on the mode and passing parameters through it.
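
A sketch of what that could look like; the struct fields and method signatures below are illustrative assumptions, not the package's current API:

```julia
# Hypothetical mode types that carry their own parameters, so the
# reformulation can dispatch on the mode instead of an if/elseif chain.
abstract type AbstractBilevelMode end

struct SOS1Mode <: AbstractBilevelMode end

struct ProductMode{T} <: AbstractBilevelMode
    epsilon::T   # illustrative relaxation tolerance for the products
end

# One method per mode; parameters travel inside the mode object.
add_complement(::SOS1Mode, m, comp, idxmap, dualmap) =
    add_complement_constraint_sos1(m, comp, idxmap, dualmap)

add_complement(mode::ProductMode, m, comp, idxmap, dualmap) =
    add_complement_constraint_product(m, comp, idxmap, dualmap;
                                      eps = mode.epsilon)
```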

Error using value with expressions (appeared with new release)

Dear developers,

While using the new version I discovered that the value function no longer works with expressions such as the ones shown in the following code.
In version 0.1.0 of BilevelJuMP they worked perfectly; since I updated my repository they no longer do (it may also be related to the JuMP upgrade). I hope it is easy and fast to solve.

error_source.txt

I realized this is the same issue as #70; the attached source code can serve as a detailed example.

The problem originates mainly at line 155 of JuMP's aff_expr file, where the return value of promote_op becomes Union{} instead of Float64. I cross-checked this with a similar example written in plain JuMP.

Typo: s2 -> s1?

Dear developer,

I am not sure whether the line cited below contains a typo: should s2 be s1? (Although s1 has the same value as s2.)

c1 = MOIU.normalize_and_add_constraint(m, f1, s2)

support value(::GenericAffExpr{Float64, BilevelJuMP.BilevelVariableRef})

When trying to get the value of a variable with type GenericAffExpr{Float64,BilevelJuMP.BilevelVariableRef} I get the following:

ERROR: MethodError: convert(::Type{Union{}}, ::Float64) is ambiguous. Candidates:
  convert(::Type{Union{}}, x) in Base at essentials.jl:169
  convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7
  convert(::Type{T}, arg) where T<:VecElement in Base at baseext.jl:8
  convert(::Type{T}, x::Number) where T<:AbstractChar in Base at char.jl:179
Possible fix, define
  convert(::Type{Union{}}, ::Number)
Stacktrace:
 [1] value(::GenericAffExpr{Float64,BilevelJuMP.BilevelVariableRef}, ::JuMP.var"#35#36"{Int64}) at /Users/nlaws/.julia/packages/JuMP/YXK4e/src/aff_expr.jl:157
 [2] value(::GenericAffExpr{Float64,BilevelJuMP.BilevelVariableRef}; result::Int64) at /Users/nlaws/.julia/packages/JuMP/YXK4e/src/aff_expr.jl:345
 [3] value(::GenericAffExpr{Float64,BilevelJuMP.BilevelVariableRef}) at /Users/nlaws/.julia/packages/JuMP/YXK4e/src/aff_expr.jl:345

I'm guessing that this could be fixed with a definition similar to the existing:

function JuMP.value(v::BilevelVariableRef)::Float64
    m = owner_model(v)
    solver = m.solver
    ref = solver_ref(v)
    return MOI.get(solver, MOI.VariablePrimal(), ref)
end

in BilevelJuMP/src/jump.jl
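
A minimal sketch of such a method, reusing the variable-level value above; whether this sidesteps the promote_op promotion issue is an assumption:

```julia
# Hypothetical: evaluate an affine expression of BilevelVariableRefs by
# summing coefficient * primal value over its terms, avoiding JuMP's
# generic promotion machinery.
function JuMP.value(ex::GenericAffExpr{Float64, BilevelVariableRef})::Float64
    acc = ex.constant
    for (coef, v) in linear_terms(ex)   # JuMP iterator over (coef, var)
        acc += coef * JuMP.value(v)
    end
    return acc
end
```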

Possible readability/usage improvement: simplify notation of the FortunyAmatMcCarl reformulation

Dear Developers,

I would suggest renaming set/get_primal_upper/lower_bound_hint and get/set_dual_upper/lower_bound to something more straightforward, such as set/get_(FAMC)bound or similar, perhaps with an optional upper/lower (or both) argument to specify the bound to set. I think that would be simpler to use and also to describe in the documentation.

From the code point of view the changes should be limited. I can try to propose a version if I have time and you agree.

FortunyAmatMcCarlMode : primal/dual_big_M not implemented

I realized that although the parameters can be set, they are not used in the code; in fact the following code returns this error:
KeyError: key MathOptInterface.VariableIndex(8) not found

The error occurs in function get_bounds called from add_complement.

using JuMP, BilevelJuMP, Gurobi

m = BilevelModel(Gurobi.Optimizer, mode = BilevelJuMP.FortunyAmatMcCarlMode(primal_big_M = 1000, dual_big_M = 1000))

@variable(Upper(m), 0 <= x <= 3 )
@variable(Lower(m), 0 <= y[1:2])
@variable(Lower(m), 0 <= z[1:2])

a = JuMP.Containers.DenseAxisArray([y[i]+ z[i] for i = 1:2], 1:2)
d = JuMP.Containers.SparseAxisArray(Dict((1, i) => y[i] for i in 1:2))

c = y[1] + y[2]

@constraint(Upper(m), x >=1)

@constraint(Lower(m), y[1] >= 0.5)
@constraint(Lower(m), y[2] >= 0.5)

@objective(Upper(m), Min, x)

@objective(Lower(m), Min, y[1]+z[2])

optimize!(m)


_a = value.(a)
_c = value.(c)
_d = value.(d)
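
If get_bounds currently indexes a bounds dictionary directly, the fix presumably needs a fallback to the configured big-M values when a variable index is missing. A sketch under that assumption (the signature and field names are hypothetical):

```julia
# Hypothetical: return (lower, upper) bounds for a variable index,
# falling back to the user-supplied big-M instead of throwing a KeyError.
function get_bounds(vi, bounds::Dict, primal_big_M::Float64)
    lo, hi = get(bounds, vi, (-primal_big_M, primal_big_M))
    return max(lo, -primal_big_M), min(hi, primal_big_M)
end
```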

Weird bug, maybe problem somewhere in build_bilevel

Dear developer,

Recently I have had significant problems getting a result for my application, and I developed the following short piece of code to reproduce my issue.
The problem represents two people who need fruit (3 pieces for the first and 6 for the second) that costs 10 cents on the global market. However, they can also buy and sell fruit between themselves at price p.
m_P/m_N represent the fruit exchanged between the users, and p_N is the fruit each user buys from the external market.
Since neither user produces fruit, the only solution of this problem is that each user buys the requested fruit from the market.
Therefore the only possible solution is m_P/m_N = 0, p_N = [3, 6], and it holds at any price p since the users do not produce fruit.

The problem can be represented as

(Upper)
Max sum(0.01*m_N[i] + 0.003*m_P[i] for i in 1:2)
s.t.
sum(m_P[i] - m_N[i] for i in 1:2) == 0
0 <= p <= 0.1

(Lower)
max sum( - p_N[i]*0.1 - m_N[i]*(p + 0.01) + m_P[i]*(p - 0.003) for i in 1:2)
s.t.
  -p_N[i] + m_P[i] - m_N[i] == -3*i   for i = 1:2
  0 <= p_N[i], m_P[i], m_N[i] <= 10

The code below is meant to represent this problem; it works and finds the solution m_P/m_N = 0, p_N = [3, 6] correctly. However, the solution comes with a value of p of about 0.09.
Theoretically this problem should work for every value of p, as explained above; however, when I change the bounds of p, infeasibilities occur.

using JuMP, BilevelJuMP, Gurobi

m = BilevelModel(Gurobi.Optimizer, mode = BilevelJuMP.FortunyAmatMcCarlMode(primal_big_M = 1000, dual_big_M = 1000))

@variable(Upper(m), 0 <= p <= 0.1)
@variable(Lower(m), 0 <= p_N[1:2] <= 10)
@variable(Lower(m), 0 <= m_P[1:2] <= 10)
@variable(Lower(m), 0 <= m_N[1:2] <= 10)

@constraint(Upper(m), con_a, sum(m_P[i] - m_N[i] for i in 1:2) == 0)
@objective(Upper(m), Max, sum(0.01*m_N[i] + 0.003*m_P[i] for i in 1:2))

@constraint(Lower(m), con_b[i=1:2], - p_N[i] + m_P[i] - m_N[i] == - 3*i)
@objective(Lower(m), Max, sum( - p_N[i]*0.1 - m_N[i]*(p + 0.01) + m_P[i]*(p - 0.003) for i in 1:2))


optimize!(m, bilevel_prob = "prova.lp", problem_format = MOI.FileFormats.FORMAT_AUTOMATIC)

_p = value.(p)
_p_N = value.(p_N)
_m_P = value.(m_P)
_m_N = value.(m_N)

@show _p
@show _p_N
@show _m_P
@show _m_N

For example, if these lines are added after the previous code:

set_upper_bound(p, 0.05)
optimize!(m, bilevel_prob = "prova2.lp") # error

The new problem becomes infeasible, which is very weird.

To confirm how strange the situation is: if the upper bounds of the variables m_P/m_N are also set to 0, the optimization works.
However, as stated at the beginning, m_P/m_N == 0 was already the result of the first optimization, so the fact that the problem only works after imposing these additional constraints suggests there might be a bug.

for i = 1:2
     set_upper_bound(m_P[i], 0)
     set_upper_bound(m_N[i], 0)
end
set_upper_bound(p, 0.05)
optimize!(m, bilevel_prob = "prova3.lp") # works but the solution hasn't changed

I am trying to investigate the issue, because I believe this hidden bug may have significant silent effects on the optimality of the solved problems.

P.S. I found it more useful to have the LP format. To do so, I changed the function print_lp as follows:

function print_lp(m, name, problem_format = MOI.FileFormats.FORMAT_AUTOMATIC)
    dest = MOI.FileFormats.Model(format = problem_format, filename = name)
    MOI.copy_to(dest, m)
    MOI.write_to_file(dest, name)
end

File containing all code
test_bug.txt

Set solver attribute

I thank the developers for the effort put into the new release.
I kindly ask them to implement the set_optimizer_attribute function or a similar method (I see it is commented out in the source code, so I believe it is an open issue).
Unfortunately, without that feature I cannot run my code.
Currently I am trying to make it work with the set_mode and set_optimizer functions; once I find a workaround I will post it here.

I solved with this workaround:
model = BilevelModel()
model.mode = BilevelJuMP.SOS1Mode()
set_optimizer(model, () -> Gurobi.Optimizer(...{parameters}...))

I don't know why, but set_mode wasn't recognized as a function; maybe it is because of the updates. In any case, model.mode = ... is equivalent to set_mode in the current version.

I also don't know why, but the workaround doesn't work in debug mode.
