anriseth / multijump.jl
MultiJuMP enables the user to easily run multiobjective optimisation problems and generate Pareto fronts.
License: Other
I downgraded JuMP to 0.21.2 and get
LoadError: LoadError: UndefVarError: JuMPTypes not defined
I even tried a clean reinstall of JuMP 0.21.2 and still get this issue. When I search online I see you made the library work with 0.21.2 to fix this exact error message, but it persists for me. Any tips?
The Pareto front plotting logic should be rewritten as Plots recipes.
Hello,
Is it possible to set a time limit?
In that case does it limit:
If the time limit is reached do we obtain:
Thanks in advance.
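For the old (JuMP 0.18-era) MultiJuMP API, one plausible way to impose a time limit is to pass it to the underlying solver, in which case it applies per scalarised solve rather than to the whole Pareto-front computation. A minimal sketch, assuming Cbc is the solver (`seconds` is Cbc's own time-limit option; other solvers use different option names):

```julia
# Sketch: pass a time limit through the solver constructor. With the
# weighted-sum method, each scalarised subproblem is a separate solve,
# so a 60-second limit here bounds each subproblem, not the total runtime.
using MultiJuMP, JuMP
using Cbc: CbcSolver

mmodel = multi_model(solver = CbcSolver(seconds = 60.0), linear = true)
# ... variables, constraints, objectives as in the README example ...
# If the limit is hit, the solver returns its best incumbent, so the
# corresponding Pareto point may be suboptimal.
```

This would mean the total runtime is roughly the per-solve limit times the number of scalarised subproblems.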
Hey guys,
I aim to use MultiJuMP in combination with an NLP solver (probably Juniper), but I cannot get the package to work.
Like Orhanabar, I tried to run the example first (#35) and got the exact same error:
`UndefVarError: multi_model not defined
Stacktrace:
[1] top-level scope at In[38]:3`
So basically none of the MultiJuMP methods are recognised by Julia, even though it looks like the package is ready to work with.
Downgrading further to JuMP 0.18.5 and MultiJuMP 0.5.0, which was the solution in #35, did not help either.
I use Julia 1.8.2 with JuMP v0.21.2 and MultiJuMP v0.6.0 (as suggested).
Any help is highly appreciated!
```
julia> using JuMP, MultiJuMP
[ Info: Precompiling JuMP [4076af6c-e467-56ae-b986-b466b2749572]
[ Info: Precompiling MultiJuMP [f6097e2c-3ba3-5605-a9a8-3a277acb490f]
WARNING: could not import JuMP.JuMPTypes into MultiJuMP
ERROR: LoadError: LoadError: UndefVarError: JuMPTypes not defined
Stacktrace:
 [1] top-level scope at /Users/oscar/.julia/packages/MultiJuMP/49onf/src/types.jl:13
 [2] include(::Function, ::Module, ::String) at ./Base.jl:380
 [3] include at ./Base.jl:368 [inlined]
 [4] include(::String) at /Users/oscar/.julia/packages/MultiJuMP/49onf/src/MultiJuMP.jl:3
 [5] top-level scope at /Users/oscar/.julia/packages/MultiJuMP/49onf/src/MultiJuMP.jl:15
 [6] include(::Function, ::Module, ::String) at ./Base.jl:380
 [7] include(::Module, ::String) at ./Base.jl:368
 [8] top-level scope at none:2
 [9] eval at ./boot.jl:331 [inlined]
 [10] eval(::Expr) at ./client.jl:467
 [11] top-level scope at ./none:3
in expression starting at /Users/oscar/.julia/packages/MultiJuMP/49onf/src/types.jl:13
in expression starting at /Users/oscar/.julia/packages/MultiJuMP/49onf/src/MultiJuMP.jl:15
ERROR: Failed to precompile MultiJuMP [f6097e2c-3ba3-5605-a9a8-3a277acb490f] to /Users/oscar/.julia/compiled/v1.5/MultiJuMP/KOgs7_EU21N.ji.
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1290
 [3] _require(::Base.PkgId) at ./loading.jl:1030
 [4] require(::Base.PkgId) at ./loading.jl:928
 [5] require(::Module, ::Symbol) at ./loading.jl:923
```
Moved from README.md:
Do you think it would be possible to support Benson's algorithm in MultiJuMP (in a similar way to bensolve) when the model is detected to be linear? It could be done using Polyhedra.jl. Benson's algorithm has a nice application for the Entropic Cone :-P
Howdy,
Tried to run the test programs to make sure that everything is good (I have upgraded all my packages).
I keep getting segfaults when running any of the test programs, and it isn't clear why, e.g.:
```
jl_apply at /Users/julia/buildbot/worker/package_macos64/build/src/./julia.h:1700 [inlined]
do_call at /Users/julia/buildbot/worker/package_macos64/build/src/interpreter.c:369
eval_body at /Users/julia/buildbot/worker/package_macos64/build/src/interpreter.c:0
jl_interpret_toplevel_thunk at /Users/julia/buildbot/worker/package_macos64/build/src/interpreter.c:911
jl_toplevel_eval_flex at /Users/julia/buildbot/worker/package_macos64/build/src/toplevel.c:819
jl_toplevel_eval_flex at /Users/julia/buildbot/worker/package_macos64/build/src/toplevel.c:769
jl_toplevel_eval at /Users/julia/buildbot/worker/package_macos64/build/src/toplevel.c:828 [inlined]
jl_toplevel_eval_in at /Users/julia/buildbot/worker/package_macos64/build/src/toplevel.c:848
eval at ./boot.jl:331
eval_user_input at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
macro expansion at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:118 [inlined]
#26 at ./task.jl:358
jl_apply at /Users/julia/buildbot/worker/package_macos64/build/src/./julia.h:1700 [inlined]
start_task at /Users/julia/buildbot/worker/package_macos64/build/src/task.c:687
Allocations: 173746270 (Pool: 173710063; Big: 36207); GC: 195
Segmentation fault: 11
```
It seems to me like the program is requesting more threads than are available? Has anybody else encountered this issue or knows the solution?
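If the threading suspicion is right, a quick way to rule it out is to force a single Julia thread and a single BLAS thread before running the tests. A minimal sketch, assuming a Unix shell and Julia on the PATH:

```shell
# Force single-threaded execution to rule out thread-related crashes.
# JULIA_NUM_THREADS controls Julia's task threads; OPENBLAS_NUM_THREADS
# controls the BLAS library bundled with Julia.
export JULIA_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
julia -e 'println(Threads.nthreads())'   # should print 1
```

If the segfault disappears in this configuration, the crash is likely thread-related rather than a MultiJuMP bug.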
Add the possibility to add penalty terms. Useful if the user wants to control constraint-violation themselves.
Hi,
I would like to run an ILP model with three objectives.
The problem is in types.jl: Phi is fixed at two dimensions. Using the WeightedSum method I can deal with more than two objectives, but then I cannot check the value of the third dimension in Phi.
Thanks
Even though `@recipe function f(m::JuMP.Model)` is practical to have in some cases, it should not be there, since we are hijacking the behaviour of JuMP.Model (type piracy).
```julia
const exp_obj1 = @expression(mmodel, -y + 0.05 * z)
const exp_obj2 = @expression(mmodel, 0.05 * y - z)
const obj1 = SingleObjective(exp_obj1)
const obj2 = SingleObjective(exp_obj2)
# setting objectives in the data
const multim = get_multidata(mmodel)
multim.objectives = [obj1, obj2]
```
represents too many steps. We should have a `set_objectives(::MultiData, ::Vector{SingleObjective})`, and likewise a `set_objectives(::MultiData, ::Vector{JuMP.GenericAffExpr})` which would construct each objective from a `JuMP.GenericAffExpr`.
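The proposed API could be sketched as below. This uses stand-in `SingleObjective` and `MultiData` definitions so the snippet runs without JuMP; the real version would live in MultiJuMP and operate on its own types:

```julia
# Stand-ins for MultiJuMP's types, for illustration only.
struct SingleObjective
    expr  # a JuMP expression in the real package
end

mutable struct MultiData
    objectives::Vector{SingleObjective}
end
MultiData() = MultiData(SingleObjective[])

# Set objectives directly from SingleObjective values.
function set_objectives(md::MultiData, objs::Vector{SingleObjective})
    md.objectives = objs
    return md
end

# Convenience overload: wrap raw expressions (JuMP.GenericAffExpr in the
# real package) into SingleObjective automatically.
function set_objectives(md::MultiData, exprs::Vector)
    md.objectives = [SingleObjective(e) for e in exprs]
    return md
end
```

With this, the seven lines above collapse to a single `set_objectives(get_multidata(mmodel), [exp_obj1, exp_obj2])` call.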
We're trying to apply a consistent policy on the use of the JuMP name in unofficial packages. Would you mind adding a disclaimer to README stating that this package is not developed or maintained by the JuMP developers?
Moved from README.md.
References:
Moved from README.md:
The picture of the Pareto points from the linear example in the README is not showing on Github as it links to a pdf. @matbesancon do you have a svg or other file type that Github's markdown viewer supports?
Line 54 of README:
![Linear pareto front](./img/linear.pdf)
As of this writing, 19 Feb 2022, the latest version of JuMP is v0.22.3. The current version of JuMP required by MultiJuMP conflicts with the more recent version of JuMP required by many other packages that we use.
May I request that MultiJuMP be updated for compatibility with the most recent version of JuMP? Thank you.
Moved from README.md
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml
to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix
on this issue.
I'll open a PR within a few hours, please be patient!
Any reason why we hard-coded the betatree range here instead of making it configurable? Maybe it could be something like `0.0:(1/mdata.pointsperdim):1.0`.
Lines 107 to 109 in 0921721
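The suggested replacement can be illustrated on its own: deriving the weight grid from the number of points per dimension instead of hard-coding it. Here `pointsperdim` stands in for `mdata.pointsperdim`:

```julia
# Derive the betatree weight range from the points-per-dimension setting
# rather than hard-coding it.
pointsperdim = 4  # stand-in for mdata.pointsperdim
betarange = 0.0:(1 / pointsperdim):1.0

collect(betarange)  # returns [0.0, 0.25, 0.5, 0.75, 1.0]
```

This yields `pointsperdim + 1` evenly spaced weights between 0 and 1, so users could trade front resolution against solve count.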
I'm using the Weighted Sum method of the package MultiJuMP.
Normally I generate a set of non-dominated solutions. However, when running my ILP model with the weighted sum method, I got solutions which are dominated by other solutions.
I can't understand this issue.
For example, for an ILP model with 3 objectives I obtain these trade-offs:
[0.683333, 0.0178571, 0.428571]; [0.683333, 0.0535714, 0.285714]; [0.683333, 0.0580357, 0.285714]
So the second trade-off dominates the third trade-off. I don't know whether this is an issue or not?
Thanks
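The dominance claim can be checked directly. A minimal sketch, assuming all three objectives are minimised (the standard Pareto-dominance definition):

```julia
# a dominates b iff a is no worse in every objective and strictly
# better in at least one (minimisation convention).
dominates(a, b) = all(a .<= b) && any(a .< b)

p1 = [0.683333, 0.0178571, 0.428571]
p2 = [0.683333, 0.0535714, 0.285714]
p3 = [0.683333, 0.0580357, 0.285714]

dominates(p2, p3)  # true: equal in objectives 1 and 3, better in objective 2
dominates(p1, p2)  # false: p1 and p2 are mutually non-dominated
```

So the reported point set is indeed not a pure Pareto front; the weighted-sum scalarisation can return such points when a subproblem terminates at a suboptimal incumbent.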
Hello,
I executed an ILP model with three objectives using the weighted sum method and I got a Pareto front.
The problem: I would like to show the weights used for each objective to generate each point of the Pareto front.
Do you have an idea how to show that?
Thanks
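It is not obvious that MultiJuMP exposes the per-point weights, but one workaround is to run the weighted-sum scalarisations yourself and record the weight vector next to each solution. A sketch, where `solve_scalarised` is a hypothetical stand-in for building and solving the single-objective problem min wᵀf(x):

```julia
# Stand-in for solving the scalarised problem; the real version would
# build a JuMP model with objective dot(w, objectives) and solve it.
function solve_scalarised(w)
    return w .* 2  # dummy objective values for illustration
end

# Enumerate weight vectors on the 3-objective simplex and keep the
# weights stored alongside each computed point.
results = NamedTuple[]
for w1 in 0.0:0.5:1.0, w2 in 0.0:0.5:(1.0 - w1)
    w = [w1, w2, 1.0 - w1 - w2]
    push!(results, (weights = w, point = solve_scalarised(w)))
end
```

Each entry of `results` then pairs a Pareto point with the exact weight vector that produced it.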
The tag name "v0.1" is not of the appropriate SemVer form (vX.Y.Z).
cc: @anriseth
Hello,
How do I get the solver time of each computed point in the generated Pareto front set?
PS: using the weighted sum method
Thanks
Hello,
I’m using MultiJuMP.jl and Juniper to solve an MINLP problem. It takes more than 2 hours to optimise the problem using one core, and I wonder how I can accelerate the calculation. Can I run my code with multiple threads or cores with MultiJuMP.jl? How do I do that?
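MultiJuMP itself does not obviously parallelise the front computation, but the weighted-sum subproblems are independent of each other, so one workaround is to loop over the weight vectors yourself and distribute the solves across Julia threads. A sketch, where `solve_subproblem` is a hypothetical stand-in for building and solving one scalarised MINLP (start Julia with `JULIA_NUM_THREADS` greater than 1):

```julia
# Dummy work standing in for one scalarised solve; the real version
# would build its own JuMP model and call the solver.
solve_subproblem(w) = (sleep(0.01); w)

weights = [[w, 1 - w] for w in 0.0:0.25:1.0]
points = Vector{Any}(undef, length(weights))

Threads.@threads for i in eachindex(weights)
    # NOTE: each task must build its own JuMP model; JuMP models are
    # not safe to share across threads.
    points[i] = solve_subproblem(weights[i])
end
```

Whether this helps in practice depends on the solver: Juniper itself may already use several cores, in which case oversubscription can hurt.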
Not sure this is already doable, more of an enhancement. In the meantime, we should freeze JuMP version in Project.toml
I am planning to start working on multi-objective optimisation and found MultiJuMP, but the example on the README page throws an error. It looks like it doesn't recognise the main function of the optimisation. Does it work with the newest version of JuMP, after they made significant changes?
```julia
using MultiJuMP, JuMP
using Clp: ClpSolver

const mmodel = multi_model(solver = ClpSolver(), linear = true)
const y = @variable(mmodel, 0 <= y <= 10.0)
const z = @variable(mmodel, 0 <= z <= 10.0)
@constraint(mmodel, y + z <= 15.0)

# objectives
const exp_obj1 = @expression(mmodel, -y + 0.05 * z)
const exp_obj2 = @expression(mmodel, 0.05 * y - z)
const obj1 = SingleObjective(exp_obj1)
const obj2 = SingleObjective(exp_obj2)

# setting objectives in the data
const multim = get_multidata(mmodel)
multim.objectives = [obj1, obj2]
solve(mmodel, method = WeightedSum())

# Get the Utopia and Nadir points
utopiapoint = getutopia(multim)
nadirpoint = getnadir(multim)
```
The error is:
```
UndefVarError: multi_model not defined
Stacktrace:
 [1] top-level scope at In[38]:3
```
Does the package work with linear objectives? The examples only use Ipopt as a solver and non-linear problems.
```julia
using MultiJuMP: SingleObjective, MultiModel, getMultiData

const mmodel = MultiModel(solver = ClpSolver())
const y = @variable(mmodel, 0 <= y <= 10.0)
const z = @variable(mmodel, 0 <= z <= 10.0)
@constraint(mmodel, y + z <= 15.0)

# objectives
const exp_obj1 = @expression(mmodel, -y + 0.05 * z)
const exp_obj2 = @expression(mmodel, 0.05 * y - z)
const obj1 = SingleObjective(exp_obj1)
const obj2 = SingleObjective(exp_obj2)

# setting objectives in the data
const multim = getMultiData(mmodel)
multim.objectives = [obj1, obj2]
solve(mmodel)
```
Error:
```
Unexpected object -y + 0.05 z in nonlinear expression.
Stacktrace
```
If it does, a linear / integer example might be interesting. If not, then a note could be added in the README (happy to make a PR in both cases)
Moved from README.md:
`t` before the individual optimisations?
`t` caused the algorithm to find a worse, local optimum.
Hello,
I have an Integer Linear model with two variables and three objectives.
I'm using the Weighted Sum method to generate the Pareto front points.
I would like to show the value of each variable at each generated Pareto front point.
Thanks
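If MultiJuMP does not expose the variable values per point directly, one workaround is to record them yourself while looping over the weighted-sum scalarisations. A sketch, where `solve_and_getvalues` is a hypothetical stand-in for `solve(m)` followed by `getvalue.(variables)` on the real JuMP model:

```julia
# Stand-in: the real version would solve the scalarised model and read
# the variable values back with getvalue (JuMP 0.18 API).
solve_and_getvalues(w) = (objectives = w, varvalues = [1.0, 2.0])  # dummy

# One scalarised solve per weight vector; variable values are kept
# alongside the objective values of each Pareto point.
front = [solve_and_getvalues([w, 1 - w, 0.0]) for w in 0.0:0.5:1.0]

for p in front
    println("point ", p.objectives, " from variables ", p.varvalues)
end
```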
Related to the solverhook method doing dispatch manually over symbols, a simpler design could leverage Julia dispatch and types for methods:
```julia
abstract type MultiOptMethod end

"""
    multi_solve(mt::MultiOptMethod)

Solves the multi-objective problem with method `mt`.
"""
function multi_solve end

"""
Epsilon-constraint method, parametrised over the epsilon value.
"""
struct EpsMethod <: MultiOptMethod
    eps::Float64
end

mutable struct MultiData
    # ...
    mt::MultiOptMethod
end

function solvehook(m::Model;
                   inequalityconstraint::Bool = false, kwargs...)
    # dispatches over the method stored in MultiData
end
```