Comments (2)
Thank you for trying out this new package!
The computational complexity of solving an LP is not proportional to the number of variables.
The result of HiGHS on my computer:
100 vars: 3.10 ms
1000 vars: 11.37 ms
10000 vars: 457.82 ms
20000 vars: 1790.67 ms
50000 vars: 11237.34 ms
The result of Gurobi on my computer:
100 vars: 0.93 ms
1000 vars: 3.99 ms
10000 vars: 220.75 ms
20000 vars: 863.75 ms
50000 vars: 5061.72 ms
HiGHS takes roughly 2x as long as Gurobi, and the ratio stays consistent as the problem size increases.
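To put a number on the "not proportional" point, here is a quick sketch (pure Python, using only the timings posted above) that fits a least-squares slope to log(time) vs. log(n_vars) for each solver - a slope above 1 means superlinear scaling:

```python
import math

# Timings posted above: (n_vars, milliseconds)
highs_times = [(100, 3.10), (1000, 11.37), (10000, 457.82),
               (20000, 1790.67), (50000, 11237.34)]
gurobi_times = [(100, 0.93), (1000, 3.99), (10000, 220.75),
                (20000, 863.75), (50000, 5061.72)]

def loglog_slope(points):
    """Least-squares slope of log(time) against log(n_vars)."""
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(t) for _, t in points]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(f"HiGHS exponent:  {loglog_slope(highs_times):.2f}")
print(f"Gurobi exponent: {loglog_slope(gurobi_times):.2f}")
```

Both solvers come out well above 1 on these numbers, which matches the superlinear growth visible in the raw timings.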
By the way, you can pass a generator to quicksum directly without turning it into a list first. This is my test code:
import math
import time

import pyoptinterface as poi
from pyoptinterface import highs, gurobi

def run(n_vars: int):
    model = highs.Model()
    t1 = time.perf_counter()
    model.set_raw_parameter("log_to_console", False)
    model.set_raw_parameter("output_flag", False)
    x = model.add_variables(range(n_vars), lb=0.0, ub=1.0)
    model.add_linear_constraint(poi.quicksum(x), poi.Eq, 1.0)
    objective = poi.quicksum(i * x[i] for i in range(n_vars))
    model.set_objective(objective, poi.ObjectiveSense.Maximize)
    model.optimize()
    assert math.isclose(model.get_value(objective), n_vars - 1)
    t2 = time.perf_counter()
    print(f"{n_vars} vars:\t{(t2 - t1) * 1000:.2f} ms")

for n_vars_ in (100, 1_000, 10_000, 20_000, 50_000):
    run(n_vars_)
Ah yeah I think I was kind of mixing different solver settings when comparing with other libraries.
With presolve disabled it's much more in line with what I was expecting and with other libraries - though, as you pointed out, the scaling is still nonlinear, which is expected.
Thanks!
import math
import time

import pyoptinterface as poi
from pyoptinterface import highs

def run(n_vars: int):
    model = highs.Model()
    t1 = time.perf_counter()
    model.set_raw_parameter("log_to_console", False)
    model.set_raw_parameter("output_flag", False)
    model.set_raw_parameter("presolve", "off")
    x = model.add_variables(range(n_vars), lb=0.0, ub=1.0)
    model.add_linear_constraint(poi.quicksum(x), poi.Eq, 1.0)
    objective = poi.quicksum(i * x[i] for i in range(n_vars))
    model.set_objective(objective, poi.ObjectiveSense.Maximize)
    model.optimize()
    assert math.isclose(model.get_value(objective), n_vars - 1)
    t2 = time.perf_counter()
    print(f"{n_vars} vars:\t{(t2 - t1) * 1000:.2f} ms")

for n_vars_ in (10, 10, 100, 1_000, 10_000, 20_000, 50_000, 100_000):
    run(n_vars_)
10 vars: 1.41 ms <-- warmup
10 vars: 0.09 ms
100 vars: 0.24 ms
1000 vars: 2.13 ms
10000 vars: 52.38 ms
20000 vars: 171.64 ms
50000 vars: 959.09 ms
100000 vars: 3662.42 ms
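Dividing the earlier presolve-on HiGHS timings by these presolve-off ones (a quick sketch over the numbers posted in this thread) shows how much of the original runtime presolve accounted for at each size:

```python
# HiGHS timings from this thread, in ms: presolve on vs. off
presolve_on = {10_000: 457.82, 20_000: 1790.67, 50_000: 11237.34}
presolve_off = {10_000: 52.38, 20_000: 171.64, 50_000: 959.09}

for n in presolve_on:
    speedup = presolve_on[n] / presolve_off[n]
    print(f"{n} vars: {speedup:.1f}x faster with presolve off")
# → 10000 vars: 8.7x, 20000 vars: 10.4x, 50000 vars: 11.7x
```

So for this particular dense-objective model, presolve was dominating the solve time, and the gap widens with problem size.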