Comments (23)

odow commented on June 11, 2024

It might help if I explain my workflow for diagnosing this:

  1. Encapsulate everything in a function, so I can run it with @time main()
  2. Comment out all constraints in the model.
  3. Run @time main(). Does it seem reasonable? Or is the bottleneck in creating the data?
  4. Block by block, uncomment a new set of constraints and run @time main()
  5. If the time increases by so much (from seconds to minutes) that you can't profile, look for $O(N^2)$ or worse scaling behavior. Try running on a smaller data set. Can you re-formulate the model to something with better scaling?
  6. If the time increases by more than expected (say, from 1 sec to 10 sec):
    a. Run using ProfileView; @profview main()
    b. Analyse the flamegraph. Do any methods show up as taking a long time (e.g., in the first post, it was promote_operation_fallback)? If so, open an issue. This is a little subjective if you don't know some of the internal details, so if in doubt, open an issue. (A minimal skeleton of this workflow is sketched below.)
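
A rough sketch of that skeleton (assuming ProfileView is installed; the function body is a placeholder for your own script):

using ProfileView  # provides @profview

function main()
    # 1. build the data
    # 2. build the model, uncommenting one block of constraints at a time
    return nothing
end

@time main()      # steps 3-4: compare timings as each block is added
@profview main()  # step 6a: inspect the flamegraph for expensive methods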

The NonlinearExpr syntax is still fairly new, and these complementarity models have "unique" structural forms that we didn't see in our testing, so there's still some room for improvement in JuMP. The tricky part is that the fixes are often very small changes that make a big difference (the first example was fixed by #3730).

odow commented on June 11, 2024

I've made two small changes: #3730 and #3731.

The biggest issue is terms like (RK[r,s] * (1 + tk[r,s]) / (1 + tk0[r]))

This ends up making a bunch of intermediate quadratic expressions:

a::VariableRef = RK[r,s]
b::AffExpr = 1 + tk[r,s]   # Allocates
c::QuadExpr = a * b  # Allocates
d::Float64 = 1 + tk0[r]
e::QuadExpr = c / d  # Allocates
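
The same chain as a runnable toy model (hypothetical standalone variables, just to show the types):

using JuMP
model = Model()
@variable(model, RK)
@variable(model, tk)
tk0 = 0.32
b = 1 + tk         # AffExpr (allocates)
c = RK * b         # QuadExpr (allocates)
e = c / (1 + tk0)  # QuadExpr (allocates)
@show typeof(b) typeof(c) typeof(e)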

We do this by default because the usual call is some large summation, where we will keep building in-place.
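
For contrast, a sketch of the in-place pattern that makes large summations cheap (roughly what the JuMP macros do for you via add_to_expression!):

using JuMP
model = Model()
@variable(model, x[1:1_000])
acc = AffExpr(0.0)
for i in 1:1_000
    add_to_expression!(acc, 2.0, x[i])  # mutates acc; no new AffExpr per term
end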

But in nonlinear expressions, we're probably building up some expression graphs that are relatively simple.

I wonder if we need a way of opting out of affine and quadratic simplifications.
x-ref #3498 (comment)

odow commented on June 11, 2024

sum(PY[r,g] * ys0[r,g,s] * (1 - ty[r,s]) for g in G)

There are also possible changes like: sum(PY[r,g] * ys0[r,g,s] for g in G) * (1 - ty[r,s]).

This sum can now efficiently be built as an AffExpr, which is then multiplied by an AffExpr to get one QuadExpr.
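
A toy sketch of the difference, with hypothetical one-dimensional data:

using JuMP
model = Model()
G = 1:5
ys0 = rand(5)
@variable(model, PY[G])
@variable(model, ty)
slow = @expression(model, sum(PY[g] * ys0[g] * (1 - ty) for g in G))  # a QuadExpr per term
fast = @expression(model, sum(PY[g] * ys0[g] for g in G) * (1 - ty))  # one AffExpr, then one QuadExpr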

odow commented on June 11, 2024

That brings it down to

julia> include("/tmp/mitch/household_model_mcp.jl");
  0.936928 seconds (1.37 M allocations: 138.243 MiB, 6.48% gc time, 72.37% compilation time)
  0.229130 seconds (1.24 M allocations: 129.766 MiB, 16.21% gc time)

odow commented on June 11, 2024

Now the pprof is:

[pprof flamegraph]

which is more reasonable.

odow commented on June 11, 2024

@mitchphillipson, your model contains lines like: I_PL_Y[r in R, s in S], ld0[r,s] * PI_Y_VA[r,s] / PL[r] and then I_PL_Y[r,s]*PL[r].

Why divide by PL[r] only to then multiply?

I might write your Y production block as:

for r in R, s in S
        if sum(ys0[r, s, g] for g in G) == 0
            continue
        end
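        # The ifelse guards against 0/0 when ld0[r,s] == 0; theta is labour's
        # share of value added.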
        theta_PL_Y_VA = ifelse(
            ld0[r,s] != 0,
            ld0[r,s] / (ld0[r,s] + kd0[r,s] * (1 + tk0[r])),
            0.0,
        )
        PI_Y_VA = PL[r]^theta_PL_Y_VA * (RK[r,s] * (1 + tk[r,s]) / (1 + tk0[r]))^(1 - theta_PL_Y_VA)
        @constraint(
            household,
            sum(PA[r,g] * id0[r,g,s] for g in G) +
            ld0[r,s] * PI_Y_VA +
            kd0[r,s] * (1 + tk0[r]) * PI_Y_VA -
            sum(PY[r,g] * ys0[r,g,s] for g in G) * (1 - ty[r,s]) ⟂ Y[r,s]
        )
    end

odow commented on June 11, 2024

using JuMP
import PATHSolver

function main()
    R = [Symbol("R_$i") for i in 1:51]
    S = G = [Symbol("S_$i") for i in 1:71]
    ys0 = Containers.DenseAxisArray(rand(length(R), length(S), length(G)), R, S, G)
    ys0_n = length(ys0)
    ys0.data[rand(1:ys0_n, floor(Int, 0.8 * ys0_n))] .= 0.0
    id0 = Containers.DenseAxisArray(rand(length(R), length(S), length(G)), R, S, G)
    ld0 = Containers.DenseAxisArray(rand(length(R), length(S)), R, S)
    ld0.data[rand(1:length(ld0), 20)] .= 0.0
    kd0 = Containers.DenseAxisArray(rand(length(R), length(S)), R, S)
    tk0 = Containers.DenseAxisArray(fill(0.32, length(R)), R)
    household = Model(PATHSolver.Optimizer)
    @variables(household, begin
        ty[r in R, s in S], (start = 0.5)
        tk[R, S]
        Y[R, S] >= 0,	(start = 1)
        PA[R, G] >= 0,	(start = 1)
        PY[R, G] >= 0,	(start = 1)
        RK[R, S] >= 0,	(start = 1)
        PL[R] >= 0,	    (start = 1)
    end)
    @expressions(household, begin
        theta_PL_Y_VA[r in R, s in S],
            ifelse(
                ld0[r,s] != 0,
                ld0[r,s] / (ld0[r,s] + kd0[r,s] * (1 + tk0[r])),
                0.0,
            )
        PI_Y_VA[r in R, s in S],
            PL[r]^theta_PL_Y_VA[r,s] * (RK[r,s] * (1 + tk[r,s]) / (1 + tk0[r]))^(1 - theta_PL_Y_VA[r,s])
        I_PL_Y[r in R, s in S],
            ld0[r,s] * PI_Y_VA[r,s] / PL[r]
        I_RK_Y[r in R, s in S],
            kd0[r,s] * (1 + tk0[r]) * PI_Y_VA[r,s] / (RK[r,s] * (1 + tk[r,s]))
    end)
    @constraint(
        household,
        [r in R, s in S; sum(ys0[r, s, g] for g in G) > 0],
        sum(PA[r,g] * id0[r,g,s] for g in G) +
        I_PL_Y[r,s] * PL[r] +
        I_RK_Y[r,s] * RK[r,s] * (1 + tk[r,s]) -
        sum(PY[r,g] * ys0[r,g,s] * (1 - ty[r,s]) for g in G) ⟂ Y[r,s]
    )
    return household
end

function main2()
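    # Wrap `/` in a NonlinearOperator so the division stays an opaque
    # nonlinear node instead of being expanded into quadratic intermediates.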
    op_div = NonlinearOperator(/, :/)
    R = [Symbol("R_$i") for i in 1:51]
    S = G = [Symbol("S_$i") for i in 1:71]
    ys0 = Containers.DenseAxisArray(rand(length(R), length(S), length(G)), R, S, G)
    ys0_n = length(ys0)
    ys0.data[rand(1:ys0_n, floor(Int, 0.8 * ys0_n))] .= 0.0
    id0 = Containers.DenseAxisArray(rand(length(R), length(S), length(G)), R, S, G)
    ld0 = Containers.DenseAxisArray(rand(length(R), length(S)), R, S)
    ld0.data[rand(1:length(ld0), 20)] .= 0.0
    kd0 = Containers.DenseAxisArray(rand(length(R), length(S)), R, S)
    tk0 = Containers.DenseAxisArray(fill(0.32, length(R)), R)
    household = Model(PATHSolver.Optimizer)
    @variables(household, begin
        ty[r in R, s in S], (start = 0.5)
        tk[R, S]
        Y[R, S] >= 0,	(start = 1)
        PA[R, G] >= 0,	(start = 1)
        PY[R, G] >= 0,	(start = 1)
        RK[R, S] >= 0,	(start = 1)
        PL[R] >= 0,	    (start = 1)
    end)
    @expressions(household, begin
        theta_PL_Y_VA[r in R, s in S],
            ifelse(
                ld0[r,s] != 0,
                ld0[r,s] / (ld0[r,s] + kd0[r,s] * (1 + tk0[r])),
                0.0,
            )
        PI_Y_VA[r in R, s in S],
            PL[r]^theta_PL_Y_VA[r,s] * op_div(RK[r,s] * (1 + tk[r,s]), 1 + tk0[r])^(1 - theta_PL_Y_VA[r,s])
        I_PL_Y[r in R, s in S],
            ld0[r,s] * PI_Y_VA[r,s]
        I_RK_Y[r in R, s in S],
            kd0[r,s] * (1 + tk0[r]) * PI_Y_VA[r,s]
    end)
    @constraint(
        household,
        [r in R, s in S; sum(ys0[r, s, g] for g in G) > 0],
        sum(PA[r,g] * id0[r,g,s] for g in G) +
        I_PL_Y[r,s] +
        I_RK_Y[r,s] -
        sum(PY[r,g] * ys0[r,g,s] for g in G) * (1 - ty[r,s]) ⟂ Y[r,s]
    )
    return household
end

GC.gc(); @time main();
GC.gc(); @time main();
GC.gc(); @time main();

GC.gc(); @time main2();
GC.gc(); @time main2();
GC.gc(); @time main2();

which is:

julia> include("household_model_mcp.jl");
  1.072836 seconds (5.07 M allocations: 432.596 MiB, 9.38% gc time, 58.08% compilation time)
  0.408449 seconds (4.94 M allocations: 424.035 MiB, 16.76% gc time)
  0.454981 seconds (4.94 M allocations: 424.209 MiB, 15.82% gc time)
  0.777681 seconds (1.12 M allocations: 123.002 MiB, 3.35% gc time, 74.64% compilation time)
  0.181090 seconds (992.88 k allocations: 115.049 MiB, 8.62% gc time)
  0.205074 seconds (992.92 k allocations: 115.103 MiB, 9.65% gc time)

So I think some of this is just writing a "better" model.

mitchphillipson commented on June 11, 2024

There are a couple of reasons the model is written the way it is:

  1. Laziness, I'll admit I wrote this very quickly to test another thing I'm working on. I wanted to know if it was my fault, or something with JuMP.
  2. Terms like $(X_1/X_2)^{\sigma_1}\cdot(X_2/X_3)^{\sigma_2}\cdot X_3$ are very common in CGE models. In this specific situation many of the $\sigma$'s happen to be one, so we could optimize. The bigger issue is that I plan for these equations to be auto-generated (this is the purpose of MPSGE), and then these optimizations become harder.

This is pretty great, though, and I'll implement these changes on the larger model. A couple of question-comments:

  1. Is op_div faster than, or preferred over, /?
  2. When summing, should I add the smallest "like" type, i.e., prefer adding affine expressions over quadratic over nonlinear? I can definitely do this in certain situations.

I'll work to implement these suggestions in both my MCP code and the backend of MPSGE. In general, writing the explicit MCP code is tedious and error-prone, but it can lead to faster code through these optimizations. We plan to have people interact through MPSGE because of its modelling simplicity.

Thanks for the suggestions!

Mitch

odow commented on June 11, 2024

I would factor out some common helper functions that let you write the simplifications once:

utility(x, y, a) = x^a * y^(1 - a)
function utility(x1, x2, x3, s1, s2)
    if s2 isa Real && isone(s2)
        return utility(x1, x2, s1)
    end
    return (x1 / x2)^s1 * (x2 / x3)^s2 * x3
end
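
A quick (hypothetical) check that the special case agrees with the general form when s2 == 1:

x1, x2, x3, s1 = 2.0, 3.0, 5.0, 0.4
@assert utility(x1, x2, x3, s1, 1.0) ≈ (x1 / x2)^s1 * (x2 / x3) * x3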

Questions:

  1. No, keep / for now. I was just playing around.
  2. Yes

odow commented on June 11, 2024

Using #3732

function main3()
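    # Assumes `using JuMP, Random` and `import PATHSolver`. `@nl` is the
    # opt-out macro proposed in #3732: it keeps the wrapped expression as a
    # nonlinear node rather than expanding affine/quadratic intermediates.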
    Random.seed!(1234)
    R = [Symbol("R_$i") for i in 1:51]
    S = G = [Symbol("S_$i") for i in 1:71]
    ys0 = Containers.DenseAxisArray(rand(length(R), length(S), length(G)), R, S, G)
    ys0_n = length(ys0)
    ys0.data[rand(1:ys0_n, floor(Int, 0.8 * ys0_n))] .= 0.0
    id0 = Containers.DenseAxisArray(rand(length(R), length(S), length(G)), R, S, G)
    ld0 = Containers.DenseAxisArray(rand(length(R), length(S)), R, S)
    ld0.data[rand(1:length(ld0), 20)] .= 0.0
    kd0 = Containers.DenseAxisArray(rand(length(R), length(S)), R, S)
    tk0 = Containers.DenseAxisArray(fill(0.32, length(R)), R)
    household = Model(PATHSolver.Optimizer)
    @variables(household, begin
        ty[r in R, s in S], (start = 0.5)
        tk[R, S]
        Y[R, S] >= 0,	(start = 1)
        PA[R, G] >= 0,	(start = 1)
        PY[R, G] >= 0,	(start = 1)
        RK[R, S] >= 0,	(start = 1)
        PL[R] >= 0,	    (start = 1)
    end)
    pow_utility(x, y, a) = x^a * y^(1 - a)
    for r in R, s in S
        if sum(ys0[r, s, g] for g in G) == 0
            continue
        end
        theta_PL_Y_VA = ifelse(
            ld0[r,s] != 0,
            ld0[r,s] / (ld0[r,s] + kd0[r,s] * (1 + tk0[r])),
            0.0,
        )
        PI_Y_VA = pow_utility(
            PL[r],
            RK[r,s] * @nl(1 + tk[r,s]) / (1 + tk0[r]),
            theta_PL_Y_VA,
        )
        I_PL_Y = ld0[r,s] * PI_Y_VA / PL[r]
        I_RK_Y = kd0[r,s] * (1 + tk0[r]) * PI_Y_VA / (RK[r,s] * @nl(1 + tk[r,s]))
        @constraint(
            household,
            sum(PA[r,g] * id0[r,g,s] for g in G) +
            I_PL_Y * PL[r] +
            I_RK_Y * RK[r,s] * @nl(1 + tk[r,s]) -
            sum(PY[r,g] * ys0[r,g,s] for g in G) * @nl(1 - ty[r,s]) ⟂ Y[r,s]
        )
    end
    return household
end

yields

  0.410601 seconds (1.08 M allocations: 89.329 MiB, 7.26% gc time, 53.34% compilation time)
  0.147260 seconds (1.07 M allocations: 88.947 MiB)
  0.154814 seconds (1.07 M allocations: 88.947 MiB)

odow commented on June 11, 2024

So it's a bit annoying that small changes to the model can have a big impact on runtime. But I don't really see a way to avoid this.

Nonlinear is pretty tightly coupled to the expression graph you generate, so there is (and perhaps should be) a big difference between sum(PY[r,g] * ys0[r,g,s] * (1 - ty[r,s]) for g in G) and sum(PY[r,g] * ys0[r,g,s] for g in G) * (1 - ty[r,s]).

If profiling throws up missing methods, we should obviously fix that.

Otherwise, one option is #3732.

This is kinda like the GAMS summation issue again. JuMP asks more of the developer to get performance, but it also offers more ways of writing the same thing.

Here's the pprof from the original main(), which shows no obvious bottleneck (apart from just constructing a bunch of expressions):

[pprof flamegraph for main()]

Here's main3:

[pprof flamegraph for main3()]

I think getindex features prominently because there is:

julia> R * S * (2G + 1 + 2 + 2G + 1)
1046469

accesses of a DenseAxisArray in the @constraint.

odow commented on June 11, 2024

We should try profiling with a call to optimize! to see the full impact.

odow commented on June 11, 2024

[pprof flamegraph, including optimize!]

The culprit is https://github.com/jump-dev/MathOptInterface.jl/blob/cab197cc755904d8e7704a67df8280e458c486e8/src/Nonlinear/parse.jl#L265-L274

If there are lots of small AffExpr, then the cost of converting them to a ScalarNonlinearFunction and then parsing using a stack is high. We could instead write them out directly.

odow commented on June 11, 2024

A big improvement here: jump-dev/MathOptInterface.jl#2487

Now the profile is much more reasonable:

[pprof flamegraph]

odow commented on June 11, 2024

I think the big block of Array is just the GC choosing to run during:

https://github.com/jump-dev/MathOptInterface.jl/blob/68e0c7cececbcabd14d548f461819157d5d69c81/src/Nonlinear/evaluator.jl#L248

odow commented on June 11, 2024

Part of the problem is this horrific block:

@expression(household, betaks[r=R,s=S],
    kd0[r,s]/sum(kd0[rr,ss] for rr∈R,ss∈S)
)

@expressions(household, begin
    PI_KS, sum(betaks[r,s]*RK[r,s]^(1+etaK) for r∈R,s∈S)^(1/(1+etaK))
    O_KS_RK[r=R,s=S], kd0[r,s]*(RK[r,s]/PI_KS)^etaK
    I_RKS_KS, sum(kd0[r,s] for r∈R,s∈S)
end)

@constraint(household, prf_KS,
    RKS*I_RKS_KS - sum(RK[r,s]*O_KS_RK[r,s] for r∈R,s∈S) ⟂ KS
)

|R| is 51 and |S| is 71, so PI_KS contains $O(|R| \times |S|)$ terms. PI_KS appears in the denominator of O_KS_RK, and O_KS_RK appears in prf_KS $O(|R| \times |S|)$ times. So the function has $O(|R|^2 \times |S|^2) = 13,111,641$ terms. Ouch.

odow commented on June 11, 2024

@mitchphillipson if I do:

@variable(household, __PI_KS__)
@constraint(household, sum(betaks[r,s]*RK[r,s]^(1+etaK) for r∈R,s∈S)^(1/(1+etaK)) - __PI_KS__ ⟂ __PI_KS__)
@expressions(household, begin
    # PI_KS, sum(betaks[r,s]*RK[r,s]^(1+etaK) for r∈R,s∈S)^(1/(1+etaK))
    O_KS_RK[r=R,s=S], kd0[r,s]*(RK[r,s]/__PI_KS__)^etaK
    I_RKS_KS, sum(kd0[r,s] for r∈R,s∈S)
end)

Then your original model runs in 50 seconds, which seems more in line with GAMS.

The wider issue is that we don't have a common subexpression elimination phase yet, so PI_KS appears many many times.

mitchphillipson commented on June 11, 2024

It's an interesting solution, introducing a dummy variable/constraint.

I haven't been able to test your changes, but by "runs" do you mean "builds the constraints" or "optimizes"?

odow commented on June 11, 2024

do you mean "builds the constraints" or "optimizes"?

I wrapped your entire script in a function, including

set_attribute(household, "cumulative_iteration_limit", 0)
optimize!(household)

results in

julia> include("household_model_mcp.jl")
Reading options file /var/folders/bg/dzq_hhvx1dxgy6gb5510pxj80000gn/T/jl_mTrTb0
 > cumulative_iteration_limit 0
Read of options file complete.

Path 5.0.03 (Fri Jun 26 09:58:07 2020)
Written by Todd Munson, Steven Dirkse, Youngdae Kim, and Michael Ferris
Preprocessed size   : 23502

Major Iteration Log
major minor  func  grad  residual    step  type prox    inorm  (label)
    0     0     1     1 nan           I 0.0e+00 1.0e+05 (mkt_PF)

Major Iterations. . . . 0
Minor Iterations. . . . 0
Restarts. . . . . . . . 0
Crash Iterations. . . . 0
Gradient Steps. . . . . 0
Function Evaluations. . 1
Gradient Evaluations. . 1
Basis Time. . . . . . . 0.001942
Total Time. . . . . . . 1.436610
Residual. . . . . . . . 1.000000e+20
Postsolved residual:   nan
 48.493215 seconds (88.03 M allocations: 12.604 GiB, 45.03% gc time, 16.20% compilation time)

I'll email it to you

mitchphillipson commented on June 11, 2024

Wow. Running in less than a minute is wild; I don't think GAMS does it that fast. I'll be digging into this tomorrow.

Truly impressive work.

odow commented on June 11, 2024

GAMS will probably similarly benefit from the __PI_KS__ trick. That function is just truly nasty.

I think we need to pay more attention to the computational form of the functions, rather than their mathematical form.

A good example, even if it doesn't really make a runtime difference (because it's just Float64), is:

@expression(household, betaks[r=R,s=S],
    kd0[r,s]/sum(kd0[rr,ss] for rr∈R,ss∈S)
)

This sums the denominator sum(kd0[rr,ss] for rr∈R,ss∈S) separately for every r in R and s in S. That's the same quadratic runtime in $|R| \times |S|$.

You could instead do:

betaks_denominator = sum(kd0)
@expression(household, betaks[r=R,s=S], kd0[r,s] / betaks_denominator)

or even

betaks_denominator = sum(kd0)
betaks = kd0 ./ betaks_denominator

odow commented on June 11, 2024

Closing because I don't know if there's much else to do here. @jd-foster added a note to #2348 (comment)

@mitchphillipson can comment below if he finds anything else, or we can keep chatting via an email thread that has been running in parallel.
