Anderson acceleration (delaydiffeq.jl, closed, 8 comments)

sciml commented on June 14, 2024
Anderson acceleration

from delaydiffeq.jl.

Comments (8)

ChrisRackauckas commented on June 14, 2024

As addressed in #12, using NLsolve.jl is probably a bad idea for this. We should keep this internal to the integrator to make it efficient, since NLsolve.jl doesn't allow saving and re-initializing the cache.


devmotion commented on June 14, 2024

As noted in JuliaNLSolvers/NLsolve.jl#101:

> A better implementation would use updates to QR factors, not yet implemented in Julia (JuliaLang/julia#20791). This would enable better scaling with the history size, and more stability (through condition-number monitoring).

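For context, Anderson acceleration mixes the last m fixed-point residuals by solving a small least-squares problem every iteration. A minimal, naive sketch in Python/NumPy (this is not the NLsolve.jl implementation; re-solving the least-squares problem from scratch each step is exactly the cost that updating QR factors would reduce):

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxiter=100):
    """Naive Anderson acceleration for the fixed point x = g(x).

    Keeps the last m iterate/residual differences and solves a small
    least-squares problem from scratch every iteration; updating a QR
    factorization instead would scale better with the history size m.
    """
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                        # residual f(x) = g(x) - x
    dX, dF = [], []                   # histories of iterate/residual differences
    for _ in range(maxiter):
        if np.linalg.norm(f) < tol:
            return x
        x_new = gx.copy()             # plain fixed-point step by default
        if dF:
            DF = np.column_stack(dF)
            # mixing coefficients: argmin_gamma || f - DF @ gamma ||
            gamma, *_ = np.linalg.lstsq(DF, f, rcond=None)
            DX = np.column_stack(dX)
            x_new -= (DX + DF) @ gamma   # Anderson mixing step
        gx_new = g(x_new)
        f_new = gx_new - x_new
        dX.append(x_new - x)
        dF.append(f_new - f)
        if len(dF) > m:               # keep only the last m differences
            dX.pop(0)
            dF.pop(0)
        x, gx, f = x_new, gx_new, f_new
    return x
```

For example, `anderson(np.cos, np.array([1.0]))` converges to the fixed point of cos in a handful of iterations, far fewer than plain fixed-point iteration.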

antoine-levitt commented on June 14, 2024

It would be very good to have a common codebase with NLsolve. The current code is very naive and simple, but any optimized implementation would be more complex and non-trivial. What exactly is the problem with NLsolve? If it's avoiding the allocations, I would suggest profiling first to show it's a non-trivial effect; I would think repeated allocations of the same size should be OK for the GC to handle (but I have not measured it).


ChrisRackauckas commented on June 14, 2024

> What exactly is the problem with NLsolve? If it's avoiding the allocations, I would suggest profiling first to show it's a non-trivial effect; I would think repeated allocations of the same size should be OK for the GC to handle (but I have not measured it).

The allocations do matter here. Test problems use arrays of size ~10, but it's trivial for real problems to be a system of 1,000 or 10,000 DDEs, in which case allocating that many large arrays hurts quite a bit. Even just reducing allocations in the Picard method when it was first created was a pretty big deal. NLsolve will need a way for us to pre-allocate for it to be viable here. NLsolve also only accepts real numbers.

But also, this code is quite a bit more complicated because it has to handle the complexities of the interpolation:

https://github.com/JuliaDiffEq/DelayDiffEq.jl/blob/master/src/integrator_interface.jl#L121

There's probably a way to put it together with NLsolve.jl, though. I would love to not double up on code, but I also don't want to take a performance and feature hit to do so. (Edit: #12 shows how to do it, so that could be revived in some form if we find out this direction works well.)

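The pre-allocation pattern being asked for can be sketched generically. A hypothetical Python/NumPy comparison of a Picard-style sweep that allocates fresh temporaries every step versus one that reuses a caller-provided work buffer (the function names and the linear map `g(x) = A x + b` are illustrative, not DelayDiffEq.jl code):

```python
import numpy as np

def picard_allocating(A, b, x, iters):
    """Fixed-point sweep x <- A @ x + b, allocating fresh arrays every step."""
    for _ in range(iters):
        x = A @ x + b                 # allocates temporaries each iteration
    return x

def picard_inplace(A, b, x, iters, work=None):
    """The same sweep with a reusable work buffer: allocation-free per step.
    `work` plays the role of the cache an integrator would hold internally."""
    if work is None:
        work = np.empty_like(x)       # one-time allocation
    for _ in range(iters):
        np.matmul(A, x, out=work)     # work = A @ x, written in place
        work += b                     # work = A @ x + b
        x, work = work, x             # swap buffers instead of allocating
    return x
```

Both return the same result; the difference is that for a 10,000-dimensional state the first version churns through two large temporaries per iteration while the second allocates once, which is the kind of reuse NLsolve would need to expose.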

antoine-levitt commented on June 14, 2024

Are you concerned about the total amount of memory NLsolve needs, or the fact that it allocates it at one iteration, frees it, and allocates it again at the next? There's not really anything you can do about the first except reduce the history size m, and implementing it in DiffEq directly won't change that. As for the second, I'm a bit surprised that it costs a lot, but I don't have any hard numbers on that.

About complex numbers: it would be good to have in NLsolve, but Anderson is the only solver implemented at the moment that does not require Jacobian information, so it is the only one that would benefit from a complex NLsolve.


ChrisRackauckas commented on June 14, 2024

The second, and I'm not surprised when the arrays are large and the function calls are cheap.

What's needed for Anderson to support complex numbers?


antoine-levitt commented on June 14, 2024

Ok, interesting, I'm mostly used to expensive function evaluations.

Talking out of ignorance here, but is it infeasible to imagine an automatic caching mechanism, with a macro that would analyse a function that's called multiple times and replace it with one that pre-allocates and reuses that memory?

Anderson itself works the same for real and complex, so it's just a question of typing.

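The point that Anderson is type-generic is easy to demonstrate: the update formulas only use linear algebra, so the same code runs unchanged on complex data once the least-squares solve accepts it. A depth-1 (secant-style) Anderson sketch in Python, with a complex affine test map chosen for illustration:

```python
def anderson1(g, z0, tol=1e-12, maxiter=50):
    """Depth-1 Anderson mixing (a secant update on the residual f = g(z) - z).
    Nothing below assumes real arithmetic; complex z flows straight through."""
    z_prev = z0
    z = g(z0)                        # seed the history with one plain step
    f_prev = g(z_prev) - z_prev      # residual at the previous iterate
    for _ in range(maxiter):
        f = g(z) - z
        if abs(f) < tol:
            break
        gamma = f / (f - f_prev)     # scalar least-squares coefficient
        z, z_prev = g(z) - gamma * (g(z) - g(z_prev)), z
        f_prev = f
    return z
```

Running this on the contraction `g(z) = 0.5j*z + 1` converges to its complex fixed point `0.8 + 0.4j`, which supports the claim that complex support is "just a question of typing" in the history arrays and the least-squares solve.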

ChrisRackauckas commented on June 14, 2024

> Talking out of ignorance here, but is it infeasible to imagine an automatic caching mechanism, with a macro that would analyse a function that's called multiple times and replace it with one that pre-allocates and reuses that memory?

Julia could add something like this. It would have to be done through the GC and how it allocates (and instead recycles) memory. In fact, ArrayFire has something like it.

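The memory-recycling idea referenced here (ArrayFire keeps freed device buffers in a pool and hands them back on the next same-sized request) can be sketched without GC support. A hypothetical Python pool keyed by shape and dtype; the class and method names are illustrative, not any real library's API:

```python
import numpy as np

class BufferPool:
    """Hypothetical memory-recycling pool in the spirit of ArrayFire's
    allocator: released temporaries are kept and reused by (shape, dtype)
    instead of being freed and re-allocated every iteration."""

    def __init__(self):
        self._free = {}                        # (shape, dtype) -> released arrays

    def acquire(self, shape, dtype=np.float64):
        key = (tuple(shape), np.dtype(dtype))
        stack = self._free.get(key)
        if stack:
            return stack.pop()                 # recycle a released buffer
        return np.empty(shape, dtype=dtype)    # allocate only on a pool miss

    def release(self, arr):
        key = (arr.shape, arr.dtype)
        self._free.setdefault(key, []).append(arr)
```

An iteration loop that acquires its temporaries at the top and releases them at the bottom allocates only on the first pass; a macro-based version would automate exactly this rewrite.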
