Comments (18)
Some updates:

I ended up adding an abstract supertype `AbstractContinuousCallback` to DiffEqBase and made both `ContinuousCallback` and `DiscontinuityCallback` subtypes of it. With some minor changes in DiffEqBase (replacing `ContinuousCallback` with `AbstractContinuousCallback` in the type annotations of some methods) and implementations of `find_callback_time` and `apply_callback!` for `DiscontinuityCallback`, I could then get a working example for dependent delays just by adding one `DiscontinuityCallback` to the callback set. I got similar results for our standard example with constant delays and with a dependent delay `(t, u) -> a` where `a` is just that constant delay. I will gather all changes needed in DiffEqBase and OrdinaryDiffEq, add more DDE examples to DiffEqProblemLibrary, and then create a PR for dependent delays in the next few days.

However, first I will push a separate PR that tries to improve and fix some problems with lazy interpolants and minimizes the number of temporary arrays, since I discovered some problems when I tried to benchmark lazy interpolants. I want to keep these changes separate.
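The supertype change described above can be sketched in a few lines; the struct bodies and the method below are placeholders for illustration, not the real callback fields or the DiffEqBase internals:

```julia
# Sketch of the dispatch change described above: methods that previously took
# ::ContinuousCallback are loosened to a new abstract supertype, so the
# discontinuity callback reuses the same machinery. Struct bodies are
# placeholders, not the real callback fields.
abstract type AbstractContinuousCallback end

struct MyContinuousCallback <: AbstractContinuousCallback end
struct MyDiscontinuityCallback <: AbstractContinuousCallback end

# a method annotated with the supertype now handles both callback kinds
handled_by(cb::AbstractContinuousCallback) = "shared continuous-callback code"

handled_by(MyContinuousCallback())     # both subtypes hit the same method
handled_by(MyDiscontinuityCallback())
```

With this hierarchy, only methods that genuinely differ (here `find_callback_time` and `apply_callback!`) need separate definitions per subtype.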
from delaydiffeq.jl.
After playing around with it, the solution I grew to like is the following. Everything is a `DDEProblem`. `DDEProblem` has two lag inputs: constant lags and dependent lags. Additionally, it has a boolean for whether it's a neutral equation or not. Thus the old `ConstantLagDDEProblem` is just a `DDEProblem` with no dependent lags. Because of this, `dependent_lags` is an optional argument, since that's a special case and constant lag problems are likely more standard.

DelayDiffEq is set up for the deprecation. Neutral problems extend the lag tree all the way to the end of the `tspan`. There are checks for empty `constant_lags` so that state-dependent delay problems with undeclared delays are allowed. `residual_control.jl` has tests showing that this works okay (but not great; to plotting accuracy, as Shampine says). I updated DiffEqBase.jl, DiffEqProblemLibrary.jl, and this repo to these changes. @devmotion can you take a look and see if you have any qualms with this setup before it's released? It ended up not being too much of a change.

The next step is to implement the callbacks for discontinuity tracking. We need to have a storage of every previous discontinuity and check for zero-crossings against that. We can use `d_discontinuities` but need to make sure that further discontinuities not only get added to `tspan` but also to this vector. We can check for zero-crossings by seeing if `.>` changes between two timepoints.
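The zero-crossing check sketched above could look roughly like this; all names are illustrative, not the actual DelayDiffEq internals:

```julia
# A discontinuity at time `d` is crossed by a dependent lag `lag(t, u)` if the
# sign of t - lag(t, u) - d flips between two consecutive timepoints.
crossed(lag, d, tprev, uprev, t, u) =
    ((tprev - lag(tprev, uprev) > d) != (t - lag(t, u) > d))

# For a vector of stored discontinuities the same check can be broadcast with
# `.>` as suggested above:
crossed_any(lag, ds, tprev, uprev, t, u) =
    any((tprev .- lag(tprev, uprev) .> ds) .!= (t .- lag(t, u) .> ds))

lag(t, u) = 1.0                        # constant dependent delay
crossed(lag, 0.0, 0.5, 0.0, 1.5, 0.0)  # true: sign flips at t = 1
```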
I don't have any serious objections, I guess, since it seems we do not have to specialize upon these different problems too often and can handle them within one framework.

> If all values are `Number`, we have constant lags and we know how to deal with that. If some (or all) of the values are not `Number`, then we assume that they are functions (to allow callable types) of the form `aᵢ(t, u)`.

I liked this idea, so maybe we could add a convenient constructor that automatically filters lags and assigns numbers to `constant_lags` and other lags to `dependent_lags`.
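Such a filtering constructor could be sketched as follows; `split_lags` is a hypothetical helper name, not a proposed API:

```julia
# Hypothetical helper showing the filtering suggested above: `Number`s become
# constant lags, everything else is assumed to be callable of the form a(t, u)
# and treated as a dependent lag.
function split_lags(lags)
    constant_lags  = [lag for lag in lags if lag isa Number]
    dependent_lags = [lag for lag in lags if !(lag isa Number)]
    return constant_lags, dependent_lags
end

clags, dlags = split_lags([1.0, 2, (t, u) -> 1 + u[1]^2])
# clags == [1.0, 2]; dlags holds the single function
```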
> I liked this idea, so maybe we could add a convenient constructor that automatically filters lags and assigns numbers to `constant_lags` and other lags to `dependent_lags`.

The issue with that is that it would never be inferrable, and I'd always like to have some inferrable route. If the lags were tuples we could do something, but I'd like to keep the number of constructors down if it's not super helpful.
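For completeness, here is a rough sketch of what an inferrable tuple-based split could look like; the helper names are made up and this is not a proposed constructor:

```julia
# Made-up sketch of an inferrable tuple split: dispatching on the element
# types peels `Number`s into the constant lags, so the result type can be
# known at compile time (unlike filtering a Vector{Any}).
filter_constant(lags::Tuple) = _const(lags...)
_const() = ()
_const(x::Number, rest...) = (x, _const(rest...)...)
_const(x, rest...) = _const(rest...)

filter_dependent(lags::Tuple) = _dep(lags...)
_dep() = ()
_dep(x::Number, rest...) = _dep(rest...)
_dep(x, rest...) = (x, _dep(rest...)...)

filter_constant((1.0, (t, u) -> u[1], 2))  # (1.0, 2)
```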
Ah, I see. I'm fine with the proposed constructor anyway.
> The next step is to implement the callbacks for discontinuity tracking. We need to have a storage of every previous discontinuity and check for zero-crossings against that. We can use `d_discontinuities` but need to make sure that further discontinuities not only get added to `tspan` but also to this vector. We can check for zero-crossings by seeing if `.>` changes between two timepoints.
I'm already working on a (hopefully) almost complete prototype which will include discontinuity tracking of dependent delays. I haven't uploaded my working branch yet, but I think it is time to first discuss some issues I already discovered.
Regarding the current implementation:
- It is currently not done, but I think we should differentiate between discontinuities of order 0 and 1 at the initial time point `tspan[1]`, i.e. whether `h(t) == u0` or not. This is easy to check, and for non-neutral DDEs it can sometimes decrease the number of discontinuities we have to track.
- Discontinuities that are added after `u` was modified by a callback are only added to `d_discontinuities` in

  https://github.com/JuliaDiffEq/DelayDiffEq.jl/blob/master/src/callbacks.jl#L16-L18

  but we also have to add them to `tstops`, otherwise they are not respected by our time stepping algorithm.
- The computation of the discontinuity tree is inefficient since
  - it also calculates discontinuities after `end_val` for non-neutral problems,
  - the implementation of the iterator in Combinatorics.jl is inefficient for our use case: in every step in

    https://github.com/JuliaMath/Combinatorics.jl/blob/master/src/combinations.jl#L229-L251

    it creates two arrays, one for tracking the current state of the iterator and one for the resulting combination, even though we are only interested in the sums of the resulting combinations,
  - the length of the iterator becomes huge very quickly,
  - we create an array of discontinuities even though they have to be added to the heap of discontinuities later.
- The degree of the interpolation is not considered in the calculation of discontinuities, although according to e.g. Bellen and Zennaro (Numerical Methods for Delay Differential Equations, 2005; theorem 4.1.2 and similar ones for time-dependent and state-dependent DDEs) our algorithm has global order `q′ = min(p, q + 1)`, where `p` is the order of the ODE method and `q` is the order of the interpolant.
- User-specified discontinuities before the initial time point (I have no idea whether this is actually useful) are added to the heap of discontinuities (so always `integrator.t != top(integrator.opts.d_discontinuities)` in

  https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L350

  and hence no element of `d_discontinuities` is ever removed from the heap) but they are not included in the computation of the discontinuity tree.
- Discontinuities are only removed from the heap `d_discontinuities` in

  https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L350

  if the algorithm is FSAL.
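As a quick sanity check, the order bound quoted above can be evaluated directly; the method/interpolant orders below are illustrative examples, not values claimed for specific OrdinaryDiffEq algorithms:

```julia
# Global order bound q′ = min(p, q + 1) from Bellen & Zennaro (thm. 4.1.2):
# p is the order of the ODE method, q the order of the interpolant.
global_order(p, q) = min(p, q + 1)

global_order(5, 4)  # 5: a 4th-order interpolant suffices for a 5th-order method
global_order(5, 3)  # 4: a lower-order interpolant caps the global order
```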
What I suggest:

- Create a type

  ```julia
  struct Discontinuity{tType}
      t::tType
      order::Int
  end
  ```

  since (except for neutral DDEs) the information currently saved in `d_discontinuities` is not sufficient. To make this work with the current setup of `d_discontinuities` we just have to add a conversion of discontinuities to numbers (i.e. just return the time point in that case), propagate `>`, `<`, `==`, `isless`, and other desired methods to the time point, convert `x` to `tType(x)` in the anonymous function in

  https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L74

  remove the conversion of discontinuities in

  https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L166

  and define an own type for `d_discontinuities` in

  https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/type.jl#L20

  This seems straightforward and should also not harm the current implementation, since after initialization `d_discontinuities` is only used in the ODE algorithm in

  https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L350-L351

  so we would already discover the only possible problem, namely that elements of `d_discontinuities` cannot be compared to objects of type `tType`, by the proposed conversion of `x` to `tType(x)` during initialization; but to be completely safe we could also just add a conversion to `tType` there.
- Reimplement the method `apply_step!` in

  https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L323-L363

  i.e. mostly copy it but change the handling of discontinuities; we should handle them regardless of whether the algorithm is FSAL or not.
- Change the computation of discontinuities caused by constant delays to one of the following options:
  - Create an iterator `WithReplacementSums` that is mostly a copy of the implementation in Combinatorics.jl but computes the sum of combinations instead of arrays of combinations (this already gives a huge boost regarding time and memory allocations). Instead of creating an array of the different discontinuity time points we can then just create a generator

    ```julia
    Iterators.flatten((Discontinuity(t₀ + Δt, order + Δorder)
                       for Δt in with_replacement_sums(constant_lags, Δorder) if Δt < maxΔt)
                      for Δorder in 1:maxΔorder)
    ```

    where `t₀` is the current time point, `order` is the current order, `maxΔt` is the maximal duration we want to track (usually up to the final time point of the integrator, i.e. `tspan[2] - t₀`), and `order + maxΔorder` is the maximal order up to which we want to track discontinuities. I think it is not worth handling duplicates here (we should do that in `apply_step!`), since even when we filter them here we cannot ensure that `d_discontinuities` does not contain any duplicates. The nice thing is that we can then loop over this lazy iterator, and discontinuities will be calculated just before we push them to the heap, without allocating an additional intermediate array. However, there is still a new array created in every step in

    https://github.com/JuliaMath/Combinatorics.jl/blob/master/src/combinations.jl#L234

    and I'm not sure whether we could get rid of this (I guess yes, but I'm not sure...). Moreover, we can still get problems both when collecting all these discontinuities and when adding them to the heap, since the number of discontinuities increases very fast with an increasing number of delays and order of the algorithm.
  - Only add discontinuities of the next order to the heap when we reach a discontinuity, and, of course, add discontinuities of first order originating from initial discontinuities during initialization, i.e. change our iterator to

    ```julia
    (Discontinuity(t₀ + lag, order + 1) for lag in constant_lags if lag < maxΔt)
    ```

    This would keep both the size of the iterator and `d_discontinuities` to a minimum, and since we also always add these discontinuities to `tstops` we still cannot miss any discontinuity. So far I cannot see any disadvantages of this approach; it seems both the easiest and fastest approach.
- When handling discontinuities in `apply_step!` we should take care of duplicates and discontinuities that are very close to each other. An idea would be the following basic algorithm:

  ```julia
  # Handle discontinuities
  if !isempty(integrator.opts.d_discontinuities) &&
          top(integrator.opts.d_discontinuities).t == integrator.t
      # remove all discontinuities at current time point and calculate minimal order
      # of these discontinuities
      d = pop!(integrator.opts.d_discontinuities)
      order = d.order
      while !isempty(integrator.opts.d_discontinuities) &&
              top(integrator.opts.d_discontinuities) == integrator.t
          d2 = pop!(integrator.opts.d_discontinuities)
          order = min(order, d2.order)
      end
      # remove all discontinuities close to the current time point as well and
      # calculate minimal order of these discontinuities
      # these discontinuities are all caused by constant delays and no callbacks to
      # discontinuities originating from them by dependent delays are created yet
      while !isempty(integrator.opts.d_discontinuities) &&
              abs(top(integrator.opts.d_discontinuities).t - integrator.t) < 10eps(integrator.t)
          d2 = pop!(integrator.opts.d_discontinuities)
          order = min(order, d2.order)
      end
      # we then treat the current time point as a discontinuity of order `order`
      # in case of option 1 only callbacks to dependent discontinuities are added
      # in case of option 2 both callbacks to dependent discontinuities of next order
      # are added and constant discontinuities of next order are added to
      # d_discontinuities and tstops
  end
  ```

  And then we can compute new discontinuities originating from a discontinuity of order `order` at time point `integrator.t`: if we always calculate the whole discontinuity tree (option 1 above), then we would just add callbacks to discontinuities originating from it by dependent delays (new discontinuity trees for dependent discontinuities are already created separately, see below); if we only calculate discontinuities of the next order (option 2 above), we would add both constant discontinuities and callbacks to dependent discontinuities originating from the current discontinuity. Of course, for non-neutral DDEs only if the discontinuity was not already of maximal order. I thought we could use the same threshold `10eps` as in

  https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L265

  for time steps close to `tstops`. I'm just not sure whether multiple close `tstops` are currently taken care of in OrdinaryDiffEq; if not, we would also have to modify `tstops` in the same way.
- For the discontinuity callbacks the structure currently is as follows: they are saved in a vector of continuous callbacks in `integrator.discontinuity_callbacks`, and created as

  ```julia
  # condition that checks for discontinuity originating from discontinuity `d` by a
  # dependent delay `lag`
  struct DiscontinuityCondition{tType,F}
      d::Discontinuity{tType}
      lag::F
  end
  (f::DiscontinuityCondition)(t, u, integrator) = f.lag(t, u) - f.d.t

  # discontinuity callbacks just step back to discontinuities without further changes
  struct DiscontinuityAffect end
  (f::DiscontinuityAffect)(integrator) = nothing

  DiscontinuityCallback(d, lag, interp_points, abstol, reltol) =
      ContinuousCallback(DiscontinuityCondition(d, lag), DiscontinuityAffect();
                         save_positions=(false,false), interp_points=interp_points,
                         abstol=abstol, reltol=reltol)

  DiscontinuityCallback(d, lag, integrator::DDEIntegrator) =
      DiscontinuityCallback(d, lag, integrator.discontinuity_interp_points,
                            integrator.discontinuity_abstol,
                            integrator.discontinuity_reltol)
  ```

  I'm not sure yet whether we should add those additional options `discontinuity_interp_points`, `discontinuity_abstol`, and `discontinuity_reltol` to `solve` and the DDE integrator. Maybe we could also rename them to something shorter... Then a mostly-copy of `handle_callbacks!` in OrdinaryDiffEq allows handling these callbacks after continuous and discrete callbacks:

  ```julia
  # handle continuous and discrete callbacks...

  discontinuity_modified = false
  # handle discontinuity callbacks, in order to avoid any changes to the time step
  # by user callbacks
  if !isempty(discontinuity_callbacks)
      # find next dependent discontinuity
      time,upcrossing,idx,counter =
          find_first_continuous_callback(integrator, discontinuity_callbacks[1])
      for callback in @view(discontinuity_callbacks[2:end])
          time,upcrossing,idx,counter =
              find_first_continuous_callback(integrator,time,upcrossing,idx,counter,callback)
      end
      # add discontinuities originating from current discontinuity by constant delays
      if time != zero(typeof(integrator.t)) && upcrossing != 0 # if not, then no events
          # add new discontinuity of increased order to heap of discontinuities
          # assures that callbacks to discontinuities originating from current
          # discontinuity by dependent delays are added in `apply_step!`
          callback = discontinuity_callbacks[idx]
          d = Discontinuity(integrator.t, callback.condition.d.order + 1)
          push!(integrator.opts.d_discontinuities, d)
          # in case of option 1 add discontinuities of the discontinuity tree starting
          # at the current discontinuity to d_discontinuities and tstops
          # in case of option 2 constant and dependent discontinuities are added in
          # `apply_step!`
          saved_in_cb = false
          discontinuity_modified = true
          integrator.reeval_fsal = true # since we skip handle_callback_modifiers!
      end
  end

  # prevent duplicate calculation and addition of next discontinuities
  if !discontinuity_modified
      integrator.u_modified = continuous_modified || discrete_modified
      if integrator.u_modified
          handle_callback_modifiers!(integrator)
      end
  end
  ```

  This part needs some additional work. I'm not sure if the saving is done correctly, and whether the discontinuity callbacks should be handled before all other callbacks or after them. If we handle them after all other callbacks, we can be sure that other callbacks do not modify the step size, but on the other hand user callbacks then maybe use wrong time points for calculation...

  `handle_callback_modifiers!` is changed in a similar way but just assumes a discontinuity of order 0 at the current time point:

  ```julia
  integrator.reeval_fsal = true # recalculate fsalfirst after applying step
  ....
  # add new discontinuity of order 1 (i.e. derivative x' of solution x is discontinuous)
  # to heap of discontinuities
  # assures that callbacks to discontinuities originating from current discontinuity
  # by dependent delays are added in `apply_step!`
  d = Discontinuity(integrator.t, 1)
  push!(integrator.opts.d_discontinuities, d)
  # in case of option 1 add discontinuities of the discontinuity tree to
  # d_discontinuities and tstops
  # in case of option 2 constant discontinuities and callbacks to dependent
  # discontinuities of next order are both added in `apply_step!`
  ```
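To make the first suggestion concrete, here is a minimal sketch of `Discontinuity` with the conversion and comparison methods propagated to the time point; whether this exact set of methods is sufficient for OrdinaryDiffEq is an assumption:

```julia
# Minimal sketch of the proposed Discontinuity type. Defining `isless`
# suffices for `<` and `>` via their Base fallbacks; `==` and `convert`
# against numbers let it coexist with plain time points in d_discontinuities.
struct Discontinuity{tType}
    t::tType
    order::Int
end

Base.convert(::Type{T}, d::Discontinuity) where {T<:Number} = convert(T, d.t)
Base.isless(a::Discontinuity, b::Discontinuity) = isless(a.t, b.t)
Base.isless(a::Discontinuity, b::Number) = isless(a.t, b)
Base.isless(a::Number, b::Discontinuity) = isless(a, b.t)
Base.:(==)(a::Discontinuity, b::Number) = a.t == b
Base.:(==)(a::Number, b::Discontinuity) = a == b.t

d = Discontinuity(1.5, 2)
d < 2.0                  # true: comparisons fall back to the stored time point
convert(Float64, d)      # 1.5
```

Pushing such objects onto a heap then works unchanged, because ordering among discontinuities and comparison against plain numbers are both defined.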
So, I know this is a lot and probably a bit confusing. Hence I just quickly summarise some of my questions:

- Which method should we use to compute constant discontinuities? Option 1 or option 2?
- Should we handle discontinuities before the initial time point, i.e. include constant and dependent discontinuities caused by them? This would mean we have to calculate at least the first discontinuities caused by them after the initial time step and add callbacks for dependent discontinuities.
- Should we check the order of the discontinuity at the initial time point?
- Should we consider the order of the interpolation used during integration, i.e. change the implementation of `alg_order` for `MethodOfSteps` algorithms?
- What do you think about the concept with objects of type `Discontinuity`? Or would you rather use tuples, though then we would have to handle that in OrdinaryDiffEq a bit differently?
- What do you think about the handling of duplicate and close discontinuities? Does OrdinaryDiffEq take care of multiple close time stops, or do we have to remove those time stops as well?
- What do you think about the outline of the discontinuity callbacks? Should saving be done differently? Should callbacks be handled in another order?
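For illustration, the `with_replacement_sums` iterator proposed in option 1 above could also be sketched lazily with `Iterators.product` plus a sortedness filter; this is a hypothetical implementation (and surely not the most efficient one), only meant to pin down the semantics:

```julia
# Lazily yield the sum of every size-k multiset drawn from `lags`, without
# materializing each combination as an array. The `issorted` filter keeps
# exactly one ordering per multiset (combinations with replacement).
with_replacement_sums(lags, k) =
    (sum(c) for c in Iterators.product(ntuple(_ -> lags, k)...) if issorted(c))

collect(with_replacement_sums([1.0, 2.0], 2))  # sums of {1,1}, {1,2}, {2,2}
```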
- Option 2 sounds good. I like the idea of pairing the order. It's much more natural than the "assumed orders" I had going on.
- Let's add discontinuities before the initial time point later. I think the infrastructure with the order tracking will make it an easy addition, so there shouldn't be a worry about adding it in the future.
- Yes, that's a good idea for doing the order propagation correctly, and it's cheap, so why not.
- We can make a trait for that in OrdinaryDiffEq.jl. For now let's just assume it matches `alg_order` for each algorithm, since that'll be true for the recommended choices anyways, but then we can swap this out.
- I like `Discontinuity` and don't see any objection to the idea.

> Does OrdinaryDiffEq take care of multiple close time stops, or do we have to remove those time stops as well?

It has special handling if you step close to a `tstop`:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L263

then because of this being `<`:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L316

it goes back to `handle_tstops!` and just pops it off. So I think it should handle duplicates just fine, but I think that's a case I forgot to add tests for.

> Should callbacks be handled in another order?

For continuous callbacks, ordering doesn't matter. What will happen is that each of the callbacks will rootfind the timepoint, and only the one that is first in time will be used. This is because it's assumed that some kind of discontinuity or change will take place, and so the ODE step is only correct up to the first callback's time. So there shouldn't be an issue here: it should already handle this correctly.

If they are just appended to the list of continuous callbacks with a trivial `affect!` and `save_positions=(false,false)`, they should just act like a `tstop`. Would this not do everything that's needed?
Thanks for your feedback!

> If they are just appended to the list of continuous callbacks with a trivial `affect!` and `save_positions=(false,false)`, they should just act like a `tstop`. Would this not do everything that's needed?

That was my first idea as well, and it seems like a good way to get rid of the first/last problem. But it is not possible to append anything to `integrator.opts.callbacks.continuous_callbacks` since the types then do not match. That's why I ended up with a separate vector of callbacks that are all exactly of the same type. Actually, I think we can just mix those two ideas: calculate the time etc. of the first continuous callback (if existent) and then just use that as the start for the iteration over the vector of discontinuity callbacks. We then just apply the callback which occurs first, hence either a continuous callback or a discontinuity callback. I just have to figure out the implementation details...

> It has special handling if you step close to a `tstop`:
> https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L263
> then because of this being `<`:
> https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L316
> it goes back to `handle_tstops!` and just pops it off. So I think it should handle duplicates just fine, but I think that's a case I forgot to add tests for.

I'm still not sure whether this does the same thing I wanted to do with the discontinuities. It seems it adjusts steps close to a tstop to the tstop in order to avoid a tiny step in the next step. Then it will loop and get rid of additional time stops at the tstop (if there are any). It will then calculate the step to the tstop, but thereafter it seems possible that the integrator steps to another time stop that comes just a tiny bit after the tstop. So it prevents the creation of additional tiny time steps, but it does not prevent tiny time steps caused by existing time stops, I guess.
> But it is not possible to append anything to `integrator.opts.callbacks.continuous_callbacks` since the types then do not match.

Why not add it to the set of callbacks before calling `init`?
> I'm still not sure whether this does the same thing I wanted to do with the discontinuities. It seems it adjusts steps close to a tstop to the tstop in order to avoid a tiny step in the next step. Then it will loop and get rid of additional time stops at the tstop (if there are any). It will then calculate the step to the tstop, but thereafter it seems possible that the integrator steps to another time stop that comes just a tiny bit after the tstop. So it prevents the creation of additional tiny time steps, but it does not prevent tiny time steps caused by existing time stops, I guess.

I see. Then I guess it needs to be addressed in `solve!`.
> Why not add it to the set of callbacks before calling `init`?

That's only possible for the initial callbacks. But, at least with the current design, we have to add additional callbacks at every discontinuity we pass to cover all combinations of old discontinuities and dependent delays.
Oh, hmm... can there be one callback that's checking against the past discontinuities and the lag vector, looking for all zeros at once?
I would like to use only one callback... But I had no immediate idea how to do it, let alone how to do it in an efficient way. Even just multiplying all the individual functions `t - aᵢ(t, u)` might not work if the signs of the different factors cancel out.
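A tiny numeric example of why the product of root functions can hide crossings: if two factors change sign within the same step, the product does not change sign at all (the numbers below are made up for illustration):

```julia
# Two root functions with zeros at t = 1 and t = 1.2. Over the step
# [0.9, 1.3] each factor changes sign, but their product is positive at both
# endpoints, so sign-change-based root finding on the product misses both.
g1(t) = t - 1.0
g2(t) = t - 1.2
prod_g(t) = g1(t) * g2(t)

sign(prod_g(0.9)) == sign(prod_g(1.3))  # true: both products positive
sign(g1(0.9)) == sign(g1(1.3))          # false: g1 alone does cross zero
```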
I see what you mean. Hmmm... I guess you do need something special here.
What we could do is just move the whole logic of finding the minimal time point into a `condition`. We could set up a type which just updates fields such as `time` and `idx` when it is executed. I'm just not sure how well that works and whether we would have to rewrite the whole logic of continuous callbacks, such as root finding etc.
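The stateful-condition idea could be sketched roughly as follows; all names and fields here are hypothetical, and how such a condition would interact with the actual continuous-callback root-finding machinery is exactly the open question:

```julia
# One callable condition object that scans all lag/discontinuity pairs,
# returns the signed value closest to crossing, and remembers as a side
# effect which pair attained it.
mutable struct DiscontinuityScan{F,T}
    lags::Vector{F}   # dependent lag functions lag(t, u)
    ds::Vector{T}     # previously tracked discontinuity times
    idx::Int          # index of the lag closest to crossing, updated on call
end

function (c::DiscontinuityScan)(t, u, integrator)
    best = oftype(t, Inf)
    for (i, lag) in enumerate(c.lags), d in c.ds
        v = t - lag(t, u) - d
        if abs(v) < abs(best)
            best = v
            c.idx = i
        end
    end
    return best
end
```

Whether a single scalar returned this way behaves well under the existing sign-change detection is untested here; it only illustrates the "update `time`/`idx` fields inside the condition" idea.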
We will want to keep a lot of the details of the continuous callback, because there are a lot of subtle floating point issues (and interpolation point issues) that come up (see the old threads on this for why it has things like `prevfloat(prevfloat(...`). If it has to be copied, I guess that's what we can do for now while we wait for a better solution to come up.

I think generally large changes like this are accomplished by making something that's testable, works, and is programmed in a way that we know is quite efficient. Then over time the code is improved and consolidated as we understand it more. So if you do need to do a copy, I am not against it if we don't have a better solution for now (I certainly would not want to block the development of this over a code styling issue, for better or worse).
That sounds great.
Closed by #35