
ParallelDynamicalSystem for StroboscopicMap and other API #225

Merged
9 commits merged into JuliaDynamics:main on Jan 8, 2025

Conversation

rusandris
Contributor

As mentioned here, there is a specialized parallel integrator for CoupledODEs and a generic fallback that copies systems for anything else. But StroboscopicMap is actually just an ODE under the hood.

This PR attempts to implement the same specialized parallel ODEProblem that is built when ParallelDynamicalSystem(ds::CoreDynamicalSystem, states::Vector{<:AbstractArray{<:Real}}) is called on a CoupledODEs system, but for the StroboscopicMap. This is done by a new method that creates a parallel rule from the ODEs behind the StroboscopicMap (parallel_rule(smap.ds, states)).

Example:

function duffing_drift(u0 = [0.1, 0.25]; ω = 1.0, β = 0.2, ε0 = 0.4, α = 0.00045)
    return CoupledODEs(duffing_drift_rule, u0, [ω, β, ε0, α])
end

@inbounds function duffing_drift_rule(x, p, t)
    ω, β, ε0, α = p
    dx1 = x[2]
    dx2 = (ε0 + α*t)*cos(ω*t) + x[1] - x[1]^3 - 2β*x[2]
    return SVector(dx1, dx2)
end

duffing = duffing_drift()
duffing_map = StroboscopicMap(duffing,2π)

pds = ParallelDynamicalSystem(duffing_map,[[0.1,0.2],[0.3,0.4]])

which creates a ParallelDynamicalSystemAnalytic.

The problem is that

pds = ParallelDynamicalSystem(duffing_map,StateSpaceSet(rand(3,2)))

still dispatches to the generic ParallelDiscreteTimeDynamicalSystem fallback, which is weird, right?
I think it shouldn't depend on the type of the states, I need to think about this more.

Also, this PR contains a small fix for the set_parameter! method that dispatches to the generic fallback system (see #223).
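For context, a hedged sketch of what such a fix typically looks like: the generic fallback keeps independent copies of the system, so a parameter change must be forwarded to every copy. The type and field names here (ParallelDynamicalSystemGeneric, .systems) are hypothetical, chosen only for illustration, and not taken from the actual diff.

```julia
# Sketch only: type and field names are assumptions for illustration.
function set_parameter!(pds::ParallelDynamicalSystemGeneric, index, value)
    for ds in pds.systems
        set_parameter!(ds, index, value)  # forward to each copied system
    end
    return
end
```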

@Datseris
Member

Hi @rusandris I see that this PR is still a draft? Perhaps you want to finish it (i.e., open it for review if it is ready)?

@codecov-commenter

codecov-commenter commented Nov 4, 2024

Codecov Report

Attention: Patch coverage is 93.75000% with 1 line in your changes missing coverage. Please review.

Project coverage is 84.31%. Comparing base (6c98239) to head (61af697).
Report is 40 commits behind head on main.

Files with missing lines Patch % Lines
src/derived_systems/parallel_systems.jl 93.75% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #225      +/-   ##
==========================================
+ Coverage   82.00%   84.31%   +2.30%     
==========================================
  Files          15       17       +2     
  Lines         717      950     +233     
==========================================
+ Hits          588      801     +213     
- Misses        129      149      +20     


@rusandris
Contributor Author

This can be extended to PoincareMap as well.

@rusandris rusandris marked this pull request as ready for review November 4, 2024 17:06
@rusandris
Contributor Author

And I still need to figure out how to solve the dispatching problem: creation of ParallelDynamicalSystemAnalytic from an smap only works if the states are states::Vector{<:AbstractArray{<:Real}}.
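Until the constructor is generalized, one possible workaround (a sketch; it assumes StateSpaceSet iterates its points, as in the public API) is to convert the set into the accepted Vector-of-vectors form:

```julia
# Convert a StateSpaceSet into Vector{<:AbstractArray{<:Real}} so the
# specialized StroboscopicMap method is dispatched to.
sset = StateSpaceSet(rand(3, 2))
states = [Vector(u) for u in sset]  # Vector{Vector{Float64}}
pds = ParallelDynamicalSystem(duffing_map, states)
```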

@Datseris
Member

This can be extended to PoincareMap as well.

Let's leave this for another PR please because I just did a massive correctness improvement of the interface of PoincareMap in another PR.

@Datseris
Member

Is there anything outstanding for this PR?

@rusandris
Contributor Author

And I still need to figure out how to solve the dispatching problem: creation of ParallelDynamicalSystemAnalytic from an smap only works if the states are states::Vector{<:AbstractArray{<:Real}}.

I simply left it general. Now both

ParallelDynamicalSystem(duffing_map,[[0.1,0.2],[0.3,0.4]])
ParallelDynamicalSystem(duffing_map,StateSpaceSet(rand(3,2)))

create ParallelDynamicalSystemAnalytic (see in previous comments).

Let's leave this for another PR please because I just did a massive correctness improvement of the interface of PoincareMap in another PR

I've seen it, good job btw. Since smap and pmap are both just ODEs in the background, I think they could be handled exactly the same way when building a ParallelDynamicalSystemAnalytic.

Is there anything outstanding for this PR?

I'm not sure if I understand your question, but the problem description and the solution are in the first comment. Everything is related to this feature request, since it would allow an efficient implementation of the ensemble approach for nonautonomous systems (EAPD).

@Datseris
Member

Okay, so I don't understand: does this PR work? I see there are no changes at all to the step! function for this new object. It is not required? Are there any tests in the current test suite for parallel stroboscopic maps? I don't remember.

@Datseris
Member

I am really surprised how little changes you had to do in this PR for things to work, if that's the case!

@rusandris
Contributor Author

Are there any tests in the current test suite for parallel stroboscopic maps? I don't remember.

Yes. I think the "parallel stroboscopic" testset inside test/parallel.jl:

@testset "IIP=$iip" for (ds, iip) in zip((pds_cont_oop, pds_cont_iip,), (true, false))
    test_dynamical_system(ds, u0, p0; idt = true, iip = true, test_trajectory = false)
end

covers that.

@rusandris
Contributor Author

I see there are no changes at all to the step! function for this new object. It is not required?

step! does seem to work with stroboscopic maps. trajectory on the other hand does not, but that might be part of a bigger problem, because

ds = Systems.lorenz()
pds = ParallelDynamicalSystem(ds,[[0.1,0.2,3],[0.3,0.4,5]])
trajectory(pds,10)

doesn't work either. Or was this fixed by v3.12.0 as well? I am still using v3.11.2.

@Datseris
Member

Unfortunately only relying on this test:

@testset "IIP=$iip" for (ds, iip) in zip((pds_cont_oop, pds_cont_iip,), (true, false))
    test_dynamical_system(ds, u0, p0; idt = true, iip = true, test_trajectory = false)
end

is not enough, because the test_dynamical_system function only tests current_state, so only the first of the parallel states. Please add some tests in there for parallel stroboscopic maps, whatever you think makes sense.
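One possible shape for such a test (a sketch; it assumes current_states returns all parallel states, as in the public API) is to step the parallel map once and check that every state advanced, not just the first:

```julia
pds = ParallelDynamicalSystem(duffing_map, [[0.1, 0.2], [0.3, 0.4]])
old = deepcopy(current_states(pds))
step!(pds)  # advances all parallel systems by one period
@test all(current_states(pds)[i] != old[i] for i in 1:2)
```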

I don't think there is much meaning in calling trajectory with parallel dynamical systems. It should just throw an error saying "just loop the trajectory over the different initial conditions". Because the type of the output for trajectory simply cannot be the same as its current output, and we want a consistent type.
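For reference, the suggested loop over initial conditions could look like this sketch, reusing a single system via reinit! (which is part of the public API):

```julia
states = [[0.1, 0.2], [0.3, 0.4]]
trajectories = map(states) do u0
    reinit!(duffing_map, u0)            # reset the same system to a new state
    X, t = trajectory(duffing_map, 10)  # integrate this initial condition
    X
end
```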

@rusandris
Contributor Author

Some benchmarks to justify this change:

using DynamicalSystemsBase
using BenchmarkTools

function duffing_drift(u0 = [0.1, 0.25]; ω = 1.0, β = 0.2, ε0 = 0.4, α=0.00045)
    return CoupledODEs(duffing_drift_rule, u0, [ω, β, ε0, α])
end

@inbounds function duffing_drift_rule(x, p, t)
    ω, β, ε0, α = p
    dx1 = x[2]
    dx2 = (ε0 + α*t)*cos(ω*t) + x[1] - x[1]^3 - 2β*x[2]
    return SVector(dx1, dx2)
end

duffing = duffing_drift()
duffing_map = StroboscopicMap(duffing,2π)
states = [rand(2) for i in 1:1000]

Creating and stepping large parallel systems from duffing_map:

Old version:

@btime pds = ParallelDynamicalSystem(duffing_map,states)

65.611 ms (81002 allocations: 4.53 MiB)

@btime for _ in 1:100
   step!(pds)
 end

1.598 s (0 allocations: 0 bytes)

After this change:

@btime pds = ParallelDynamicalSystem(duffing_map,states)

256.839 μs (4618 allocations: 397.44 KiB)

@btime for _ in 1:100
   step!(pds)
 end

570.075 ms (0 allocations: 0 bytes)
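A small caveat when reproducing these numbers: @btime on a non-interpolated global variable adds global-access overhead to the measurement. Interpolating with $ gives cleaner timings, e.g.:

```julia
pds = ParallelDynamicalSystem(duffing_map, states)
@btime step!($pds)  # $ interpolates the global into the benchmarked expression
```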


@Datseris Datseris left a comment


@rusandris thanks. As this PR doesn't change any API, it can go in as a patch change. Please increase patch version in Project.toml and I'll release a new version right after merging.

@Datseris Datseris merged commit 0ac9509 into JuliaDynamics:main Jan 8, 2025
2 checks passed