Can solvers falsely make divergent trajectories converge? #396
Those methods converge insanely slowly, and Monte Carlo averages converge like 1/sqrt(n). So just double checking: did you do this with something like dt=1e-8 and at least a million trajectories? Six digits is going to be really tough with those methods.
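As a rough illustration of that scaling (a generic Monte Carlo sketch, not the SDE in question): the statistical error of an ensemble average shrinks like sigma/sqrt(n), so each additional accurate digit costs roughly 100x more trajectories.

```python
import numpy as np

# Generic Monte Carlo scaling sketch (illustrative; not the SDE itself).
# The standard error of a sample mean is sigma / sqrt(n), so with
# sigma ~ 1 roughly 10^12 trajectories would be needed for 6 digits.
def standard_error(sigma, n):
    return sigma / np.sqrt(n)

rng = np.random.default_rng(42)
n = 100_000
x = rng.standard_normal(n)          # unit variance, true mean 0
est = x.mean()                      # fluctuates around the true mean 0
se = standard_error(x.std(), n)     # predicted size of that fluctuation
```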
Indeed, you should not discard diverged trajectories if you want accurate estimates of statistics. Instead those should be run with tighter tolerances to make them convergent.
What tolerance did you use? It should not make divergent solutions convergent.
Ok, thanks for your answer. It seems there were two candidate explanations for the deviation between the SDE output and the 'true' solution: either something 'deeper' renders eqs. (5) slightly inaccurate for the system of interest, or the deviation came from bad handling of divergent trajectories. Note that the paper itself mentions the possibility of trajectories diverging (fundamentally, not because of numerics) as an indication that the method is outside its regime of validity. With the manual MATLAB code, I only went down to timesteps of 1e-3 and 1e-4. The fraction of divergent trajectories decreases severalfold upon decreasing the step, but the predicted output value over the remaining trajectories was unaffected. Now, with the Julia solvers, it is clear that at sufficient order essentially all trajectories are convergent (at least for the ensemble sizes considered), and the deviation remains; so this suggests that the second explanation is not the cause.
Ok, it seems that this is an issue of too-wide tolerances. Just decreasing abstol and reltol can give rise to instabilities as well.
If it's a non-diagonal noise problem, yes.
That just means it's too stiff for them.
I haven't been able to prove it, but that should make it into commutative noise. I do want to make that easier to exploit.
Thank you for the quick answer. Does it make sense that, occasionally, the numerical errors occurring when the tolerance is made less strict actually avoid divergences? Naively, I would expect that any diagonal scheme based on reals could be generalized, by something like analytic continuation, to one that is diagonal in the complex numbers directly (although that would probably be quite some work)?
If the true solution actually diverged.
If you look at it as a function of the real and imaginary parts, it becomes a commutative noise problem (at least in 1D this is easy to show; higher D is harder, but "it's clear"). So the diagonal noise methods do not get the order they are expecting, and thus their error control isn't quite correct.
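To make that structure concrete, here is a minimal sketch (with placeholder drift and diffusion functions, not the paper's eqs. (5)) of why a complex SDE driven by one real Wiener process is not diagonal over the real variables: both real components share the same increment dW.

```python
import numpy as np

# Sketch: d(alpha) = f(alpha) dt + g(alpha) dW with a single REAL Wiener
# process W.  Split into (Re alpha, Im alpha): both components are driven
# by the SAME increment dW, so as a 2-variable real system the noise is
# scalar/commutative rather than diagonal.  f and g are placeholders.
def euler_maruyama_complex(f, g, alpha0, dt, nsteps, rng):
    re, im = alpha0.real, alpha0.imag
    for _ in range(nsteps):
        a = complex(re, im)
        drift, diff = f(a), g(a)
        dW = rng.normal(0.0, np.sqrt(dt))   # one shared real increment
        re += drift.real * dt + diff.real * dW
        im += drift.imag * dt + diff.imag * dW
    return complex(re, im)

# Noise-free sanity check: with g = 0 the scheme reduces to explicit Euler.
rng = np.random.default_rng(0)
out = euler_maruyama_complex(lambda a: -a, lambda a: 0.0,
                             1.0 + 1.0j, 0.01, 100, rng)
```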
Thanks for your answer
Yeah, I got that part, but my point (again, I'm no expert) is rather this: if the equation has, e.g., a single complex variable \alpha (with the drift and diffusion coefficients f(\alpha) and g(\alpha) being holomorphic functions, say, with no dependence on conj(\alpha)), then translating everything to a pair of two independent reals Re[\alpha] and Im[\alpha] is a very inefficient way of handling the problem, I'd say.
I agree with you that we don't have the best solution right now.
Hello, it's been a while since I last tried this package, and I'm happy to try it again now that I see there are many new solvers, etc.
The issue I'm facing is the following (rather a generic question than a bug report).
I'm trying to simulate eqs. (5) from this paper for the case of a single mode; i.e., the subindex drops and there are no coupling terms $J$. $\alpha$ and $\tilde{\alpha}$ are complex variables, but the noises $\xi\,dt$, $\tilde{\xi}\,dt$ are real. A set of parameters of interest that I'm looking at, for example, is $\Delta=\gamma=1;\ U=0.05;\ F=2.4$.
An observable of interest is the particle number $N=\mathrm{Re}[\langle\tilde{\alpha}^* \alpha\rangle]$ in the steady state. Through other methods, we know that the exact result should be $N=20.817$. When solving eqs. (5), however, starting from the $\alpha=\tilde{\alpha}=0$ state (the vacuum) and waiting until relaxation, the results seem to converge with statistical significance to a lower value around $N=20.37$.
This lower result for $N$ is obtained through a number of methods:
* manual implementations of Euler-Maruyama and Milstein in MATLAB with a pretty small timestep. Typically a small fraction of trajectories (less than 0.1%, depending on the timestep) will diverge and is thus disregarded.
* I contacted the author of the paper; he used some kind of semi-implicit midpoint method after transformation to the Stratonovich sense, and also saw numerical instabilities. His belief is that the omission of the diverged trajectories introduces a bias that gives the offset in the results.
* Meanwhile, I tried using the DiffEq solvers here. An ensemble simulation didn't work easily either (it did give warnings about stability; I didn't try to optimize any further, and I also didn't quite figure out how to obtain N from the ensemble output). However, if I just solve the equations with Julia in a for loop and use the default solver, all results (out of 10^5) remain nicely finite, with no warnings about tolerances or the like. And it leads again to the biased value for N above.
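The bias from dropping diverged trajectories can be illustrated with a toy example (a heavy-tailed sample, not the physical model): discarding the largest excursions systematically pulls the estimated mean down.

```python
import numpy as np

# Toy illustration (NOT the model): if the observable has rare large
# excursions, throwing away "divergent" samples biases the mean low.
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)  # true mean exp(0.5)
kept = x[x < 10.0]       # mimic discarding trajectories that "blew up"
biased_mean = kept.mean()
full_mean = x.mean()
```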
To understand the discrepancy, I'm wondering if a stiff solver could have been selected as the default, and whether it could have artificially kept trajectories in check that should have diverged in reality? Or must any converged result be genuine?
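On extracting N from an ensemble: a minimal NumPy sketch of the estimator, assuming you have collected the steady-state values into arrays (in DiffEq.jl one would map the same reduction over the ensemble solution; the variable names here are made up):

```python
import numpy as np

# Hypothetical post-processing sketch: given arrays of steady-state
# values alpha[i] and alpha_tilde[i] over the ensemble, estimate
# N = Re <conj(alpha_tilde) * alpha>.
def particle_number(alpha, alpha_tilde):
    return np.mean(np.conj(alpha_tilde) * alpha).real

# Tiny worked example with made-up values:
alpha = np.array([1 + 1j, 2 - 1j])
alpha_tilde = np.array([1 + 0j, 1 + 1j])
N_est = particle_number(alpha, alpha_tilde)
```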