I am implementing adaptive step-size Runge–Kutta methods using embedded pairs (orders $p$ and $p+m$). The standard structure is:
Compute two solutions: $$ y^{[p]} \text{ and } y^{[p+m]} $$
Estimate local error: $$ \text{err} = \left| y^{[p+m]} - y^{[p]} \right|$$
Compute scaling factor:
$$ s = \beta \left( \frac{\text{tol}}{\text{err}} \right)^{1/p} $$
Update step:
$$ h_{\text{new}} = s \cdot h $$
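For concreteness, here is a minimal sketch of the loop structure I mean. Everything in it is illustrative: I use the simplest embedded pair (explicit Euler, order 1, inside Heun, order 2), a scalar test ODE $y' = y$, and the exponent that matches how the error estimate actually scales with $h$ for this pair.

```python
import math

def f(t, y):
    # Illustrative test ODE: y' = y, exact solution exp(t).
    return y

def adaptive_step(f, t, y, h, tol, beta=0.9):
    """One embedded Euler(1)/Heun(2) step; returns (accepted, y_new, h_new)."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_low  = y + h * k1                # order 1 (Euler)
    y_high = y + 0.5 * h * (k1 + k2)   # order 2 (Heun)
    err = abs(y_high - y_low)          # local error estimate, scales like h**2
    order = 2                          # exponent matching err ~ h**order
    s = beta * (tol / max(err, 1e-16)) ** (1.0 / order)
    accepted = err <= tol              # acceptance tested on the error itself
    return accepted, y_high, s * h

t, y, h, tol = 0.0, 1.0, 1e-2, 1e-5
while t < 1.0:
    h = min(h, 1.0 - t)                # do not overshoot the endpoint
    accepted, y_new, h_new = adaptive_step(f, t, y, h, tol)
    if accepted:
        t, y = t + h, y_new
    h = h_new                          # shrink on reject, grow on accept

print(abs(y - math.e))                 # global error, small relative to tol
```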
My question is about the accept/reject criterion:
Two alternatives I found:
- Compare using $s$:
if (s >= 1) accept step
else reject
- Compare using the error directly:
if (err <= tol) accept step
else reject
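To make the comparison concrete, here is a small sketch of the two predicates exactly as written above (function names are my own), with the usual safety factor $\beta = 0.9$. An error just below tolerance illustrates where they can differ:

```python
def accept_via_s(err, tol, p, beta=0.9):
    # Criterion 1: accept when s = beta * (tol/err)**(1/p) >= 1.
    s = beta * (tol / err) ** (1.0 / p)
    return s >= 1.0

def accept_via_err(err, tol):
    # Criterion 2: accept when the error estimate is within tolerance.
    return err <= tol

tol, p = 1e-5, 4
err = 0.8 * tol                      # step is within the tolerance band
print(accept_via_err(err, tol))      # True
print(accept_via_s(err, tol, p))     # False: 0.9 * 1.25**0.25 ≈ 0.95 < 1
```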
From what I understand, these should be mathematically equivalent if $s$ is defined as above (at least when the safety factor $\beta = 1$).
However:
- In Shampine-style implementations, it seems the comparison is done directly on the error.
- In some lecture notes and sources, the acceptance is expressed using: $$ s \ge 1 $$
My concern (practical behavior):
I tested both approaches on the double pendulum problem, counting rejected steps. Using:
- $$ \text{tol} = 10^{-5} $$
- $$ h_{\max} = 5 \times 10^{-3} $$
I observed:
Using $s \ge 1$:
- ~183,000 rejected steps
- ~6,000 accepted steps
Using $\text{err} \le \text{tol}$:
- < 100 rejected steps
- About the same number of accepted steps (~6,000)
Both solutions remain within the requested tolerance band.
Questions:
- Are these two acceptance criteria strictly equivalent in practice, or can numerical/implementation details make them behave differently?
- Is one approach considered more robust or standard in modern implementations?
- Are there known situations (e.g., stiff/chaotic systems) where comparing via $s$ is preferable?
If anyone can clarify whether there is a theoretical or practical reason to prefer one over the other, I would really appreciate it.
Thanks!