Hello, everyone!
A common issue that many people may run into during a computation is “RuntimeError: Minimum sized time step rejected.”
My boundary conditions are set to “char-riem-inv,” and my original [solver-time-integrator] settings are as follows:
[solver-time-integrator]
formulation = std
scheme = rk45
controller = pi
tstart = 0.0
tend = 260.0
dt = 0.00001
atol = 0.000001
rtol = 0.000001
errest-norm = l2
safety-fact = 0.9
min-fact = 0.3
max-fact = 2.5
In fact, many of the settings above were taken from the official documentation.
When the “Minimum sized time step rejected” error occurs, my first approach is to reduce dt, atol, and rtol. Sometimes this works, but other times it doesn’t.
I suspect I need to tune parameters such as safety-fact, min-fact, and max-fact more carefully, but beyond that I haven’t thought of a better approach.
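For instance, I have considered changing only those three factors while keeping everything else the same, e.g. something like this (the values are just my guess, not a recommendation):

```ini
[solver-time-integrator]
# keep the rest of the section as before; only these three change
# (values picked from within the ranges the documentation suggests)
safety-fact = 0.8
min-fact = 0.1
max-fact = 2.0
```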
Unfortunately, the documentation provides only vague descriptions:
safety factor for step size adjustment (suitable range 0.80-0.95)
minimum factor by which the time-step can change between iterations (suitable range 0.1-0.5)
maximum factor by which the time-step can change between iterations (suitable range 2.0-6.0)
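From what I can gather about embedded RK pairs, these three factors enter a standard proportional–integral step controller roughly like the sketch below. This is my own reading of the textbook algorithm with assumed exponents, not PyFR’s actual source code:

```python
# A minimal sketch of a PI step-size controller for an embedded RK pair.
# The exponents (0.7/order, 0.4/order) are one common choice, assumed here.

def next_dt(dt, err, err_prev, order=4,
            safety=0.9, min_fact=0.3, max_fact=2.5):
    """Propose the next time step from a weighted error norm.

    err <= 1 means the step met atol/rtol and is accepted; otherwise the
    step is rejected and retried with the smaller dt returned here.
    """
    alpha = 0.7 / order  # proportional term: reacts to the current error
    beta = 0.4 / order   # integral term: damps using the previous error

    fact = safety * err ** -alpha * err_prev ** beta
    # min-fact/max-fact bound how fast dt may shrink or grow per step
    fact = max(min_fact, min(fact, max_fact))
    return dt * fact


# A very bad step can only shrink dt by min_fact at a time...
print(next_dt(1e-5, err=1e4, err_prev=1.0))
# ...and a very good one can only grow it by max_fact.
print(next_dt(1e-5, err=1e-8, err_prev=1e-8))
```

If even repeated shrinking cannot bring the error below tolerance before dt reaches the solver’s minimum, I assume that is when the “Minimum sized time step rejected” error is raised.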
Because I don’t understand the underlying principles of the step-size controller, I’m still unsure where to start when adjusting these factors to address the “Minimum sized time step rejected” issue.
Could you give me some advice? Or, what should I read in order to gain a deeper understanding of this?
Best Regards.