A deeper understanding of the solver-time integrator

Hello, everyone!

A common issue that many people encounter during a run is the “RuntimeError: Minimum sized time step rejected.”

My boundary conditions are set to “char-riem-inv,” and the original settings for the solver-time integrator are as follows:

[solver-time-integrator]
formulation = std
scheme = rk45
controller = pi
tstart = 0.0
tend = 260.0
dt = 0.00001
atol = 0.000001
rtol = 0.000001
errest-norm = l2
safety-fact = 0.9
min-fact = 0.3
max-fact = 2.5

In fact, many of the settings mentioned above were configured based on the official documentation.

When the “Minimum sized time step rejected” error occurs, my first approach is usually to reduce dt, atol, and rtol. Sometimes this works, but other times it doesn’t.
I suspect I need to adjust parameters such as safety-fact, min-fact, and max-fact more carefully, but beyond that I haven’t thought of a better approach.
Unfortunately, the documentation provides only vague descriptions:
safety factor for step size adjustment (suitable range 0.80-0.95)
minimum factor by which the time-step can change between iterations (suitable range 0.1-0.5)
maximum factor by which the time-step can change between iterations (suitable range 2.0-6.0)
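For orientation, these three factors typically enter an embedded RK step-size controller through the standard clamped-factor rule. The sketch below is a generic illustration of that rule, not PyFR’s actual (PI) implementation; the function name and defaults are mine:

```python
# Generic sketch of embedded-RK step-size control (not PyFR's exact code).
# After each step the solver forms an error estimate `err`, normalised
# against atol/rtol so that err <= 1 means the step met the tolerances.
def next_dt(dt, err, order=4, safety=0.9, min_fact=0.3, max_fact=2.5):
    # Ideal factor: grow dt when err is small, shrink it when err is large.
    fact = safety * (1.0 / err) ** (1.0 / (order + 1))
    # min-fact/max-fact clamp how much dt may change in a single step,
    # preventing it from exploding or collapsing too violently.
    fact = max(min_fact, min(max_fact, fact))
    return dt * fact
```

A step is *rejected* when err > 1; dt is then reduced and the step retried. “Minimum sized time step rejected” means that even the smallest dt the controller is permitted to reach still produced err > 1, which is why the error usually points at the physics or the setup rather than at the controller itself.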

I am still unsure where to start when adjusting these factors to address the “Minimum sized time step rejected” issue, because I don’t understand the underlying principles.
Can you provide me with some advice? Or, what should I refer to in order to gain a deeper understanding of it?

Best Regards.

Many people have asked about getting “minimum sized time step rejected”; it might be worth looking through those posts if you haven’t already.

Normally, if you are facing this issue, tweaking atol, rtol, min-fact, and max-fact shouldn’t be your first port of call. It is usually indicative that something else is wrong. Typically, in order of likelihood:

  • A poor quality mesh or a mesh that isn’t fine enough for the case
  • Ill-configured boundary conditions
  • Ill-configured initial conditions
  • The initial dt is large enough that the simulation can initially develop some features in the solution that it can’t recover from
  • Some other misconfiguration, e.g. no shock capturing for a flow that has shocks

Once you have exhausted all of these options, it is probably worth thinking about changing atol, rtol, min-fact, and max-fact.

Thank you for your prompt response.

In fact, my case is largely based on a published paper, and I haven’t made significant changes to the configuration. However, I encountered this issue when trying to reproduce it, and the problem often arises not at the beginning but rather midway through the computation.

I would still like to learn more about the factors.

safety-fact = 0.5
min-fact = 0.3
max-fact = 1.2

For example, if I set these factors to extreme values as shown above, how should I analyze their impact on convergence compared to the defaults?
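Assuming the controller follows the usual clamped-factor rule (a sketch of the standard formula, not PyFR’s exact PI controller), you can see the qualitative effect of those extreme values by driving both settings with the same synthetic error history:

```python
# Hypothetical comparison of controller settings on a made-up error
# sequence; the rule and numbers here are illustrative, not PyFR's code.
def run(errs, dt, safety, min_fact, max_fact, order=4):
    for err in errs:
        fact = safety * (1.0 / err) ** (1.0 / (order + 1))
        dt *= max(min_fact, min(max_fact, fact))
    return dt

errs = [0.5] * 10              # ten consistently "easy" accepted steps
dt0 = 1e-5
default = run(errs, dt0, safety=0.9, min_fact=0.3, max_fact=2.5)
extreme = run(errs, dt0, safety=0.5, min_fact=0.3, max_fact=1.2)

# With safety = 0.5 the target factor is 0.5 * (1/0.5)**(1/5) ~= 0.57 < 1,
# so dt keeps *shrinking* even though every step meets the tolerance,
# while the default settings let dt grow on the same error history.
```

In other words, a very low safety-fact biases the controller toward ever-smaller steps regardless of accuracy, and a max-fact near 1 prevents dt from recovering after a transient; neither helps with rejection, they mostly just slow the run down.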

Regards.

The dtstats plugin is probably what you want; it will show you the error estimates, the dt taken, and the rejection rate.
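If it helps, enabling the plugin is a matter of adding a config section along these lines (option names from memory; check the documentation for your PyFR version):

```ini
[soln-plugin-dtstats]
flushsteps = 500
file = dtstats.csv
header = true
```

Plotting the dt and rejection columns over time should show whether the controller struggles from the start or only once a particular flow feature develops, which narrows down which of the causes above you are dealing with.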