I’m trying to run LES of the flow past a large number (~100) of different bluff bodies using the ac-navier-stokes solver at around Re = 400. My BCs are periodic in the z-direction (the bluff bodies are extruded along z), slip walls above and below the object, ac-in-fv at the inlet with (u, v, w) = (1.0, 0.0, 0.0), and ac-out-fp at the outlet with p = 1.0. My meshes have ~70k elements and I use 3rd-order polynomials (I tried 4th order as well, but those NaN even quicker).
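For reference, the solver and boundary-condition sections of my config look roughly like this (trimmed down and with the section names shortened; the periodic faces are paired in the mesh itself, and the body surface is just a no-slip wall):

[solver]
system = ac-navier-stokes
order = 3

[soln-bcs-inlet]
type = ac-in-fv
u = 1.0
v = 0.0
w = 0.0

[soln-bcs-outlet]
type = ac-out-fp
p = 1.0

[soln-bcs-top]
type = slp-wall

[soln-bcs-bottom]
type = slp-wall

[soln-bcs-body]
type = no-slp-wall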
However, I cannot seem to find a good set of solver settings: everything I have explored so far either leads to NaNs about halfway into the simulation or greatly balloons the runtime. Here’s what I tried so far:
- A ‘standard’ setup with the RK45 pseudo-integrator. This takes a while to ‘settle’ at the beginning and goes quicker once the residuals die down, but it still takes ~24 hrs on really beefy hardware (two A100 GPUs):
[solver-time-integrator]
formulation = dual
scheme = sdirk33
pseudo-scheme = rk45
controller = none
pseudo-controller = local-pi
tstart = 0.0
tend = 60.0
dt = 5e-3
pseudo-dt = 2.5e-4
pseudo-niters-min = 1
pseudo-niters-max = 250
pseudo-resid-norm = l2
pseudo-resid-tol = 5e-4
pseudo-resid-tol-p = 2.5e-2
atol = 1e-1
pseudo-dt-max-mult = 2.5
[solver-dual-time-integrator-multip]
pseudo-dt-fact = 1.75
cycle = [(3, 1), (2, 2), (1, 4), (0, 8), (1, 4), (2, 2), (3, 1)]
- I noticed that things go a lot quicker if I raise the dt/pseudo-dt ratio to ~40, especially with the none pseudo-controller; the predicted runtime was about 3-4 hours, particularly if I also increase the number of iterations in the lower p-levels of the multigrid cycle (roughly the settings sketched after this list). However, this causes NaNs about 20-30 seconds of physical time into the simulation, which I believe corresponds to the onset of the instability in the z-direction. In the snapshots saved just before the NaN occurred, the pressure in some cells had spiked to extreme levels. (I also experimented with other pseudo-schemes like vermeire and tvd-rk3, but these did not really change anything.)
- If I lower the dt/pseudo-dt ratio with the none pseudo-controller, the simulation runs very quickly until, again, the 20-30 second mark, but after that it slows to a crawl, requiring the maximum number of iterations in every time step to make progress. Going by the progress bar, the simulation would take days even at this relatively modest Re.
- I went back to the local-pi pseudo-controller with RK45, increased the dt/pseudo-dt ratio to 40, and also massively increased the number of allowed iterations per time step to 10000 (see the second sketch below). This was of course glacial in the beginning, though it seemed to go very quickly later on. However, like the previous case, the runtime ballooned to over a day at around the 20-second mark.
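For concreteness, the integrator settings for the second attempt looked roughly like the following; I am reconstructing this from memory, so the exact pseudo-dt value and the cycle iteration counts are illustrative rather than the precise numbers I ran:

[solver-time-integrator]
formulation = dual
scheme = sdirk33
pseudo-scheme = rk45
controller = none
pseudo-controller = none
tstart = 0.0
tend = 60.0
dt = 5e-3
# dt/pseudo-dt = 40
pseudo-dt = 1.25e-4
pseudo-niters-min = 1
pseudo-niters-max = 250
pseudo-resid-norm = l2
pseudo-resid-tol = 5e-4
pseudo-resid-tol-p = 2.5e-2

[solver-dual-time-integrator-multip]
pseudo-dt-fact = 1.75
# extra iterations spent at the lower p-levels
cycle = [(3, 1), (2, 2), (1, 6), (0, 12), (1, 6), (2, 2), (3, 1)]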
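The fourth attempt was the same file with the local-pi pseudo-controller back in place and a much larger iteration cap; relative to the block above, the keys I changed were roughly:

[solver-time-integrator]
pseudo-controller = local-pi
pseudo-niters-max = 10000
atol = 1e-1
pseudo-dt-max-mult = 2.5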
Finally, these trends seem to persist even when I try to change the value of ac-zeta; for instance, the slowdown with attempt #3 above at around t = 20 occurs with both large (8-12) and small (2-4) values of ac-zeta.
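For reference, I am setting ac-zeta in the constants section, something like the following (the viscosity shown here just assumes a unit free-stream velocity and unit characteristic length for Re = 400):

[constants]
# artificial compressibility parameter, swept between roughly 2 and 12
ac-zeta = 2.5
# nu such that Re = U*L/nu = 400 with U = 1, L = 1 (illustrative)
nu = 2.5e-3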
So, at least within the space of settings I have tried, there seem to be only two possibilities: things go fast but eventually NaN, or they take a very long time for this Reynolds number, and it all seems to be connected to the onset of the instability along the z-direction at around the 20-second mark. Is there a suggested way to make things go faster without compromising numerical stability? I know PyFR’s numerical scheme can be sensitive to solver settings, so I’d appreciate any help.
Here are some images of my mesh. I can share my mesh file if needed.
(As a final note, these observations are all from runs using single precision. I tried double precision too; for the settings that produced NaNs it did not fix the issue, and for the settings that were already slow it simply doubled the runtime.)
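By single/double precision I just mean toggling the backend precision setting, i.e.:

[backend]
precision = single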