Getting either NaNs or extremely slow convergence

I’m trying to run LES of the flow past a large number (~100) of different bluff bodies using the ac-navier-stokes solver at around Re = 400. My BCs are periodic in the z-direction (the bluff bodies are extruded along z), slip walls above and below the object, ac-in-fv at the inlet with (u, v, w) = (1.0, 0.0, 0.0), and ac-out-fp at the outlet with p = 1.0. My meshes have ~70k elements and I use 3rd order methods (I tried 4th order as well, but those NaN even quicker).
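
For concreteness, the inlet and outlet sections of my config look roughly like this (the section names simply match the physical groups in my mesh):

[soln-bcs-inlet]
type = ac-in-fv
u = 1.0
v = 0.0
w = 0.0

[soln-bcs-outlet]
type = ac-out-fp
p = 1.0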

However, I cannot seem to find a good set of solver settings: everything I have explored so far either leads to NaNs about halfway into the simulation or greatly balloons the runtime. Here’s what I have tried so far:

  1. A ‘standard’ setup with the RK45 pseudo-integrator. This takes a while to ‘settle’ at the beginning and goes quicker once the residuals die down, but it still takes ~24 hrs even on really beefy hardware like two A100 GPUs:
[solver-time-integrator]
formulation = dual
scheme = sdirk33
pseudo-scheme = rk45
controller = none
pseudo-controller = local-pi
tstart = 0.0
tend = 60.0
dt = 5e-3
pseudo-dt = 2.5e-4
pseudo-niters-min = 1
pseudo-niters-max = 250
pseudo-resid-norm = l2
pseudo-resid-tol = 5e-4
pseudo-resid-tol-p = 2.5e-2
atol = 1e-1
pseudo-dt-max-mult = 2.5
 
[solver-dual-time-integrator-multip]
pseudo-dt-fact = 1.75
cycle = [(3, 1), (2, 2), (1, 4), (0, 8), (1, 4), (2, 2), (3, 1)]
  2. I noticed that things go a lot quicker if I raise the dt/pseudo-dt ratio to ~40, especially with the none pseudo-controller; the predicted runtime was about 3-4 hours, particularly if I up the number of iterations on the lower p levels of the multigrid cycle (a sketch of the changed settings is given after this list). However, this would cause NaNs about 20-30 seconds of physical time into the simulation, which I believe corresponds to the onset of the instability in the z-direction. In the snapshots saved just before the NaN occurred, the pressure in some cells had spiked to extreme levels. (I also experimented with other pseudo-schemes such as vermeire and tvd-rk3, but these did not really change anything.)

  3. If I lowered the dt/pseudo-dt ratio with the none pseudo-controller, the simulation would again go very quickly until the 20-30 second mark, but after that it would slow to a crawl, requiring the maximum number of iterations in every time step to make progress. Going by the progress bar, the simulation would then take days, even at this relatively modest Re.

  4. I went back to the local-pi pseudo-controller with RK45, increased the dt/pseudo-dt ratio to 40, and also massively increased the number of allowed iterations per time step to 10000. This was of course glacial in the beginning, though it seemed to go very quickly later on. However, as in the previous case, the runtime ballooned to over a day at around the 20 second mark.
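
To make attempt 2 concrete, it only changed a few lines relative to the config above, roughly along these lines (the exact values varied between runs; the point is the dt/pseudo-dt ratio of about 40):

[solver-time-integrator]
pseudo-controller = none
dt = 1e-2
pseudo-dt = 2.5e-4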

Finally, these trends seem to persist even when I change the value of ac-zeta; for instance, the slowdown at around t = 20 in attempt 4 above occurs with both large (8-12) and small (2-4) values of ac-zeta.

So, at least within the space of settings I have tried, there seem to be only two possibilities: things go fast but eventually NaN, or they take a very long time for this Reynolds number, and it all seems to be connected to the onset of the instability along the z-direction at around the 20 second mark. Is there a suggested way to make things go faster without compromising numerical stability? I know PyFR’s numerical scheme can be sensitive to solver settings, so I’d appreciate any help.

Here are some images of my mesh. I can share my mesh file if needed.

(As a final note, these observations are all from runs using single precision. I tried double precision too, but for the settings that produced NaNs it did not fix the issue, and for the settings that were already slow it simply doubled the runtime.)

The first thing I would do is check that you do not have too much resolution in the boundary layer. If you start resolving the viscous sub-layer, this can cause the (pseudo) time step to scale as ~h^2.
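(Roughly speaking, the explicit diffusive limit scales as pseudo-dt ~ h^2/nu, so each halving of the wall-normal spacing cuts the allowable pseudo time step by about a factor of four.)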

Next, you may also want to see if anti-aliasing helps with the instabilities. Note here that you’ll want different amounts of anti-aliasing at each multigrid level.
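
As a rough sketch, flux anti-aliasing is switched on in the [solver] section:

[solver]
anti-alias = flux

and the amount of anti-aliasing applied at each level is then controlled by the quadrature rules you pick in the corresponding [solver-elements-*] and [solver-elements-*-mg-p<level>] sections.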

Regards, Freddie.

Yeah, just to second what Freddie is saying: looking at your mesh, there are some quite low quality elements near the wall.

The process I would follow is:

  • On a 2D plane, try extruding normal to the cylinder wall for a few steps to get an O-mesh.
  • After this you can generate an unstructured mesh, with a source to keep the mesh spacing where you want it in the wake.
  • Once you have a reasonable 2D mesh you can then extrude it in z. This is the same process I used to make a mesh for a similar case at Re = 500 that we are currently using.

Dear @fdw and @WillT, thank you, I’ll try both options. For anti-aliasing, are there any specific options you’d recommend? I tried the anti-alias = flux option, which didn’t really fix the issues, and while anti-alias = flux, surf-flux seems to give greater stability, it comes at a huge cost in terms of runtime. In both cases I’m adding the quad-deg = 10 option to the [solver-interfaces-***] and [solver-elements-***] sections, going by the examples in the PyFR documentation.

Yeah, these operations can be quite expensive. Because of this, it is probably best to try and fix the mesh first.

You will want to use different levels of anti-aliasing at each multigrid level. This will greatly reduce the cost. As a starting point consider picking quadrature rules which have the same number of nodes as a simulation which is one order higher.

For example, in a hex at p = 3 there are 64 solution points and so consider a quadrature rule with 125 nodes.
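(That is, (3 + 1)^3 = 64 solution points at p = 3, and a rule with (4 + 1)^3 = 125 nodes, as if the simulation were running one order higher at p = 4.)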

Regards, Freddie.

Hi @WillT, for the past few days I’ve been trying to change my automated meshing script to first generate an O-mesh as you described; however, Gmsh is struggling with concave regions and sharp corners.

To do the O-mesh I’m using the boundary layer field option in Gmsh with quadrilateral elements. At sharp corners, Gmsh wants to keep the element size at the object boundary constant, which translates into overly large elements at the outer edge of the boundary layer field near such corners. Meanwhile, concave regions of the geometry cause the ordered lines in the boundary layer mesh to collide with each other, which triggers the generation of poor quality triangular elements to resolve the collisions. Is there another feature/tool you use to generate these O-meshes? (These issues of course do not exist with triangular elements, but that rather defeats the purpose of the O-mesh, I guess.)

Additionally, I noticed that reducing the solution order to p=2 somewhat reduces the instability issues. Aside from the inevitable reduction in solution accuracy, is there any reason not to do this?


In our experience Pointwise is substantially better than Gmsh when it comes to making grids for geometries of any complexity. I cannot say I’ve ever got the boundary layer functionality in Gmsh to work; rather, I’ve always had to do it by hand with a structured region.

Dropping to p = 2 can increase stability since it is more dissipative. However, if the accuracy is acceptable to you, it may be more efficient to go with a coarser grid at p = 3.

Regards, Freddie.

@fdw I’ll check it out, although I’d prefer to stick to free and open source software where possible.

I had one more question about the anti-aliasing options. I’ve been trying to set up per-multigrid-level anti-aliasing like you suggested, but I’m not sure I’m specifying it in the config file correctly. Why do I need to include quad-deg in e.g. [solver-elements-hex] as well as in every single [solver-elements-hex-mg-p#] section?

This is what I have currently (I eyeballed the quad-deg values from the options available here):

[solver-elements-tri]
quad-deg = 10
soln-pts = williams-shunn

[solver-elements-tri-mg-p2]
quad-deg = 36
soln-pts = williams-shunn

[solver-elements-tri-mg-p1]
quad-deg = 15
soln-pts = williams-shunn

[solver-elements-quad]
quad-deg = 9 
soln-pts = gauss-legendre

[solver-elements-quad-mg-p2]
quad-deg = 36
soln-pts = gauss-legendre

[solver-elements-quad-mg-p1]
quad-deg = 16
soln-pts = gauss-legendre

[solver-elements-hex]
quad-deg = 8
soln-pts = gauss-legendre

[solver-elements-hex-mg-p2]
quad-deg = 64
soln-pts = gauss-legendre

[solver-elements-hex-mg-p1]
quad-deg = 27
soln-pts = gauss-legendre

The non-suffixed version is used for the highest polynomial degree. So if you are running p = 2 you do not need to provide an mg-p2 version, just mg-p1 and mg-p0 (assuming your cycle goes to p = 0).

Regards, Freddie.

PS Those orders seem a little high. In a hex shoot for a quadrature rule with (p + 2)^3 points (compare with the (p + 1)^3 solution points). Only go higher if needed.

So, for p=2 that’d be (2+2)^3=64. But when I try to do

[solver-elements-hex]
quad-deg = 64
soln-pts = gauss-legendre

I get an error (ValueError: No suitable quadrature rule found) even though a 64 point quadrature rule for hex elements with Gauss-Legendre points exists. What’s the problem here?

Here deg refers to the formal degree of the rule. As such you want:

quad-pts = gauss-legendre
quad-deg = 7

which will select the rule gauss-legendre-n64-d7-pstu.txt
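
Extending that to the coarser multigrid levels of a p = 2 run, you would end up with something along the lines of the following; the degrees come from the tensor-product Gauss-Legendre rules with (p + 2)^3 nodes, so treat this as a starting sketch rather than a definitive set of values:

[solver-elements-hex]
; p = 2 level: (2 + 2)^3 = 64 node rule
soln-pts = gauss-legendre
quad-deg = 7

[solver-elements-hex-mg-p1]
; p = 1 level: (1 + 2)^3 = 27 node rule
soln-pts = gauss-legendre
quad-deg = 5

[solver-elements-hex-mg-p0]
; p = 0 level: (0 + 2)^3 = 8 node rule
soln-pts = gauss-legendre
quad-deg = 3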

Regards, Freddie.


Just to answer some of your questions about meshing: I haven’t used Gmsh very much, as it’s a pretty terrible tool in my view. I would only be able to tell you how to do it in Pointwise, where you’d make/import the surface of the object and extrude a grid normal to it.

@WillT Regardless of the meshing tool, how would you assess the quality of a particular element? Is it having a low aspect ratio, i.e. being as close to a square as possible for quads, or as close to equilateral as possible for triangles?

There are a number of different metrics that Pointwise can provide, such as skew angle, length ratio, etc. I don’t know of any detailed studies for high-order FR looking at the effect of mesh quality in terms of these metrics. But for the cases you’re talking about, it should be fairly straightforward to produce a good quality mesh. Depending on the Re you are looking at, I might even be able to send you one.

Dear @WillT and @fdw, thanks for all the help throughout this thread. It turns out my issue was not actually related to the mesh, but rather to my choice of BCs (although I am still adapting my mesh generation script in line with @WillT’s recommendations).

I noticed that, whenever I was getting a NaN, a large anomalous pressure spike would appear on my outlet BC (an ac-out-fp BC with p = 1.0) just a few time steps before the NaNs occurred. I converted it to an ac-char-riem-inv BC and the NaNs disappeared. I’m not 100% sure of the reason, but with this new BC the results still look physically correct, and the simulation is stable even with much larger values of pseudo-dt and dt. Now each run takes under an hour to finish, though I’ll carry out a cylinder validation case a bit later.
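
For reference, the fix was essentially just swapping the outlet section of the config, roughly as follows ("outlet" is simply what the boundary is called in my mesh, and the free-stream state I’m passing to the characteristic BC is indicative; check the PyFR docs for the exact set of parameters ac-char-riem-inv expects):

[soln-bcs-outlet]
type = ac-char-riem-inv
p = 1.0
u = 1.0
v = 0.0
w = 0.0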


The characteristic Riemann invariant boundary condition is a bit dissipative, so you could have been getting a pressure reflection previously which is now being damped out. Either way, I’m glad it’s now working.