I’m trying to run an incompressible 3D cylinder case at about Re=300 with periodic BCs in the z-direction. The solution does not NaN, but spurious oscillations show up near the inlet that substantially degrade the quality of the solution, especially in the pressure field. I also noticed that the pressure residuals first drop fairly quickly, from about 1.0 to 1e-2 by t=2.5, but then climb back up to around 1.0 at about t=5.0 and stay there.
I’m having some issues uploading images to this forum at the moment, but you can find images of the pressure field at t=2.5 and t=285, plus an image of the mesh, here: pressure fields - Album on Imgur
I noticed that reducing the dt/pseudo-dt ratio reduces these oscillations, but reducing dt further increases the runtime significantly. As it stands, this combination runs on 2 A100 cards attached to a 24-core AMD EPYC in around 7-8 hours. I’ll need to run about 80-100 cases at similar Re, so I’d appreciate your input on how to get rid of these oscillations while keeping the computational cost low.
(I already looked at the 3D cylinder example case from the C&F paper from a few years ago and tried to reuse the .ini options from the supplementary material as far as possible, though that was a compressible case, so quite a few options are of course different.)
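For reference, the relevant part of my time-integration setup looks roughly like this (the schemes and step sizes shown are indicative rather than my exact values):

[solver-time-integrator]
formulation = dual
scheme = bdf2
pseudo-scheme = rk45
controller = none
pseudo-controller = none
tstart = 0.0
tend = 375.0
; the dt / pseudo-dt ratio is what I have been varying
dt = 2.5e-3
pseudo-dt = 2.5e-4
pseudo-niters-min = 1
pseudo-niters-max = 5
pseudo-resid-norm = l2
pseudo-resid-tol = 1e-3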
[soln-bcs-sym1]
type = slp-wall
[soln-bcs-sym2]
type = slp-wall
[soln-bcs-front]
type = slp-wall
I presume these are the top and bottom walls?
Your maximum number of pseudo-iterations is set to 5, which is probably far too small; try 80-100. To start with, the simulation will need quite a few pseudo-iterations, as the impulsive initial condition requires a lot of smoothing. After a while the number of pseudo-steps should drop significantly.
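i.e. something along these lines in [solver-time-integrator] (pick the exact value to taste):

; previously 5
pseudo-niters-max = 80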
Those are indeed the top and bottom walls. [soln-bcs-front] was a leftover from a previous mesh; I guess PyFR simply ignored it since the mesh no longer contained a physical group with that name. I have removed it now.
Setting pseudo-niters-max as suggested results in the solver pretty much always doing 80 pseudo-iterations per timestep, since the pressure residual cannot get down even to the relatively loose tolerance of 1e-3. This translates to a runtime in excess of 50 hours. Relaxing this requirement to pseudo-resid-tol = 1.5e-2, I started to see some improvement in runtime, but the cost is still prohibitive (24+ hours). Is going to an even looser residual tolerance advisable, as long as the solutions look ‘right’?
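For concreteness, the relevant lines in [solver-time-integrator] are now roughly:

pseudo-niters-max = 80
; relaxed from 1e-3
pseudo-resid-tol = 1.5e-2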
I tried making the mesh denser and the timestep shorter, but the savings from needing fewer pseudo-iterations seem to be largely offset by the higher cost of each timestep. Is there a sweet spot, as far as PyFR is concerned, in terms of mesh density and timestep that minimizes the runtime?
Also, do you have a suggested method for forcing the onset of the instability/vortex shedding? I see that in the supplementary material for the C&F paper you used an initial condition with some extra terms. I tried this as well, but all it did was grossly inflate the number of pseudo-iterations needed at the start without actually triggering vortex shedding. I’m now trying a time-decaying forcing function at the beginning of the run, but I’d appreciate it if you could recommend a best-practice method for PyFR. In other solvers I used to do this with some random noise, but from what I’ve read on these forums that does not seem to be possible with PyFR.
For example, when I run the TGV with ACM it might initially estimate that the runtime is going to be ~30 hours, but once the initial transients have died down this drops to ~40 minutes. The issue with making the tolerance too loose or the number of pseudo-iterations too low is that the answer will be wrong.
You are trying to run this case for 375 non-dimensional time units, so it is going to take a long time. Do you need that many? A further point is that Re 300 is right on the cusp of where 3D effects start to occur, so depending on what you are doing you might be able to just use 2D.
At Re=300 the flow should start shedding naturally without any problem. But if you are finding that it isn’t, you can always add a small sine wave to the initial condition.
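As a minimal sketch, assuming a unit free stream in x and a unit reference pressure (the amplitude and wavelengths here are arbitrary, just keep the perturbation small relative to the free stream):

[soln-ics]
u = 1
; small sine perturbation to break the symmetry
v = 0.05*sin(2*pi*x/10)*sin(2*pi*z/3)
w = 0
p = 1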
Thanks. I did notice that after increasing the amplitude of the sine waves in the initial condition and letting the simulation run for a while, the instability started. The final issue I’m seeing is that I get NaN values out of the blue well into the simulation, e.g. at t=70 or so. Here are the residuals from the timestep that produced the NaNs (each row shows the time, the pseudo-iteration number, and then one residual per solution field):
69.7499999999987  1  0.0494241812370873  0.0515838546632246  0.0936614854210909  0.0408700174058779
69.7499999999987  2  0.0774525680782504  0.0608271127318629  0.094247801440653  0.0338774867175983
69.7499999999987  3  0.0880618568239903  0.0683706304215541  0.100542592156139  0.0284091404863626
69.7499999999987  4  0.0975263583999787  0.0679062370731629  0.123981344861728  0.0269728279245928
69.7499999999987  5  0.12491182241776  0.0803970892051085  0.175486374118962  0.0288949468426052
69.7499999999987  6  1.59295923388774  0.0846915051373968  0.21401148412774  0.0332676422612407
69.7499999999987  7  nan  nan  nan  nan
Any suggestions for avoiding this issue, other than reducing the timestep?
Finally, regarding the long simulation time: I am aware that I’m running this for a relatively long period; the point is to gather enough data to compute statistics for validation on the cylinder case. For the actual geometries I’m interested in, I’ll probably run for a shorter time.
You can try increasing ac-zeta. At this Re you can probably go quite high, say ~8; much higher than that and it will become unstable. Higher values generally converge faster, so that might help.
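That is, in [constants], something like the following (nu is shown as 1/Re for a unit cylinder diameter and free-stream velocity; adjust it to your non-dimensionalisation):

[constants]
; artificial compressibility parameter: higher generally converges faster, too high goes unstable
ac-zeta = 8
nu = 3.333e-3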