NaNs for variation of incompressible cylinder

I’ve been trying to run a custom cylinder example, based partly on the .ini file for the incompressible cylinder example on the PyFR website. The solver seems to produce NaNs after a fixed number of timesteps - e.g. with dt = 0.01 it reliably NaNs at t = 0.5 s, while dt = 5e-3 leads to NaNs at t = 0.25 s.

You can find the configuration here and the gmsh .geo file here. To convert the .geo file into a mesh file, it’s sufficient to run gmsh -3 cylinder.geo -o cylinder.msh -format msh2

I verified that this produces the same behaviour on five different platforms/configurations (OpenMP x86-64, AMD ROCm OpenCL x86-64, OpenMP ppc64le, Nvidia OpenCL ppc64le, Nvidia CUDA ppc64le; the OS was Linux in all tests), so I’m fairly sure the issue is not tied to one type of backend, as was suggested in a recent similar thread.

Am I doing something wrong regarding the solver parameters etc?

Thank you very much in advance.

Note: I’m aware that the config I posted says single precision, but I get the same results with double as well.

Hi @Fellshock,

Could you please try using order = 1 and let us know whether the computation still diverges?

@GonzaloS Thanks for looking into it - the NaNs reappeared at the exact same time with the following changes applied:

[solver]
system = ac-navier-stokes
order = 1

[solver-dual-time-integrator-multip]
pseudo-dt-fact = 1.75
cycle = [(1,1)]

To give a further update, this problem seems to happen regardless of Reynolds number, which reduces the likelihood of a bona fide numerical instability. I increased the viscosity from the original 5e-3 up to 1.0, and the NaNs occur at exactly the same point in time for every value I tried in between.

How does the residual history look? According to the config file, you’re running the pseudostats plugin, which will let you know how the inner iterations are converging. Typically it is necessary to drive the velocity and pressure residuals down by a good few orders of magnitude for the system to be stable.
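That history comes from the pseudostats plugin section of your config, which will look something along these lines (the values here are only illustrative, not taken from your file):

[soln-plugin-pseudostats]
flushsteps = 500
file = residual.csv
header = true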

Regards, Freddie.

Looks like the residuals NaN very early on as well:

n t i p u v
1 0 1 - - -
2 0 2 - - -
3 0 3 nan nan nan

Regarding the convergence of pressure and velocity, I tried increasing pseudo-niters-max and pseudo-niters-min, but this did not solve the problem.
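For illustration, the kind of change I tried looked like this (the exact values varied between runs):

[solver-time-integrator]
pseudo-niters-min = 5
pseudo-niters-max = 10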

How do things look before it NaNs?

Regards, Freddie.

Actually it seems like the NaNs occur from the very first timestep. The ‘fixed number of timesteps’ behaviour I wrote about before seems to be caused simply by [soln-plugin-nancheck] having a relatively large nsteps value. I set it to 1 and the solution stops immediately. Even if I remove the plugin and plot the solution one timestep after the start, the entire domain is full of NaN values.
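For reference, the change in question is just:

[soln-plugin-nancheck]
nsteps = 1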

I tried running this case this morning with the current develop branch of PyFR (a307e38) and also got NaNs with your ini file and mesh on the first time step.

However, looking at the mesh, it has significantly more resolution than the example, i.e. the delta x is much smaller, yet your ini file uses the same dt and pseudo-dt as the example cylinder case.

I tried running without p-multigrid (just in case this was the issue) and with dt = 1e-4 and pseudo-dt = 1e-5. It seemed to work OK for me.
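In config terms, that amounts to removing the [solver-dual-time-integrator-multip] section and setting something like the following in the time-integrator section (all other options left as in your file):

[solver-time-integrator]
dt = 1e-4
pseudo-dt = 1e-5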

This is by no means the best combination of dt and pseudo-dt, but I’m at least happy that there doesn’t seem to be an issue in PyFR itself.

I hope that helps


Looks like the pseudo-dt was the issue! I had tried reducing dt before, suspecting an issue related to the timestep, but hadn’t tried altering the pseudo-dt. In fact, it looks like I can keep the physical dt at the original level so long as the pseudo-dt is low enough. I’m new to PyFR, so I’m not entirely sure which parameters to adjust yet - thanks a lot!

No problem, using dual time stepping is a bit different from plain explicit methods. If you haven’t already, it’s definitely worth reading Niki’s paper on the subject:


Thanks for the suggestion, I’ll certainly read these.

Had a final question - I noticed that the results have interesting-looking artefacts for this case that somewhat resemble shocks. Is this some sort of numerical artefact? Below you can find images of the velocity magnitude and pressure, respectively:


These occur at start-up as a consequence of the fact that your initial condition is not an instantaneous solution of the Navier-Stokes equations. After you run the simulation for a while they will leave the domain.

Regards, Freddie.


In this case they persisted well into the solution (the above images are at t = 50.0). Is there some sort of option that can be used to accelerate their disappearance?

Like Freddie says, they are really ‘meant’ to be in the solution. They result from the impulsive start leading to divergence in the velocity field. They should convect out of the solution as the ACM corrects the velocity field, but if the convection time is too long this really leaves you with one option: dissipation. This dissipation can come from a filter (the gradients are higher here, so a low-pass filter might help to remove them), from artificial viscosity, or from raising nu for the initial stage of the run. I would only really recommend the last of these strategies.
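As a sketch of that last option, you could run the initial transient with a larger viscosity in the constants section (the value below is purely illustrative; your original config used nu = 5e-3) and then drop it back down for the rest of the run:

[constants]
nu = 0.05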

However, as they are persisting for a long time on this run compared to the case in the examples, I think you may want to consider changing the ratio of dt/pseudo-dt. From what you said earlier, it sounds like you decided to fix dt and change pseudo-dt until it didn’t NaN. This could give a very large ratio of dt/pseudo-dt. It is known that, for dual time stepping with large values of this ratio, the amplification factor becomes close to unity, i.e. your system can be under-damped. I normally run with a ratio in the range of 4-20.
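For instance, keeping the physical dt you mentioned at the start, a ratio of 10 would look something like this (the numbers are purely illustrative):

[solver-time-integrator]
dt = 1e-2
pseudo-dt = 1e-3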

I hope this helps.