Mass conservation tolerance, convergence speed and solution accuracy of the AC method

Dear developers,

For the AC method in incompressible flow simulations, even with the accelerator the convergence speed seems quite low if mass conservation is to be brought below a certain tolerance in each physical time step.

Hence, I wonder whether the pseudo time stepping really needs to reach the mass conservation tolerance in every physical time step, given that all of those time steps will be used in post-processing.

Best regards,
Will

Dear Will,

It’s true that a very low level of divergence is more difficult to achieve with ACM than with methods that rely on a global Poisson solve, because explicit smoothers are not as efficient at damping low-frequency modes. However, it has other advantages: for example, it is strongly scalable, unlike many Poisson/implicit methods, and it has been found to outperform projection-based methods in the hydroacoustic splitting approach.

P-multigrid improves the damping of low-frequency error. Please note that PyFR v1.7.5, which is now available on GitHub, introduced P = 0 smoothing, which further improves convergence, especially for the continuity equation. Moreover, we are actively developing the ACM solver, and other acceleration techniques will be introduced in upcoming releases.
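
For reference, a multigrid cycle that exercises the new P = 0 level can be specified along these lines for a P = 3 simulation (the values here are purely illustrative and will need tuning for your case):

[solver-dual-time-integrator-multip]
pseudo-dt-fact = 1.75
cycle = [(3, 1), (2, 1), (1, 1), (0, 2), (1, 1), (2, 1), (3, 3)]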

What is the scale of your simulation? If you are running a very small 2D problem you may not see great speed-ups with the accelerator.

It is hard for me to answer the mass conservation tolerance question because it is case dependent. If you are simulating a real-life problem at Ma = 0.0001, then yes, you should drive the continuity residual to a very low level. If you are simulating, for instance, low-speed aerodynamics at Ma = 0.1, it may not be as crucial.

Regards,
Niki

Dear Niki,

I am wondering what exactly the residuals in the pseudo-stats output mean, since if they are mass conservation residuals I would expect a single value rather than separate ones for u, v and w.

If at Ma = 0.1 it is not crucial, does that mean the values of those residuals are not important for the accuracy of the result? Can those residuals be over 1.0 or 2.0?

Additionally, I think that in your FR scheme each element provides p and U at the points it owns. That means that at a shared point on an element interface there will be several different values of p and U. Will averaging those values of p and U moderate the effect of a non-conservative mass flux? Does each element hold its own mass conservation matrix?

Best regards,
Will

Dear Niki,

What is the meaning of Ma = 0.2? Do you mean the maximum velocity over ac-zeta? I think for an ideal incompressible flow, Ma = 0.

My pseudo-convergence history is attached below. My case is a 3D turbulent flow at Re_tau = 180.

I mapped the result from the 2nd-order mesh case (originally 0.23 million cells, already turbulent) onto the 1st-order one (1.3 million cells) and ran the simulation from 0.0 s to 1 s.

Case 1 is the 0.23 million cell 2nd-order mesh (1.8 million cells in the .pyfrm).
Case 2 is the 1.29 million cell 1st-order mesh (1.29 million cells in the .pyfrm).

dt = 0.0005 and pseudo-dt = 0.00001, with an l2 norm residual tolerance.

Case 1 was run with backward-euler physical time stepping and euler pseudo time stepping, with the multi-p accelerator going down to 2nd order. The simulation ended at t = 40.95 s. The residual level is quite low (attached).

In case 2, due to the GPU memory limit, I could not use BDF2 (or a higher-order scheme) or the multi-p accelerator. Instead, backward-euler and rk4 are applied.
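
For reference, the time-integrator block for case 2 looks roughly like the following (the tolerance and iteration limits shown here are placeholders rather than my exact values):

[solver-time-integrator]
formulation = dual
scheme = backward-euler
pseudo-scheme = rk4
controller = none
pseudo-controller = none
dt = 0.0005
pseudo-dt = 0.00001
pseudo-resid-norm = l2
pseudo-resid-tol = 1e-4
pseudo-niters-min = 1
pseudo-niters-max = 50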

For case 1, the residual finally reached nearly 2e-3. The near-wall region matches the results from the paper, while the centreline velocity is a bit higher.

For case 2, I have not done any post-processing yet. However, although the residual is extremely high, the visualised result seems reasonable. So I am unsure what a plausible residual threshold is.

Best regards,
Will

Dear Will,

Thanks for your message. We have not applied the ACM solver to any internal flows, so it is interesting to see your progress.

“What is the meaning of Ma = 0.2? Do you mean the maximum velocity over ac-zeta? I think for an ideal incompressible flow, Ma = 0.”

I meant the Mach number of the real-life application that you are trying to simulate. The pressure residual (div u), which is stiffer to converge than the velocity residuals, is kind of an indicator of how far the pseudo waves which distribute the pressure have travelled. For a truly incompressible flow (elliptic p, residual = 0), the information from an arbitrary point would have to propagate everywhere in the domain within every physical time step. However, since physical problems are never incompressible, but low-Mach, it may not be necessary to drive the pressure residual all the way to zero as long as the information has propagated over the important length scales. I understand that if you want to reproduce an incompressible test case you want to be as incompressible as possible, but for low-Mach industrial applications this is not necessarily the case.
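
(Concretely, in the AC formulation the continuity equation is replaced by a pseudo-time pressure equation of the form ∂p/∂τ + ζ ∇·u = 0, where ζ is the artificial compressibility parameter (ac-zeta). The divergence therefore only vanishes once the pseudo-time derivative of the pressure has been driven to zero, and the pseudo waves are the artificial pressure waves, travelling at roughly |u| + sqrt(u² + ζ), that carry this information through the domain; that is the link between the pseudo waves and mass conservation.)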

The residuals in your cases are still quite high, and they would benefit from multi-p and BDF2. Just to give you some tips, I tend to keep a constant number of iterations, set the dt/dtau ratio between 5 and 10, and aim for u, v, w residuals below 1e-4 and a pressure residual typically below 1e-3. For example, for a Taylor-Green vortex at Re = 1,600, the level of convergence after 3 multigrid cycles with

[solver-dual-time-integrator-multip]
pseudo-dt-fact = 1.7
cycle = [(4, 1), (3, 1), (2, 1), (1, 1), (0, 2), (1, 1), (2, 1), (3, 1), (4, 3)]

is

1254,10.002000000000095,3,0.00244778059095,0.000476855091426,0.000476823492017,0.00043626471224.

Using the same cycle, again with 3 cycles per time step, the convergence for a turbulent jet at Re = 10,000 is

240000,1799.9950000010913,3,0.00204563896311,0.000209207025143,0.000196374204824,0.000183716409294.
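
(For reference, the columns in these lines are, if I recall correctly, the physical step index, the physical time, the number of pseudo-iterations taken, and then the residuals of p, u, v and w respectively.)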

I used dt/dtau = ~7 in both cases.

Please also note that it can take a considerable amount of time to dissipate the initial transient waves. I expect this phenomenon to be more pronounced in internal flows because the waves are trapped inside the domain. I would suggest developing the flow with P = 1 and restarting with a higher P after the flow has transitioned to turbulence.
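
If it helps, dropping the order for the development run is simply a matter of changing the [solver] section, e.g. (assuming the AC Navier-Stokes system):

[solver]
system = ac-navier-stokes
order = 1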

Cheers,

Niki

Dear Niki,

If my understanding is right, it seems that for the dual-time AC method in PyFR there are difficulties with the convergence of the diffusion term. Based on my simulations, the pressure residual in the pseudo-convergence file increases with the intensity of the flow's diffusion. A very low pressure residual (less than 1) only occurs when the flow is convection dominated with little diffusion, which makes the flow behave as laminar. When the flow becomes turbulent, the pressure residual is no longer small (around 10) and shows strong stiffness (it decreases linearly with a very small slope). When the turbulence intensity increases further, the pressure residual increases (to larger than 10) and still shows very strong stiffness. So the pressure residual in PyFR seems more like an indicator of diffusion intensity than of convergence.

So I don’t quite understand how you converge and satisfy mass conservation (div u = 0) with the AC method. It seems that PyFR has difficulty maintaining mass conservation in turbulence.

Have you ever tested the AC solver on diffusion equations? Does it really show small residuals there? You said that the pressure residual (div u) is “stiffer to converge than the velocity residuals” and that it “is kind of an indicator of how far the pseudo waves which distribute the pressure have travelled”. Honestly, I do not see the connection between the pseudo waves and mass conservation.

In the cases you showed above, the pressure residuals are really small. What is the turbulence intensity in those cases? Is the flow chaotic, or closer to laminar?

Best regards,
Will

Hi Will,

Thanks for your message.

I have only run external flows, over a range of Reynolds numbers, including a 2D cylinder at Re = 200, a turbulent jet at Re = 10,000 and an SD7003 aerofoil at Re = 60,000. All of these cases converged fine.

In your previous email, you mentioned that you are using the forward-euler pseudo-time scheme and that “the pseudo time step is 1 over 10 of the physical time step. 10 iteration per physical time step.” You cannot expect any level of convergence with these numbers! I would encourage you to use RK4 or TVD-RK3 and to make the iteration count 10-20 times larger. Using P-multigrid always helps.
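
As a rough starting point, I would try something along the following lines; all of the numbers here are illustrative and should be adapted to your physical dt:

[solver-time-integrator]
formulation = dual
scheme = bdf2
pseudo-scheme = rk4
controller = none
pseudo-controller = none
dt = 0.0005
pseudo-dt = 0.0001
pseudo-resid-norm = l2
pseudo-resid-tol = 1e-4
pseudo-niters-min = 100
pseudo-niters-max = 200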

Would you be able to share your case or at least the .ini file? I can try to quickly run it or at least have a look.

Regards,
Niki

Hi Niki,

Excuse me for interrupting you.

I'm afraid I have some doubts about the 2D cylinder solution at Re = 200. I frequently get errors like "NaNs detected at t = xxx" when running PyFR after increasing the Reynolds number by modifying only "Uin" (this happens for Re > 140; Re = 140 runs fine without errors).

The mailing list has already pointed out to me that the section named "shock-capturing" is not applicable to the AC method.

My questions are:

a. What is wrong with the inc_cylinder_2d case when Re > 140?

b. Is there an easy way to change the Reynolds number without triggering the NaN check failure?

I have attached the diff file ($ diff inc_cylinder_2d_my_edit.ini inc_cylinder_2d.ini) and residual.csv below. By any chance, could you help check the configuration file?

Regards,

Lin

inc_cylinder_2d.ini.diff (359 Bytes)

residual.csv (4.44 KB)

Dear Niki,

I think the problem is the stiffness of the mass conservation convergence in internal flow cases. Based on my trials, convergence stalls regardless of whether I use bdf2, euler, tvd-rk3 or rk4. I have attached the convergence history files to this email, which show this stiffness.

Since my case files are a bit large, I have uploaded them to Google Drive: https://drive.google.com/drive/folders/1kTfODnbc5JDax0fhrwXJts79W79CdmYI?usp=sharing

I added a pressure-gradient source term to the momentum equation as you suggested, so the solver is a little different. You have to modify the mako kernel as below so that the case runs correctly; otherwise there is no driving source.

    tdivtconf[${i}] = -rcpdjac*tdivtconf[${i}] + ${ex};
% endfor
    tdivtconf[3] = tdivtconf[3] - 4.0;
</%pyfr:kernel>

The convergence issue only happens when the flow is turbulent, not when it is laminar.

Best regards,
Will