Mesh resolution, order, and curved boundaries

Exactly, I’m planning to try a higher order, such as 2 or 3, on a more powerful machine. Since we’re on the topic of order and mesh resolution, I have two questions about it that have been bothering me for a long time.

  • A coarser mesh is preferable at a high scheme order, since it lowers the time cost and shows off the power of FR algorithms. Of course, anti-aliasing measures are advisable on a coarse mesh to avoid non-physical oscillations. But how can we determine whether a mesh is too coarse to give correct results? For instance, if a total of 100,000 elements is required to get a correct result using second-order FV, is it right that around 25,000 elements would be appropriate for p = 2 and around 11,000 for p = 3 (Nele ~ 1/p^2)? I’m also aware that the desired wall-normal resolution only needs to be achieved for a given polynomial order. So as long as we keep the y+ value suitable for that polynomial order, the ILES result should be reliable, right?

  • In common cases like a cylinder or an airfoil, it’s highly recommended to generate a curved boundary mesh using high-order elements in Gmsh (ref. Bassi, 1997). But does that mean a high-order mesh is less attractive for non-curved shapes like a square or a plate, compared with a high polynomial order in the FR scheme? The latter subdivides the straight boundary and builds polynomials on it to achieve a high-order, accurate result.

Thanks for the previous kind and detailed replies. As a follow-up on the KeyError: Kernel mul has no providers issue I met, it works after adding export PYFR_XSMM_LIBRARY_PATH=/path/to/compiled/libxsmm.so to my environment.

Regards, Thatcher

I moved these questions to a new topic that was more fitting.

I’m sure plenty of people will have views on this; it would be great to hear what others have to say.

I see there are really 3 questions here:

  • How do I adapt the resolution for different orders?
  • How do I know when and how much anti-aliasing to use?
  • Should I curve the boundaries?

The first question is probably one of the oldest questions in CFD. A reasonable zeroth-order metric for the resolution of the method might be the points per wavelength. I looked at this for FR in a 2018 AIAA-J paper; however, this is not the whole story. The amount of resolution required will depend on the solution, which will depend on the case. It will also depend on how accurate you want the result to be.

If you are doing DNS, then you can look at the time-averaged and instantaneous Kolmogorov scales and make a judgement based on that (but with DNS, the more the merrier). If you are doing ILES, then a mesh convergence study is what you’ll want to do: make a few meshes of different resolutions and look at the convergence of one or more functionals. Lots of journals would expect this when presenting LES results. With experience you will pick up rules of thumb for mesh generation, but these will be problem-dependent. For FV, Paul Tucker has a whole chapter in his book about mesh generation, focused more on turbomachinery.
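As an aside, if you do run such a study with three meshes at a constant refinement ratio r, recording some functional f_1, f_2, f_3 from coarsest to finest, then a standard estimate of the observed order of accuracy (assuming all three meshes are in the asymptotic regime) is

p_{\mathrm{obs}} = \ln\!\left(\frac{f_1 - f_2}{f_2 - f_3}\right) / \ln r

If p_obs comes out wildly different from the nominal order of the scheme, the meshes are probably not yet fine enough to be trusted.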

On the topic of anti-aliasing, as you say, this will be dependent on the mesh resolution. Lower resolution will lead to higher-order modes in your solution being more excited, and that, coupled with a non-linear flux function, will cause more aliasing. Aliasing redistributes the unresolved modal energy to other modes. The upshot is that, although there are plenty of papers on how much AA you should have for Navier–Stokes, for example, it really depends on how much energy isn’t getting resolved. So if you have a coarse mesh then you can try adding some in a similar way to the mesh convergence study. Modal filtering is also an option.
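To make that concrete, the over-integration knobs in a PyFR config file look something like the sketch below (section names per the PyFR docs; the quadrature degree of 10 is purely illustrative and very much case-dependent):

    [solver]
    system = navier-stokes
    order = 3
    ; over-integrate the volume and interface fluxes
    anti-alias = flux, surf-flux

    [solver-elements-quad]
    soln-pts = gauss-legendre
    ; quadrature degree used for the over-integration
    quad-deg = 10

    [solver-interfaces-line]
    flux-pts = gauss-legendre
    quad-deg = 10

You can then step quad-deg up or down in the same spirit as a mesh convergence study.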

On the topic of mesh curving, and you might spot a theme here, you should use as much or as little as you need. In that Bassi and Rebay paper, if they had discretised the wall with a trillion points they clearly wouldn’t have needed to curve the mesh. But if you are doing ILES, then you probably will need some curvature. However, for a square cylinder, you probably shouldn’t curve the walls, as that makes the mesh a less accurate representation of the geometry, not more. I can come up with counterexamples to this though: for example, in the Mach 3 forward-facing step, when solved with the Euler equations, there is a singularity at the corner, and to get an accurate result it is best to round the corner just a little. Curving the mesh can also give rise to aliasing and non-conservation of linear invariants (although that is a topic for another time). So if you curve the mesh you may need to keep track of aliasing.

That was a long one but I hope it helps. I would be interested to hear from the community if they have anything more to add for specific geometries and cases. E.g. laminar vs. turbulent boundary layers, sharp features in cases like delta wings, and how order plays with cell aspect ratio.

@tdzanic you have great experience in meshing, any wisdom from you?


Hi Will,

Thanks for extracting and extending this new topic. Also, the tip about rounding the corner in forward-facing step flows came as quite a surprise to me. I agree with you that the three questions are basic and essential, while each one is a fairly large theme that could be extended a lot. What you shared was really thought-provoking; I think I need to reread some related papers before making any further comments. I’ll come back soon.

I sincerely hope more people will join this topic and comment freely.

Regards, Thatcher


In the last few days, I ran a few simulations in both 2d and 3d and observed some interesting phenomena at high Reynolds numbers. With a higher Re, there are several things that need care, such as the value of ac-zeta and the choice of anti-aliasing method. I’d like to continue the discussion following on from what @WillT shared; please correct me if I’m wrong anywhere.

  1. Mesh and order
    It’s highly recommended to adopt a higher order for high-Re problems, since you can use a coarser mesh and also accelerate the simulation with p-multigrid. But due to the y+ restriction in ILES, the mesh resolution should be determined according to the polynomial order you plan to use. Usually a higher order produces a more accurate solution, but does that apply in all situations? I ran a square cylinder simulation in both 2d and 3d from p1 to p3, and the results show the drag coefficient decreasing from 2.2 to 2.0 as the polynomial order increases. The 2d and 3d simulations both behave this way, so which one is closer to the actual value?

  2. Anti-aliasing in config file
    It’s very interesting to decide whether to take measures against aliasing. I tested different cases with and without anti-aliasing, and found that when the flow was complicated, with different vortices intertwining, such measures were always a must. Regarding the two different methods, Park’s paper (on AIAA Aerospace Research Central) is really helpful for understanding them. L2 projection is more accurate, but setting the right value seems tricky to me: a high quad-deg is time-consuming, while a low one produces NaNs. Modal filtering is less accurate, but you don’t need to change dt and dtau, so it’s more convenient (see the config sketch after this list). Actually, in my 2d simulation of a bridge girder, I could barely choose a reasonable value of quad-deg (it either results in NaNs or limits my dtau to 1e-8). Also, turning to modal filtering raised a RuntimeError: NaNs detected at t = 50.840000000517. Although it’s a 2d case at high Reynolds number, I don’t think the dimensionality is the root cause.

  3. The stiffness of the pressure residuals and ac-zeta
    As another post says, the pressure residual should not be the measure of convergence for an ac-ns case. The simulation can even continue for a long time with a high but stable pressure residual. Besides switching to 3d or adopting p-multigrid, would changing ac-zeta help? This parameter is the only value we have to adjust, and it’s suggested to be in the range of 2 to 10 times the velocity (the sketch after this list shows where it lives in the config). In a simulation you can obtain the desired Reynolds number simply by changing nu, which means the reference velocity remains the ‘same’ across cases. Maybe increasing ac-zeta as Re increases would be more in line with reality?
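For reference, here is a sketch of where the settings I mention above live in a PyFR config file. The section names follow the docs, but the numbers are placeholders from my own tests rather than recommendations:

    ; exponential modal filter, applied every nsteps physical steps
    [soln-filter]
    nsteps = 10
    alpha = 36.0
    order = 16
    cutoff = 1

    ; artificial compressibility parameter for the ac-ns solvers
    [constants]
    ac-zeta = 2.5
    nu = 0.001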

Pardon me for being so wordy, but these are the points I could not find more details about in earlier discussions on the forum. I hope someone can share their opinions.

Regards, Thatcher

It depends. Usually yes, but there are many caveats to that. For now I will say that if you have sufficient resolution to be within the asymptotic regime then, for FR, higher orders generally lead to more accurate solutions. But as you may be gathering by now, it depends.

Not to be too blunt, but no idea; this is why we run simulations. A mesh convergence study, rather than an order convergence study, will give you a more definitive answer. Order convergence studies are a bit tricky, as changing the order changes the numerical properties of the scheme. See the previous question.

If adding over-integration anti-aliasing makes your simulation fail, then I would suggest you look at your mesh. Over-integration should help things, and if you make the numerics more accurate but the simulation then fails, this is normally a sign that something was under-resolved. Modal filtering can be viewed as dissipation, which is why it will often work when over-integration doesn’t, but that doesn’t make it correct. It just means that you are damping out the under-resolved physics that was causing the failure, and that physics is likely important.

Yes, it can have a reasonable impact. There are a couple of schools of thought on how to set zeta: some people suggest it should be small, some suggest it should be large. To me, large makes sense, as this should drive the divergence to zero faster. In some work I am doing with @Sambit, we’ve found that ~7 can often give locally optimal performance. However, what the ‘best’ value is depends on: order, multigrid cycle, pseudo-stepping scheme, physical time scheme, the ratio dt/pseudo-dt, and anything else you can think of. Increasing zeta, though, will increase the stiffness of the problem, as you can see from the largest eigenvalue of the flux:

\lambda_1 = u + \sqrt{u^2 + \zeta}

If the problem is significantly stiffer with a high zeta, then this will lead to slower convergence in pseudo-time. But increasing zeta can improve the convergence of the divergence, so there is a trade-off. I have tried the preconditioner by Turkel, which I have on a public branch I think, and that can help, but it is another tunable parameter and if you get it wrong it can slow things down a lot. Hence Freddie and I decided not to add it. If you’re interested, hyperbolic diffusion can roughly halve your iterations to convergence and increase stability. (Neither of these features is mainline, so merge at your own risk.)
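As a quick worked example of that stiffness scaling, take u = 1 and compare two illustrative values of zeta:

\lambda_1|_{\zeta=4} = 1 + \sqrt{1 + 4} \approx 3.24, \qquad \lambda_1|_{\zeta=100} = 1 + \sqrt{1 + 100} \approx 11.0

So the maximum wave speed, and with it the explicit pseudo-time step limit, grows roughly like \sqrt{\zeta} once \zeta \gg u^2.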

I think that language is a bit strong; it should be a factor, as we definitely care whether the pressure field is converging to a sensible value. However, given that most of the time we are interested in the velocity field, a tolerance is usually set that is stricter on those components.

Hi Will,

Thanks for your detailed reply. Since what I mentioned earlier covers a lot of ground, I’ll pick out some of the more puzzling points for further questions.

I did the mesh convergence study, comparing three meshes with different resolutions, albeit in 2d. The results were almost the same, so I concluded that the middle-sized mesh would be adequate for the later 3d simulations.

Yeah, when I changed the value of quad-deg from 5 to 11, the corresponding pseudo-dt dropped to nearly 1e-8, which is unacceptable in practice. So it’s probably something wrong with my mesh. Is it normally the mesh resolution, quality, type, or something else that leads to this problem? I reviewed my mesh and thought maybe the coarse grid at the very edge is the source. Could you please have a look at my mesh if it’s convenient?

That inspired me a lot; I may try different values of ac-zeta. At least for now, a high ac-zeta means a high computational cost for me. This paper https://dspace.lib.cranfield.ac.uk/handle/1826/240 is also helpful for understanding the theory.

Sorry for the poorly worded sentence. What I meant was that less attention needs to be paid to the value as long as it drops to a relatively low level and remains stable. Countermeasures against a high but stable pressure residual can include reducing the Reynolds number, increasing the mesh resolution or the value of ac-zeta (carefully), and checking whether the case is suitable for the current settings.

Regards, Thatcher

Hi @Winxingtf,


I just had a look at that box_2d.msh. It seems to have 6 small triangle elements scattered within 8882 quads? Assuming you want to work with quad elements in 2d (and thus hexes in 3d), to simplify your investigation I’d try adjusting the gmsh [Mesh] settings to produce a mesh with only quad elements. I’ve just returned to a project that uses the gmsh/onelab GUI to drive PyFR; if you’d like to send a gmsh input “.geo” file, I could try to work out what’s causing those few triangles.

Nigel

Hi Nigel,

Sure, I’ve just refined the mesh and updated the mesh link. But there are still 2 triangles in the near-wake area, probably caused by the Bump applied to the deck. The original intention was to generate a dense mesh around the shedding area to capture the details of the vortices.

I have to admit that I noticed the small triangle elements at the beginning but didn’t pay much attention to them. Do they really have such a big impact?

Regards, Thatcher

Hi @Winxingtf,

Using gmsh 4.9 built from the current git, with [Frontal-Delaunay] as the 2d algorithm and [Blossom] as the 2d recombination algorithm, and box_girder_2d.geo as input, I get 8613 quads and zero triangles.
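In case it’s useful, the equivalent option lines in a .geo file are roughly as follows (the numeric codes are as per the gmsh documentation):

    Mesh.Algorithm = 6;              // 2d algorithm: 6 = Frontal-Delaunay
    Mesh.RecombinationAlgorithm = 1; // 1 = Blossom
    Mesh.RecombineAll = 1;           // recombine triangles into quads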

Regarding the effect of a few tri elements, take a look at

BaseAdvectionDiffusionSystem.rhs in pyfr/solvers/baseadvecdiff/system.py

For each rhs evaluation, PyFR runs separate kernels for each element type. So if the mesh has 8,000 quad elements and, say, 4 tri elements, then for each rhs evaluation the solver not only has to solve for those 8,000 quads, it also has to run the full set of kernels for just those 4 tris.

Also, in case you plan to extrude the 2d mesh for a 3d run, removing those few tris would allow gmsh to extrude a neat pure-hex mesh. Just a thought.

Nigel

Given that you are running at high Reynolds numbers, 3d effects will be important, and so I’m not sure of the validity of a 2d mesh convergence study for confirming the results on a 3d mesh. Ultimately it has to be a judgement call on your part.

Even if the over-integration is moderate, say quad-deg = order + 1, and this causes the simulation to fail, that should be a red flag. It almost certainly means your mesh is under-resolved or of low quality (aspect ratios that are too high, etc.). Look at where the first NaNs appear as an indicator of where the mesh quality might be poor.

Yes, that is a useful paper, although there are a number of errors in it. I have most of the Riemann solvers from that paper, including the exact solver, implemented on a branch: GitHub - WillTrojak/PyFR at feature/riemann_solvers

They might need some refactoring as they were implemented for pyfr ~v1.9, and some changes have since been made.


Thanks, Nigel,

My current Gmsh version is 4.8.2; maybe that’s why I failed with the same settings. But after a small adjustment of the Bump value I got a mesh containing only quad elements. The run time of the simulation also dropped by about 10%, a sign that the solver no longer has to run the triangle kernels.

This was a big inspiration to me, and it also reminds me to keep my software up to date. :grinning:

Regards, Thatcher

Hi Will,

Really appreciate your kind reply. I’ve refined the mesh to see whether the situation improves. All in all, it is not particularly wise to use 2d calculations to simulate cases with rich 3d effects, which may well be the critical source of the problem.

Also, thanks for the pointer to the riemann_solvers branch; I think I need to take an in-depth look at it.

Regards, Thatcher