Gradients without writing to disk

Is there a way to access the gradients of a PyFR simulation from within Python, similar to integ.soln, without having to write the solution file to disk and use pyfr export -g?

I intend to run a large number of simulations and compute quantities based on the vorticity, and having to write each solution to disk first with the export approach slows this down substantially.

UPDATE: I have now found integ.grad_soln, but some other posts on the forum seem to imply that these might not be the physical gradients, but rather the gradients with respect to the canonical coordinates within each element.

I looked at the time-averaging plugin, whose implementation seems to rely on the following line: self.elementscls.grad_con_to_pri(soln, grad_soln, self.cfg)

If I call this function as integ.system.elementscls.grad_con_to_pri(integ.soln[k], integ.grad_soln[k], integ.cfg), I get values identical to those from integ.grad_soln[k]. I’m assuming that I’m misusing it somehow?
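
For context, here is a minimal sketch of the in-memory workflow I am after, assuming integ is the integrator object referenced above; the loop over element types and the layout of the returned arrays are my assumptions and may need adjusting:

```python
# Sketch only: iterate over the per-element-type solution and gradient arrays
# exposed by the integrator and convert the conserved-variable gradients into
# primitive-variable gradients (array layouts are assumptions on my part)
ecls = integ.system.elementscls

for soln, gsoln in zip(integ.soln, integ.grad_soln):
    pgrads = ecls.grad_con_to_pri(soln, gsoln, integ.cfg)

    # ...assemble the vorticity from the velocity-gradient components here,
    # entirely in memory, without writing a file for pyfr export -g
```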

The specifics here depend on what solver you are using.

For the incompressible solver the conservative variables are equal to the primitive variables and so no further processing is required. This is handled by grad_con_to_pri being a no-op. For the compressible solver it is necessary to employ the chain rule to convert the conserved gradients to primitive gradients.
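
For an ideal gas the chain rule works out to the standard relations below; this is just a sketch of what such a conversion has to do, with the density gradient carrying over unchanged since density is both a conserved and a primitive variable:

$$
\nabla u_i = \frac{\nabla(\rho u_i) - u_i\,\nabla\rho}{\rho},
\qquad
\nabla p = (\gamma - 1)\Big[\nabla(\rho E) - \sum_j u_j\,\nabla(\rho u_j) + \tfrac{1}{2}\lvert\mathbf{u}\rvert^{2}\,\nabla\rho\Big].
$$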

Regards, Freddie.


Hi @fdw and @WillT,

Re: gradient-export, and competing with this: Euler Mach 3 flow - YouTube

I just tried a pre-release version of @tdzanic’s entropy filter and got this:

Euler Mach 3 flow (Google folder with m3_gmsh.geo, m3.ini, frame and movie)

Allowing for Freddie’s point on discontinuity, the old “-g” export option gives a usable result. But (as with the original post) to streamline runtime viz of hundreds of movie frames, is it possible to make a compute_grads() routine for Euler.baseadvec.system?

Or even better: is there some point in a full RK4 step of compressible Euler where we could calculate this gradient data from the existing vect_upts array?

Something like a cons_to_pri(), then vect_grads = (rcpdjac * smats * vect_upts)?

thanks!
Nigel

No. In Euler (which is an advection system), the gradients are not computed as part of the right-hand side evaluation. As such, vect_upts never contains the gradients. Indeed, it is not possible to meaningfully define corrected gradients for a pure advection system.

If you need gradients, compute them in ParaView. This will do a much better job than the old -g option and will also perform better since we are not writing large amounts of data to disk.
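
As a rough pvpython sketch of such a pipeline (assuming a reasonably recent ParaView with the unified Gradient filter; the file and array names are placeholders and property names may differ slightly between versions):

```python
# pvpython sketch: compute vorticity from an exported .vtu in batch, rather
# than via pyfr export -g.  Assumes ParaView >= 5.10 (unified Gradient
# filter); file and array names below are placeholders.
from paraview.simple import Gradient, SaveData, XMLUnstructuredGridReader

reader = XMLUnstructuredGridReader(FileName=['soln.vtu'])

grad = Gradient(Input=reader)
grad.ScalarArray = ['POINTS', 'Velocity']  # array to differentiate
grad.ComputeVorticity = 1                  # also output the curl
grad.VorticityArrayName = 'Vorticity'

SaveData('soln-grad.vtu', proxy=grad)
```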

Regards, Freddie.

Hi Nigel,

Really cool results! Do you mind sharing an image of the mesh you used? And which Riemann solver did you use for this? I’m trying to figure out whether the artifacts around the bow shock are due to mesh sensitivity, the carbuncle effect, or something else. There’s something similar in the IDP solution from the video, but I’m fairly sure those aren’t supposed to be there either.

Tarik

I just noticed you had the mesh and .ini files attached as well. HLLC can suffer from the carbuncle effect, which might be affecting the bow shock there. Have you tried this with any other Riemann solver?

@fdw and @tdzanic,

Thanks Freddie – I’ll take a look at the ParaView pipeline. Since I run this as a solver inside the gmsh ONELAB GUI, I’m looking for a way to generate a minimal set of scalars for interactive runtime updates.

Tarik, as we’re wrangling the full set of characteristic waves, is this a case where the EXACT solver might do better than HLLC?

Regarding anomalies in that movie, for this first test I ramped the inflow velocity from u1=1 to u1=3 (over time t=0.01 to t=0.15). This caused that odd start-up pulse. (And given what else I’ve done to the code, there are many places where I may have added extra bugs.)

PS: In case of possible issues with the mesh used for the movie, I’ve added the gmsh .msh file to that same G-drive (see: m3_271189.zip - 24 MB)

Nigel

@nnunn Exact is usually my go-to if I’m running into issues, but it might be worthwhile to double-check whether something more dissipative like Rusanov fixes it. Of course, it could just be a deficiency in the method – it wouldn’t surprise me if the relaxed entropy bounds don’t perform well for a steady strong shock (overshoots compounding over time and whatnot), and it might need much more resolution in that region to fix any mesh sensitivity.
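
For reference, switching the Riemann solver is a one-line change in the .ini, along the lines of the snippet below; rusanov, hll, hllc and roe are stock options, while exact here assumes the pre-release branch you are running:

```ini
[solver-interfaces]
riemann-solver = rusanov
```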


Hi @tdzanic - thanks for alerting me to the issue of supersonic carbuncles. I reviewed some history of how this has been handled in the past, and the more I read, the more interested I became in seeing how an “exact” Riemann solver might perform.

As a gentle first test, I set up a semi-structured quad mesh over a reduced domain, reduced the solver order to 3, ramped the inflow velocity from u1 = 1 to 3 over time t = 0.1 to 0.3, then let it run out to t = 2.0.

For fun, I’ve uploaded an 80 MB mp4 showing the mesh, the .ini settings, and a zoom into the bow shock (density and gradient of density).

Since this is the first time I’ve gone supersonic, I’m not sure what to look for with regard to “carbuncles”. Plots don’t seem to show much in the way of anomalies, so I’m keen to hear how you think the exact solver + entropy filter are shaping up.

Movie can be found here: Euler Mach 3, exact + entropy-filter

New file is “euler_exact_test.mp4”. Also the mesh file: m3_73812.zip

@nnunn The flow field looks very nice now; I can’t see any real issues with it either. I’m surprised that there was such a big difference from switching to exact (or perhaps the new mesh is helping as well). This definitely renews some motivation to get the exact solver mainlined!

Agreed. Also, your entropy filter seems to be working fine at Mach 8 on the same mesh. Next week I’ll work up a comparison between r-solver = rusanov and exact for this more interesting config.

PS: My original motivation for wrestling numerics on GPUs was to get a fast coprocessor to compute entire eigensystems while the old CPU was busy doing a couple of fused multiply-adds… hence my interest in the exact solver.

Re: “fast coprocessors”, maybe a couple of under-clocked, liquid-cooled RTX 4090s… ?

Good to know it works at higher Mach; those “steady” shocks are surprisingly tricky to resolve properly compared to the unsteady case.

I think Will did a comparison a while back for the exact solver and found that it only increased the runtime by a few percent on GPUs, so I don’t see a reason not to use it in most cases.


Hi @tdzanic,
Here’s a quick snapshot comparing r-solver = rusanov vs. exact at Mach 3.
The mesh was a simple unstructured gmsh mesh (113,275 quads), poly order = 4, time = 4.0.
(Gradients were still generated by the old -g export flag.)

The result with r-solver = hllc is very similar to exact.
I’ll explore more closely in a few days!

Yeah, in my experience exact and HLLC are very similar, and generally you can classify Riemann solvers for these sorts of problems as contact-resolving and non-contact-resolving. For example, HLL will probably perform fairly similarly to Rusanov.

I did some work a while ago comparing seven or eight different Riemann solvers on the TGV, with the results splitting them into these two classes. Ultimately the results were very similar to those from the Nektar++ team’s paper, so we didn’t take the work any further: http://dx.doi.org/10.1016/j.jcp.2016.10.056