Is there a way to access the gradients of a PyFR simulation from within Python, similar to integ.soln, without needing to write the solution file to disk and then use pyfr export -g?

I intend to run a large number of simulations and compute some quantities based on the vorticity, which is slowed down substantially by the need to write to disk first when using the export approach.

UPDATE: I have now found integ.grad_soln, but some other posts on the forum seem to imply that these gradients might not be the physical gradients, but rather the gradients with respect to the canonical (reference) coordinates in each element.
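If the gradients were indeed with respect to the reference coordinates, converting them to physical gradients would be a per-point application of the inverse Jacobian of the element mapping. A minimal NumPy sketch of that chain rule (the function name and shapes here are illustrative, not PyFR API):

```python
import numpy as np

def ref_to_phys_grad(grad_ref, jac):
    """Map a gradient w.r.t. reference coords (xi) to physical coords (x).

    grad_ref : (ndims,) array, df/dxi
    jac      : (ndims, ndims) Jacobian dx/dxi of the element mapping
    Chain rule: df/dx = J^{-T} @ df/dxi.
    """
    return np.linalg.inv(jac).T @ grad_ref

# Example: mapping x = 2*xi, y = 3*eta with f = x + y,
# so df/dxi = (2, 3) in reference coords but df/dx = (1, 1) physically
print(ref_to_phys_grad(np.array([2.0, 3.0]), np.diag([2.0, 3.0])))  # [1. 1.]
```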

I looked at the time-averaging plugin, and the implementation of that plugin seems to rely on the following line: self.elementscls.grad_con_to_pri(soln, grad_soln, self.cfg)

If I try to call this function, in the following fashion: integ.system.elementscls.grad_con_to_pri(integ.soln[k], integ.grad_soln[k], integ.cfg), I get identical values to simply calling integ.grad_soln[k]. I'm assuming that I'm misusing this somehow?

The specifics here depend on what solver you are using.

For the incompressible solver the conservative variables are equal to the primitive variables and so no further processing is required. This is handled by grad_con_to_pri being a no-op. For the compressible solver it is necessary to employ the chain rule to convert the conserved gradients to primitive gradients.
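For the compressible case, that chain rule looks like the following 1-D sketch (a hand-rolled illustration of the standard formulas, not the actual grad_con_to_pri implementation):

```python
def grad_con_to_pri_1d(rho, rhou, E, drho, drhou, dE, gamma=1.4):
    """Convert conserved gradients to primitive gradients (1-D Euler).

    Conserved: (rho, rho*u, E); primitive: (rho, u, p).
      u = rhou/rho            =>  du = (drhou - u*drho)/rho
      p = (g - 1)*(E - 0.5*rho*u**2)
                              =>  dp = (g - 1)*(dE - 0.5*u**2*drho - rho*u*du)
    """
    u = rhou/rho
    du = (drhou - u*drho)/rho
    dp = (gamma - 1.0)*(dE - 0.5*u*u*drho - rho*u*du)
    return drho, du, dp

# State rho=2, u=3, p=1 (so E = p/(g-1) + 0.5*rho*u^2 = 11.5), with the
# primitive gradients (0.1, 0.2, 0.3) packed into conserved gradients:
print(grad_con_to_pri_1d(2.0, 6.0, 11.5, 0.1, 0.7, 2.4))  # ~ (0.1, 0.2, 0.3)
```

Note that the density gradient passes through unchanged; only the momentum and energy gradients need the chain rule.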

I just tried a pre-release version of @tdzanic's entropy filter and got this:

Euler Mach 3 flow (Google folder with m3_gmsh.geo, m3.ini, frame and movie)

Allowing for Freddie's point on discontinuities, the old "-g" export option gives a usable result. But (as with the original post), to streamline runtime visualisation of hundreds of movie frames, is it possible to add a compute_grads() routine to Euler.baseadvec.system?

Or even better: is there some point in a full RK4 step of compressible Euler where we could calculate this gradient data from the existing vect_upts array?

Something like a cons_to_pri(), then vect_grads = (rcpdjac * smats * vect_upts)?
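For what it's worth, the cons_to_pri() part is straightforward; here is a NumPy sketch for 2-D compressible Euler (my own illustration of the standard formulas, not PyFR's internals):

```python
import numpy as np

def cons_to_pri(q, gamma=1.4):
    """(rho, rho*u, rho*v, E) -> (rho, u, v, p) for 2-D Euler."""
    rho, rhou, rhov, E = q
    u, v = rhou/rho, rhov/rho
    # Ideal-gas pressure from total energy minus kinetic energy
    p = (gamma - 1.0)*(E - 0.5*rho*(u*u + v*v))
    return np.array([rho, u, v, p])

# rho=2, u=3, v=0, p=1  =>  E = p/(g-1) + 0.5*rho*(u^2 + v^2) = 11.5
print(cons_to_pri(np.array([2.0, 6.0, 0.0, 11.5])))  # [2. 3. 0. 1.]
```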

No. In Euler (which is an advection system), the gradients are not computed as part of the right-hand side evaluation. As such, vect_upts never contains the gradients. Indeed, it is not possible to meaningfully define corrected gradients for a pure advection system.

If you need gradients, compute them in ParaView. This will do a much better job than the old -g option and will also perform better since we are not writing large amounts of data to disk.

Really cool results! Do you mind sharing an image of the mesh you used? And which Riemann solver did you use for this? I'm trying to figure out if the artifacts around the bow shock are due to mesh sensitivity, or perhaps the carbuncle effect, or maybe something else. There's something similar in the IDP solution from the video, but I'm fairly sure they're also not supposed to be there.

I just noticed you had the mesh and .ini files attached as well. HLLC can suffer from the carbuncle effect, which might be affecting the bow shock there. Have you tried this with any other Riemann solver?

Thanks Freddie - I'll take a look at the ParaView pipeline. Since I run this as a solver inside the gmsh ONELAB GUI, I'm looking for a way to generate a minimal set of scalars for interactive runtime updates.

Tarik, as we're wrangling the full set of characteristic waves, is this a case where the exact solver might do better than HLLC?

Regarding anomalies in that movie: for this first test I ramped the inflow velocity from u1=1 to u1=3 (over time t=0.01 to t=0.15). This caused that odd start-up pulse. (And given what else I've done to the code, there are many places where I may have added extra bugs.)

PS: In case of possible issues with the mesh used for the movie, I've added the gmsh .msh file to that same G-drive (see: m3_271189.zip - 24 MB).

@nnunn Exact is usually my go-to if I'm running into issues, but it might be worthwhile to double-check whether something more dissipative like Rusanov fixes it. Of course, it could just be a deficiency in the method; it wouldn't surprise me if the relaxed entropy bounds don't perform well for a steady strong shock (overshoots compounding over time and whatnot), and it might need much more resolution in that region to fix any mesh sensitivity.

Hi @tdzanic - thanks for alerting me to the issue of supersonic carbuncles. I reviewed some history of how this has been handled in the past, and the more I read, the more interested I became to see how an "exact" Riemann solver might perform.

As a gentle first test, I set up a semi-structured quad mesh over a reduced domain, reduced the solver order to 3, ramped the inflow velocity from u1 = 1 to 3 over time t = 0.1 to 0.3, then let it run out to t = 2.0.

For fun, I've uploaded an 80 MB mp4 showing the mesh, the .ini settings, and a zoom into the bow shock, showing density and the gradient of density.

Since this is the first time I've gone supersonic, I'm not sure what to look for with regard to "carbuncles". The plots don't seem to show much in the way of anomalies, so I'm keen to hear how you think the exact solver + entropy filter are shaping up.

@nnunn The flow field looks very nice now; I can't see any real issues with it either. I'm surprised that there was such a big difference from switching to exact (or perhaps the new mesh is helping as well). This definitely renews some motivation to get the exact solver mainlined!

Agreed. Also, your entropy filter seems to be working fine at Mach 8 on the same mesh. Next week I'll work up a comparison between riemann-solver = rusanov vs. exact for this more interesting config.

PS: My original motivation for wrestling numerics on GPUs was to get a fast coprocessor to compute entire eigensystems while the old CPU was busy doing a couple of fused multiply-adds... hence my interest in the exact solver.

Re: "fast coprocessors", maybe a couple of under-clocked, liquid-cooled RTX 4090s...?

Good to know it works at higher Mach; those "steady" shocks are surprisingly tricky to resolve properly compared to the unsteady case.

I think Will did a comparison a while back for the exact solver and found that it only increased the runtime by a few percent on GPUs, so I don't see a reason not to use it in most cases.

Hi @tdzanic,
Here's a quick snapshot comparing the Rusanov and exact Riemann solvers at Mach 3.
The mesh was a simple unstructured gmsh mesh (113,275 quads), with poly order = 4, at time = 4.0.
(Gradients were still generated by the old -g export flag.)

Yeah, in my experience exact and HLLC are very similar, and generally you can classify Riemann solvers for these sorts of problems as contact-resolving and non-contact-resolving. For example, HLL will probably perform fairly similarly to Rusanov.

I did some work a while ago comparing seven or eight different Riemann solvers on the TGV, with the results splitting the Riemann solvers into these two classes. Ultimately the results were very similar to those from the Nektar++ team's paper, so we didn't take the work any further: http://dx.doi.org/10.1016/j.jcp.2016.10.056