Calculating cell averages in Mako macros

Assuming cell i, I want to compute the cell average of the solution values u_{i} and u_{i+1} in the macro environment. How can I implement this, and are there any examples I can refer to?

Second, is it possible to install two different versions of PyFR?


The average of a solution inside of an element can be computed relatively easily. An example of code which does something very similar can be found in the shock sensor.

If you want to have multiple versions of PyFR installed the easiest approach is to use multiple virtual environments.

Regards, Freddie.

Ok, thanks.

I now have two virtual environments, but how do I choose which PyFR version's command runs when I run a case?

The version that runs will be determined by which virtual environment is active at the time.

Thanks a lot.

I unzipped the PyFR package in the new virtual environment, then ran “python install”, but there is no `__pycache__` directory in the PyFR package. Am I missing some steps?

With best regards.

Why do you care about the pycache files?

Just in case you aren’t familiar with virtual environments: once you have activated a venv and run python install, an executable will be added to the bin directory of the venv, assuming you’re on a Linux-type system.

To install PyFR from source it can be useful in the long run to do pip install -e /path/to/PyFR. This will automatically install changes when you edit the source code.
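The venv workflow described above can be sketched as follows; the venv names and the /path/to/PyFR checkout location are placeholders, not required paths:

```shell
# Sketch of keeping two PyFR versions side by side (paths illustrative).
# Each venv gets its own site-packages and its own `pyfr` entry point.
python3 -m venv "$HOME/venvs/pyfr-release"
python3 -m venv "$HOME/venvs/pyfr-dev"

# Work with the release install:
#   source "$HOME/venvs/pyfr-release/bin/activate"
#   pip install pyfr
#   deactivate

# Work with an editable source checkout (picks up your edits on each run):
#   source "$HOME/venvs/pyfr-dev/bin/activate"
#   pip install -e /path/to/PyFR
```

Whichever venv is active puts its bin directory first on PATH, so its pyfr entry point is the one that runs.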

Because when I successfully installed it before, the `__pycache__` directory would appear.

Now, using the first way (python install) to install the downloaded package, there is a .egg file in dist. And when I run the case, the following errors appear:
Traceback (most recent call last):
  File "/home/pyfr/bin/pyfr", line 33, in sys.exit(load_entry_point('pyfr==1.14.0', 'console_scripts', 'pyfr')())
  File "/home/pyfr/lib/python3.8/site-packages/pyfr-1.14.0-py3.8.egg/pyfr/", line 118, in main
  File "/home/pyfr/lib/python3.8/site-packages/pyfr-1.14.0-py3.8.egg/pyfr/", line 132, in process_import
  File "/home/pyfr/lib/python3.8/site-packages/pyfr-1.14.0-py3.8.egg/pyfr/readers/", line 23, in to_pyfrm
  File "/home/pyfr/lib/python3.8/site-packages/pyfr-1.14.0-py3.8.egg/pyfr/readers/", line 413, in _to_raw_pyfrm
TypeError: unsupported operand type(s) for |: 'dict' and 'dict'

Second, to install PyFR from source, I used pip install -e git+ and got an error:

As per the documentation PyFR 1.14 requires Python 3.9 or later. You are using 3.8.
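For reference, the `TypeError: unsupported operand type(s) for |: 'dict' and 'dict'` in the traceback comes from the dict union operator (PEP 584), which only exists from Python 3.9 onwards; a minimal reproduction:

```python
# PEP 584 dict union: works on Python 3.9+, raises the TypeError seen in
# the traceback above on Python 3.8 and earlier.
a = {'shape': 'hex'}
b = {'order': 3}

merged = a | b
print(merged)  # {'shape': 'hex', 'order': 3}
```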

Regards, Freddie.

Is it averaging by the Vandermonde matrix? Could you be more specific?

No, this was just an example of a kernel which is capable of accessing each solution point inside of an element. This can then be used to compute an average.

Regards, Freddie.

An example of how to compute the element-wise average is here:

Thanks for your reply.

And why can the mean weights be obtained from the inverse of the Vandermonde matrix? I see in the code that the Vandermonde matrix is built from an orthogonal basis at the solution points. What are the Jacobi orthogonal basis and the Vandermonde matrix for? Can you recommend some papers or books for me to refer to?

With best regards

You don’t need the Vandermonde matrix to compute the mean, but you can use it. Orthogonal bases are defined with respect to a measure (weight function w(x)). The ones we choose are with respect to a unit measure (w(x) = 1), which for tensor-product elements gives the Legendre polynomials. For other element types the basis may have a different name (like PKDO for triangles) or no name at all, but it is formed via the same method.

For orthogonal polynomials with respect to the unit measure, the first polynomial is the mean mode (p(x) = 1). The higher degree modes all integrate to zero over the element which is a nice property to have when doing things like filtering (you can change the coefficients of the higher degree modes without changing the mean which guarantees conservation).
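As a toy illustration of that conservation property (plain NumPy, not PyFR code): filtering the higher-degree Legendre modes of a 1D solution leaves its mean over the reference element [-1, 1] untouched, because only the constant mode has a nonzero integral.

```python
import numpy as np
from numpy.polynomial import legendre

def mean_on_ref(coef):
    # Mean = (1/2) * integral over [-1, 1] of the Legendre series
    F = legendre.Legendre(coef).integ()
    return (F(1) - F(-1)) / 2

c = np.array([0.7, -1.2, 0.5, 2.0])          # modal coefficients
c_filt = c * np.array([1.0, 0.5, 0.2, 0.0])  # filter, mode 0 untouched

print(mean_on_ref(c), mean_on_ref(c_filt))   # both 0.7: the mean survives
```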

Jacobi polynomials are just a more general set of orthogonal polynomials with the weight w(x) = (1-x)^a * (1+x)^b. If you set a = b = 0, you get w(x) = 1, which gives the Legendre polynomials for tensor-product elements. This is what PyFR does (you can see that when the jacobi(n, a, b, z) method is called it’s generally with a = b = 0).

The orthogonal basis is typically used for two reasons: better conditioned Vandermonde matrices for interpolations/projections and to compute the correction functions for FR.
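One way to see the mean-via-Vandermonde connection concretely (a NumPy sketch, not PyFR’s actual implementation; the choice of Gauss-Legendre solution points is just for illustration):

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: extract mean weights from the inverse of an orthonormal
# Legendre Vandermonde matrix.
n = 4
xs, quad_w = legendre.leggauss(n)  # Gauss-Legendre solution points

# Orthonormal Legendre basis on [-1, 1] w.r.t. the unit measure:
# phi_k(x) = sqrt((2k + 1) / 2) * P_k(x)
V = np.array([[np.sqrt((2*k + 1) / 2) * legendre.Legendre.basis(k)(x)
               for k in range(n)] for x in xs])

# Nodal -> modal: u_hat = V^{-1} u.  Only mode 0 (the constant) has a
# nonzero integral, sqrt(2), and the element length is 2, so the mean
# weights are the first row of V^{-1} scaled by sqrt(2)/2.
w_mean = np.linalg.inv(V)[0] * np.sqrt(2) / 2

u = xs**2          # sample u(x) = x^2 at the solution points
print(w_mean @ u)  # mean of x^2 over [-1, 1] is 1/3
```

For Gauss points these mean weights coincide with the quadrature weights divided by the element length, which is another way to compute the same thing.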


Sorry to bother you again.

When calculating the average of the solution inside an adjacent element ele_{i+1} or ele_{i-1} in the same macro, the scal_upts[uin] that is passed in is the solution in the current element; how can I access the solution in the adjacent element?

And when the current element is at the boundary, does PyFR set up a ghost grid?

Since PyFR is an unstructured code, there’s not necessarily an ordered structure to elements and adjacency only comes into play through the interfaces which are treated separately from the element interiors. Getting adjacent element information is not simple to do in PyFR. If you want to pass a single solution value per element through interfaces (such as the mean), you can look at how the minimum entropy is passed through the interfaces in this PR:

If you want to do operations on the whole solution across elements, I don’t think you can do this through the kernels. You may have to try it through the plugins which can access/modify the whole solution but it can be very slow.

For the boundaries, like I said, everything PyFR does is through the interfaces, so there are no “ghost elements”, but just a ghost state given as outer value for the boundary interface solution pair used to compute the boundary flux via the Riemann solver (see bccflux.mako and the associated BC kernels).

Thanks for your kind reply.

I checked the code for how the minimum entropy is passed through the interfaces; it only seems to pass a single solution value between immediately neighbouring cells through the interface. Could it pass the mean value of the neighbours’ neighbours?

With best regards

Yes, that’s doable; you would just have to define things as an nvars vector instead of a scalar. The approach would roughly be:

  • Allocate data for element-wise means (see how we do it for entropy min and artificial viscosity, just with nvars instead of a scalar per element)
  • Compute the element-wise mean through an element-wise kernel
  • “Interpolate” (copy) that mean value to interface points
  • Write interface kernels (int/mpi/bc) that swap these values
  • Use another element-wise kernel that takes in these swapped interface values as the input to do whatever you need

A lot of these tasks are implemented similarly in the entropy filter PR, except we do a minimization over the neighbours whereas you would likely need to swap the values. You may also want to look through the commit history from before the atomics reduction was put in; it may be more straightforward to replicate that version.
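The steps above can be sketched in plain NumPy (this illustrates the data flow only, not PyFR’s kernel/API machinery; the 1D periodic connectivity is a made-up example):

```python
import numpy as np

# Arrays follow a (nupts, nvars, neles) layout similar to scal_upts.
nupts, nvars, neles = 3, 4, 6
rng = np.random.default_rng(0)
u = rng.random((nupts, nvars, neles))   # nodal solution per element

# Steps 1-2: element-wise mean via mean weights (uniform here for
# simplicity; in general they come from the basis, as discussed above).
w = np.full(nupts, 1.0 / nupts)
u_mean = np.einsum('p,pve->ve', w, u)   # shape (nvars, neles)

# Steps 3-5: "swap" the means across faces.  As a toy connectivity,
# take a 1D periodic chain where element e neighbours e-1 and e+1.
left = np.arange(neles)
mean_from_right = u_mean[:, (left + 1) % neles]  # neighbour across right face
mean_from_left = u_mean[:, (left - 1) % neles]   # neighbour across left face
```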
