Print out variables within the Mako template subroutines

Dear all,

My name is Antonio Garcia-Uceda, and I’m a researcher working on Flux Reconstruction methods. I’m using PyFR as part of my project.

I’d like to know whether it is possible to print out variables within the Mako template subroutines in PyFR. For instance, to print out the resulting data contained in tgradu_fpts after its extrapolation from the solution points to the flux points.

Also, is it possible to inspect the temporary .c or .cu files generated from these templates when running PyFR with a particular backend?

Thanks a lot for your help.

Best regards,
Antonio

Dear all,

I’m afraid my earlier questions were misleading; sorry. What I’d like to know is:

  1. whether it is possible to print out variables within the Mako template pointwise kernels, such as during the computation of the inviscid/viscous fluxes.

  2. whether, and where in the code, it is possible to print out variables after a matrix multiplication kernel, such as after the extrapolation of the solution gradients from the solution points to the flux points.

I know that all the kernels are executed inside the calls “runall([q1, q2])” or “runall([q1])” inside “rhs”. However, the control flow inside these calls is a bit confusing, and I’d need some pointers, if possible.

Thanks a lot once more.

Best regards,
Antonio

Hi Antonio,

> 1) whether it is possible to print out variables within the Mako
> template pointwise kernels, such as during the computation of
> inviscid/viscous fluxes.

In terms of outputting variables, see the earlier thread on this
mailing list; the point made there mostly stands: it is possible,
although unsupported.

> 2) whether and where in the code to print out the variables after
> a matrix multiplication kernel, such as after the extrapolation of
> solution gradients from the solution to the flux points.

This is not immediately possible. Kernels which are added to a queue
can be thought of as executing in one big atomic block. This makes it
difficult to access intermediate results which are later overwritten.
Further, by the time the kernels are being executed there is no easy
way to get a handle on the matrices holding the inputs/outputs.

Hope that helps.

Regards, Freddie.

Hi Antonio,

When running with the OpenMP backend you can export the environment
variable:

$ export PYFR_DEBUG_OMP_KEEP_LIBS=1

before you run PyFR. The various kernels will then be available in
your temporary directory; usually /tmp/pyfr--/

Dear Freddie,

Thanks a lot, it really helped.

I managed to output the data inside the Mako kernels as you indicated in the other posts. Is it also possible to print out the particular solution/flux point on which the operation is being performed? There’s no index with this information inside the Mako template, as the kernels are pointwise. Perhaps you have another trick up your sleeve? :).

By the way, if I keep the temporary .c files for the kernels as you indicated and rerun PyFR, would these be overwritten? If not, it might be easier to just modify these temporary .c files in plain C.

Also, is it possible to keep these temporary files when using the CUDA backend?

When you say “This makes it difficult to access some intermediate results which are later overwritten”, do you mean that the memory used to store scalars and vectors at the solution/flux points is reused for different purposes as the iteration proceeds? For instance, might the array holding the solution extrapolated to the flux points later be overwritten with the common fluxes resulting from the Riemann solver?

Thanks a lot once more.

Best regards,
Antonio

Hi Antonio,

> I managed to output the data inside the Mako kernels as you
> indicated in the other posts. Is it also possible to print out as
> well the given sol/flux point in which the operation is going on?
> There's no index with this info inside the Mako template as the
> kernels are pointwise. Perhaps any other trick up your sleeve? :).

Not portably. This is very much 'implicit' and depends on the backend
being used. If you look at some of the generated code you'll see the
complex indexing calculations that go on to compute this.

> Btw, if I keep the temporary .c files for the kernels as you
> indicated, and I rerun PyFR, would these be overwritten? If not it
> might be easier to just modify these temp. .c files in standard C.

> Also, is it possible to keep these temp. files when using the
> CUDA backend?

New kernels, each in their own unique directory, are created every
time PyFR runs, so the number of directories will keep on growing.

There is no CUDA equivalent for this. Although PyCUDA does cache the
kernels, I believe it only keeps the binaries around and not the
source code.

> When you say: "This makes it difficult to access some intermediate
> results which are later overwritten", do you mean that the memory
> used to store scalars, vectors, at sol/flux points is used for
> different purposes as the iteration proceeds? I mean, for
> instance, the data that stores solution extrapolated at flux
> points, it may be later overwritten with the common fluxes
> resulting from the Riemann solver?

Yes, and often this happens within the same runall call. Hence, if
you want to get at a certain variable you may need to break up the
various run calls. This, combined with the fact that you cannot
readily get handles on the various matrices, is what complicates
things.

Regards, Freddie.

Dear Freddie,

Thanks a lot for the info. I think I’ll manage to do what I want the way you indicated.

Best regards,
Antonio