About reconstructing exact solution with .vtu files

Hi all,

I want to use the solution of a turbulent flow from PyFR as an input for a Lagrangian simulation. The problem is that the trajectories of particles advected by the flow are sensitive to local errors in the fluid velocity field and can diverge. The trajectories can be computed one at a time since the particles do not interact and are not coupled to the solver (the flow is not affected by their presence). Because of that, I dump my flow solution from PyFR and convert it to .vtu so that I can read it with the Lagrangian software and use the VTK libraries to extract velocities at arbitrary positions.

The problem is that in the conversion from .pyfrs to .vtu, information is lost in the -d N linearisation stage, and the computational advantage of having a coarser mesh is lost with it. Furthermore, making N high makes the inverse search from particle positions to fluid cells really expensive on an unstructured mesh when resolving particle trajectories. Would it be possible to dump the internal PyFR polynomial information local to each cell so that it can be used in a post-processing stage? This would be useful for any kind of post-processing that integrates over the fields.
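
For reference, this is roughly the kind of VTK point query I have in mind on the exported file (the file name and the 'Velocity' array name below are just placeholders for whatever the export produces):

import vtk

# Load the exported unstructured grid
reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName('soln-100.vtu')
reader.Update()
grid = reader.GetOutput()

# One query point per particle position
pts = vtk.vtkPoints()
pts.InsertNextPoint(0.1, 0.2, 0.0)
particles = vtk.vtkPolyData()
particles.SetPoints(pts)

# Interpolate the grid data onto the query points
probe = vtk.vtkProbeFilter()
probe.SetInputData(particles)
probe.SetSourceData(grid)
probe.Update()

vel = probe.GetOutput().GetPointData().GetArray('Velocity')
print(vel.GetTuple(0))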

Regards,

Eduardo

Thanks for the email.

The polynomial information is stored inside the .pyfrs files. They can be interrogated with e.g. h5py or any other HDF5 wrapper.

Basically, we store the solution value at a set of nodal points within each element, which, when combined with an associated nodal basis, can be used to reconstruct the polynomials within each element.

If you are interested, we can provide some more info about the exact file format, since it is not documented anywhere yet. Also, since elements can be curved, it can be a challenge to work out which element a given physical point is in. And once you have found the element, the mapping may need to be inverted numerically.
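
As a rough sketch of how to start poking around (the file name is a placeholder, and the dataset names and array layout should be checked against your own files):

import h5py
import numpy as np

with h5py.File('soln.pyfrs', 'r') as f:
    # Typically something like ['config', 'stats', 'soln_hex_p0', ...]
    print(list(f))

    # Nodal solution values for one element type in one partition
    soln = np.array(f['soln_hex_p0'])

# First axis: solution points, second: field variables, third: elements
print(soln.shape)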

Peter

Hi Vincent,

Thanks for your reply. I am really interested in this since for large meshes it becomes infeasible to subdivide them into linear elements for post-processing. I think the best thing would be to include this functionality in "pyfr export" and export the polynomial information local to each cell to the VTU file with -d 1. This is desirable because working with the HDF5 format can be really slow to query for spatially dependent data, and in that respect VTK provides space partitioners which are very useful for mapping coordinates to cells efficiently. I am willing to work on this and contribute to the project if you point me to the right place to look and provide detailed information on how the polynomial information is encoded in the HDF5 file and how I can reconstruct it. I think this is a valuable feature to have, and I have seen at least one other message on the mailing list related to this problem [1].

Thanks in advance,

Eduardo

[1] https://groups.google.com/forum/#!searchin/pyfrmailinglist/integration%7Csort:date/pyfrmailinglist/zNq4ooPSgDI/9hDxh88FCQAJ

Hi Eduardo,

> Thanks for your reply. I am really interested in this since for large
> meshes it becomes infeasible to subdivide them into linear elements for
> post-processing. I think the best thing would be to include this
> functionality in "pyfr export" and export the polynomial information
> local to each cell to the VTU file with -d 1. This is desirable because
> working with the HDF5 format can be really slow to query for spatially
> dependent data, and in that respect VTK provides space partitioners
> which are very useful for mapping coordinates to cells efficiently. I am
> willing to work on this and contribute to the project if you point me to
> the right place to look and provide detailed information on how the
> polynomial information is encoded in the HDF5 file and how I can
> reconstruct it. I think this is a valuable feature to have, and I have
> seen at least one other message on the mailing list related to this
> problem [1].

So things are a little bit more subtle than this. Specifically, PyFR
permits elements to be curved. Such elements are typically employed in
the vicinity of boundary layers. A consequence of this is that a -d 1
subdivision does not necessarily constitute an accurate representation
of the geometry.

This greatly complicates the task of finding what element a given point
(x, y, z) is in. Further, even once this has been determined some
additional work is required in order to determine the corresponding
location within the reference element (p,q,r) at which to evaluate the
solution polynomial.
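
To give a flavour of that second step, below is a rough sketch of the kind of Newton iteration one would use to invert the mapping for a single element; the shape-function callables are placeholders rather than anything PyFR exposes directly:

import numpy as np

def phys_to_ref(x, elem_nodes, shape, dshape, p0, tol=1e-10, maxit=25):
    # Solve sum_i N_i(p) X_i = x for the reference coords p.
    #   elem_nodes: (nnodes, ndims) physical node coords of the element
    #   shape(p):   (nnodes,) shape function values at p
    #   dshape(p):  (nnodes, ndims) shape function derivatives at p
    p = np.array(p0, dtype=float)
    for _ in range(maxit):
        r = shape(p) @ elem_nodes - x     # residual in physical space
        if np.linalg.norm(r) < tol:
            break
        J = elem_nodes.T @ dshape(p)      # Jacobian dx/dp
        p -= np.linalg.solve(J, r)
    return p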

In this sense subdivision remains a very good means of bypassing all of
these issues whilst simultaneously enabling one to use traditional
post-processing libraries which are designed for linear elements.

Although pyfr export is relatively slow (in the grand scheme of things)
this does not necessarily need to be the case. In the past we have
written plugins which perform the subdivision extremely efficiently (and
in parallel) without having to write anything to disk. The resulting
in-memory structures can then be passed to VTK or VTK-m for processing.

Regards, Freddie.

Hi Freddie,

Many thanks for the clarification. I think I understand the difficulties, even though I am not working with curved elements for the time being. Nevertheless, I guess I will have to live with this and use the subdivision into linear elements as an acceptable approach for the moment.

BTW, I am having trouble getting the boundary elements from the .VTU files for post-processing. I have checked with ParaView and it seems there is no information about them inside. I would have expected to be able to get them from the "Physical tags" provided in the Gmsh file. Any advice on this?

Regards,

Eduardo

Hi Eduardo,

> BTW, I am having trouble getting the boundary elements from the .VTU
> files for post-processing. I have checked with ParaView and it seems
> there is no information about them inside. I would have expected to be
> able to get them from the "Physical tags" provided in the Gmsh file.
> Any advice on this?

In the PyFR mesh format boundaries are defined as a sequence of faces.
For example, consider the Couette flow test case which comes as part of
PyFR. After importing the .msh file and running h5ls we see:

$ h5ls couette_flow_2d.pyfrm
bcon_bcwalllower_p0 Dataset {8}
bcon_bcwallupper_p0 Dataset {8}
con_p0 Dataset {2, 81}
mesh_uuid Dataset {SCALAR}
spt_quad_p0 Dataset {4, 37, 2}
spt_tri_p0 Dataset {3, 10, 2}

where the relevant boundary regions are given by the bcon_ prefixed
datasets. Inspecting one of these we find:

In : import h5py

In : import numpy as np

In : f = h5py.File('couette_flow_2d.pyfrm', 'r')

In : print(np.array(f['bcon_bcwallupper_p0']))
[(b'quad', 31, 3, 0) (b'quad', 7, 1, 0) (b'quad', 27, 1, 0)
(b'quad', 19, 3, 0) (b'tri', 0, 0, 0) (b'tri', 1, 0, 0)
(b'quad', 17, 0, 0) (b'quad', 32, 2, 0)]

where the tuples are of the form (element type, element number, face
number, unused). So the first member of the 'bcwallupper' boundary is
face 3 of quad 31. From this you can trivially obtain the elements
which border the boundary. Such information, however, is not carried
over to the .VTU file (which only has the concept of sub-divided
elements as opposed to faces).
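
As a rough sketch of how one might gather these with h5py (reusing the
Couette flow file and boundary name from the listing above):

import h5py
import numpy as np
from collections import defaultdict

with h5py.File('couette_flow_2d.pyfrm', 'r') as f:
    bcon = np.array(f['bcon_bcwallupper_p0'])

# Group the (element type, element number, face number) triples by type
faces = defaultdict(list)
for rec in bcon:
    etype, eidx, fidx = rec[0], rec[1], rec[2]
    faces[etype.decode()].append((int(eidx), int(fidx)))

for etype, efs in sorted(faces.items()):
    print(etype, sorted(efs))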

The best solution depends on exactly what information you want from the
boundary. If it is just the solution (and gradients thereof) then this
is something which is not too difficult (for example the fluidforce
plugin --- which is designed to integrate up forces on boundaries ---
does exactly this with minimum fuss and no subdivision).

Regards, Freddie.

Hi Freddie,

Thanks for your fast response! This is very useful.

Regards,

Eduardo