Best practice for PyFR export


Do you have any advice on exporting from .pyfrs to .vtu with an appropriate division flag? At the moment I use the same number of divisions as my polynomial order. Keeping a constant -d 1 would save a lot of space, but could this affect the quality of the results in any way (e.g. plots over lines or integral quantity calculations)? Do you have any suggestions?

Moreover, I noticed that when converting the windowed time-averaged quantities with the same number of divisions as the instantaneous solutions, the converted windowed file grows by roughly 13x, whereas the instantaneous solution grows by roughly 1.5x (for the same -d flag). Is this normal, and why?


I wouldn’t use division and would instead use the high-order VTU option. This will also reduce the export time, the memory usage in ParaView, and the .vtu file size.
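For reference, the two export modes might be invoked as below. This is only a sketch: the file names are placeholders, and the `-k` flag for high-order output is an assumption that may differ between PyFR versions, so verify against `pyfr export --help` on your install.

```shell
# Subdivided export: tessellates each element into linear sub-cells;
# file size grows rapidly with the divisor.
pyfr export -d 4 mesh.pyfrm soln.pyfrs soln-div.vtu

# High-order export: writes curved VTU cells directly, which ParaView
# renders natively and which is far smaller on disk.
# (-k assumed here; check pyfr export --help for your version.)
pyfr export -k 4 mesh.pyfrm soln.pyfrs soln-ho.vtu
```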

As to your point about the time averager, I have no idea why that might be. I’ll look into it.

Let me know about the time averager

Do you have the exact sizes of the relevant .pyfrs files, the commands used, and the sizes of the resulting .vtu files? Are you also 100% sure that both files were computed at the same polynomial order?

Regards, Freddie.

Hi @fdw,

I’ve noticed this behaviour with small files: in my case the windowed .pyfrs file was 2 MB while the converted .vtu reached ~120 MB.
I’ll let you know in a couple of days whether this is consistent with bigger files.

Lastly, a question regarding windowed and continuous time averaging: say I have a simulation spanning t = [0, 4] s.

If I use window averaging with dt-out = 2 s to compute Reynolds stresses, PyFR will use for the first window the mean value (say of U) over t = [0, 2] s, and for the second window the mean value of U over t = [2, 4] s.

If I use continuous averaging, it will use the mean value (of U again) over t = [0, 4] s.

Am I correct?
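This reading can be checked numerically. Below is a stdlib-only sketch with a synthetic signal (`u` is a hypothetical sampled velocity, not PyFR data): with equal-length windows the window means recombine exactly into the continuous mean, but each window's u'u' subtracts its *own* window mean, which is the source of the bias concern.

```python
import random

random.seed(0)
# Hypothetical velocity signal sampled uniformly on t = [0, 4] s.
u = [random.gauss(1.0, 0.2) for _ in range(4000)]

mean = lambda xs: sum(xs) / len(xs)

# Continuous averaging: one mean over the whole interval t = [0, 4].
u_cont = mean(u)

# Windowed averaging with dt-out = 2 s: one mean per window.
w1, w2 = u[:2000], u[2000:]
u_w1, u_w2 = mean(w1), mean(w2)

# Equal-length window means combine exactly into the continuous mean.
assert abs(0.5 * (u_w1 + u_w2) - u_cont) < 1e-9

# But u'u' per window subtracts the *window* mean, so each window
# measures fluctuations about its own mean, not the global one.
upup_w1 = mean([x * x for x in w1]) - u_w1 ** 2
upup_cont = mean([x * x for x in u]) - u_cont ** 2
```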

If my guess is right, and I want unbiased turbulent statistics, I need to use continuous averaging. But there is no way of knowing a priori when I should start collecting them, hence I cannot set tstart = value for continuous averaging ahead of time. Do you have any suggestion for overcoming this, other than restarting with continuous averaging? Perhaps the user could specify a batch of candidate tstart values and, at the end, choose which one to retain.

What do you think?

This by itself does not mean much: pass a high enough sub-division level and an arbitrarily large file size is possible.
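A back-of-envelope estimate makes the scaling concrete. For a hexahedral element, subdividing with divisor N yields on the order of (N+1)³ points and N³ linear sub-cells per element; this is an approximation for intuition, not PyFR's exact file layout.

```python
# Rough per-hex cost of exporting with divisor N: (N+1)^3 points and
# N^3 linear sub-cells. Actual .vtu sizes also depend on the number of
# fields, precision, and connectivity encoding.
def hex_cost(n):
    points = (n + 1) ** 3
    cells = n ** 3
    return points, cells

for d in (1, 2, 4, 8):
    p, c = hex_cost(d)
    print(f"-d {d}: {p:4d} points, {c:3d} sub-cells per hex")
```

The point counts grow cubically with the divisor, which is why a large `-d` can inflate a small .pyfrs file into a very large .vtu.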

Regards, Freddie.

What about averages?


The average files are handled in the same way as the solution files. So the same rules apply.

I was referring to this, not to exporting. Sorry if I wasn’t specific.


Maybe the best course of action is to use windowed averaging, and then merge the averages after the run. That way you can decide which windows to include. We have been working on a merge feature that isn’t quite ready yet, but it is straightforward enough to do yourself.

Hi @WillT,

I had already come up with the same idea of averaging multiple windows, but this will produce biased statistics. To actually compute them, you need the averaged value over the entire averaging period and then average the fluctuations. This is not the same thing as averaging multiple windows.

Is allowing a batch of tstart values viable? Or easy to attain by digging into the code?


I don’t follow; why doesn’t this approach work for you?

Say you are using the following config for your tavg plugin:

avg-u = u
avg-v = v
avg-uu = u*u
avg-vv = v*v
avg-uv = u*v

fun-avg-upup = uu - u*u
fun-avg-vpvp = vv - v*v
fun-avg-upvp = uv - u*v

If you were to merge windowed average files you would simply merge the avg- terms and then recompute the functional terms fun-avg-. This is exactly what I did here: PyFR/ at 7ae172f2e1b85a0739d17cc04adf90ca8f775cb3 · WillTrojak/PyFR · GitHub
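The arithmetic behind that merge can be sketched as below. This is a simplified illustration (hypothetical dicts standing in for the `avg-` fields, not the actual .pyfrs layout or the linked plugin code): the raw moments are combined with time-weights, and only then is the functional `fun-avg-` term recomputed from the merged moments, so the Reynolds stress uses the mean over the whole period and is not biased by per-window means.

```python
# Merge windowed tavg records: time-weight the raw avg- moments,
# then recompute the fun-avg- terms from the merged moments.
def merge_windows(windows):
    # windows: list of (duration, {'u': ..., 'uu': ...}) pairs
    total = sum(dt for dt, _ in windows)
    keys = windows[0][1].keys()
    merged = {k: sum(dt * w[k] for dt, w in windows) / total for k in keys}
    # Reynolds stress from the *merged* mean, not the window means.
    merged['upup'] = merged['uu'] - merged['u'] ** 2
    return merged

w1 = (2.0, {'u': 1.0, 'uu': 1.2})   # window on t = [0, 2]
w2 = (2.0, {'u': 1.4, 'uu': 2.1})   # window on t = [2, 4]
print(merge_windows([w1, w2]))
```

Note that naively averaging the two per-window u'u' values (1.2 − 1.0² and 2.1 − 1.4²) would give a different, smaller number, since it misses the contribution of the window means varying about the global mean.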

That plugin class uses the new addition of CLI plugins to the develop branch. The standard deviation isn’t quite right at the moment but the averages are correct.