Do you have any advice regarding the export from .pyfrs to .vtu and setting an appropriate division flag? At the moment I use the same number of divisions as my polynomial order. Keeping a constant -d 1 would save a lot of space, but could this affect the quality of the results in any way (e.g. plots over lines or calculations of integral quantities)? Do you have any suggestions?
Moreover, I noticed that when converting the windowed time-averaged quantities with the same number of divisions as the instantaneous solutions, the converted windowed file grows in size by about 13x, whilst the instantaneous solution grows by roughly 1.5x (for the same -d flag). Is this normal, and why?
I wouldn’t use division and would instead use the high-order VTU option. This will also reduce the export time, reduce memory usage in ParaView, and reduce the .vtu file size.
As to your point about the time averager, I have no idea why that might be. I’ll look into it.
Do you have the exact sizes of the relevant .pyfrs files, the commands used, and the sizes of the resulting .vtu files? Are you also 100% sure that both files were computed at the same polynomial order?
I’ve noticed this behaviour with small files: in my case the windowed .pyfrs file was 2 MB while the converted .vtu reached ~120 MB.
I’ll let you know in a couple of days whether this is consistent with bigger files.
Lastly, a question regarding windowed and continuous time averaging: say I have a simulation spanning t = [0, 4] s.
If I use windowed averaging with dt-out = 2 s to compute Reynolds stresses in PyFR, it will use the mean value (say of U) over t = [0, 2] s for the first window and the mean value of U over t = [2, 4] s for the second window.
If I use continuous averaging, it will use the mean value (of U again) over t = [0, 4] s.
Am I correct?
If my guess is right, then to get unbiased turbulent statistics I need to use continuous averaging, but there is no way to know a priori when I should start collecting them, so I cannot set tstart to a value in advance for continuous averaging. Do you have any suggestion for overcoming this other than restarting with continuous averaging? Maybe allow the user to specify a batch of candidate tstart values and, at the end, choose which one to retain.
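To make sure I am describing the two modes correctly, here is a small numpy sketch of the arithmetic I have in mind (the signal and numbers are made up, purely to illustrate the windows):

```python
import numpy as np

# Made-up uniformly sampled signal over t = [0, 4] s, purely illustrative.
rng = np.random.default_rng(0)
u = np.sin(np.linspace(0.0, 8.0 * np.pi, 4000)) + 0.1 * rng.standard_normal(4000)

# Windowed averaging with dt-out = 2 s: one mean per window.
u_w1 = u[:2000].mean()   # mean of U over t = [0, 2] s
u_w2 = u[2000:].mean()   # mean of U over t = [2, 4] s

# Continuous averaging: a single mean over t = [0, 4] s.
u_cont = u.mean()

# For equal-length windows the two window means average to the continuous mean.
print(u_w1, u_w2, 0.5 * (u_w1 + u_w2), u_cont)
```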
Maybe the best course of action is to use windowed averaging and then merge the averages after the run. That way you decide which windows to include. We have been working on a merge feature that isn’t quite ready yet, but it is straightforward enough to do yourself.
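For what it’s worth, a minimal sketch of the merge step, assuming the window-averaged fields have already been loaded into numpy arrays (how you read them out of the .pyfrs files depends on your case; the names here are just placeholders):

```python
import numpy as np

def merge_windows(fields, durations):
    """Duration-weighted merge of window-averaged fields.

    fields    -- list of numpy arrays of identical shape, one per window
    durations -- corresponding window lengths (with a fixed dt-out, all equal)
    """
    w = np.asarray(durations, dtype=float)
    w /= w.sum()
    return sum(wi * f for wi, f in zip(w, fields))

# e.g. two 2 s windows covering t = [0, 4] s:
# u_merged = merge_windows([u_avg_w1, u_avg_w2], [2.0, 2.0])
```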
I had already come up with the same idea of averaging multiple windows, but this will produce biased statistics. To compute them properly you need the averaged value over the entire averaging period and then average the fluctuations about it. This is not the same thing as averaging multiple windows.
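Concretely, with a toy numpy example (made-up numbers, not PyFR output): averaging the per-window Reynolds stresses is not the same as forming the fluctuations about the full-period mean; the latter needs the windows’ first and second moments:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(4000) + np.linspace(0.0, 1.0, 4000)  # signal with a slow drift
w1, w2 = u[:2000], u[2000:]                                   # two equal windows

# Biased: average per-window Reynolds stresses (fluctuations about each
# window's own mean), which misses the window-to-window variation of the mean.
rs_windows = 0.5 * (w1.var() + w2.var())

# Unbiased: merge the first and second moments over the full period, then
# form the fluctuation statistics about the full-period mean.
u_mean = 0.5 * (w1.mean() + w2.mean())
uu_mean = 0.5 * ((w1**2).mean() + (w2**2).mean())
rs_full = uu_mean - u_mean**2   # identical to u.var() over the whole record

print(rs_windows, rs_full, u.var())
```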
Is allowing a batch of tstart values viable? Or is it easy to achieve by digging into the code?
That plugin class uses the newly added CLI plugin support on the develop branch. The standard deviation isn’t quite right at the moment, but the averages are correct.