There is a paper that studied the use of single and double precision arithmetic for simulation accuracy. The results in this paper suggest that single precision arithmetic is sufficient.

However, most of the comparisons are based on low-order statistics, such as the mean velocity. I wonder whether the precision has a significant impact on the accuracy of higher-order statistics (the Reynolds stresses, for example). Is there any related work?

The reason I raise this question is that statistical errors may still exist in high-order statistics even when the low-order statistics agree well with high-fidelity results. See this paper, please.

As with so many things in CFD, the answer is that it depends. Single precision has a unit roundoff of 2^{-24}, whereas double precision has 2^{-53}. In practice this means that with double precision you can perform far more accumulations before the rounding error grows to a given size.
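A quick sketch of this effect (assuming NumPy is available; the sample count and increment value are made up for illustration): naively accumulating the same value a million times drifts badly in single precision but barely at all in double.

```python
import numpy as np

# Machine epsilon (spacing at 1.0); the unit roundoff is half of this,
# i.e. 2**-24 for single precision and 2**-53 for double.
assert np.finfo(np.float32).eps == 2.0**-23
assert np.finfo(np.float64).eps == 2.0**-52

# Naively accumulate one million samples of u = 0.1 in each precision
n = 1_000_000
u32 = np.float32(0.1)
acc32 = np.float32(0.0)
acc64 = 0.0
for _ in range(n):
    acc32 += u32
    acc64 += 0.1

exact = 0.1 * n
print(abs(acc32 - exact) / exact)  # relative error many orders of magnitude
print(abs(acc64 - exact) / exact)  # larger in single than in double
```

The single precision accumulator drifts because, once the running sum is large, each increment of 0.1 is only a handful of ulps and rounds with a systematic bias.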

To calculate the Reynolds stresses, or the energy budget in general, you end up needing accumulated averages of products such as avg(uu) and avg(uuu). When calculating u^2 the relative error is 2ε, so not only is the absolute error amplified by the squaring, it is also twice as large. The point is that when you then accumulate u^2, the error can quickly become of the order of u when working at single precision. How much this matters will depend on how long you need to average for.
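As an illustration (a toy signal, not PyFR code; the mean level and fluctuation amplitude are made-up values), computing a Reynolds stress as avg(uu) − avg(u)^2 loses most of its accuracy in single precision, because the small fluctuation statistic is recovered by cancelling two large numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
# Toy "turbulent" signal: mean U0 = 10 with fluctuations of std 0.01
# (values chosen purely for illustration)
u64 = 10.0 + 0.01 * rng.standard_normal(n)
u32 = u64.astype(np.float32)

def reynolds_stress(u):
    # <u'u'> computed naively as <uu> - <u>^2 in the array's own precision
    return (u * u).mean() - u.mean() ** 2

# Well-conditioned double precision reference: <(u - <u>)^2>
ref = ((u64 - u64.mean()) ** 2).mean()

err32 = abs(float(reynolds_stress(u32)) - ref) / ref
err64 = abs(float(reynolds_stress(u64)) - ref) / ref
print(err32, err64)  # single precision error is orders of magnitude larger
```

The larger the mean flow relative to the fluctuations, and the longer the averaging window, the worse the single precision result becomes.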

I note that when accumulating time-average statistics PyFR does all accumulation at double precision, irrespective of the working precision for the simulation. However, in order to save on storage space the time average files are written out in single precision by default (although this can be changed with an option).

Hi all,
I have a side question here. It is rather a trivial one, though. I just want to make sure.

I tend to start simulations with single precision so that I can quickly wash away the initial transients (simply because SP runs faster). After some time, I restart with DP and, some convection times later, finally start collecting statistics in DP. Is it possible to restart flawlessly from another precision? PyFR casts the precision to another one when needed, doesn’t it?

Yes, PyFR will recast the numbers when you change the precision in the inifile. Something to bear in mind is that the set of single precision numbers is a subset of the set of double precision numbers, so you can go from single to double ‘flawlessly’ as you say. However, when you go from double to single there will be some loss.
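For reference, the precision is set via the `precision` key in the `[backend]` section of the inifile, so the restart workflow above only needs that one key changed between runs (a minimal sketch):

```ini
[backend]
; was 'single' while washing out the initial transients
precision = double
```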

A further point to bear in mind is that the accumulation in the time averager is always performed in double precision and only cast to the relevant precision afterwards. This is done to reduce the errors incurred during accumulation.