Time derivatives of velocities and pressure for ac-navier-stokes

Is there a readily available/suggested method for computing the time derivatives of the velocities and pressure for the ac-navier-stokes solver? I can access the spatial derivatives from the grad_soln class member of the integrator, but I can’t seem to find a similar class member/method for obtaining the time derivatives. Is it necessary to approximate these manually using a finite difference method?

There isn’t a kernel that already exists to do what you want, but you could write something. It will of course depend on whether you are using explicit or implicit time stepping.

For explicit and implicit RK this can be done using the substeps. For the BDF method, this can be done using the time history.

However, it does beg the question: why do you want the time derivative?

For the BDF method, this can be done using the time history.

I’m assuming by this you mean computing du/dt using finite differences at each solution point?

However, it does beg the question: why do you want the time derivative?

I’m doing some flow reconstruction work. I need the time derivatives to compute residuals (based on the prediction of the flow reconstruction model) as I currently predict single snapshots as opposed to entire time histories.

Yes, for BDF if you just want du/dt you can use finite differences to get it.
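As a minimal sketch of that idea, assuming integ.soln and integ.tcurr behave as in current PyFR (a list of per-element-type arrays and the current physical time, respectively; check your version), something along these lines would give a first-order backward-difference estimate each time it is called:

```python
import numpy as np


class TimeDerivEstimator:
    """Backward-difference estimate of d(soln)/dt between successive calls.

    Assumes integ.soln is a list of per-element-type arrays and integ.tcurr
    is the current physical time; the finite difference is taken entrywise,
    so the exact axis ordering of the arrays does not matter here.
    """

    def __init__(self):
        self._prev_t = None
        self._prev_soln = None

    def __call__(self, integ):
        t = integ.tcurr
        soln = [np.array(s, copy=True) for s in integ.soln]

        if self._prev_soln is None:
            # No history yet; nothing to difference against
            dsdt = None
        else:
            dt = t - self._prev_t
            # First-order backward difference: (u_n - u_{n-1}) / dt
            dsdt = [(s - p) / dt for s, p in zip(soln, self._prev_soln)]

        self._prev_t, self._prev_soln = t, soln

        return dsdt
```

Calling this once per physical time step, e.g. from a small custom plugin, gives you du/dt (and dp/dt) at every solution point; a second-order backward difference is a straightforward extension if you keep two previous snapshots.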

If you are after the residual, however, I think the functionality you want already exists. For example, see the pseudostats plugin and this: PyFR/pseudocontrollers.py at d37dc343ec700ec1542146ed8809639ba116f417 · PyFR/PyFR · GitHub
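For reference, enabling that plugin is just a small config section along these lines (the values here are illustrative):

```ini
[soln-plugin-pseudostats]
flushsteps = 500
file = pseudostats.csv
header = true
```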

Thanks for the confirmation regarding using finite differences.

I am aware of soln-plugin-pseudostats. However, I meant using du/dt and so forth to compute residuals given ML-model predictions for some of the solution fields (e.g. use an ML-predicted u together with the PyFR v and p to compute the residuals). Hence the need to obtain du/dt.

I could actually use help with a couple of further points:

  1. I have now noticed that if I compute the continuity-equation residual using the du/dx and dv/dy values from integ.grad_soln (roughly as in the sketch below this list), the residual values are actually rather high. I suspect this is due to the artificial compressibility formulation. Is the du/dx + dv/dy method not valid for the continuity residual of the ac-navier-stokes result?
  2. This is not quite critical, but is there also a way to access/save the solution’s second derivatives such as d^2u/dx^2?
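For reference, this is roughly what I am doing for point 1 (a sketch only; I am assuming the per-element-type arrays from integ.grad_soln are shaped (nupts, ndims, nvars, neles) and that the AC solver orders the variables as p, u, v, so please correct me if either assumption is wrong):

```python
import numpy as np

# 'integ' is an existing PyFR integrator instance, as in my first post
div_abs_sum, npts = 0.0, 0

for g in integ.grad_soln:
    # Assumed layout: g[point, derivative direction, variable, element]
    # with variables ordered (p, u, v), so u is index 1 and v is index 2
    dudx = g[:, 0, 1, :]
    dvdy = g[:, 1, 2, :]

    div = dudx + dvdy

    div_abs_sum += np.abs(div).sum()
    npts += div.size

print('Mean pointwise |du/dx + dv/dy| =', div_abs_sum / npts)
```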

On your first point, if I understand correctly, you are looking at the divergence of the velocity field and it is higher than you expect? This will be entirely dependent on how you set up your case; for example, if pseudo-resid-tol=1 then it won't be very good.

We actually have a paper that we are about to submit studying aspects of this in more detail. But say for a TGV at Re=1600, I get a value of

\frac{1}{U_0|\Omega|}\int_\Omega|\nabla\cdot\mathbf{u}|\mathrm{d}\mathbf{x}

of about 1e-2. This might seem high, but it is ok from a physics point of view. Note that the same functional without the absolute value is often not far from machine precision. If you wanted certifiably incompressible results, this would be achievable, but you would need tighter tolerances and a higher ac-zeta.

On your second point, no, but you could add something to do this, using the method for the gradients as a starting point.

On your first point, if I understand correctly, you are looking at the divergence of the velocity field and it is higher than you expect?

Yes. I did not evaluate the expression in your post, but as a simpler measure the average of the pointwise absolute values of du/dx + dv/dy in my case (flow past a slightly deformed circle at Re=2000 or so) was about 0.90. I used pseudo-resid-tol=1e-12 and pseudo-niters-max=3. ac-zeta under constants was set to 2.5. (I took these values from the incompressible example in the PyFR documentation). I suppose increasing pseudo-niters-max and ac-zeta would help in this case?

On your second point, no, but you could add something to do this, using the method for the gradients as a starting point.

Thanks. I’ll look into it.

Yes, a maximum of 3 iterations is very low. If I have a well-tuned p-multigrid cycle I will typically set this to 100, although after the initial transients have died down maybe only 2-10 iterations will actually be needed to reach the tolerance. An indicator that the cycle is well-tuned is that the number of iterations used remains approximately constant. If after a while the number of iterations increases to the max and doesn't reduce, then something is off; normally your tolerance isn't tight enough. However, we are working on procedures to automate aspects of this.

If you aren’t using multigrid then far more steps will be needed.

Thanks, I’m re-running my case with a higher pseudo-niters-max and ac-zeta. Unfortunately, though, for a 2D case with just 20,000 or so mesh cells this leads to a predicted runtime of 65 hours, going by the PyFR progress bar. I’m using an Nvidia V100 GPU, and the utilization is just 33% or so. Is there any way I can change some settings to, e.g., improve the GPU utilization and complete the simulation faster?

Locally adaptive pseudo-time stepping and p-multigrid are advised. Niki reported an 11x speedup on a case he ran.
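As a rough starting point, the relevant sections could look something along these lines (adapted from the incompressible examples in the PyFR documentation; the values are illustrative, the multigrid cycle assumes order = 3, and option names may differ slightly between versions):

```ini
[solver-time-integrator]
formulation = dual
scheme = bdf2
pseudo-scheme = rk45
controller = none
pseudo-controller = local-pi
tstart = 0.0
tend = 100.0
dt = 0.0005
pseudo-dt = 0.0005
pseudo-niters-min = 3
pseudo-niters-max = 100
pseudo-resid-norm = l2
pseudo-resid-tol = 1e-8
atol = 1e-1
pseudo-dt-max-mult = 2.5

[solver-dual-time-integrator-multip]
pseudo-dt-fact = 1.75
cycle = [(3, 1), (2, 1), (1, 1), (0, 2), (1, 1), (2, 1), (3, 3)]
```

The pseudostats plugin mentioned earlier will then let you monitor how many pseudo-iterations each physical step actually uses.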

The BDF2 and higher schemes have the issue of initialisation of the time history. This leads to poor convergence initially, which in turn leads to the runtime being overpredicted. Off the top of my head, for the TGV this typically lasts for the first 0.25s or so, depending on dt. After that the runtime will reduce. I have seen this effect reduced by DIRK schemes; Semih has implemented these and they should be merged into develop fairly soon.