I just wonder why the std time integrator cannot be used with the ac-euler or ac-navier-stokes solvers. Is dual-time stepping compulsory for incompressible solvers?

Additionally, does PyFR have any “do nothing” boundary conditions?

I just wonder why the std time integrator cannot be used with the ac-euler
or ac-navier-stokes solvers. Is dual-time stepping compulsory for
incompressible solvers?

It is a requirement of the artificial compressibility approach, which is
the technique PyFR uses to solve the incompressible Navier-Stokes equations.

Additionally, does PyFR have any "do nothing" boundary conditions?

If by "do nothing" you mean a symmetry condition, then yes, most solvers
have those.

Does it mean that for a fully incompressible fluid, the artificial compressibility should be "ac-zeta = 0.0"?

Some other questions.

How can I continue my simulation from the latest time step?

Is it possible to map the flow field from one case to another, even if the meshes differ? (Something like "mapFields" in OpenFOAM)

Is there any command in PyFR to export all .pyfrs files to .vtu files at once, rather than exporting them one time step at a time?

The output file is ".vtu", which cannot be read in a text editor. Is there any command to export a readable text format such as VTK, CSV, or plain text for the entire flow field for post-processing? (Pretty similar to the output "U" and "p" files in OpenFOAM)

I am not quite sure what "[solver-dual-time-integrator-multip]" does in dual-time-stepping solvers, since it can be deleted and the simulation becomes much faster when it is removed.

Does it mean that for a fully incompressible fluid, the artificial compressibility should be "ac-zeta = 0.0"?

No. You need some artificial compressibility for the approach to work.
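To see why, here is a rough sketch of the artificial compressibility idea (the notation is illustrative and may differ from PyFR's exact formulation):

```latex
% The divergence constraint \nabla\cdot\mathbf{u} = 0 is replaced by a
% pseudo-time evolution equation for the pressure, with artificial
% compressibility \zeta > 0:
\frac{\partial p}{\partial \tau} + \zeta\,\nabla\cdot\mathbf{u} = 0
% With \zeta = 0 the pressure never updates in pseudo-time, so the
% constraint cannot be enforced; as the pseudo-time iterations converge,
% \partial p/\partial\tau \to 0 and \nabla\cdot\mathbf{u} = 0 is recovered.
```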

How can I continue my simulation from the latest time step?

You can restart from a .pyfrm/.pyfrs file (pyfr restart mesh.pyfrm solution.pyfrs) - see the User Guide online.
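For concreteness, a minimal sketch of restarting from the most recent snapshot (the mesh name and the .pyfrs naming pattern are placeholders; `sort -V` is GNU version sort, which orders numbered snapshots correctly):

```shell
# Hedged sketch: pick the latest .pyfrs snapshot and restart from it.
# mesh.pyfrm and the snapshot naming are assumptions for illustration.
latest=$(ls -1 *.pyfrs 2>/dev/null | sort -V | tail -n 1)
if [ -n "$latest" ] && command -v pyfr >/dev/null; then
    pyfr restart mesh.pyfrm "$latest"
fi
```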

Is it possible to map the flow field from one case to another, even if the meshes differ? (Something like "mapFields" in OpenFOAM)

No.

Is there any command in PyFR to export all .pyfrs files to .vtu files at once, rather than exporting them one time step at a time?

There have been other threads on this. For various reasons, it is best just to write a quick Bash script.
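Such a script might look like the sketch below; the mesh name and the exact `pyfr export` invocation are assumptions here, so check `pyfr export --help` for your installed version:

```shell
# Hedged sketch: convert every .pyfrs snapshot in the current
# directory to a .vtu file with `pyfr export`.
for s in *.pyfrs; do
    [ -e "$s" ] || continue              # no snapshots in this directory
    command -v pyfr >/dev/null || break  # pyfr not on PATH
    pyfr export mesh.pyfrm "$s" "${s%.pyfrs}.vtu"
done
```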


I am not quite sure what "[solver-dual-time-integrator-multip]" does in dual-time-stepping solvers, since it can be deleted and the simulation becomes much faster when it is removed.

In the artificial compressibility formulation, you need dual time stepping to recover time accuracy. In short, the physical time is discretised with a BDF scheme, whose solution at each physical time step is converged by explicit pseudo-time stepping in fictitious time. The analogy to OpenFOAM is that the pseudo-time stepping plays the role of solving the Poisson equation for pressure, but in a more local way.
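A rough sketch in symbols, taking BDF2 as an example (the notation is illustrative, not PyFR's exact formulation):

```latex
% Physical time: BDF2, with R(u) the spatial (flux reconstruction) residual
\frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\,\Delta t} = R\left(u^{n+1}\right)
% u^{n+1} is obtained by marching an inner pseudo-time \tau to steady state:
\frac{\partial u^{n+1}}{\partial \tau}
  = R\left(u^{n+1}\right) - \frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\,\Delta t}
% When the pseudo-time derivative vanishes, the BDF2 relation holds.
```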

[solver-dual-time-integrator-multip] enables polynomial multigrid acceleration in pseudo-time. If it is enabled, you perform pseudo-niters-max multigrid cycles within each physical time step. If you remove the multip section, you instead perform pseudo-niters-max plain iterations, not cycles. So if you keep pseudo-niters-max constant, removing the section makes the simulation quicker per time step, but your convergence is much worse.
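For reference, a configuration sketch in PyFR's ini style; the key names and cycle values here are illustrative only, so consult the PyFR User Guide for your version before using them:

```ini
# Illustrative sketch only: enables p-multigrid in pseudo-time.
[solver-time-integrator]
formulation = dual
pseudo-niters-max = 3

# Removing this section falls back to plain pseudo-time iterations.
[solver-dual-time-integrator-multip]
pseudo-dt-fact = 1.75
cycle = [(3, 1), (2, 1), (1, 1), (0, 2), (1, 1), (2, 1), (3, 3)]
```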

Hi,
I am investigating using PyFR for cardiovascular simulations, where the use of GPUs is desired.
What is the reasoning behind implementing the artificial compressibility method versus other incompressible approaches?

Does it lend itself particularly well to the flux reconstruction method, or does it parallelise more easily? I have been reading about the method, but I would appreciate being pointed to more information.
Regards,

The main reason for the artificial compressibility method is that it is well-suited to modern parallel platforms, which have an abundance of compute capability relative to memory bandwidth. When you discretise it with flux reconstruction in space and explicit dual time stepping in time, the majority of operations can be cast as matrix-matrix multiplications. Pressure-based algorithms (Poisson equation) and fully implicit time stepping tend to require more coupling between elements, which introduces memory indirection. Moreover, many linear solvers are not scale invariant, and increasing parallelism can decrease the efficiency of the preconditioner. Finally, the flux Jacobian matrices in 3D at higher orders are very large, which can limit the problem size, especially on GPUs.

In summary, we are developing the solver to maximise local computation. There are several acceleration techniques that can be added without compromising parallel efficiency. For instance, the polynomial multigrid that has already been implemented gives a 3.5x speed-up compared to pseudo-time stepping only at the highest polynomial level. Other explicit acceleration techniques will be added in future releases.

Is there any benchmark on choosing the maximum number of pseudo-time iterations with accuracy in mind? There seems to be a strong connection between the maximum number of iterations and the real calculation time for each physical time step.