Pseudostats residual convergence

Hi,
I’m running the SD7003, Re = 60,000 case on one GPU (RTX 3080 Ti) using a p3 grid.
For the first 10 s I used the backward-euler scheme & euler pseudo-scheme in [solver-time-integrator];
then from 10 to 14 s I used the sdirk33 scheme & rk34 pseudo-scheme.

Q1)
[screenshot: pseudostats residual history]

I got pseudostats residual from [soln-plugin-pseudostats]
From 0 to 10 s, I think these small values are reasonable, but from 10 to 14 s the residual grows exponentially to very high values. Is this reasonable? Why is it so much higher?

I only changed the scheme & pseudo-scheme and expected a lower residual than with the euler scheme, but instead it is much higher.
On the other hand, the fluid forces look converged.
[screenshot: force comparison]

Q2)
From 0 to 10 s, the pseudostats residual has one value at each time,
but from 10 to 14 s it has three values at each time.
Why? Is it related to the sdirk33 or rk34 scheme?

Q3)
I read some papers on PyFR:
High-Order Implicit Large-Eddy Simulations of Flow over a NACA0021 Aerofoil
Heterogeneous computing on mixed unstructured grids with PyFR

They show time-span-averaged velocity profiles and pressure distributions.
How can I compute the span average, in detail?

Given that \nabla\cdot\mathbf{u} scales linearly with the pressure residual, a value that high is not good.
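For reference, in the artificial compressibility formulation the continuity equation is marched in pseudo-time as

\frac{\partial p}{\partial \tau} + \zeta\,\nabla\cdot\mathbf{u} = 0,

where \zeta is ac-zeta, so the pressure residual is a direct measure of the remaining divergence error.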

To give you a more complete answer I would need a bit more information; can you share your ini file? You can do this inline with the markdown code rendering.

Also, out of interest: when I ran this case I used 32 V100 GPUs and a mesh with ~17M DoF. What mesh are you using? You must be seriously under-resolved. Maybe check out this paper: https://arxiv.org/pdf/2111.07915v1.pdf

In response to question 2: SDIRK schemes involve taking substeps, and we output a residual at each of these. Formally, the time of each substep could be calculated from the Butcher tableau, but because the substeps aren’t at the final order of accuracy of the DIRK scheme we just say that they are at the same physical time; they are simply useful for seeing what’s going on.
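Schematically (these are not the actual sdirk33 coefficients), a three-stage DIRK scheme has a Butcher tableau of the form

\begin{array}{c|ccc}
c_1 & a_{11} & & \\
c_2 & a_{21} & a_{22} & \\
c_3 & a_{31} & a_{32} & a_{33} \\
\hline
& b_1 & b_2 & b_3
\end{array}

with substep i formally at t_n + c_i\,\Delta t; one pseudo-residual is reported per stage, which is why you see three values per physical time.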

In response to question 3: to get time- and span-averaged data, you will first need to use the time averager, and then do some post-processing on the resulting files. The time-average files can be opened just like the instantaneous files, i.e. use pyfr export ... and then open them in something like Paraview.
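A minimal sketch of a time averager section (the intervals and averaged fields here are illustrative; check the PyFR documentation for your version):

[soln-plugin-tavg]
nsteps = 10 ;sample the solution every 10 steps
dt-out = 1.0 ;write an averaged file every 1.0 time units
basedir = .
basename = average-{t:.2f}
avg-u = u ;expressions to accumulate
avg-v = v
avg-w = w
avg-p = p

The resulting .pyfrs files can then be exported in the usual way, e.g. pyfr export mesh.pyfrm average-14.00.pyfrs average.vtu (file names illustrative).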

I made my mesh using Pointwise (7M DoF; a reduced example case because of the 10 GB GPU memory limit).
I attached my ini file. (How can I attach an ini or msh file directly?)


[backend-cuda]
device-id = round-robin
mpi-type = standard

[constants]
;gamma = 1.4
;Pr = 0.72 
nu = 1.6667e-5
Uin = 0.9902681
Vin = 0.1391731
w = 0
Pc = 1
ac-zeta = 2.5 ;3

[solver]
system = ac-navier-stokes ;incompressible n-s
order = 3 ;order of polynomial solution basis
;anti-alias = flux

[solver-time-integrator]
formulation = dual
scheme = sdirk33 ;backward-euler
pseudo-scheme = rk34 ;euler
tstart = 0.0
tend = 14
dt = 0.00005
pseudo-dt = 0.000005
controller = none
pseudo-niters-min = 1
pseudo-niters-max = 1
pseudo-resid-norm = uniform ;l2
pseudo-resid-tol = 1e-8
pseudo-controller = none ;local-pi
;atol = 1e-6
;safety-fact = 0.9
;min-fact = 0.3 ;0.99
;max-fact = 2.5 ;1.005
;pseudo-dt-max-mult = 2.5

[solver-dual-time-integrator-multip]
pseudo-dt-fact = 2.3 ;pseudo-dt scaling between multigrid levels
;cycle = [(1, 1)]
;cycle = [(2, 1), (1, 1), (0, 2), (1, 1), (2, 1)]
;cycle = [(3, 1), (2, 1), (1, 1), (0, 2), (1, 1), (2, 1), (3, 3)]
cycle = [(3, 1), (2, 2), (1, 4), (0, 8), (1, 4), (2, 2), (3, 1)] ;V-cycle of (level, steps) pairs

[solver-interfaces]
riemann-solver = rusanov
ldg-beta = 0.5 ;idk
ldg-tau = 0.1 ;idk


[solver-interfaces-line]
flux-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre

[solver-interfaces-tri]
flux-pts = williams-shunn
quad-deg = 10
quad-pts = williams-shunn

[solver-interfaces-quad]
flux-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre

[solver-elements-tri]
soln-pts = williams-shunn

[solver-elements-quad]
soln-pts = gauss-legendre

[solver-elements-hex]
soln-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre

[solver-elements-tet]
soln-pts = shunn-ham
quad-deg = 10
quad-pts = shunn-ham

[solver-elements-pri]
soln-pts = williams-shunn~gauss-legendre
quad-deg = 10
quad-pts = williams-shunn~gauss-legendre

[solver-elements-pyr]
soln-pts = gauss-legendre
quad-deg = 10
quad-pts = witherden-vincent



[soln-plugin-writer]
dt-out = 0.02
basedir = .
basename = {t:.4f}

[soln-plugin-nancheck]
nsteps = 10

[soln-plugin-pseudostats]
flushsteps = 10
file = residual.csv
header = true

[soln-plugin-fluidforce-upper]
nsteps = 10
file = upper_forces.csv
header = true
morigin = (0.0, 0.0, 0.0)

[soln-plugin-fluidforce-lower]
nsteps = 10
file = lower_forces.csv
header = true
morigin = (0.0, 0.0, 0.0)


[soln-bcs-inlet]
type = ac-char-riem-inv
;rho = 0.0
u = Uin
v = Vin
w = 0
p = Pc
ac-zeta = 2.5 ;3

[soln-bcs-outlet]
type = ac-char-riem-inv
;rho = 0.0
u = Uin
v = Vin
w = 0
p = Pc
ac-zeta = 2.5 ;3

[soln-bcs-upper]
type = no-slp-wall

[soln-bcs-lower]
type = no-slp-wall

[soln-ics]
;rho = rhoc
u = Uin
v = Vin
w = 0
p = Pc
rho = 1.0

Also, for question 3: do you mean the span-wise average is obtained in Paraview (not PyFR)?

So the problem is that your max number of pseudo-iterations is set to 1. This needs to be higher, probably around 80 for this setup.

The pseudo-time stepping needs enough steps to relax the solution in pseudo-time onto the true solution; without them the solution won’t be correct. If this is a new concept to you, think of it like solving a fixed-point problem: it would be unrealistic to expect a single fixed-point iteration to solve the problem.
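For example, keeping everything else in your [solver-time-integrator] section as it is:

[solver-time-integrator]
; ... other settings unchanged ...
pseudo-niters-min = 1
pseudo-niters-max = 80 ;was 1; gives the pseudo-stepper room to converge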

Sort of. My workflow for these cases is to read the time-averaged data into Paraview, use the interpolation tools in Paraview to output a structured slice of data, and then write out this now-structured dataset. With that you can easily use python/matlab/julia/etc. to manipulate the data.
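As an illustration, here is a minimal numpy sketch; the file name, column names, and spanwise sample count are all assumptions about what you export from Paraview:

import numpy as np

# structured slice written out from Paraview as a CSV with a header row
data = np.genfromtxt('slice.csv', delimiter=',', names=True)

nz = 64  # spanwise samples per station used when building the slice (assumption)
x = data['x'].reshape(-1, nz)[:, 0]             # chordwise stations
u_avg = data['u'].reshape(-1, nz).mean(axis=1)  # span average of u

np.savetxt('u_span_avg.csv', np.column_stack([x, u_avg]),
           delimiter=',', header='x,u_avg', comments='')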

In order to make more effective use of your memory, you may want to consider switching to single precision.
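That is a one-line setting in the backend section:

[backend]
precision = single ;default is double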

Also, if you are anti-aliasing with polynomial multigrid, it is worth specifying quadrature rules on a per-level basis; using a high-strength rule at a low polynomial order is often wasteful.
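A sketch of what this could look like, assuming your PyFR version supports per-level solver sections via the -mg-p suffix (worth double-checking in the docs for your release); the degree here is illustrative:

[solver-elements-hex-mg-p1]
soln-pts = gauss-legendre
quad-deg = 4 ;a weaker rule is enough at p = 1
quad-pts = gauss-legendre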

Regards, Freddie.


I solved the high-residuals issue. Thanks WillT!!

But I was mistaken: my mesh has 7M points (checked in Paraview).
Then I saw this page: [How can I know the DoFs with .pyfrs file - #2 by fdw].

I checked my mesh’s DoF:

DoF = (64 × 4 × 105,070) + (40 × 4 × 11,900) = 28,801,920 (28M). Right?
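As a quick sanity check of that arithmetic (64 = 4^3 solution points per p3 hex, 40 = 10 × 4 per p3 prism, and 4 field variables u, v, w, p):

# element counts taken from the figures above
n_hex, n_pri = 105070, 11900
print(4 * (64 * n_hex + 40 * n_pri))  # 28801920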
Is it enough to solve this SD7003 airfoil (Re=60,000) case?

It depends. Adding in that factor of 4 for the equation variables, I was using a mesh with ~68M DoF.

But DoF alone is not a measure of whether a mesh is appropriate; for that you’ll need to do a grid independence study.