Scalability and linearised NS solver


I am interested in the optimal number of grid points per CPU/GPU core, since I am going to run some large-scale cases. Although this may differ between machines, it would be nice to have a basic idea before running tests on our clusters. Do you have any previous tests on this topic?

My second question: does PyFR have a linearised NS solver for fully compressible flow?

Thanks in advance!

Best wishes,

If wall time is all you care about, then unless you go completely over the top, the more you strong scale the faster your wall time will be. If you care about wall time per dollar, then occupancy is more important. @fdw might have a better idea, but I think about 50k elements per GPU is pretty optimal, although this will depend on the GPU you are using.
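To make the occupancy figure concrete, here is a back-of-envelope sketch of how you might size a run from the ~50k elements-per-GPU rule of thumb mentioned above. The function name and the 50k default are illustrative assumptions, not part of PyFR; the real sweet spot will vary with GPU model and polynomial order.

```python
def gpus_for_mesh(n_elements, elements_per_gpu=50_000):
    """Smallest GPU count keeping each device at or below the target load.

    elements_per_gpu defaults to the ~50k rule of thumb from this thread;
    treat it as a starting point for your own scaling tests.
    """
    # Ceiling division without importing math
    return -(-n_elements // elements_per_gpu)

print(gpus_for_mesh(5_000_000))  # 5M-element mesh -> 100 GPUs
print(gpus_for_mesh(1_200_001))  # just over 24 GPUs' worth -> 25 GPUs
```

Strong scaling beyond this point will still reduce wall time, but each GPU becomes increasingly under-occupied, which is what hurts the wall-time-per-dollar metric.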

No, we don’t have a linearised Navier–Stokes equation implemented. None of the currently supported governing equations has a $\mathbf{v}\cdot\nabla\mathbf{u}$ structure, which I think you’ll need. Although fiddly, it would be possible for you to implement this in PyFR.
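To spell out where that structure comes from, here is a sketch of linearising the convective term about a steady base flow $\bar{\mathbf{u}}$ with a small perturbation $\mathbf{u}'$ (notation chosen here for illustration, not taken from PyFR):

```latex
% Substituting \mathbf{u} = \bar{\mathbf{u}} + \mathbf{u}' into the
% convective term and expanding:
(\mathbf{u}\cdot\nabla)\mathbf{u}
  = (\bar{\mathbf{u}}\cdot\nabla)\bar{\mathbf{u}}
  + (\bar{\mathbf{u}}\cdot\nabla)\mathbf{u}'
  + (\mathbf{u}'\cdot\nabla)\bar{\mathbf{u}}
  + (\mathbf{u}'\cdot\nabla)\mathbf{u}'
% Dropping the quadratic perturbation term
% (\mathbf{u}'\cdot\nabla)\mathbf{u}' leaves the linear
% \bar{\mathbf{u}}\cdot\nabla\mathbf{u}' structure referred to above,
% which the existing governing equations do not provide.
```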

Further to this, is there a particular reason you want linearised NS?

Thanks for your reply! We are mainly doing linear stability analysis, so a linearised solver is necessary for our work. If we have to write this part ourselves, do you have any suggestions or tips? I am not very familiar with the PyFR code itself. Thanks!


A good starting point for adding new systems of equations is to look at the existing ones, for example Euler and Navier–Stokes. The Navier–Stokes solver builds on top of the Euler system to add the diffusion terms, so the main files you’ll want to look at are the ones implementing those two systems. Beyond that, the developer guide in the PyFR documentation is a good resource.
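To illustrate the general idea of building one system on top of another, here is a minimal sketch of the subclassing pattern described above. All class and attribute names are hypothetical stand-ins, not PyFR's actual API; the point is only that a new system overrides the parts that change and registers a new name for the config file to select.

```python
# Hypothetical sketch: a layered hierarchy of governing systems.
# These classes are illustrative only and do not match PyFR's modules.
class BaseSystemStandIn:
    name = None  # identifier a config file would use to pick the system


class EulerStandIn(BaseSystemStandIn):
    name = 'euler'  # inviscid fluxes live here


class NavierStokesStandIn(EulerStandIn):
    # Builds on Euler, adding the diffusion terms
    name = 'navier-stokes'


class LinNavierStokesStandIn(NavierStokesStandIn):
    # A linearised system would similarly override only the flux terms,
    # evaluating them about a frozen base flow
    name = 'lin-navier-stokes'


print(LinNavierStokesStandIn.name)  # -> lin-navier-stokes
```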

Thanks, I will look into that.