Installing PyFR with Spack

Hi @mlaufer,

A bit late… did you install PyFR with Spack as well? I did not see that it is available in the Spack packages.

Best Regards
Fab

Hi,
I created Spack package recipes for PyFR, as well as for the needed dependencies (GiMMiK), but I never pushed them upstream. @fdw and team, if you are interested, I will revisit them and create a Spack PR.

The main sticking point I had was the special LIBXSMM commit that was required. I see that LIBXSMM has recently begun releasing new versions again (after taking a year off), so as long as the new version works out of the box, it should not be an issue. @fdw, how important is setting CODE_BUF_MAXSIZE=262144 when compiling LIBXSMM?

I am hoping that libxsmm 2.0 will be released in the next few months. This will include all of the relevant commits which PyFR needs (including ARM support). Although there have been a few releases recently, these are all bugfixes to a legacy branch. The code buffer size is important for getting good performance at p = 5 and beyond; below that it does not make a difference.
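
Until then, building LIBXSMM by hand with the larger JIT code buffer looks roughly like the sketch below; the exact commit and flags are whatever the performance tuning guide currently lists, so treat this as indicative only:

git clone https://github.com/libxsmm/libxsmm.git
cd libxsmm
# check out the commit referenced in the PyFR documentation until v2.0 is out
make -j4 STATIC=0 BLAS=0 CODE_BUF_MAXSIZE=262144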

Regards, Freddie.


Thanks @fdw, in that case I will work on finalizing a PyFR Spack recipe, but will leave out LIBXSMM for now (as requiring a specific commit in a recipe is not common practice in Spack). Once LIBXSMM is updated to v2.0, I will add a variant with LIBXSMM support. I will update here once the PR is pushed.

Sounds good. Just as an FYI, in the next major release libxsmm will become a hard dependency for the OpenMP backend rather than an optional one.

Regards, Freddie.

Hi, thank you very much for your quick response!
Regards, Fab

FYI
I have pushed a PR for the PyFR and GiMMiK packages: https://github.com/spack/spack/pull/28847

As I mentioned above, LIBXSMM support is not yet implemented in the PyFR package, but it will be added once LIBXSMM is updated to v2.0. I have tested the OpenMP and CUDA backends successfully, but I did not have the opportunity to test the HIP variant; I would appreciate it if someone with access to the relevant hardware could give it a shot.

Optimized dependencies are built according to the Performance Tuning guide (parallel I/O support, optimized BLAS).

Assuming an optimized MPI implementation is already installed through Spack, a production-ready install can be accomplished with a single command:

spack install py-pyfr +cuda ^cuda@11.2.2 ^python+optimizations
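
To see what that spec resolves to before kicking off the build, spack spec prints the fully concretized dependency tree:

spack spec py-pyfr +cuda ^cuda@11.2.2 ^python+optimizations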

@fdw, let me know if you would like to assign one of the devs as a maintainer of these packages. If not, I will be happy to do it.

Hi,
I am not only new to PyFR, but also to Spack :slight_smile: I understand from https://github.com/spack/spack/pull/28847 that a simple git pull will not yet bring this into my local Spack repository. Is this right?
Thank you!
Fab

Hi,
Unless it is time-sensitive, I would advise you to wait for the PR to be merged (a couple of days, I expect). Otherwise, you can use the GitHub CLI tool to check out and test the PR. Luckily, it is also available in Spack:

spack install gh
spack load gh
cd $SPACK_ROOT            # gh must run inside your local Spack git clone; SPACK_ROOT is set by setup-env.sh
gh pr checkout 28847
spack install py-pyfr
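
Once it finishes, loading the package and running the CLI is a quick sanity check; the listed sub-commands should include import, partition, run and export:

spack load py-pyfr
pyfr --help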

Good luck!

Thank you very much! I will wait for the merge.

FYI, the Spack PR has been merged. I would appreciate feedback, especially on the HIP variant, which I was unable to test. Please submit issues to the Spack GitHub and not here.
Note that the Scotch mesh decomposition functionality is not yet usable (see here).
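
For anyone willing to test on AMD hardware, spack info lists the variants the recipe exposes; the +hip spelling below is only an assumption, so check that output for the actual variant name before installing:

spack info py-pyfr            # lists the recipe's variants and dependencies
spack install py-pyfr +hip    # assuming the HIP variant is named +hip; use the name reported above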