Cylinder 3D case freezes on P100

See debug file:
gdb.txt (21.7 KB)

Hi Robert,

Looking at the stack trace it appears as if something is hooking
malloc/free (probably MPI or some related library). This is almost
always a bad idea, as such code is extremely difficult to get right.
PyFR is particularly sensitive to such hooking because we load MPI and
friends at runtime. Thus, the hooking is done after a large number of
pointers have already been allocated by the original (un-hooked)
malloc. When these pointers are later freed, the hooked free often
mistakenly believes they came from the hooked malloc. Hilarity ensues.
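
If you want to confirm this, one rough check is to attach gdb to a hanging
rank and ask which shared object the allocator symbols resolve to; anything
other than libc suggests a hooking library. The PID below is a placeholder:

# Report which shared object provides malloc and free in the running process.
gdb -batch -p <pid> -ex 'info symbol malloc' -ex 'info symbol free'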

In my experience there is usually a way to prevent such hooking.

Regards, Freddie.

Freddie,

Thanks for the quick reply.

Do you think it will help if I force OpenMPI to use TCP instead of InfiniBand? It should
then avoid that specific ucm_* function.

Isn't it surprising, though, that other examples work fine and that the said example works
on the login node? Surely the hooking is the same?

I understand that this is all runtime stuff, but do you think that my perhaps unusual
marriage of Anaconda and Lmod may be causing it? I use Lmod to account for the
compiler-MPI hierarchy, but perhaps putting Anaconda into my gcc/6.4, openmpi/3.1 branch
doesn't make much sense.

Finally, I will also try downgrading OpenMPI, as I am almost sure that only a few months
ago I was running on P100s without putting any thought into it.

Best wishes,
Robert

Hi Robert,

Falling back to TCP may help. However, this can also come with
substantial performance implications. My advice would therefore be to
build OpenMPI yourself. This way you can be sure that no libraries are
hooking themselves into application code.
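
For reference, a rough sketch of both options, assuming OpenMPI 3.x; the MCA
names and configure flags are the standard ones but may differ in other
releases, and the prefixes are placeholders:

# Option 1: select the ob1 PML with the TCP and shared-memory BTLs at run
# time, which bypasses UCX and its ucm_* hooks entirely.
mpirun --mca pml ob1 --mca btl tcp,self,vader \
    pyfr run -b cuda mesh.pyfrm ../config.ini

# Option 2: build OpenMPI yourself without UCX support so that no memory
# hooks are installed in the first place.
./configure --prefix=$HOME/openmpi-3.1 --without-ucx --with-cuda=$CUDA_HOME
make -j8 && make install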

Regards, Freddie.

Freddie,

We got the code to work by reverting to the OPAL memory hooks. Your suggestion was
correct, but I fear some more work is needed. The code runs with this command:

mpirun \
    --mca pml_ucx_opal_mem_hooks 1 \
    --report-bindings \
    pyfr run -b cuda mesh.pyfrm ../config.ini

For details, please read below. Are you running PyFR on Summit? I am not 100%
sure, but I think this may become relevant for you at some point.

I actually build OpenMPI myself, so in my build the following transport
layers are enabled:

Transports

Hi Robert,

I suspect the reason you only encounter this problem for the larger (3D)
cases is a consequence of how Python manages memory. Small allocations
are handled by a memory pool and thus never result in a malloc/free
operation. It is therefore possible that the issue is only triggered when
running larger cases whose allocations bypass the pool.
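
One way to test this theory, sketched below: PYTHONMALLOC=malloc (CPython
3.6+) bypasses the pymalloc pool so that every allocation goes through the
system malloc/free. If the hooking is to blame, a small case that normally
works should then freeze as well. The mesh and config paths are those from
your command line.

# Export PYTHONMALLOC to the ranks so all Python allocations hit the raw
# system allocator, then run a small case that normally works.
mpirun -x PYTHONMALLOC=malloc pyfr run -b cuda mesh.pyfrm ../config.ini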

Either way this is almost certainly a bug in UCX.

In terms of Summit we have run successfully with Spectrum MPI.
Performance and scaling were both very impressive. I do not believe
that any special modifications or MPI parameters were required.

Regards, Freddie.

Freddie and Eduardo,

Just to let you know: I didn't have a lot of time to look into this memory hook
problem until now, but I was at SC and went to the OpenMPI BoF to accost
someone there. I was advised to put it on their issue tracker, so here it is:

https://github.com/open-mpi/ompi/issues/6101

You may be interested in following this. I am sure you both knew that MPI
implementations are moving towards UCX, so it may be relevant for you in the
future. That was all news to me.

Best wishes,
Robert

Hi,

This is probably irrelevant by now, but I just want to close the issue.

Updating UCX to 1.4.0 and rebuilding OpenMPI against it solves the freezing
problem. I am not sure if UCX is part of OFED, but we do have a relatively old
version of it on the cluster, so I will ask the sysadmins to update it
centrally on the system as well.
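
For anyone hitting the same problem, the rebuild went roughly along these
lines; the prefixes and source directories are placeholders and the
--with-ucx/--with-cuda configure flags are the standard OpenMPI ones:

# Build UCX 1.4.0 into its own prefix.
tar xf ucx-1.4.0.tar.gz && cd ucx-1.4.0
./configure --prefix=$HOME/ucx-1.4.0
make -j8 && make install

# Rebuild OpenMPI against the new UCX.
cd ../openmpi-3.1.3    # source directory is a placeholder
./configure --prefix=$HOME/openmpi-3.1 --with-ucx=$HOME/ucx-1.4.0 \
    --with-cuda=$CUDA_HOME
make -j8 && make install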

I am currently testing across several nodes and it's definitely working. I
still cannot get the CUDA-aware version to work, but I'll send a separate
email about that when I get my head around what's going on.

Best wishes,
Robert