Memory on CUDA GPU

Hello, everybody.

I’d like to ask about a memory problem during CUDA calculations: I’m running out of video memory.

To give a brief description: when I run a calculation on a 3D mesh of 7 million with order = 1 there is no problem, but when I switch to order = 5 I get an error that there is not enough video memory (120 GB of CUDA memory).

  line 136, in _malloc_checked
    raise RuntimeError('Allocation too large for normal backend '
RuntimeError: Allocation too large for normal backend memory-model

Is there any way to reduce the memory usage, even if it reduces the speed of computation?
Incidentally, my mesh was generated with Pointwise and is not in a high-order format.

Regards, Wgbb.

No; we are already very aggressive with our memory optimisations. If you’re allocating a lot of memory, you’ll want to add this to your configuration file:

[backend]
memory-model = large

But that only changes the allocation method and won’t reduce the overall memory usage. To run your calculation, you will likely need to partition the mesh and run it across more GPUs.
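To see why the jump from order 1 to order 5 is so dramatic, here is a rough back-of-envelope sketch. It assumes hexahedral elements with (p + 1)^3 solution points per element and that the 7 million figure refers to elements; the constants are illustrative and not taken from any solver's source code, but the scaling itself is why storage grows so fast with polynomial order:

```python
def solution_points(p, nelems):
    """Total solution points for nelems hexahedral elements at order p.

    Assumes (p + 1)**3 points per hex, which is the usual tensor-product
    layout; other element types have similar polynomial growth.
    """
    return nelems * (p + 1) ** 3


nelems = 7_000_000  # assumed: the mesh size quoted above, in elements

pts_p1 = solution_points(1, nelems)  # order = 1 -> 8 points per element
pts_p5 = solution_points(5, nelems)  # order = 5 -> 216 points per element

print(pts_p1)            # 56,000,000 solution points
print(pts_p5)            # 1,512,000,000 solution points
print(pts_p5 // pts_p1)  # 27x more storage at order 5
```

So even before counting flux points and work arrays, going from order 1 to order 5 multiplies the solution storage by 27, which is consistent with a mesh that fits comfortably at order 1 exhausting 120 GB at order 5.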