How many GPU cores am I using?

Hi, everyone

I have a problem with monitoring the GPU usage of PyFR.

I am using the CUDA backend to run the euler_vortex_2d case. The platform is an Ubuntu 12.04 system with 64 AMD cores and an NVIDIA GT 620 (96 CUDA cores, CUDA 7.0 installed), and I run the case without partitioning:

pyfr run -b CUDA -p euler_vortex_2d.pyfrm euler_vortex_2d.ini

Here is the output:

100.0% [================================>] 100.00/100.00 ela: 00:01:16 rem: 00:00:0

And I was using nvidia-smi to monitor the GPU usage; here is the output:

Wed Jun 24 16:40:06 2015

Hi,

I am sure PyFR is running when I use the nvidia-smi command, but the
GT 620 does not support reporting GPU utilization. How can I know how
many cores I am using?

The term 'CUDA cores' is not particularly meaningful. It is likely that
during a time step cuBLAS will employ all of the SMs on the GPU --
however, this says nothing about how efficiently they're being used.
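
If you want the raw device properties, the deviceQuery sample which
ships with the CUDA toolkit will print the SM count and the per-SM
'CUDA core' count for each device. As a sketch, assuming a default
CUDA 7.0 install under /usr/local/cuda-7.0 (the sample needs building
first):

$ cd /usr/local/cuda-7.0/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery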

Another problem is that when I use more CPU cores with the OpenMP
backend, it runs slower:
$ export OMP_NUM_THREADS=32
$ pyfr run -b OPENMP -p euler_vortex_2d.pyfrm euler_vortex_2d.ini
100.0% [=======================================>] 100.00/100.00 ela: 00:01:52 rem: 00:00:00
$ export OMP_NUM_THREADS=64
$ pyfr run -b OPENMP -p euler_vortex_2d.pyfrm euler_vortex_2d.ini
100.0% [=======================================>] 100.00/100.00 ela: 00:02:12 rem: 00:00:00

You should aim to use one MPI rank per NUMA zone. There are some good
postings on the mailing list outlining how best to go about this.

However, the Euler vortex test case is far too small to be useful for
any sort of benchmarking. It will struggle to saturate a single CPU core.
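
As a sketch, if the box has four NUMA zones with 16 cores each (an
assumption; numactl --hardware will report the actual layout), you
would partition the mesh into four pieces and launch one rank per
zone, here with Open MPI's binding flag:

$ pyfr partition 4 euler_vortex_2d.pyfrm .
$ export OMP_NUM_THREADS=16
$ mpirun -n 4 --bind-to numa pyfr run -b OPENMP -p euler_vortex_2d.pyfrm euler_vortex_2d.ini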

And I am confused about the relationship between mpirun and OpenMP. It
seems that mpirun -n N with OMP_NUM_THREADS=M means that M*N CPU cores
are used. Am I right?

That is correct.
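
(In the sketch above, for instance, mpirun -n 4 with OMP_NUM_THREADS=16
gives 4 * 16 = 64 cores in total.)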

Regards, Freddie.

Thanks a lot for your reply. I have another question, about the OpenCL backend:

I have CUDA 7.0, the AMD APP SDK, and clBLAS installed; the output of clinfo is given at the end. My question is how to set platform-id and device-id when I want to use the CPU or the GPU.

platform-id = 0, device-id = 0 for GPU?

platform-id = 1, device-id = 0 for CPU?

-------------------clinfo output ----------------------------

Number of platforms: 2
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 1.1 CUDA 7.0.28
Platform Name: NVIDIA CUDA
Platform Vendor: NVIDIA Corporation
Platform Extensions: cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 1.2 AMD-APP (1445.5)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Platform Extensions: cl_khr_icd cl_amd_event_callback cl_amd_offline_devices cl_amd_hsa

Platform Name: NVIDIA CUDA
Number of devices: 1
Device Type: CL_DEVICE_TYPE_GPU
Vendor ID: 10deh
Max compute units: 2
Max work items dimensions: 3
Max work items[0]: 1024
Max work items[1]: 1024
Max work items[2]: 64
Max work group size: 1024
Preferred vector width char: 1
Preferred vector width short: 1
Preferred vector width int: 1
Preferred vector width long: 1
Preferred vector width float: 1
Preferred vector width double: 1
Native vector width char: 1
Native vector width short: 1
Native vector width int: 1
Native vector width long: 1
Native vector width float: 1
Native vector width double: 1
Max clock frequency: 1400Mhz
Address bits: 32
Max memory allocation: 268222464
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 8
Max image 2D width: 32768
Max image 2D height: 32768
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 16
Max size of kernel argument: 4352
Alignment (bits) of base address: 4096
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: Yes
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: Yes
Round to +ve and infinity: Yes
IEEE754-2008 fused multiply-add: Yes
Cache type: Read/Write
Cache line size: 128
Cache size: 32768
Global memory size: 1072889856
Constant buffer size: 65536
Max number of constant args: 9
Local memory type: Scratchpad
Local memory size: 49151
Kernel Preferred work group size multiple: 32
Error correction support: 0
Unified memory for Host and Device: 0
Profiling timer resolution: 1000
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities: 
Execute OpenCL kernels: Yes
Execute native function: No
Queue properties: 
Out-of-Order: Yes
Profiling : Yes
Platform ID: 0x00000000020706d0
Name: GeForce GT 620
Vendor: NVIDIA Corporation
Device OpenCL C version: OpenCL C 1.1
Driver version: 346.46
Profile: FULL_PROFILE
Version: OpenCL 1.1 CUDA
Extensions: cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64

Platform Name: AMD Accelerated Parallel Processing
Number of devices: 1
Device Type: CL_DEVICE_TYPE_CPU
Vendor ID: 1002h
Board name: 
Max compute units: 64
Max work items dimensions: 3
Max work items[0]: 1024
Max work items[1]: 1024
Max work items[2]: 1024
Max work group size: 1024
Preferred vector width char: 16
Preferred vector width short: 8
Preferred vector width int: 4
Preferred vector width long: 2
Preferred vector width float: 8
Preferred vector width double: 4
Native vector width char: 16
Native vector width short: 8
Native vector width int: 4
Native vector width long: 2
Native vector width float: 8
Native vector width double: 4
Max clock frequency: 1400Mhz
Address bits: 64
Max memory allocation: 67618519040
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 8
Max image 2D width: 8192
Max image 2D height: 8192
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 16
Max size of kernel argument: 4096
Alignment (bits) of base address: 1024
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: Yes
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: Yes
Round to +ve and infinity: Yes
IEEE754-2008 fused multiply-add: Yes
Cache type: Read/Write
Cache line size: 64
Cache size: 16384
Global memory size: 270474076160
Constant buffer size: 65536
Max number of constant args: 8
Local memory type: Global
Local memory size: 32768
Kernel Preferred work group size multiple: 1
Error correction support: 0
Unified memory for Host and Device: 1
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities: 
Execute OpenCL kernels: Yes
Execute native function: Yes
Queue properties: 
Out-of-Order: No
Profiling : Yes
Platform ID: 0x00007f66755b5de0
Name: AMD Opteron(tm) Processor 6282 SE
Vendor: AuthenticAMD
Device OpenCL C version: OpenCL C 1.2
Driver version: 1445.5 (sse2,avx,fma4)
Profile: FULL_PROFILE
Version: OpenCL 1.2 AMD-APP (1445.5)
Extensions: cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_spir cl_amd_svm cl_khr_gl_event

Hi,

platform-id = 0, device-id = 0 for GPU?
platform-id = 1, device-id = 0 for CPU?

That is correct.

platform-id = 0
device-id = 0

will run on the first device (a GPU) of the first platform (NVIDIA).

platform-id = 1
device-id = 0

will run on the first device (of type CPU) of the second platform (AMD).
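
In terms of the .ini file these settings live in the [backend-opencl]
section; a minimal sketch selecting the GPU here would be:

[backend-opencl]
platform-id = 0
device-id = 0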

Regards, Freddie.