CUDALink on Multiple Devices

The functional and list-oriented characteristics of the core Wolfram Language allow CUDALink to provide immediate built-in data parallelism, automatically distributing computations across available GPU cards.

Introduction

First, load the CUDALink application.
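For example:

    Needs["CUDALink`"]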

This launches as many worker kernels as there are devices.
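For example:

    LaunchKernels[$CUDADeviceCount]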

$CUDADeviceCount is the number of CUDA devices on the system.

$CUDADeviceCount    number of CUDA devices on the system

$CUDADeviceCount gets the number of CUDA GPUs on the system.
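Evaluating it returns an integer, for example 2 on a machine with two CUDA GPUs:

    $CUDADeviceCount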

This loads CUDALink on all worker kernels.
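For example:

    ParallelNeeds["CUDALink`"]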

CUDALink relies on existing Wolfram Language parallel computing capabilities to run on multiple GPUs. Throughout this section the following functions will be used.

ParallelNeeds    load a package into all parallel subkernels
DistributeDefinitions    distribute definitions needed for parallel computations
ParallelEvaluate    evaluate an expression on all available parallel kernels and return the list of results

This sets the $CUDADevice variable on all kernels.
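A common sketch is to assign each worker kernel a device based on its $KernelID (this assumes one worker kernel per device and device numbering starting at 1):

    ParallelEvaluate[$CUDADevice = $KernelID]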

CUDALink Functions

High-level CUDALink functions, such as those for image processing, linear algebra, and fast Fourier transforms, can be used on different kernels like any other Wolfram Language function. The only difference is that the $CUDADevice variable is set to the device on which the computation is performed.

Here you set the image names to be taken from the "TestImage" dataset in ExampleData.
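One way to do this, assuming the "TestImage" collection of ExampleData:

    imgNames = ExampleData["TestImage"];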

Distribute the variable imgNames to the worker kernels.
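For example:

    DistributeDefinitions[imgNames]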

Perform CUDAErosion on images taken from ExampleData.
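A sketch comparing sequential and parallel timings; the erosion radius of 3 and the use of ParallelMap are illustrative choices:

    AbsoluteTiming[Map[CUDAErosion[ExampleData[#], 3] &, imgNames];]
    AbsoluteTiming[ParallelMap[CUDAErosion[ExampleData[#], 3] &, imgNames];]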

Notice the 2x speed improvement. Since these images are small and data must be transferred to the workers, you do not get the full 4x speedup.

In other cases, the amount of time spent transferring the data is not as significant as the amount of time spent in calculation. Here, you allocate 2000 random integer vectors.
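For example (the vector length of 10,000 is an arbitrary choice):

    vecs = RandomInteger[100, {2000, 10000}];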

Map CUDAFold on each device.
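Again comparing sequential and parallel timings, here folding Plus over each vector:

    AbsoluteTiming[Map[CUDAFold[Plus, 0, #] &, vecs];]
    AbsoluteTiming[ParallelMap[CUDAFold[Plus, 0, #] &, vecs];]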

Notice that there is now a better speedup.

CUDALink Programming

Since a CUDAFunction is optimized for, and local to, a single GPU, it cannot be distributed to worker kernels using DistributeDefinitions. This section describes an alternative way of programming the GPU.

Add Two

This defines the basic CUDA code that adds 2 to each element of a vector.
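A minimal kernel along these lines (similar to the basic addTwo example used elsewhere in the CUDALink documentation):

    code = "
    __global__ void addTwo(mint *arr, mint len) {
        int index = threadIdx.x + blockIdx.x*blockDim.x;
        if (index < len)
            arr[index] += 2;
    }";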

This loads the CUDAFunction. Notice the use of SetDelayed in the assignment. This allows DistributeDefinitions to distribute all dependent variables in the CUDAFunctionLoad call.
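For example (the block size of 256 is a typical choice):

    addTwo := CUDAFunctionLoad[code, "addTwo", {{_Integer}, _Integer}, 256]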

This sets the input parameters.
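For example:

    vec = Range[100];
    len = Length[vec];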

This distributes the definitions to the worker kernels.
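For example:

    DistributeDefinitions[code, addTwo, vec, len]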

This runs the CUDAFunction on each worker kernel using different CUDA devices.
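For example:

    res = ParallelEvaluate[addTwo[vec, len]];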

This gathers the results, showing the first 20 elements.
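Assuming each worker returns its output list wrapped in a list (the usual CUDAFunction return form):

    Take[First[#], 20] & /@ res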

Mandelbrot Set

This is the same CUDA code defined in other sections of the CUDALink documentation.
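The exact code is not reproduced here; a comparable kernel, for illustration only, might look like this (the function name, iteration limit, and coordinate mapping are all assumptions):

    code = "
    __global__ void mandelbrot(mint *set, float zoom, mint width, mint height) {
        int x = threadIdx.x + blockIdx.x*blockDim.x;
        int y = threadIdx.y + blockIdx.y*blockDim.y;
        if (x < width && y < height) {
            float cx = (x - width/2.0f)/zoom - 0.5f;
            float cy = (y - height/2.0f)/zoom;
            float zx = 0.0f, zy = 0.0f;
            int iter = 0;
            while (zx*zx + zy*zy < 4.0f && iter < 100) {
                float tmp = zx*zx - zy*zy + cx;
                zy = 2.0f*zx*zy + cy;
                zx = tmp;
                iter++;
            }
            set[y*width + x] = (iter == 100) ? 1 : 0;
        }
    }";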

Here, you load the CUDAFunction.
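For example (the argument types and the 16x16 block size match the sketch above; SetDelayed is used again so the function is loaded on each worker):

    mandelbrot := CUDAFunctionLoad[code, "mandelbrot", {{_Integer, _, "Output"}, "Float", _Integer, _Integer}, {16, 16}]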

This shares the variables with worker kernels.
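Assuming 512x512 image dimensions for illustration:

    width = 512; height = 512;
    DistributeDefinitions[code, mandelbrot, width, height]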

This launches the kernel on each worker, each with a different zoom level, returning a "Bit" image.
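A sketch in which each worker derives its zoom level from its $KernelID; the output memory, launch size, and zoom formula are illustrative:

    ParallelEvaluate[
     Module[{mem = CUDAMemoryAllocate["Integer32", {height, width}], zoom = 100.0*$KernelID},
      mandelbrot[mem, zoom, width, height, {width, height}];
      Image[CUDAMemoryGet[mem], "Bit"]
     ]
    ]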

Random Number Generators

The Mersenne Twister is implemented in the following file.

This loads the function into the Wolfram Language.
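A sketch only; the file path, kernel name, argument types, and block size below are hypothetical placeholders rather than the actual contents of the file referenced above. As in the Add Two example, SetDelayed is used so the function can be loaded on each worker kernel:

    srcFile = "mersenne_twister.cu";  (* hypothetical placeholder for the file mentioned above *)
    mersenneTwister := CUDAFunctionLoad[{srcFile}, "MersenneTwister", {{_Real, _, "Output"}, _Integer, _Integer}, 256]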

This sets the input variables for the CUDAFunction.
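For example (the name and value are illustrative):

    numbersCount = 100000;  (* hypothetical length of the random vector to generate *)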

This distributes the mersenneTwister function and input parameters.
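For example:

    DistributeDefinitions[srcFile, mersenneTwister, numbersCount]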

This allocates the seed values; note that the seed evaluation needs to be performed on each worker kernel so that the random numbers are not correlated. The output memory is also allocated, the computation is performed, and the result is visualized.
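A sketch of the overall pattern, carrying over the hypothetical call signature from above; the seed is generated inside ParallelEvaluate so each worker gets its own value:

    ParallelEvaluate[
     Module[{seed = RandomInteger[{0, 2^31 - 1}], out = CUDAMemoryAllocate[Real, numbersCount]},
      mersenneTwister[out, seed, numbersCount];  (* hypothetical argument order *)
      Histogram[CUDAMemoryGet[out]]
     ]
    ]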

Memory

CUDAMemory is tied to both the kernel and device where it is loaded and thus cannot be distributed among worker kernels.

Load memory in the master kernel.
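For example:

    mem = CUDAMemoryLoad[Range[100]]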

Then distribute the definition.
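For example:

    DistributeDefinitions[mem]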

Distributed CUDAMemory cannot be operated on by worker kernels.
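For example, attempting to read the memory back from the workers fails:

    ParallelEvaluate[CUDAMemoryGet[mem]]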

To load memory onto the worker kernels, use ParallelEvaluate.
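For example:

    ParallelEvaluate[mem = CUDAMemoryLoad[Range[100]]]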

Further operations can be performed on the memory using ParallelEvaluate.
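For example, folding over the memory on each worker (assuming CUDAFold accepts CUDAMemory input, as other CUDALink functions do):

    ParallelEvaluate[CUDAFold[Plus, 0, mem]]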

Bandwidth

In some cases, the amount of time spent transferring the data dwarfs the time spent in computation.

Here you load a large list.
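For example (10 million reals is an arbitrary size):

    data = RandomReal[1, 10^7];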

Since the parallel version needs to share the large list with worker kernels, it takes considerably longer than the sequential version.
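A sketch of the parallel version; the chunked CUDAFold is an illustrative workload:

    AbsoluteTiming[ParallelMap[CUDAFold[Plus, 0., #] &, Partition[data, 10^6]];]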

The sequential version is faster since no data transfer is necessary.
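For comparison, the sequential version of the same sketch:

    AbsoluteTiming[Map[CUDAFold[Plus, 0., #] &, Partition[data, 10^6]];]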