Applications

Because GPUs are SIMD machines, exploiting CUDA's potential requires posing the problem in a SIMD manner. Computation that can be partitioned so that each thread computes one element independently is ideal for the GPU.

Some algorithms either cannot be written in parallel or cannot be implemented in CUDA (due to architecture constraints). In those cases, research is ongoing into alternative methods of performing those computations on the GPU.

This section showcases some uses of CUDA programming inside the Wolfram Language. All the following examples use CUDAFunctionLoad, which allows you to load CUDA source, binaries, or libraries into the Wolfram Language.

CUDAFunctionLoad: load a CUDA function into the Wolfram Language

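For example, a minimal sketch of loading and running a kernel from a CUDA source string looks like the following; the kernel simply adds two to each element of a list.

    Needs["CUDALink`"]
    code = "
    __global__ void addTwo(mint *in, mint *out, mint length) {
        int index = threadIdx.x + blockIdx.x*blockDim.x;
        if (index < length)
            out[index] = in[index] + 2;
    }";
    addTwo = CUDAFunctionLoad[code, "addTwo",
        {{_Integer, "Input"}, {_Integer, "Output"}, _Integer}, 256];
    addTwo[Range[10], ConstantArray[0, 10], 10]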

Image Processing

This section contains examples of CUDA applications that perform image processing operations. CUDALink also provides a few built-in functions for such operations, including CUDAImageConvolve, CUDABoxFilter, CUDAErosion, CUDADilation, CUDAOpening, CUDAClosing, and CUDAImageAdd.

Image Binarize

Binarize takes an input image and outputs a binary image, with pixels set to white if they are above a threshold and black otherwise.

If you have not done so already, import the CUDALink application.

This defines the input image. To reduce the memory footprint on the GPU, use "UnsignedByte" to represent the image.

This loads CUDAFunction. The type is signified by "UnsignedByte" in the parameter list.

This gets the input image data and allocates "UnsignedByte" memory for the output.

This calls the binarize function, using 150 as the threshold value.
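Taken together, the loading, allocation, and call steps might look like the following sketch. The kernel source, function name, type specifications, and block size are illustrative assumptions, and img stands for the input image defined earlier.

    srcBinarize = "
    __global__ void imageBinarize(unsigned char *in, unsigned char *out, mint threshold, mint length) {
        int index = threadIdx.x + blockIdx.x*blockDim.x;
        if (index < length)
            out[index] = in[index] > threshold ? 255 : 0;
    }";
    imageBinarize = CUDAFunctionLoad[srcBinarize, "imageBinarize",
        {{"UnsignedByte", _, "Input"}, {"UnsignedByte", _, "Output"}, _Integer, _Integer}, 256];
    imgData = ImageData[img, "Byte"];                            (* img is the input image defined earlier *)
    out = CUDAMemoryAllocate["UnsignedByte", Dimensions[imgData]];
    imageBinarize[imgData, out, 150, Times @@ Dimensions[imgData]]   (* assumes CUDALink launches one thread per element *)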

This displays the output image.

The result agrees with the Wolfram Language.

This unloads the memory allocated.

Box Filter

The box filter is an optimized convolution when the kernel is a BoxMatrix. It is implemented here.

This loads the CUDA functions needed to perform the box filter.

This sets the input parameters.

The radius is set to 5.

This calls the functions.

This gets the image.

This unloads the memory allocated.
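For reference, CUDALink's built-in CUDABoxFilter performs the same kind of filtering. Here img stands for the input image used above, and the built-in filter's normalization may differ from the custom implementation.

    CUDABoxFilter[img, 5]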

Image Adjust

This is an implementation of ImageAdjust in CUDA.

CUDAFunction is loaded from the source string, and a float is used for the constant values.

This wraps the CUDAFunction to make CUDAImageAdjust with similar syntax to ImageAdjust.
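A sketch of such a wrapper is shown below; cudaImageAdjust stands for the CUDAFunction loaded above, and its argument order (input data, output data, contrast, brightness, length) is an assumption for illustration.

    CUDAImageAdjust[img_Image, {c_, b_}] := Module[{data = ImageData[img, "Real"], out},
        out = ConstantArray[0., Dimensions[data]];                 (* host buffer for the output *)
        Image[First[cudaImageAdjust[data, out, N[c], N[b], Times @@ Dimensions[data]]]]
    ]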

The function can be used similarly to ImageAdjust.

Canny Edge Detection

Canny edge detection combines a dozen or so filters to find edges in an image. The Wolfram Language's EdgeDetect provides similar functionality. Here is the implementation.

This loads the CUDA functions needed to perform the edge detection.

This sets the input image along with its parameters.

Now add host and device tensors to the CUDA manager. These will hold both the input and the output.

Next, define device-only temporary memory that will be used in the computation. Since Canny edge detection involves many filters, you need quite a few of them.
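For instance, the device-only temporaries might be allocated along the following lines; the buffer names, count, and "Float" type are illustrative assumptions, with width and height taken from the image parameters set above.

    {gradientX, gradientY, magnitude, direction} =
        Table[CUDAMemoryAllocate["Float", {height, width}], {4}];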

This calls the Canny edge-detection functions.

This views the output as an image.

This unloads the memory allocated.

Linear Algebra and List Processing

This section contains examples of CUDA applications that perform linear algebra operations. Most of the operations discussed can also be performed with the built-in CUDATranspose or CUDADot.

Matrix Transpose

Matrix transposition is essential in many algorithms. CUDALink provides a ready-made implementation in the form of CUDATranspose. Users may wish to implement their own, however.

This loads the CUDAFunction for matrix transposition and defines a new function newCUDATranspose that takes a real-valued matrix and outputs its transpose.
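A sketch of what this might look like is given below. The kernel body, type specifications, and block size are illustrative assumptions, and the launch relies on CUDALink's default of one thread per element of the largest tensor argument.

    srcTranspose = "
    __global__ void transposeKernel(float *out, float *in, mint width, mint height) {
        int index = threadIdx.x + blockIdx.x*blockDim.x;
        if (index < width*height) {
            int x = index % width;
            int y = index / width;
            out[x*height + y] = in[index];     /* (row y, col x) of in becomes (row x, col y) of out */
        }
    }";
    transposeFun = CUDAFunctionLoad[srcTranspose, "transposeKernel",
        {{"Float", 2, "Output"}, {"Float", 2, "Input"}, _Integer, _Integer}, 256];
    newCUDATranspose[m_?MatrixQ] := Module[{h = Length[m], w = Length[First[m]]},
        First[transposeFun[ConstantArray[0., {w, h}], m, w, h]]
    ]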

This sets a matrix.

This transposes matrix A.

The result agrees with the Wolfram Language.

Matrix-Vector Multiplication

Matrix-vector multiplication is a common operation in linear algebra, finite element analysis, and other fields. This loads a CUDAFunction that implements matrix-vector multiplication.
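A sketch of what such a kernel and its loading might look like follows; the kernel source, argument types, and block size are illustrative assumptions, not the code shipped with CUDALink.

    srcMatVec = "
    __global__ void matVec(float *out, float *mat, float *vec, mint rows, mint cols) {
        int row = threadIdx.x + blockIdx.x*blockDim.x;
        if (row < rows) {
            float sum = 0.0f;
            for (int c = 0; c < cols; c++)
                sum += mat[row*cols + c] * vec[c];   /* dot product of one matrix row with the vector */
            out[row] = sum;
        }
    }";
    matVec = CUDAFunctionLoad[srcMatVec, "matVec",
        {{"Float", 1, "Output"}, {"Float", 2, "Input"}, {"Float", 1, "Input"}, _Integer, _Integer}, 128];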

This sets the input matrix and vector.

This invokes the function defined above, displaying the result using MatrixForm.

The result agrees with the Wolfram Language.

Matrix-Matrix Multiplication

Matrix-matrix multiplication is a pivotal function in many algorithms. This loads CUDAFunction from a source file, setting the block dimension to 4.

This sets input values and allocates memory for the output.

This performs the computation.

This displays the result using MatrixForm.

The result agrees with the Wolfram Language.
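For comparison, the built-in CUDADot computes the same product directly; here a and b stand for the input matrices set above (the names are assumptions).

    CUDADot[a, b]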

This unloads the output buffer.

Dot Product

The dot product of two vectors is a common operation in linear algebra. This implements a function that takes a set of vectors and gives the dot product of each.

This loads the function into the Wolfram Language.

Generate 50 random vectors.

This calls the function, returning the result.

The results agree with the Wolfram Language's output.

Convex Hull

The convex hull is difficult to compute in parallel on the GPU, so this example takes a hybrid approach to GPU programming: computation that makes sense on the GPU is done there, and the rest is done on the CPU.

This convex hull implementation is a textbook implementation of Andrew's algorithm and is designed to be simple, not efficient.

Define the LeftOf predicate in the Wolfram Language.
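A minimal sketch of such a predicate is shown below (the actual definition used in the example may differ): it tests whether point p lies strictly to the left of the directed line from p1 to p2 using a cross-product sign test.

    LeftOf[p1_, p2_, p_] :=
        (p2[[1]] - p1[[1]]) (p[[2]] - p1[[2]]) - (p2[[2]] - p1[[2]]) (p[[1]] - p1[[1]]) > 0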

This defines a function that splits the point set to be either above or below a given line connecting the extreme points.

This loads the above function as a CUDA function.

Define a Wolfram Language function that takes two split point sets and finds the convex hull for them.

This calls the above function. Note that the list is sorted with CUDASort before being processed by partitionPts.

To test, create 20,000 uniformly distributed random points.

This computes the hull.

This visualizes the result. Lines are drawn between the hull points.

The above algorithm handles the extreme case where all or most points lie on the hull. This generates uniformly distributed points on the unit disk.

This computes the hull points.

This visualizes the hull.

The above is a prime example of combining Wolfram Language and CUDA programming to parallelize part of an algorithm that would otherwise be written entirely in serial code.

Random Number Generation

Many algorithms, ranging from ray tracing to PDE solving, require random numbers as input. Generating them is, however, a difficult problem on many-core systems, where each core starts with the same state as every other core. To avoid this, parallel random number generators usually seed themselves with entropy values such as the time of day, but such calls are not available in CUDA.

The following section presents three classes of algorithms for generating random numbers. The first is uniform random number generators (both pseudorandom and quasi-random number generators are showcased). The second is generators that exploit the uniform distribution of a hashing function. The third is normal random number generators.

Pseudorandom Number Generators

Pseudorandom number generators are deterministic algorithms that generate numbers that appear to be random. They rely on the seed value to generate further random numbers.

In this section, a simple linear congruential random number generator (the Park-Miller algorithm) is shown, along with the more complex Mersenne Twister.

Park-Miller

The Park-Miller random number generator is defined by the following recurrence equation:

    x[n+1] = Mod[a x[n], m]

It can be implemented easily in the Wolfram Language.

Here, the common choices a = 7^5 = 16807 and m = 2^31 - 1 = 2147483647 are used. Using NestList, you can generate a list of 1000 numbers and plot them.
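A minimal sketch of such an implementation:

    (* Park-Miller recurrence with a = 16807 and m = 2147483647 *)
    parkMiller[x_] := Mod[16807 x, 2147483647]
    ListPlot[NestList[parkMiller, 1, 1000]/2147483647.]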

Here is the timing to generate 10 million numbers.

An alternative is to use the Method option in SeedRandom; this can be used in the same manner as before.

Compared to the Wolfram Language implementation, this is around 300 times faster.

The CUDA implementation is similar to the one written in the Wolfram Language. The implementation is distributed along with CUDALink, and the location is shown below.

This loads the CUDA function into the Wolfram Language.

This allocates the output memory.

This calls CUDAFunction.

The result is random.

If you measure the timing, you notice that it is twice as fast as the Wolfram Language's built-in method and 600 times faster than the pure Wolfram Language implementation.

The timing of a Compile-based implementation is similar to that of CUDA. This generates C code from a Compile statement, with integer overflow detection disabled.
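A sketch of such a compiled generator follows; CompilationTarget -> "C" assumes a C compiler is installed.

    parkMillerCompiled = Compile[{{n, _Integer}},
        NestList[Mod[16807 #, 2147483647] &, 1, n],
        CompilationTarget -> "C",
        RuntimeOptions -> {"CatchMachineIntegerOverflow" -> False}   (* skip overflow checks for speed *)
    ];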

This finds the timing. Notice that there is little difference.

If you ignore the time it takes for memory allocation (and you can rightly do so in this case, since generated random numbers are usually reused on the GPU), you notice a 10× speed improvement.

Mersenne Twister

Mersenne Twister utilizes shift registers to generate random numbers. Because the implementation is simple, it maps well to the GPU. The following file contains the implementation.

This loads the "MersenneTwister" function from the file.

Here, the Mersenne Twister input parameters are defined. The twister requires seed values. Those values can be computed offline and stored in a file, or they can be generated by the Wolfram Language. The latter is shown here.

This allocates the output memory. Since the output will be overwritten, there is no need to load memory from the Wolfram Language onto the GPU.

This invokes CUDAFunction with parameters.

The output can be plotted to show it is random.

A Wolfram Language function can be written that takes the number of random numbers to be generated as input, performs the required allocations and setting of parameters, and returns the random output memory.
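A sketch of such a wrapper is shown below. The loaded kernel handle mersenneTwister, its argument order, and the seed format are assumptions for illustration only.

    cudaMersenneRandom[n_Integer] := Module[{seeds, out},
        seeds = RandomInteger[{1, 2^31 - 1}, 4096];    (* per-generator seeds computed on the CPU *)
        out = CUDAMemoryAllocate["Float", n];          (* device memory; never copied from the CPU *)
        mersenneTwister[seeds, out, n];                (* hypothetical handle to the loaded kernel *)
        out                                            (* return the CUDAMemory handle *)
    ]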

This generates a plot of the first 10,000 elements.

The following measures the time it takes for the random number generator to generate 100 million numbers.

This is on par with the Wolfram Language's random number generator timings.

Considering that random numbers are usually inputs to other computations, a user may see a performance increase in the overall algorithm even if the Wolfram Language's timings are superior to the CUDA implementation's.

Quasi-Random Number Generators

This section describes quasi-random number generators. Unlike pseudorandom sequences, quasi-random sequences are not statistically independent; they have an underlying structure that is sometimes useful in numerical methods. For instance, these sequences typically provide faster convergence in multidimensional Monte Carlo integration.

Halton Sequence

The Halton sequence generates quasi-random numbers that are uniform on the unit interval. While the code works in arbitrary dimensions, only the one-dimensional case, the van der Corput sequence, is discussed. This is adequate for comparison.

The resulting numbers of the Halton (or van der Corput) sequence are deterministic but have low discrepancy over the unit interval. Because they fill the space more uniformly, they are preferred to pseudorandom number generators in some applications, such as Monte Carlo integration.

For a given nonnegative integer n and base b, write its base-b digit expansion as

    n = d_0 + d_1 b + d_2 b^2 + ... + d_(L-1) b^(L-1)

The one-dimensional Halton (or van der Corput) value of n in base b is then

    g_b(n) = d_0/b + d_1/b^2 + ... + d_(L-1)/b^L

The sequence of length N is then written as

    g_b(1), g_b(2), ..., g_b(N)

In other words, given a number with base-b digit representation d_(L-1) ... d_1 d_0, the van der Corput sequence mirrors the digits across the radix point, so that its sequence value is 0.d_0 d_1 ... d_(L-1) in base b.

In the Wolfram Language, you can find the sequence using IntegerDigits.

Setting the base to 2, you can calculate the first 1000 elements in the sequence.
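A minimal sketch of this computation using IntegerDigits:

    (* van der Corput value of n in base b: reverse the base-b digits across the radix point *)
    vanDerCorput[n_Integer, b_Integer] := Module[{d = IntegerDigits[n, b]},
        FromDigits[Reverse[d], b]/b^Length[d]
    ]
    halton = N[vanDerCorput[#, 2] & /@ Range[1000]];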

This plots the result; notice how it fills the space uniformly.

A property of low-discrepancy sequences is that each new element is placed with knowledge of where the previous elements lie, filling the gaps between them. This can be shown with Manipulate.

For the CUDA implementation, you have to implement your own version of IntegerDigits, but this is not difficult. First, load the implementation source code.

This loads CUDAFunction.

This allocates memory for the output. Here, only 1024 random numbers are generated.

This runs the function for dimension 1.

This plots the results.

Sobol Sequence

The Sobol sequence is also a low-discrepancy sequence. It is implemented in the following CUDA file.

This loads the function, using {64,1} as the block dimension.

Here, the input parameters are loaded. The direction vectors needed by the Sobol sequence are precomputed and stored in a file.

This executes the Sobol function, passing parameters.

This plots the first 10,000 values in the sequences. Note that the space is filled evenly with points (a property of quasi-random number generators).

When complete, the memory must be unloaded.

Hashing Random Number Generators

Random number generators based on hashing produce random numbers of lesser quality, but they produce them quickly. For many applications, they are more than adequate.

Tiny Encryption Algorithm Hashing

The Tiny Encryption Algorithm (TEA) is a very simple hashing algorithm implemented in the following file.

Load CUDAFunction.

This allocates memory for the output.

This calls CUDAFunction.

This plots the result.

This deletes allocated memory.

MD5 Hashing

Other general hashing methods can be used as random number generators. Here is an implementation of the MD5 algorithm, a well-known hashing algorithm.

This loads CUDAFunction from the source.

This loads the output memory.

This calls CUDAFunction.

This plots the results.

This deletes allocated memory.

Normal Random Numbers

The following algorithms generate normally distributed random numbers.

Inverse Cumulative Normal Distribution

The following implements a way to generate normally distributed random numbers by applying the inverse cumulative normal distribution to uniformly distributed random numbers.

This loads CUDAFunction.

Allocate memory for 100,000 random numbers.

This calls CUDAFunction.

This gets the memory into the Wolfram Language.

This plots the result, using Histogram.

This unloads the memory.

Box-Muller

The Box-Muller transform is a method of generating normally distributed numbers from a set of uniformly distributed random numbers. The CUDA implementation is found in the following file.
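A CPU sketch of the transform itself: two independent uniform samples map to two independent standard normal samples.

    boxMuller[{u1_, u2_}] := Sqrt[-2 Log[u1]] {Cos[2 Pi u2], Sin[2 Pi u2]}
    Histogram[Flatten[boxMuller /@ RandomReal[{0, 1}, {10^4, 2}]]]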

This loads CUDAFunction.

This sets the input arguments.

Use the Mersenne Twister (defined two sections ago) to generate a list of uniformly distributed random numbers.

Transform the list of uniform random numbers to normally distributed random numbers.

You can see the bell curve when using Histogram.

This deletes allocated memory.

Applications of Random Number Generators

Random numbers have applications in many areas. Here, two main applications are presented: Monte Carlo integration (approximating π and integrating an arbitrary function) and simulating Brownian motion.

Approximating π

The value of π can be approximated using Monte Carlo integration. First, uniformly distributed random points are generated in the unit square. Then the number of points inside the first quadrant of the unit circle is counted and divided by the total number of points. This gives an approximation of π/4.
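For reference, the same estimator can be written on the CPU in a few lines.

    n = 10^6;
    pts = RandomReal[1, {n, 2}];
    4.*Count[pts, {x_, y_} /; x^2 + y^2 <= 1]/n   (* fraction inside the quarter disk, times 4 *)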

This implements reduction, counting the number of points in a unit circle.

This loads CUDAFunction.

Use 1,000,000 points.

Generate the random numbers using the Mersenne Twister algorithms discussed previously.

This allocates the output memory.

This performs the computation.

This gets the output memory.

The result agrees with the Wolfram Language.

The timing is considerably faster.

Compare this with the timing in the Wolfram Language.

Monte Carlo Integration

Monte Carlo integration finds its way into many areas. Here, Sqrt[x] is integrated from 0 to 1.
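For reference, a CPU version of the same estimate, alongside NIntegrate; the exact value of the integral is 2/3.

    Mean[Sqrt[RandomReal[1, 10^6]]]    (* mean of Sqrt over uniform samples approximates the integral *)
    NIntegrate[Sqrt[x], {x, 0, 1}]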

This loads the function.

Use the Sobol quasi-random number generator for random numbers.

Then use the number of generated random numbers as the length.

This allocates memory for the output.

This calls the function.

This checks whether the first few elements make sense.

You now need to sum the output. This can be done using CUDAFold.

The result agrees with NIntegrate.

Unload the allocated memory.

Brownian Motion

This allocates memory for the simulation.

The values of the pseudorandom sequence are normally distributed.
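A CPU sketch of the idea: a Brownian path is the cumulative sum of independent normal increments.

    ListLinePlot[Accumulate[RandomVariate[NormalDistribution[0, 1], 1000]]]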

Code Generation

Since CUDALink is integrated into the Wolfram Language, you can use Wolfram Language features like SymbolicC to generate CUDA kernel code. If you have not done so already, import CUDALink.

For this example, you need the SymbolicC package.

This defines some common Wolfram Language constructs, translating them to their SymbolicC representation.

To test, pass in a Wolfram Language statement and get the SymbolicC output.

To convert to a C string, use the ToCCodeString function.
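A minimal sketch of such a translation and its conversion to C source is shown below; the rule set covers only a few constructs and is an illustrative assumption, so the actual toSymbolicC definition may differ.

    Needs["SymbolicC`"]
    toSymbolicC[Plus[a_, b_]]  := COperator[Plus, {toSymbolicC[a], toSymbolicC[b]}]
    toSymbolicC[Times[a_, b_]] := COperator[Times, {toSymbolicC[a], toSymbolicC[b]}]
    toSymbolicC[s_Symbol]      := SymbolName[s]
    toSymbolicC[n_?NumberQ]    := n

    (* convert to a C string, producing C source such as "y = 2 + x" *)
    ToCCodeString[CAssign["y", toSymbolicC[x + 2]]]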

The above allows you to write a function that takes a Wolfram Language function (pure or not) and generates the appropriate CUDA kernel source.

Passing a pure function to CUDAMapSource returns the kernel code.

This defines a function that, given a Wolfram Language function and an input list, generates the CUDA kernel code, loads the code as a CUDAFunction, runs the CUDAFunction, and returns the result.

You can test myCUDAMap with a pure function that adds 2 to each element in an input list.

Any construct translated by toSymbolicC is supported by myCUDAMap. Here, each element is squared.

This performs Monte Carlo integration.

Functions can be defined and passed into myCUDAMap. Here, a color negation function is defined.

Invoke the color negation function.

The above is a simple example, but it can be used as a starting point for more complicated ones.