Applications
This feature is not supported on the Wolfram Cloud.

Because GPUs are SIMD machines, exploiting CUDA's potential requires posing the problem in a SIMD manner. Computation that can be partitioned so that each thread computes one element independently is ideal for the GPU.

Some algorithms either cannot be written in parallel or cannot be used on CUDA (due to architecture constraints). In those cases, research is ongoing into alternative methods of using the GPU to perform those computations.

This section showcases some uses of CUDA programming inside the Wolfram Language. All the following examples use CUDAFunctionLoad, which loads CUDA source, binaries, or libraries into the Wolfram Language.

CUDAFunctionLoad    load a CUDA function into the Wolfram Language

CUDAFunctionLoad allows you to load CUDA source, binaries, or libraries into the Wolfram Language.
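
For reference, here is a minimal sketch of this workflow, closely following the basic example in the CUDAFunctionLoad documentation: a kernel is compiled from a source string and then called like an ordinary Wolfram Language function.

Needs["CUDALink`"]

src = "
__global__ void addTwo(mint *arr, mint length) {
    int index = threadIdx.x + blockIdx.x*blockDim.x;
    if (index < length)
        arr[index] += 2;
}";

(* the list gives the argument types; the final 256 is the thread block size *)
addTwo = CUDAFunctionLoad[src, "addTwo", {{_Integer, "InputOutput"}, _Integer}, 256];

addTwo[{1, 2, 3, 4}, 4]  (* → {{3, 4, 5, 6}} *)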

Image Processing

This section contains examples of CUDA applications that perform image processing operations. CUDALink also provides a number of built-in image processing functions, such as CUDAImageConvolve, CUDABoxFilter, CUDAErosion, CUDADilation, CUDAOpening, CUDAClosing, and CUDAImageAdd.

Image Binarize

Binarize takes an input image and outputs a binary image with pixels set to white if above a threshold and black otherwise.
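
As an illustration of the steps that follow, here is a minimal, hypothetical sketch of such a binarize kernel. The kernel name, argument layout, and the threshold of 128 are assumptions for the sketch, not the exact code used in this tutorial.

Needs["CUDALink`"]

binarizeSrc = "
__global__ void binarize(unsigned char *img, mint threshold, mint length) {
    int index = threadIdx.x + blockIdx.x*blockDim.x;
    if (index < length)
        img[index] = img[index] > threshold ? 255 : 0;
}";

(* hypothetical kernel; \"Byte\" keeps the image data as 8-bit values on the GPU *)
binarize = CUDAFunctionLoad[binarizeSrc, "binarize",
   {{"Byte", "InputOutput"}, _Integer, _Integer}, 256];

img = ColorConvert[ExampleData[{"TestImage", "Lena"}], "Grayscale"];
data = Flatten @ ImageData[img, "Byte"];
Image[Partition[First @ binarize[data, 128, Length[data]],
   First @ ImageDimensions[img]], "Byte"]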

If you have not done so already, import the CUDALink application.

In[1]:=

This defines the input image. To reduce the memory footprint on the GPU, a byte representation of the image is used.

In[2]:=

This loads the CUDAFunction. The byte type is signified in the parameter list.

In[3]:=
Out[3]=

This defines the input image and allocates memory for the output.

In[4]:=

This calls the binarize function with the chosen threshold value.

In[8]:=
Out[8]=

This displays the output image.

In[9]:=
Out[9]=

The result agrees with the Wolfram Language.

In[10]:=
Out[10]=

This unloads the memory allocated.

In[11]:=

Recursive Gaussian

Recursive Gaussian filtering approximates a Gaussian filter on input data, sacrificing some accuracy in favor of speed. The following implements a Deriche-style recursive Gaussian filter.

In[12]:=
Out[12]=

This loads two functions: the recursive Gaussian filter and a transpose. The functions take an RGBA image, whose packed pixel data is passed as integers.

In[13]:=

This specifies the σ value. Recall that the Gaussian is exp(-x²/(2σ²)).

In[15]:=

The following parameters are calculated based on the σ value.

In[16]:=

The following defines the input image and allocates memory for the output.

In[17]:=

This runs the functions: perform a vertical Gaussian pass, transpose, perform another vertical pass, and transpose again to get the result.

In[21]:=

This gets the output image.

In[25]:=
Out[25]=

Memory allocated must be freed.

In[26]:=
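
For comparison, CUDALink's built-in CUDAImageConvolve can perform a (nonrecursive) Gaussian blur directly; a minimal sketch, using an arbitrary radius of 2:

Needs["CUDALink`"]
CUDAImageConvolve[ExampleData[{"TestImage", "Mandrill"}], GaussianMatrix[2]]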

Box Filter

The box filter is an optimized convolution for the case where the kernel is a BoxMatrix. It is implemented in the following file.

In[27]:=
Out[27]=

This loads the CUDA functions needed to perform the box filter.

In[28]:=

This sets the input parameters.

In[30]:=

This sets the radius.

In[34]:=

This calls the functions.

In[35]:=

This gets the image.

In[37]:=
Out[37]=

This unloads the memory allocated.

In[38]:=
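
The built-in CUDABoxFilter offers the same operation without loading a custom kernel; a quick sketch (the radius of 3 is arbitrary):

Needs["CUDALink`"]
CUDABoxFilter[ExampleData[{"TestImage", "Lena"}], 3]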

Image Adjust

This is an implementation of ImageAdjust in CUDA.
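
A minimal, hypothetical sketch of what such a kernel and wrapper might look like. The kernel body below implements a simple brightness/contrast-style adjustment on a grayscale image and is an assumption for illustration, not the exact source used here.

Needs["CUDALink`"]

adjustSrc = "
__global__ void imageAdjust(float *img, float contrast, float brightness, mint length) {
    int index = threadIdx.x + blockIdx.x*blockDim.x;
    if (index < length) {
        float v = contrast*(img[index] - 0.5f) + 0.5f + brightness;
        img[index] = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);  // clamp to [0, 1]
    }
}";

imageAdjustCUDA = CUDAFunctionLoad[adjustSrc, "imageAdjust",
   {{"Float", "InputOutput"}, "Float", "Float", _Integer}, 256];

(* hypothetical wrapper; grayscale image assumed *)
cudaAdjust[img_Image, c_: 1, b_: 0] :=
 With[{data = Flatten @ ImageData[ColorConvert[img, "Grayscale"], "Real"],
   w = First @ ImageDimensions[img]},
  Image[Partition[First @ imageAdjustCUDA[data, c, b, Length[data]], w]]]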

In[39]:=

The CUDAFunction is loaded from the source string, and a float is used for the constant values.

In[40]:=
Out[40]=

This wraps the CUDAFunction in a function with syntax similar to ImageAdjust.

In[41]:=

The function can be used similarly to ImageAdjust.

In[43]:=
Out[43]=

Canny Edge Detection

Canny edge detection combines a dozen or so filters to find edges in an image. The Wolfram Language's EdgeDetect provides similar functionality. The implementation is in the following file.

In[44]:=
Out[44]=

This loads the CUDA functions needed to perform the edge detection.

In[45]:=

This sets the input image along with its parameters.

In[57]:=

Now add host and device tensors to the CUDA manager. These will hold both the input and the output.

In[60]:=

Next, define device-only temporary memory that will be used in the computation. Since Canny edge detection involves many filters, you need quite a few of them.

In[62]:=

This calls the Canny edge-detection functions.

In[68]:=

This views the output as an image.

In[80]:=
Out[80]=

This unloads the memory allocated.

In[81]:=

Linear Algebra and List Processing

This section contains examples of CUDA applications that perform linear algebra operations. Most of the operations discussed can also be performed with the built-in CUDATranspose or CUDADot.

Matrix Transpose

Matrix transposition is essential in many algorithms. CUDALink provides a ready-made implementation in the form of CUDATranspose. Users may wish to implement their own, however.

This loads the CUDAFunction for matrix transposition and defines a new function that takes a real-valued matrix and outputs its transpose.
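
A minimal sketch of such a kernel and wrapper. This is a naive, uncoalesced transpose for clarity (a serious implementation would use shared-memory tiles), and the names are illustrative; double precision support on the device is assumed.

Needs["CUDALink`"]

transposeSrc = "
__global__ void transposeKernel(double *out, double *in, mint width, mint height) {
    int index = threadIdx.x + blockIdx.x*blockDim.x;
    if (index < width*height) {
        int x = index % width;
        int y = index / width;
        out[x*height + y] = in[index];
    }
}";

transposeKernel = CUDAFunctionLoad[transposeSrc, "transposeKernel",
   {{_Real, "Output"}, {_Real, "Input"}, _Integer, _Integer}, 256];

myCUDATranspose[m_?MatrixQ] :=
 With[{h = Length[m], w = Length @ First[m]},
  Partition[First @ transposeKernel[ConstantArray[0., h w], Flatten[m], w, h], h]]

myCUDATranspose[{{1., 2., 3.}, {4., 5., 6.}}]  (* → {{1., 4.}, {2., 5.}, {3., 6.}} *)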

In[82]:=

This sets a matrix.

In[83]:=
Out[83]//MatrixForm=

This transposes the matrix.

In[84]:=
Out[84]//MatrixForm=

The result agrees with the Wolfram Language.

In[85]:=
Out[85]//MatrixForm=

Matrix-Vector Multiplication

Matrix-vector multiplication is a common operation in linear algebra, finite element analysis, etc. This loads CUDAFunction, implementing matrix-vector multiplication.
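
A hypothetical sketch of such a kernel, with one thread per output row; the name and signature are assumptions for illustration.

Needs["CUDALink`"]

mvSrc = "
__global__ void matrixVector(double *out, double *mat, double *vec, mint rows, mint cols) {
    int row = threadIdx.x + blockIdx.x*blockDim.x;
    if (row < rows) {
        double sum = 0.0;
        for (int j = 0; j < cols; j++)
            sum += mat[row*cols + j]*vec[j];
        out[row] = sum;
    }
}";

matrixVector = CUDAFunctionLoad[mvSrc, "matrixVector",
   {{_Real, "Output"}, {_Real, "Input"}, {_Real, "Input"}, _Integer, _Integer}, 256];

m = RandomReal[1, {4, 8}]; v = RandomReal[1, 8];
First @ matrixVector[ConstantArray[0., 4], Flatten[m], v, 4, 8]  (* compare with m.v *)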

In[86]:=

This sets the input matrix and vector.

In[87]:=

This invokes the above-defined function, displaying the result using MatrixForm.

In[89]:=
Out[89]//MatrixForm=

The result agrees with the Wolfram Language.

In[90]:=
Out[90]//MatrixForm=

Matrix-Matrix Multiplication

Matrix-matrix multiplication is a pivotal function in many algorithms. This loads the CUDAFunction from a source file, setting the block dimensions.

In[91]:=

This sets input values and allocates memory for the output.

In[93]:=

This performs the computation.

In[96]:=

This displays the result using MatrixForm.

In[97]:=
Out[97]//MatrixForm=

The result agrees with the Wolfram Language.

In[98]:=
Out[98]//MatrixForm=

This unloads the output buffer.

In[99]:=
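
For production use, the built-in CUDADot performs the same operation; a quick check:

Needs["CUDALink`"]
m1 = RandomReal[1, {512, 512}];
m2 = RandomReal[1, {512, 512}];
Max @ Abs[CUDADot[m1, m2] - m1.m2]  (* should be near zero *)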

Dot Product

The dot product of two vectors is a common operation in linear algebra. This implements a function that takes a set of vectors and gives the dot product of each.
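
As an illustration, here is a hypothetical kernel along these lines: one thread per vector pair, each computing a full dot product. The names and data layout are assumptions, not the tutorial's actual source.

Needs["CUDALink`"]

dotSrc = "
__global__ void vectorDot(double *out, double *a, double *b, mint n, mint dim) {
    int index = threadIdx.x + blockIdx.x*blockDim.x;
    if (index < n) {
        double sum = 0.0;
        for (int j = 0; j < dim; j++)
            sum += a[index*dim + j]*b[index*dim + j];
        out[index] = sum;
    }
}";

vectorDot = CUDAFunctionLoad[dotSrc, "vectorDot",
   {{_Real, "Output"}, {_Real, "Input"}, {_Real, "Input"}, _Integer, _Integer}, 256];

a = RandomReal[1, {50, 3}]; b = RandomReal[1, {50, 3}];
First @ vectorDot[ConstantArray[0., 50], Flatten[a], Flatten[b], 50, 3]
(* compare with MapThread[Dot, {a, b}] *)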

In[100]:=
Out[100]=

This loads the function into the Wolfram Language.

In[101]:=
Out[101]=

Generate 50 random vectors.

In[102]:=

This calls the function, returning the result.

In[105]:=
Out[105]=

The results agree with the Wolfram Language's output.

In[106]:=
Out[106]=

Convex Hull

Computing a convex hull is hard to parallelize on the GPU, so this example takes a hybrid approach to GPU programming: computation that makes sense on the GPU is done there, and the rest is done on the CPU.

This convex hull implementation is a textbook implementation of Andrew's algorithm and is designed to be simple, not efficient.

Define the counterclockwise predicate in the Wolfram Language.

In[107]:=
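
A plausible form of this predicate is the standard orientation test used by Andrew's monotone chain algorithm (a sketch; the actual definition in the cell above may differ):

ccw[{x1_, y1_}, {x2_, y2_}, {x3_, y3_}] :=
 (x2 - x1)(y3 - y1) - (y2 - y1)(x3 - x1) > 0

ccw returns True when the three points make a counterclockwise turn.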

This defines a function that splits the point set to be either above or below a given line connecting the extreme points.

In[108]:=

This loads the above function as a CUDA function.

In[109]:=

Define a Wolfram Language function that takes two split point sets and finds the convex hull for them.

In[110]:=

This calls the above function. Note that the list is sorted with CUDASort before being processed.

In[111]:=

To test, create 20,000 uniformly distributed random points.

In[112]:=

This computes the hull.

In[114]:=

This visualizes the result. Lines are drawn between the hull points.

In[115]:=
Out[115]=

The above algorithm handles the extreme case where all or most points lie on the hull. This generates uniformly distributed points on the unit circle.

In[116]:=

This computes the hull points.

In[118]:=

This visualizes the hull.

In[119]:=
Out[119]=

The above is a prime example of combining Wolfram Language and CUDA programming to parallelize parts of an algorithm that would otherwise be written as purely serial code.

Random Number Generation

Many algorithms, ranging from ray tracing to PDE solving, require random numbers as input. Generating them is, however, a difficult problem on many-core systems, where each core starts with the same state as every other core. To avoid this, parallel random number generators usually seed themselves with entropy values such as the time of day, but such calls are not available in CUDA.

The following section gives three classes of algorithms for generating random numbers. The first is uniform random number generators (both pseudorandom and quasi-random number generators are showcased). The second is generators that exploit the uniform distribution of a hashing function. The third is normal random number generators.

Pseudorandom Number Generators

Pseudorandom number generators are deterministic algorithms that generate numbers that appear to be random. They rely on the seed value to generate further random numbers.

In this section, a simple linear congruential random number generator (the Park-Miller algorithm) is shown, along with the more complex Mersenne Twister.

Park-Miller

The Park-Miller random number generator is defined by the following recurrence equation:

    x_(k+1) = (a x_k) mod m

It can be implemented easily in the Wolfram Language.
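
For instance, a minimal sketch using the "minimal standard" constants a = 16807 and m = 2^31 - 1 (the actual cell may differ):

parkMiller[x_] := Mod[16807 x, 2147483647]
NestList[parkMiller, 1, 5]  (* → {1, 16807, 282475249, 1622650073, 984943658, 1144108930} *)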

In[120]:=

Here, the common values a = 16807 and m = 2^31 - 1 are used. Using NestList, you can generate a list of 1000 numbers and plot them.

In[121]:=
Out[121]=

Here is the timing to generate 1 million numbers.

In[122]:=
Out[122]=

An alternative is to use a congruential generator via the Method option in SeedRandom; this can be used in the same manner as before.
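
A sketch of this approach, assuming the "Congruential" method (see the SeedRandom reference for the available methods):

SeedRandom[1234, Method -> "Congruential"];
RandomReal[1, 5]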

In[123]:=
Out[123]=

Compared to the Wolfram Language implementation, this is around 300 times faster.

In[124]:=
Out[124]=

The CUDA implementation is similar to the one written in the Wolfram Language. The implementation is distributed along with CUDALink, and the location is shown below.

In[125]:=
Out[125]=

This loads the CUDA function into the Wolfram Language.

In[126]:=
Out[126]=

This allocates the output memory.

In[127]:=
Out[127]=

This calls the CUDAFunction.

In[128]:=
Out[128]=

The result is random.

In[129]:=
Out[129]=

If you measure the timing, you notice that it is twice as fast as the Wolfram Language's built-in method, and 600 times faster than the pure Wolfram Language implementation.

In[130]:=
Out[130]=
In[131]:=
Out[131]=

The timing of a Compile version is similar to that of CUDA. This generates C code from a Compile statement, with integer overflow detection turned off.
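
A sketch of such a compiled generator; the RuntimeOptions setting below disables machine-integer overflow checks, and a C compiler must be available (the exact options in the original cell may differ):

cf = Compile[{{x0, _Integer}, {n, _Integer}},
   NestList[Mod[16807 #, 2147483647] &, x0, n],
   CompilationTarget -> "C",
   RuntimeOptions -> {"CatchMachineIntegerOverflow" -> False}];

cf[1, 5]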

In[132]:=

This finds the timing. Notice that there is little difference.

In[133]:=
Out[133]=

If you ignore the time it takes for memory allocation (and you can rightly do so in this case, since generated random numbers are usually reused on the GPU), you notice a 10× speed improvement.

In[134]:=
Out[136]=

Mersenne Twister

Mersenne Twister utilizes shift registers to generate random numbers. Because the implementation is simple, it maps well to the GPU. The following file contains the implementation.

In[137]:=
Out[137]=

This loads the function from the file.

In[138]:=

Here, the Mersenne Twister input parameters are defined. The twister requires seed values. Those values can be computed offline and stored in a file, or they can be generated by the Wolfram Language. The latter is shown here.

In[139]:=

This allocates the output memory. Since the output will be overwritten, there is no need to load memory from the Wolfram Language onto the GPU.

In[146]:=
Out[146]=

This invokes the CUDAFunction with its parameters.

In[147]:=
Out[147]=

The output can be plotted to show it is random.

In[148]:=
Out[148]=

A Wolfram Language function can be written that takes the number of random numbers to be generated as input, performs the required allocations and setting of parameters, and returns the random output memory.
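
A sketch of such a wrapper, assuming a previously loaded kernel named mersenneTwister that fills a buffer with n uniform numbers (the kernel name and argument list are assumptions):

randomGPU[n_Integer?Positive] :=
 Module[{out = CUDAMemoryAllocate["Float", n]},
  mersenneTwister[out, n];  (* hypothetical kernel call *)
  out]

The returned CUDAMemory stays on the GPU; call CUDAMemoryGet to read it back and CUDAMemoryUnload when finished.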

In[149]:=

This generates a plot of the first 10,000 elements.

In[150]:=
Out[150]=

The following measures the time it takes for the random number generator to generate 100 million numbers.

In[151]:=
Out[151]=

This is on par with the Wolfram Language's random number generator timings.

In[152]:=
Out[152]=

Considering that the random numbers are usually inputs to other GPU computations, a user may see a performance increase in the overall algorithm even when the Wolfram Language's timings are superior to the CUDA implementation's.

Quasi-Random Number Generators

This section describes quasi-random number generators. Unlike pseudorandom number generators, these sequences are not statistically random; they have an underlying structure that is sometimes useful in numerical methods. For instance, these sequences typically provide faster convergence in multidimensional Monte Carlo integration.

Halton Sequence

The Halton sequence generates quasi-random numbers that are uniform on the unit interval. While the code works in arbitrary dimensions, only the one-dimensional van der Corput sequence is discussed here; this is adequate for comparison.

The resulting numbers of the Halton (or van der Corput) sequence are deterministic but have low discrepancy over the unit interval. Because they fill the space uniformly, they are preferred to pseudorandom number generators in some applications, such as Monte Carlo integration.

For a given integer n with base-b representation

    n = d_k b^k + d_(k-1) b^(k-1) + … + d_1 b + d_0,

the one-dimensional Halton (or van der Corput) value in base b is

    g_b(n) = d_0 b^(-1) + d_1 b^(-2) + … + d_k b^(-(k+1)).

The sequence of length N is then g_b(1), g_b(2), …, g_b(N). In other words, given a number with base-b digits d_k d_(k-1) … d_1 d_0, the van der Corput sequence mirrors the number across the decimal point, so that its sequence value is 0.d_0 d_1 … d_k in base b.

In the Wolfram Language, you can find the sequence using IntegerDigits.
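
For instance, a minimal sketch that reverses the base-b digits across the radix point:

vanDerCorput[n_Integer, b_Integer: 2] :=
 N @ With[{d = Reverse @ IntegerDigits[n, b]}, d . b^-Range[Length[d]]]

vanDerCorput /@ Range[8]  (* → {0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625} *)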

In[153]:=

Setting the base to 2, you can calculate the first 1000 elements in the sequence.

In[154]:=

This plots the result; notice how it fills the space uniformly.

In[155]:=
Out[155]=

A property of low-discrepancy sequences is that the next elements in the sequence know where the previous elements are positioned. This can be shown with Manipulate.

In[156]:=
Out[156]=

For the CUDA implementation, you have to implement your own version of IntegerDigits, but this is not difficult. First, load the implementation source code.

In[157]:=
Out[157]=

This loads CUDAFunction.

In[158]:=
Out[158]=

This allocates memory for the output. Here, only 1024 random numbers are generated.

In[159]:=
Out[159]=

This runs the function for dimension 1.

In[160]:=
Out[160]=

This plots the results.

In[161]:=
Out[161]=

Sobol Sequence

The Sobol sequence is also a low-discrepancy sequence. It is implemented in the following CUDA file.

In[162]:=
Out[162]=

This loads the function, specifying the block dimension.

In[163]:=
Out[163]=

Here, the input parameters are loaded. The direction vectors needed by the Sobol sequence are precomputed and stored in a file.

In[164]:=

This executes the Sobol function, passing the parameters.

In[169]:=

This plots the first 10,000 values in the sequences. Note that the space is filled evenly with points (a property of quasi-random number generators).

In[170]:=
Out[170]=

When complete, the memory must be unloaded.

In[171]:=

Niederreiter Sequence

The Niederreiter sequence is another commonly used low-discrepancy sequence. It is implemented in the following CUDA file.

In[172]:=
Out[172]=

This loads the seed data for the random number generator.

In[173]:=

This loads CUDAFunction.

In[174]:=
Out[174]=

This allocates the memory needed for computation and sets the proper parameters. Here, the sequence for dimension 3 is generated, and the resolution is set.

In[175]:=

This runs the function.

In[181]:=

This gets the output from the GPU.

In[182]:=

This plots the first 10,000 values in the sequences. Note that the space is filled evenly, as with the Sobol sequence.

In[183]:=
Out[183]=

When complete, the memory must be unloaded.

In[184]:=

Hashing Random Number Generators

Random number generators that depend on hashing produce random numbers of lesser quality, but they produce them quickly. For many applications, they are more than adequate.
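
The idea can be sketched in the Wolfram Language itself: hash each index and rescale the hash to the unit interval. Hash and the "SHA" method here are illustrative choices, not what the CUDA code below uses.

hashRandom[i_] := N[Mod[Hash[i, "SHA"], 2^32]/2^32]
ListPlot[hashRandom /@ Range[1000]]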

Tiny Encryption Algorithm Hashing

The Tiny Encryption Algorithm (TEA) is a very simple hashing algorithm implemented in the following file.

In[185]:=
Out[185]=

Load CUDAFunction.

In[186]:=
Out[186]=

This allocates memory for the output.

In[187]:=
Out[187]=

This calls CUDAFunction.

In[188]:=
Out[188]=

This plots the result.

In[189]:=
Out[189]=

This deletes allocated memory.

In[190]:=

MD5 Hashing

Other general hashing methods can be used as random number generators. Here is an implementation based on MD5, a well-known hashing algorithm.

In[191]:=
Out[191]=

This loads CUDAFunction from the source.

In[192]:=
Out[192]=

This loads the output memory.

In[193]:=
Out[193]=

This calls CUDAFunction.

In[194]:=
Out[194]=

This plots the results.

In[195]:=
Out[195]=

This deletes allocated memory.

In[196]:=

Normal Random Numbers

The following algorithms generate normally distributed random numbers.

Inverse Cumulative Normal Distribution

The inverse cumulative normal distribution maps uniform random numbers on the unit interval to normally distributed ones. The following file implements this approach.
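
The transformation itself can be sketched in the Wolfram Language (a CUDA kernel would typically apply a rational approximation of the same map):

uniform = RandomReal[1, 10000];
normal = InverseCDF[NormalDistribution[0, 1], uniform];
Histogram[normal]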

In[197]:=
Out[197]=

This loads CUDAFunction.

In[198]:=
Out[198]=

Allocate memory for 100,000 random numbers.

In[199]:=

This calls CUDAFunction.

In[201]:=
Out[201]=

This gets the memory into the Wolfram Language.

In[202]:=

This plots the result, using Histogram.

In[203]:=
Out[203]=

This unloads the memory.

In[204]:=

Box-Muller

Box-Muller is a method of generating normally distributed numbers from a set of uniformly distributed random numbers. The CUDA implementation is found in the following file.
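
The transform takes pairs of uniform numbers (u1, u2) to pairs of independent normal numbers; a minimal sketch:

boxMuller[{u1_, u2_}] := Sqrt[-2 Log[u1]] {Cos[2 Pi u2], Sin[2 Pi u2]}
Histogram[Flatten[boxMuller /@ RandomReal[1, {5000, 2}]]]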

In[205]:=
Out[205]=

This loads CUDAFunction.

In[206]:=
Out[206]=

This sets the input arguments.

In[207]:=

Use the Mersenne Twister (defined two sections ago) to generate a list of uniformly distributed random numbers.

In[212]:=
Out[212]=

Transform the list of uniform random numbers to normally distributed random numbers.

In[213]:=
Out[213]=

You can see the bell curve when using Histogram.

In[214]:=
Out[214]=

This deletes allocated memory.

In[215]:=

Applications of Random Number Generators

Random numbers have applications in many areas. Here, two main applications are presented: Monte Carlo integration (approximating π and integrating an arbitrary function) and simulating Brownian motion.

Approximating π

The value of π can be approximated using Monte Carlo integration. First, uniformly distributed random points are generated in the unit square. Then the number of points inside the first quadrant of the unit circle is counted and divided by the total number of points. This ratio approximates π/4.
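
A quick CPU-side sketch of the estimate:

n = 10^6;
pts = RandomReal[1, {n, 2}];
4. Count[pts, p_ /; p . p <= 1]/n  (* ≈ 3.14 *)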

This implements a reduction that counts the number of points inside the unit circle.

In[216]:=
Out[216]=

This loads CUDAFunction.

In[217]:=
Out[217]=

Use 1,000,000 points.

In[218]:=

Generate the random numbers using the Mersenne Twister algorithm discussed previously.

In[221]:=
Out[221]=

This allocates the output memory.

In[222]:=
Out[222]=

This performs the computation.

In[223]:=
Out[223]=

This gets the output memory.

In[224]:=
Out[224]=

The result agrees with the Wolfram Language.

In[225]:=
Out[225]=

The timing is considerably faster.

In[226]:=
Out[226]=

Compare with the timing in the Wolfram Language.

In[227]:=
Out[227]=

Monte Carlo Integration

Monte Carlo integration finds its way into many areas. Here, Sqrt[x] is integrated from 0 to 1.
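
As a reference point, a CPU-side Monte Carlo estimate of the same integral (the exact value is 2/3):

n = 10^6;
Mean[Sqrt[RandomReal[1, n]]]  (* ≈ 0.6667 *)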

In[228]:=

This loads the function.

In[229]:=
Out[229]=

Use the Sobol quasi-random number generator for the random numbers.

In[230]:=

Then use the number of random numbers generated as the length.

In[238]:=
Out[238]=

This allocates memory for the output.

In[239]:=
Out[239]=

This calls the function.

In[240]:=
Out[240]=

This checks whether the first few elements make sense.

In[241]:=
Out[241]=

You now need to sum the output. This can be done using CUDAFold.
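
A sketch of the summation, assuming the kernel's output lives in a CUDAMemory handle named outMem (a hypothetical name) and n is the sample count:

total = CUDAFold[Plus, 0., outMem];
total/n  (* the Monte Carlo estimate *)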

In[242]:=
Out[242]=

The result agrees with NIntegrate.

In[243]:=
Out[243]=

Unload the allocated memory.

In[244]:=

Brownian Motion

This allocates memory for the simulation.

In[245]:=
Out[245]=
In[246]:=
In[247]:=

The values of the pseudorandom sequence are normally distributed.

In[248]:=
Out[248]=
In[249]:=
In[250]:=
Out[250]=
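
For reference, a minimal CPU-side sketch of the same idea: a Brownian path is the running sum of independent normal increments.

increments = RandomVariate[NormalDistribution[0, 1], 10^4];
ListLinePlot[Accumulate[increments]]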

Code Generation

Since CUDALink is integrated into the Wolfram Language, you can use Wolfram Language features like SymbolicC to generate CUDA kernel code. If you have not done so already, import CUDALink.

In[251]:=

For this example, you need the SymbolicC package.

In[252]:=

This defines some common Wolfram Language constructs, translating them to their SymbolicC representation.
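
A sketch of such translation rules, covering only a small subset of constructs; the function name toCCode is illustrative, not the one used in the cell below.

Needs["SymbolicC`"]

SetAttributes[toCCode, HoldAll];  (* keep assignments and symbols unevaluated *)
toCCode[Set[lhs_, rhs_]] := CAssign[toCCode[lhs], toCCode[rhs]]
toCCode[Plus[p_, q_]] := COperator[Plus, {toCCode[p], toCCode[q]}]
toCCode[Times[p_, q_]] := COperator[Times, {toCCode[p], toCCode[q]}]
toCCode[s_Symbol] := ToString[s]
toCCode[n_?NumberQ] := n

ToCCodeString @ toCCode[x = 2 a + 1]  (* → "x = 2 * a + 1" *)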

In[253]:=

To test, pass in a Wolfram Language statement and get the SymbolicC output.

In[271]:=
Out[271]=

To convert to a C string, use the ToCCodeString function.

In[272]:=
Out[272]=

The above allows you to write a function that takes a Wolfram Language function (pure or not) and generates the appropriate CUDA kernel source.

In[273]:=

Passing a pure function to the generator returns the kernel code.

In[276]:=
Out[276]=

This defines a function that, given a Wolfram Language function and an input list, generates the CUDA kernel code, loads the code as a CUDAFunction, runs the CUDAFunction, and returns the result.

In[277]:=

You can test with a pure function that adds 2 to each element in an input list.

In[279]:=
Out[279]=

Any construct translated by the SymbolicC rules is supported. Here, each element is squared.

In[280]:=
Out[280]=

This performs Monte Carlo integration.

In[281]:=
Out[281]=

Functions can be defined and passed in. Here, a color negation function is defined.

In[282]:=

Invoke the color negation function.

In[283]:=
Out[283]=

The above is a simple example, but it can serve as a seed for more complicated ones.