Examples
This section shows a number of examples of the Wolfram Compiler. They are chosen to show how to get the most from the compiler and to illustrate some of its unique features. They are not always the best way to solve a particular problem, but where there are better ways, these are usually noted.
Reversing an Array
This example will show how the elements in an array can be reversed.
The following is a dynamic array data structure:
Reversing the elements is quite simple using the "SwapPart" operation:
When the function runs on the data structure, the elements are reversed. Note that this is done in place. The data structure is mutated, and no copy is made:
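As a point of reference, a minimal sketch of this kind of in-place reversal, using the documented "SwapPart", "Length" and "Elements" operations (the name reverseElements is hypothetical):

    reverseElements[ds_] :=
     Module[{i = 1, j = ds["Length"]},
      While[i < j,
       ds["SwapPart", i, j];
       i++; j--];
      ds]

    da = CreateDataStructure["DynamicArray", Range[10]];
    reverseElements[da];
    da["Elements"]
    (* {10, 9, 8, 7, 6, 5, 4, 3, 2, 1} *)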
It is quite straightforward to make a compiled version using DownValuesFunction:
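One plausible form for the compiled version, assuming the downvalue definition sketched above; the element type and the way DownValuesFunction is invoked here are assumptions, and the form used in the original may differ:

    reverseElementsCompiled = FunctionCompile[
      Function[Typed[ds, "DynamicArray"::["MachineInteger"]],
       DownValuesFunction[reverseElements][ds]]];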
This applies the function, and the elements get reversed again:
To compare timings, a larger data structure is created:
This runs the evaluated function:
Here the compiled version is run:
You can see that the compiled version is significantly faster.
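A sketch of such a comparison, reusing the hypothetical names from above:

    big = CreateDataStructure["DynamicArray", RandomInteger[10^6, 10^6]];
    RepeatedTiming[reverseElements[big];]
    RepeatedTiming[reverseElementsCompiled[big];]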
Polymorphic
It might be useful to write a polymorphic version of the reversal, perhaps one that works for several data structures.
This would need a function declaration. The following declaration works for dynamic and fixed arrays, as well as for different element types. The constraint could be written with a type class, but there is no convenient type class that contains just dynamic and fixed arrays. Rather than create a new type class, an alternatives constraint is given:
This compiles the code for a dynamic array:
A dynamic array to test the code:
When the function operates on the array, the elements are reversed:
This compiles code for reversing a fixed array:
This example has shown that it is relatively easy to extend the operations for a data structure, adding new optimized functionality as desired.
Prime Testing
This example shows how data can be prepared in the Wolfram Language and then used in compiled code. It will use an example of primality testing, which of course is well handled in the Wolfram Language. Despite there being other ways to solve the problem, this is an interesting example, and the techniques used here are generally applicable.
This creates a data table where the entries are 1 if the number is prime and 0 otherwise. It might seem strange to use primality testing to make data for primality testing, but the table could be built with a slow method and then used to provide a faster one:
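A plausible way to build such a table (the size used here is an arbitrary choice):

    primeTable = Table[Boole[PrimeQ[i]], {i, 0, 2^16 - 1}];
    Take[primeTable, 12]
    (* {0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1} *)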
This uses the data to create a data table of a "CArray" of machine integers using TypeHint to set the data. It then looks up in the table using FromRawPointer indexed by the test integer. If the entry is 1, then the test integer is a prime:
This verifies the result for all numbers in the table:
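One way to carry out such a verification, assuming the compiled lookup is named primeLookup and returns True or False (both the name and the return value are assumptions):

    And @@ Table[primeLookup[k] === PrimeQ[k], {k, 0, 2^16 - 1}]
    (* True, if the table and the lookup agree with PrimeQ *)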
This can be improved upon, because it is testing even numbers, which, other than 2, are not prime. Here the data is chosen to be of the given size, but starting at 3:
The code now has to subtract 3 and then divide by 2; it does the latter with BitShiftRight of 1. These will be very fast operations:
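The index arithmetic on its own: the odd numbers 3, 5, 7, ... map to positions 0, 1, 2, ...:

    Table[{n, BitShiftRight[n - 3, 1]}, {n, 3, 13, 2}]
    (* {{3, 0}, {5, 1}, {7, 2}, {9, 3}, {11, 4}, {13, 5}} *)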
So this has helped a bit. But it can be made cleverer still. The data table uses a 64-bit integer to store each 0 or 1 value; it would be better to store 64 such values in each integer.
The code that follows will do this, except that it will work with blocks of 8 bits, stored as the "UnsignedInteger8" type.
First the data for the table is created:
Here the first element encodes the first eight numbers. You can see the bits with the following:
This will be the same as the first eight numbers generated:
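A sketch of this kind of packing in plain Wolfram Language; whether the low or the high bit comes first within each byte is an assumption here:

    bits = Boole[PrimeQ[Range[3, 3 + 2*63, 2]]];                (* flags for the odd numbers 3, 5, 7, ... *)
    bytes = FromDigits[Reverse[#], 2] & /@ Partition[bits, 8];  (* low bit first within each byte *)
    Reverse[IntegerDigits[First[bytes], 2, 8]] === Take[bits, 8]
    (* True *)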
The code is as below. The index is computed as previously, but now the code has to pick out the specific bit from the value in the table. The code is only interested in the bottom eight bits, and does a left bit shift to determine which bit in the table value to use. If these operations seem a little confusing, some simple experiments in the Wolfram Language will help to make them clear:
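For example, a small experiment with the bit operations; bitSet is a hypothetical helper that tests bit bitPos (counted from the low bit) of value:

    bitSet[value_, bitPos_] := BitAnd[value, BitShiftLeft[1, bitPos]] != 0
    bitSet[2^^10100110, 2]
    (* True: the third-lowest bit is set *)
    bitSet[2^^10100110, 3]
    (* False *)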
Here, the value of the compiled code is compared with the built-in Wolfram Language functionality. The results are the same:
It would be better to run this for a larger table:
This compares the values with the built-in PrimeQ; they give the same result. This works for numbers between 3 and 2097153:
The code shown here is very similar to that in a bit vector implementation. In fact, another way to implement this would be simply to use the bit vector data structure. However, the purpose of the example is to show techniques for generating fast compiled code, while using the Wolfram Language both to help understand how it works and to actually generate the code.
2D Coordinates
This example provides support for two-dimensional coordinates. The declarations in this example are all placed into a compiled component called "Coordinate2DExample".
First, a type called "Coordinate2D" will be declared. The type is set to be a value (by turning off reference semantics), and since it will contain only native integers and reals, it will have no issues for memory management. It will have two fields to hold the x and y values. It is also set to be polymorphic in its contents.
Coordinates can, of course, be held in packed arrays. This type has an advantage in that it is known to be for two dimensions, so checking the size is never necessary. Also, the data is not held in a reference, so working with it does not require any reading from memory:
This declares a constructor for the type:
Serialization
In order to pass the type between compiled code and the Wolfram evaluator, it will need serialization code.
First, it is useful to add a utility function to extract the type of the data in a coordinate. As is typical, this is a declaration of Information for "ElementType":
Now, a function that returns the element type of a coordinate can be compiled as below:
This shows that the contents are "Real64":
These are the declarations of the three serialization functions. The first serializes a coordinate to an expression. The second verifies that an expression can be converted back to a coordinate. The last actually does the conversion.
The most complicated is the last. This uses CurrentCompiledFunctionData and Information to extract the expected element type of the coordinate. Deserialization extracts the two coordinate values from the expression; if they have the expected type, the conversion succeeds:
Casting
It is quite convenient to declare a cast from coordinate instances to expressions. This is done as below:
There are a number of features that use Cast to an expression. For example, now Print will work as shown in the following:
Computation
This section adds some computation functionality. The following declares Plus for two coordinates, adding their x and y values:
With addition, it is possible to carry out more complicated tasks, such as summing the x and the y values in a vector of coordinates. The vector can be held with a "ListVector", and the summation can use Fold:
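A plain-list analogue of the Fold pattern, with each coordinate represented as a pair:

    coords = {{1., 2.}, {3., 4.}, {5., 6.}};
    Fold[{First[#1] + First[#2], Last[#1] + Last[#2]} &, {0., 0.}, coords]
    (* {9., 12.} *)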
Holding a vector of coordinates with a "ListVector" is convenient. However, it has a drawback: every time a vector is passed between the Wolfram evaluator and compiled code, all of the data is copied. A better approach is to use a reference, which avoids the copy. This could be done by putting the data into a "FixedArray" data structure, as in the following:
This creates an instance of the data structure:
As was discussed in the section on working with data structures, an object like this can be worked on with compiled code. For example, this function returns the elements in the fixed array:
This returns the elements as a Wolfram Language list structure:
Other functions can carry out computation. For example, this sums up the x and y elements:
This returns the same result as previously:
As a demonstration, this creates an array of coordinates. It then displays a graphics object:
This creates a rotation matrix and applies that to the array of coordinates:
Now when the coordinates are displayed in a graphics object, it has been rotated:
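For comparison, the same kind of transformation in plain Wolfram Language, with the coordinates held as a list of pairs:

    coords = RandomReal[{-1, 1}, {50, 2}];
    rot = RotationMatrix[Pi/4];
    rotated = Map[rot . # &, coords];
    GraphicsRow[{Graphics[Point[coords]], Graphics[Point[rotated]]}]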
The Wolfram Language has many functions for transformations like these. However, this is an example of how these transformations can be applied by using the Wolfram Compiler.
Further Work
There are a lot of interesting things that could be extended here. It would be possible to combine this with a "Coordinate3D" type; polymorphic definitions would probably allow for code sharing. Another possibility would be to have a transformation matrix type.
Fractions
This example adds support for fractions or rational numbers to the Wolfram Compiler. These are supported in the Wolfram Language, where they are expressions with head Rational. They have very minimal native support in the compiler.
These examples are set up purely for demonstration. There are a number of places where errors are not handled, and optimizations for special cases are not applied. This keeps the implementation from becoming too complicated.
The declarations in this example are all placed into a compiled component called "FractionExample".
First, a type is declared. It will be called "Fraction", so as not to conflict with the "Rational" type that exists in the compiler. The type is set to be a value (by turning off reference semantics), and since it will contain only native integers, it will have no issues for memory management. It will have two fields to hold the numerator and denominator. It is also set to be polymorphic in its contents. Thus fractions of different-sized integers are supported:
This declares Numerator and Denominator for fractions:
This declares a constructor for fractions. It will be called Fraction, which is the name of the expression functionality. It has to reduce the numerator and denominator by their GCD:
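A plain Wolfram Language sketch of the constructor logic; makeFraction is a hypothetical name, and the result is represented here as a {numerator, denominator} pair:

    makeFraction[num_Integer, den_Integer] :=
     With[{g = GCD[num, den]}, {Quotient[num, g], Quotient[den, g]}]

    makeFraction[10, 4]
    (* {5, 2} *)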
Now a function that creates a fraction can be compiled. Since serialization has not been hooked up, a list of the numerator and denominator is returned. It is quite convenient to use the functions Numerator and Denominator, rather than accessing the fields from the type. This allows the code to run in evaluated mode:
If the inputs cannot be reduced, they are returned:
If the inputs can be reduced, this will happen:
A weakness of the code is that it does not return an error if the denominator is zero:
Serialization
In order to pass the type between compiled code and the Wolfram evaluator, it will need serialization code.
First, this adds a utility function to extract the type of the numerator and denominator in a fraction. As is typical, this is a declaration of Information for "ElementType":
Now, a function that returns the element type of a fraction can be compiled as below:
This shows that the contents are "Integer64":
These are the declarations of the three serialization functions. The first serializes a fraction to an expression. The second verifies that an expression can be converted back to a fraction. The last actually does the conversion.
The most complicated is the last. This uses CurrentCompiledFunctionData and Information to extract the expected elements of the fraction. Deserialization uses Numerator and Denominator on the expression; if these are both integers, then it works. This is quite convenient, as it allows integers to be entered:
This compiles code to take two integers and create a fraction, which is returned:
In this case, the fraction cannot be reduced:
This function takes a fraction and returns a list of the numerator and denominator. This and the previous example verify that fractions can be passed into and returned from compiled code:
This enters an integer that is converted into a fraction with a denominator of 1:
This enters an integer zero that generates a fraction with numerator of 0 and denominator of 1:
Casting
It is quite convenient to declare a cast from fraction instances to expressions. This is done as below:
There are a number of features that use Cast to an expression. For example, now Print will work as shown in the following:
This calls the printing code. The fraction could be processed by the Wolfram evaluator to mark it as valid:
Computation
This section adds some computation functionality. A key benefit of fractions is that there is a result for integer division. The following implements the Divide function for integers:
Now when two integers are divided, there is a result:
This carries out the division:
This computes a reciprocal, which also uses division:
There is a limit to how much can be done. For example, integer powers are still carried out with integer arithmetic:
This squares the input integer:
This has a runtime error; the result is expected to be an integer:

This declares Times for two fractions:
This declares Plus for two fractions. A robust implementation would include more code to take advantage of certain optimizations:
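As a sketch of the underlying arithmetic, here are plain Wolfram Language versions of Times and Plus on {numerator, denominator} pairs, reusing the hypothetical reduction step from earlier:

    reduce[{n_, d_}] := With[{g = GCD[n, d]}, {Quotient[n, g], Quotient[d, g]}]
    times[{n1_, d1_}, {n2_, d2_}] := reduce[{n1 n2, d1 d2}]
    plus[{n1_, d1_}, {n2_, d2_}] := reduce[{n1 d2 + n2 d1, d1 d2}]

    plus[{1, 6}, {1, 3}]
    (* {1, 2} *)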
This generates a fraction with denominator of 1 that is converted by the Wolfram evaluator into an integer:
Further Work
There is plenty more that could be added to the fraction type. This could include more computation such as powers, and arithmetic with other types such as integers or reals. In addition, comparisons such as Equal and SameQ could be supported.
Another limitation is that there is little error checking, and some important optimizations for special cases are not handled.
Incremental Functions
Incremental functions allow their execution to be paused and resumed multiple times; they have been discussed previously. In this section, some interesting examples of incremental functions are given.
Unique Elements
This example shows the use of an incremental function that returns unique elements from a list. An implementation is shown below. The function takes an input type of "InertExpression" and iterates over the elements of the list. The elements are tested to see if they are in a "HashSet" data structure. If they are not found, then they are added and then yielded:
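A non-incremental analogue in plain Wolfram Language, relying on the fact that "Insert" on a "HashSet" returns True only when the element was not already present:

    uniqueElements[list_] :=
     Module[{hs = CreateDataStructure["HashSet"]},
      Select[list, hs["Insert", #] &]]

    uniqueElements[{a, b, a, c, b, a}]
    (* {a, b, c} *)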
Generate an incremental function for an expression:
This goes over the elements with unique elements being yielded one at a time:
There are no more elements, and the incremental function returns that it is terminated:
A more sophisticated version can use a polymorphic declaration. This will work for any input type that can iterate over its elements:
This compiles the unique incremental function for an expression, similar to what was seen earlier:
There are only two unique elements:
This is a version that yields unique elements of a packed array:
This sums the unique elements:
This is one that iterates over the elements of a red-black tree:
This creates a red-black tree with some random elements:
The visualization shows that the elements are ordered and that the tree is balanced:
This creates an incremental function that will yield the elements of the red-black tree:
These are the unique elements; they come out in order:
Repeated Elements
The following IncrementalFunction example takes a packed array of integers and yields the positions of elements that are the same. It does this by recording the start value and position, then traversing the elements of the list, comparing each element to the start value. If the values are not the same, it yields the start and end positions and then updates the start value and position. At the end, the remaining positions are yielded:
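A non-incremental sketch of the same scan in plain Wolfram Language, returning {start, end} position pairs:

    runPositions[list_] :=
     Module[{start = 1, res = {}},
      Do[
       If[list[[i]] =!= list[[start]],
        AppendTo[res, {start, i - 1}];
        start = i],
       {i, 2, Length[list]}];
      Append[res, {start, Length[list]}]]

    runPositions[{1, 1, 2, 2, 2, 3}]
    (* {{1, 2}, {3, 5}, {6, 6}} *)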
These run through the sequences that are found:
This regenerates the positions for the entire list:
These are the elements that are the same:
This is equivalent to Split:
It would be relatively straightforward to create a polymorphic version of this.
Sorted Elements
This example of an incremental function generates sorted elements from an input list. It does this with an incremental version of merge sort. Merge sort is a divide-and-conquer algorithm, which lends itself to a recursive incremental function implementation.
The following is a function declaration that implements the merge sort. The function uses Which to halt the recursion when it reaches one or two elements:
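For comparison, a plain, non-incremental merge sort sketch; the incremental version instead yields elements as soon as they become available:

    mergeSort[{}] := {}
    mergeSort[{x_}] := {x}
    mergeSort[list_List] :=
     With[{mid = Quotient[Length[list], 2]},
      merge[mergeSort[Take[list, mid]], mergeSort[Drop[list, mid]]]]

    merge[a_List, {}] := a
    merge[{}, b_List] := b
    merge[{a1_, arest___}, {b1_, brest___}] :=
     If[a1 <= b1,
      Prepend[merge[{arest}, {b1, brest}], a1],
      Prepend[merge[{a1, arest}, {brest}], b1]]

    mergeSort[{5, 3, 8, 1, 9, 2}]
    (* {1, 2, 3, 5, 8, 9} *)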
This instantiates the function for a packed array of integers:
Here is some test data to run the algorithm:
This is the incremental function instance:
This is the smallest element in the list:
These are the next elements in order:
This generates all the elements in sorted order:
It is the same as sorting the data:
The merge sort implementation shown here is not very efficient. In real cases, when the number of elements to be merged becomes small, implementations often switch to a different technique, such as a sorting network. In addition, there is some overhead from the incremental function itself.
However, the first element is produced in O(n) time, where n is the number of elements. With a conventional sorting technique, the first element (and all the elements in order) will typically arrive in O(n ln(n)) time. It may be that the efficiency of the conventional sort means that the overall time is still better than that of the incremental function. Nonetheless, this is an interesting use of incremental functions.