# Compiled Function Operation

The *Mathematica* compiler generates a CompiledFunction expression that contains a sequence of simple instructions for evaluating a *Mathematica* computation. The compiled function expression also contains other information such as argument and result specifications, flags, error handlers, and version information. Since all of this information is stored in the expression, many of the tools for working with compiled functions can be written in *Mathematica*.

The compiled function instructions can be executed by a virtual machine that is included in *Mathematica*. This is based on an idealized register machine that uses temporary locations (registers) to store intermediate results as they are calculated.

You can get a good insight into how the instructions work with the CompiledFunctionTools` package. To use the package you have to load it.

In[1]:= |

Now you can use CompilePrint to show details of a compiled function.

In[1]:= |

Out[2]= |

In this computation you can see that the compiled function takes one real argument and returns a real result. The argument is placed into register R0 and the result is found in register R3. The square of the argument is stored in register R1, and this is used twice. In addition, other information about runtime settings of the compiled function is displayed.

This example demonstrates a key part of the compiler: it maintains a type system. The type of the input is specified and the type of each intermediate result is computed, leading to the type of the result.

The type of arguments can be specified in the input. For example, to specify that the input is a complex number you would use Complex, as in the following.

In[3]:= |

Out[4]= |

In this example, operations such as squaring and Sin will use their complex versions. This leads to a speed advantage, since an optimized function for complex numbers is used rather than having to decide what should be done each time the function is called based on the actual arguments.

The compiler also supports working with vectors, matrices, and tensors. A simple example is shown in the following.

In[11]:= |

Out[12]= |

The compiler supports many *Mathematica* programming constructs such as Table, Map, Apply, Nest, NestList, Fold, and FoldList. The following example shows how local variables can be introduced with Module, and then an iteration run with Do.

In[8]:= |

Out[9]= |

Compiled functions such as these often run much faster than the uncompiled equivalents.
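A function of the kind described above might be sketched as follows (a hypothetical example; the definitions in the original cells may differ). It introduces a local variable with Module and runs an iteration with Do.

```mathematica
(* sketch: local variable via Module, iteration via Do *)
cf = Compile[{{x, _Real}, {n, _Integer}},
   Module[{t = x},
    Do[t = Sin[t] + x, {n}];
    t]];
cf[0.5, 10]
```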

## Type System

The *Mathematica* compiler uses a type system to generate its instructions. The type of each intermediate computation is determined from knowledge of the function and the types of its arguments. Types can be specified for the input arguments of a function. You can also introduce a variable of a given type by initializing a local variable with a constant of that type.

The types that can be used in the *Mathematica* compiler are summarized in this section.

#### Boolean

Booleans can be passed in as arguments to compiled functions, and are also supported as the value of a local variable.

Compile[{{arg,True|False}},body...] | a Boolean argument |

Module[ {var=True},body...] | a Boolean local variable |

The following demonstrates a function that takes two Boolean arguments.

In[9]:= |

Out[10]= |
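A function of this kind might be sketched as follows (a hypothetical example; the original cells may define something different):

```mathematica
(* two Boolean arguments, specified with True|False *)
cf = Compile[{{p, True | False}, {q, True | False}}, p && ! q];
cf[True, False]  (* → True *)
```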

#### Integer

Integers can be passed in as arguments to compiled functions, and are also supported as the value of a local variable.

Compile[{{arg,_Integer}},body...] | an integer argument |

Module[ {var=10},body...] | an integer local variable |

The following demonstrates a function that takes one integer argument.

In[15]:= |

Out[16]= |
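A function of this kind might be sketched as follows (a hypothetical example; the original cells may differ):

```mathematica
(* one machine-integer argument *)
cf = Compile[{{n, _Integer}}, n^2 + n];
cf[5]  (* → 30 *)
```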

#### Real

Reals can be passed in as arguments to compiled functions, and are also supported as the value of a local variable.

Compile[{{arg,_Real}},body...] | a real argument |

Module[ {var=10.5},body...] | a real local variable |

The following demonstrates a function that takes one real argument.

In[19]:= |

Out[20]= |

If you call the function with an integer, this is converted to a real number.

In[21]:= |

Out[21]= |
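The coercion of an integer input to a real can be sketched as follows (a hypothetical example):

```mathematica
(* one machine-real argument *)
cf = Compile[{{x, _Real}}, x^2 + 1.];
cf[3]  (* the integer 3 is coerced to 3., giving 10. *)
```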

#### Complex

Complex numbers can be passed in as arguments to compiled functions, and are also supported as the value of a local variable.

Compile[{{arg,_Complex}},body...] | a complex argument |

Module[ {var=10.5+2.0 I},body...] | a complex local variable |

The following demonstrates a function that takes one complex argument.

In[29]:= |

Out[30]= |

If you call the function with a real, this is converted to a complex number.

In[31]:= |

Out[31]= |
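The coercion of a real input to a complex number can be sketched as follows (a hypothetical example):

```mathematica
(* one machine-complex argument *)
cf = Compile[{{z, _Complex}}, Sin[z^2]];
cf[1.5]  (* the real input is coerced to 1.5 + 0. I *)
```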

#### Tensor

The compiler works with vectors, matrices, and tensors. These can be passed in as arguments to compiled functions, and are also supported as the value of a local variable.

Compile[{{arg,_type,rank}},body...] | a tensor argument of the given type and rank |

Module[ {var={5,6,7,8}},body...] | a tensor local variable |

The following demonstrates a function that takes a real vector as an argument.

In[32]:= |

Out[33]= |

If you call the function with a vector of integers, this is converted to a vector of reals.

In[34]:= |

Out[34]= |
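A vector argument of this kind can be sketched as follows (a hypothetical example):

```mathematica
(* a rank-1 tensor (vector) of machine reals *)
cf = Compile[{{v, _Real, 1}}, v.v];
cf[{1, 2, 3}]  (* the integer vector is coerced to reals, giving 14. *)
```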

#### Summary

The type specification syntax for arguments is summarized in the following table.

Compile[{x_{1},x_{2},...},expr] | create a compiled function that evaluates expr for numerical values of the x_{i} |

Compile[{{x_{1},t_{1}},{x_{2},t_{2}},...},expr] | compile expr assuming that x_{i} is of type t_{i} |

Compile[{{x_{1},t_{1},n_{1}},{x_{2},t_{2},n_{2}},...},expr] | compile expr assuming that x_{i} is a rank n_{i} array of objects each of type t_{i} |

_Integer | machine-size integer |

_Real | machine-precision approximate real number |

_Complex | machine-precision approximate complex number |

True|False | logical variable |

Type specification in Compile.

### Type Propagation

After the arguments of a compiled function have been specified, the types are propagated through the set of functions in the compiler. Typically, this is relatively straightforward. For example, if two real numbers are added, the result will be a real.

The CompiledFunctionTools` package is useful to demonstrate the operation of the compiler. First, it needs to be loaded.

In[7]:= |

Now, you can use CompilePrint to show details of a compiled function. This shows that the square root of a complex number is also a complex number.

In[43]:= |

Out[44]= |

However, if the input is a real number, the square root could be a complex number or it could be a real number. This is demonstrated in the following.

In[45]:= |

Out[46]= |

This shows that for this input, the compiler chooses the result of the square root to also be a real number. This is less general but faster.

### Type Coercion

The *Mathematica* numerical coercion system maintains results in their most accurate form. For example, Sqrt[2] evaluates to itself, since any other result would involve a loss of precision.

In[48]:= |

Out[48]= |

This should be contrasted with many other computation systems. For example, a C or Fortran program would convert the result to a floating-point number, because in C or Fortran there is no other way to represent the result of computing the square root of 2. *Mathematica* does not do this because its symbolic nature allows it to represent and work with expressions such as Sqrt[2], and avoiding conversion to a floating-point number avoids a loss of information.

However, if Sqrt[2] is combined with a floating-point number, *Mathematica* will return a floating-point number.

In[50]:= |

Out[50]= |

Similar issues are found with any exact input, which can be integers, rationals, or constants such as Pi or E.

When the compiler is invoked directly from Compile, it applies a different type coercion. For example, it converts integers to reals when they are used inside functions like square root. In the following example the result is a vector of real numbers.

In[7]:= |

Out[8]= |

The compiler works in this different way because its focus is to optimize numerical computations rather than exact mathematics.

However, when the compiler is called automatically, for example, by top-level calls to functions like Map, it is run in a mode where its coercion matches that of regular *Mathematica*. If the compiler cannot return appropriate results it will not be used.

This allows *Mathematica* to use the compiler as an internal optimization, but maintains the same results as if the compiler were not being used. In the following example, the result is a vector of exact integers and roots of integers (if the compiler had been used it would be a vector of real numbers).

In[9]:= |

Out[9]= |
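The distinction can be sketched as follows (a hypothetical example): calling the compiled function directly coerces the integer input to a real, unlike the exact result Sqrt[2] returned by the interpreter.

```mathematica
(* direct use of Compile coerces the integer argument to a real *)
cf = Compile[{{n, _Integer}}, Sqrt[n]];
cf[2]  (* a machine real, unlike the exact Sqrt[2] *)
```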

### Type Consistency

The compiler type system determines the type of each intermediate result in the computation. These types must fit together consistently. For example, if there is a branch, both sides of the branch must return the same type. The following generates a compilation error because the different branches are incompatible.

In[15]:= |

Out[15]= |

Note that type coercion can be applied to move to a consistent type. This is demonstrated in the following.

In[22]:= |

Out[22]= |

Closely related to the issue of type consistency across a branch is type consistency in a return statement. The following shows how all ways to return from a function must be type consistent.

In[16]:= |

Out[16]= |

A different example of type consistency involves local variables. Once a local variable has been assigned a type, it cannot be assigned a value of an incompatible type.

In[18]:= |

Out[18]= |

## External Calls

The *Mathematica* compiler can create instructions for a wide variety of *Mathematica* commands and functions covering the supported type system. However, sometimes the compiler reaches something for which it does not have any built-in support.

When this happens the compiler can still continue: it adds an instruction that calls out to the *Mathematica* evaluator to compute the result. In addition, it has to determine the type of that result, which is not easy unless this information has been supplied.

It is worth having a good understanding of this process because it is a common source of inefficiencies in the compiler.

The CompiledFunctionTools` package is useful to demonstrate the operation of the compiler. First, it needs to be loaded.

In[1]:= |

The following shows an external function called from inside a compiled function. The instructions show that the *Mathematica* evaluator is called and the result is expected to be a real number.

In[2]:= |

Out[4]= |

This shows the function works as expected.

In[5]:= |

Out[5]= |

Here a different definition of the external function is used. It shows that the compiler still expects the result to be a real, whereas a Boolean is returned.

In[6]:= |

Out[8]= |

Now when the function is used a runtime error results.

In[9]:= |

Out[9]= |

You can avoid these problems by declaring that the external function returns a Boolean.

In[10]:= |

Out[12]= |

Now the function works as expected.

In[13]:= |

Out[13]= |

### Undefined External Calls

The compiler will make external calls for the parts of the input for which it does not have information. As shown above, this can sometimes lead to runtime problems; even when no errors occur, the external calls can make the compiled function inefficient.

One way to track these problems is to use the Compile message. This can be enabled to give more insight into the compilation process but is disabled by default.

In[1]:= |

Out[1]= |

You can enable the message with On.

In[2]:= |

Now the compilation generates a message. This can be a useful way to track down efficiency problems.

In[3]:= |

Out[3]= |

Note that if you give type information for the computation, no message is generated.

In[4]:= |

Out[4]= |

## Packed Arrays

Packed arrays are a feature of *Mathematica* that provides important speed and memory benefits. They exist so that *Mathematica* can be general enough to support symbolic computation and also fast enough for efficient numerical computation. They give a specialized representation for vectors, matrices, and tensors of machine numbers: integers, reals, and complex numbers.

This example shows a vector of real numbers.

In[1]:= |

Out[2]= |

In[3]:= |

Out[3]= |

In[4]:= |

Out[4]= |

Now insert a symbolic quantity into the vector.

In[5]:= |

Out[5]= |

When the byte count is measured, the vector is shown to use more memory.

In[6]:= |

Out[6]= |

In addition, a mathematical operation on the vector is much slower.

In[7]:= |

Out[7]= |

The vector is no longer a packed array.

In[8]:= |

Out[8]= |
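The sequence above can be sketched as follows (a hypothetical example; Developer`PackedArrayQ tests whether an array is packed):

```mathematica
v = N[Range[10]];              (* a packed vector of machine reals *)
Developer`PackedArrayQ[v]      (* → True *)
v2 = ReplacePart[v, 3 -> x];   (* insert a symbolic quantity *)
Developer`PackedArrayQ[v2]     (* → False: no longer packed *)
```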

Packed arrays are important for the compiler because internally the compiler works with packed arrays. This gives an efficiency gain since they are fast to pass in and return from the compiler.

This demonstrates a compiled function that returns a vector of integers.

In[9]:= |

Out[10]= |

The result is a packed array. Whenever the compiler returns a vector, matrix, or tensor it will always be packed.

In[11]:= |

Out[11]= |

## Compilation Errors

The *Mathematica* compiler generates the sequence of instructions for evaluating a *Mathematica* expression. As it visits parts of the expression, it might encounter a problem, such as an unknown function, a function called with the wrong number of arguments, or an argument of an unexpected type.

One issue with problems like these is that, because of *Mathematica*'s symbolic nature, any unrecognized input may in fact be a valid expression that means something.

The compiler's solution is to use external calls to deal with many of these problems, so the computation still runs in the compiler. But it might also issue a warning message.

The CompiledFunctionTools` package is useful to demonstrate the operation of the compiler. First, it needs to be loaded.

In[1]:= |

The following shows an unknown function called from inside a compiled function. The result is expected to be a real number and if this is not the case a runtime error will result. Note that no error message is issued during compilation.

In[2]:= |

Out[3]= |

An error is issued when the compiled function is used.

In[4]:= |

Out[4]= |

The following shows a call to Sin with two arguments. In this case, an error is issued and an external call is made. This is consistent with *Mathematica*'s principles of symbolic computation.

In[5]:= |

Out[6]= |

The following shows a call to Sin with a Boolean argument. In this case, an error is issued and an external call is made. This is consistent with *Mathematica*'s principles of symbolic computation.

In[7]:= |

Out[8]= |

This is an example of a type consistency error. The compiler generates an error message.

In[9]:= |

Out[9]= |

This is an example of a variable error where a variable has been initialized in one branch of a conditional but not in the other. The compiler generates an error message.

In[10]:= |

Out[10]= |

It is important to keep track of these variable errors because they can lead to computations not being as fast as you might expect.

## Runtime Errors

The *Mathematica* virtual machine executes a compiled function for specific input arguments; when it finishes, it returns the result. However, an error can occur while the machine is running, such as a mathematical function error, overflow or underflow, an external call error, or a list processing error.

These errors all have consequences for the execution speed of a compiled function. This can be due to the error itself, or it can be due to extra code that is inserted to deal with errors.

When a runtime error occurs, execution of the compiled function terminates and the error handler is called. Typically, this is set to repeat the computation in the *Mathematica* interpreter.

### Mathematical Function Error

Mathematical function errors can arise from domain problems or function exceptions.

Domain problems arise because of the compiler type system. For example, working with real numbers, certain inputs can lead to an error. In the following case a compiled function works for positive inputs.

In[7]:= |

Out[8]= |

However, if the input is negative a runtime error is generated. At this point an error handler is called that switches to the *Mathematica* interpreter.

In[9]:= |

Out[9]= |

These errors have no analog in the *Mathematica* interpreter, because the interpreter switches between domains at runtime; in other words, it effectively has only one type, the *Mathematica* expression.

In the compiler, you could avoid a domain error such as this by working with complex numbers. However, this would have a consequence for speed.
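Avoiding the domain error by working over the complex numbers can be sketched as follows (a hypothetical example):

```mathematica
(* the complex version accepts negative inputs without a runtime error,
   at some cost in speed *)
cfComplex = Compile[{{x, _Complex}}, Sqrt[x]];
cfComplex[-4.]  (* → 0. + 2. I *)
```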

A function exception takes place for certain inputs to mathematical functions. In the following example, the power is computed without any problems.

In[12]:= |

Out[13]= |

However, for certain inputs the power is undefined; this generates a runtime error and calls the *Mathematica* interpreter. The interpreter also has a problem with the computation, and it returns Indeterminate.

In[14]:= |

Out[14]= |

### Overflow and Underflow

Overflow happens when a number grows above its dynamic range, and underflow happens when it decreases below that range. In *Mathematica* these are handled by automatically switching from numbers implemented with the machine hardware to numbers implemented in software. This allows *Mathematica* to continue to return useful results, but it also has a consequence for efficiency, since software arithmetic (despite being very efficient in *Mathematica*) is slower than hardware arithmetic.

The following shows a machine integer.

In[25]:= |

Out[26]= |

When the number is incremented by 1 the expected result is returned.

In[29]:= |

Out[29]= |

However, the result is not a machine integer (note that this is done using 32-bit signed integers).

In[30]:= |

Out[30]= |

If this computation were carried out in a C program, the result would have been -2147483648. This is because integer arithmetic is defined to work on the bit patterns used to implement the integers, and the hardware in many cases carries out the operation directly. The consequence is that arithmetic is fast but not always mathematically correct. Of course, since programmers are aware of these errors, they usually take care to detect them when they think problems are likely.

For floating-point computation, numbers can also overflow, as demonstrated with the following machine real.

In[31]:= |

Out[32]= |

When the number is multiplied by 10 the expected result is returned.

In[35]:= |

Out[35]= |

However, the result is not a machine real (note that this is done using double-precision reals).

In[36]:= |

Out[36]= |

An equivalent behavior is found when a number gets smaller than the range provided by machine real numbers.

If computations that involve overflow or underflow are done with machine reals in C, the results are slightly different from the machine-integer case. This is because machine reals follow the IEEE 754 standard, which allows for arithmetic exceptions. For overflow the result will typically be a machine-real infinity, which then propagates through subsequent computations in a way defined by the standard.

This quick summary of overflow and underflow issues shows in part why *Mathematica* is useful. It completely takes care of all of these details. Of course, some C or Fortran programmers simply ignore these issues and expect that their computations will never be troubled by such problems.

Execution of *Mathematica* compiled functions also faces these issues. Typically, compiled functions catch integer problems. This is demonstrated in the following.

In[37]:= |

Out[38]= |

However, it should be understood that checking for integer overflow requires extra work for an operation that is typically very fast, which can in some cases lead to a speed degradation. It is possible to disable the checking with a setting of the RuntimeOptions. The following shows how integer overflow checking is disabled and the mathematically incorrect (but fast) result is returned.

In[39]:= |

Out[40]= |
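The setting described above can be sketched as follows (a hypothetical example; the body of the original function may differ):

```mathematica
(* disable integer overflow checking: fast, but results may wrap around *)
cf = Compile[{{n, _Integer}}, n^5,
   RuntimeOptions -> {"CatchMachineIntegerOverflow" -> False}];
```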

The case for real computation is slightly different. The following is a real computation.

In[45]:= |

Out[46]= |

If the input is larger, a runtime error is generated.

In[47]:= |

Out[47]= |

However, the treatment of overflow errors for real computation is different from that for integer computation: the error is only detected at the end of the computation. So if a different function that does return the overflow result is created, no error is generated.

In[48]:= |

Out[49]= |

This is possible for real computations because the hardware arithmetic allows these errors to propagate through other computations. The benefit is that computations do not have to be checked as they take place, leading to a performance increase.

There is also a setting for the RuntimeOptions that allows checking for real errors as they occur.

### External Call Error

An external call error takes place when execution of a compiled function calls the *Mathematica* interpreter but the result is not of the expected type. The following shows how an external error can be introduced.

In[1]:= |

Out[2]= |

When the function is used, an error results.

In[3]:= |

Out[3]= |

Here a compiled function is created that has the expected result for the external evaluation.

In[4]:= |

Out[5]= |

Despite the fact that no runtime error now takes place, external calls are still a source of inefficiency and it is good to avoid them if possible.

### List Processing and Other Errors

There are a number of other types of errors that can occur while a compiled function is executing. The following compiled function returns an element of a vector.

In[59]:= |

Out[60]= |

If the part number is greater than the length of the vector, an error results.

In[61]:= |

Out[61]= |
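A part-extraction function of this kind can be sketched as follows (a hypothetical example); calling it with a part number larger than the vector length triggers the runtime Part error described above.

```mathematica
cf = Compile[{{v, _Real, 1}, {i, _Integer}}, v[[i]]];
cf[{1., 2., 3.}, 2]  (* → 2. *)
```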

## Runtime Attributes

Attributes are a way to specify various properties of *Mathematica* functions. Listable is one useful attribute; a listable function automatically threads over any lists in its input.

The following declares a listable function.

In[1]:= |

In[2]:= |

Out[2]= |

You can also specify attributes for Function.

In[6]:= |

Out[7]= |

Many built-in functions have the Listable attribute.

You can give attributes to a compiled function by setting the RuntimeAttributes option of Compile. At present you can only set the Listable attribute. This is demonstrated in the following.

In[10]:= |

Out[10]= |

The listable compiled function works on a single number in the normal fashion.

In[11]:= |

Out[11]= |

However, when the arguments include a list with higher rank than the input specification, the function threads over that argument.

In[12]:= |

Out[12]= |

In[13]:= |

Out[13]= |
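A listable compiled function of this kind can be sketched as follows (a hypothetical example; the original cells may define something different):

```mathematica
(* a function with a branch, made listable so it threads over lists *)
cf = Compile[{{x, _Real}}, If[x > 0., Sqrt[x], 0.],
   RuntimeAttributes -> {Listable}];
cf[{4., -1., 9.}]  (* → {2., 0., 3.} *)
```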

A listable compiled function is useful for creating a function that will work with large amounts of data but which has branches and tests in it. For example, the following function is listable.

In[15]:= |

Out[16]= |

Now the function can be applied to a large dataset.

In[17]:= |

Out[18]= |

Here a compiled version of the function is made. It runs much faster.

In[19]:= |

Out[20]= |

## Compilation Options

CompilationOptions is an option of Compile that specifies settings for the compilation process, such as the expression optimization level.

You can make individual settings in CompilationOptions.

"ExpressionOptimization" | Automatic | whether to optimize the input expression |

"InlineCompiledFunctions" | Automatic | whether to expand the body of nested compiled functions |

"InlineExternalDefinitions" | Automatic | whether to use external definitions |

### ExpressionOptimization

The "ExpressionOptimization" option controls whether optimization should be applied to the input to Compile.

The default setting optimizes the input expression to avoid computing the same thing more than once. The following shows how a repeated subexpression is only computed once and is saved in a local variable.

In[21]:= |

Out[21]= |

You can disable expression optimization with a setting of False.

In[22]:= |

Out[22]= |
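The setting can be sketched as follows (a hypothetical example; the body of the original function may differ):

```mathematica
(* with optimization disabled, x^2 would be computed twice *)
cf = Compile[{{x, _Real}}, x^2 + Sin[x^2],
   CompilationOptions -> {"ExpressionOptimization" -> False}];
```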

The default is a setting of Automatic; this means that if the compiled function needs to make an external call, no optimization is carried out.

In[23]:= |

Out[23]= |

A setting of True will force optimization to take place even if there is an external call.

In[24]:= |

Out[24]= |

### InlineCompiledFunctions

The "InlineCompiledFunctions" option controls whether nested compiled functions should be inlined or called at runtime.

This simple compiled function can be called from another function.

In[1]:= |

This compiled function calls the simple function. It uses With to place the simple compiled function in its body.

In[2]:= |

Out[2]= |

In[3]:= |

Now CompilePrint shows how the simple function has been inlined into the function.

In[4]:= |

Out[4]= |

If you set "InlineCompiledFunctions" to False, the nested function will not be inlined. Instead, it is called with a special instruction that avoids using the *Mathematica* evaluator.

In[5]:= |

Out[6]= |

The default setting of "InlineCompiledFunctions" is Automatic, which uses a heuristic to inline small functions.

It is inconvenient to refer to a compiled function directly inside Compile, because Compile has the attribute HoldAll and does not evaluate its argument. The examples avoid this problem by using With to insert the called function into the body of the caller. This is somewhat awkward, and it prevents any recursive operation of the function.
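The With pattern can be sketched as follows (a hypothetical example; the original functions may differ):

```mathematica
(* insert the inner compiled function into the caller's body with With *)
inner = Compile[{{x, _Real}}, x^2 + 1.];
outer = With[{f = inner},
   Compile[{{x, _Real}}, f[x] + f[x + 1.]]];
outer[1.]  (* → 7. *)
```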

An alternative is to use a definition to hold the called compiled function. However, this only works if the "InlineExternalDefinitions" option has been set.

In the following, the default value of "InlineExternalDefinitions" is used. This does not insert the external definition, so the compiled function is not inlined and the special instruction for calling the compiler is not used.

In[8]:= |

Out[10]= |

When "InlineExternalDefinitions" is set to True, the definition of the variable is used, inserting the compiled function. The "InlineCompiledFunctions" setting then causes the function to be inlined.

In[11]:= |

Out[13]= |

Here the external definition is used, but the compiled function is not inlined. Instead it uses an efficient instruction to allow one compiled function to call another. This type of call is important since it could allow a compiled function to call itself, and when parallel execution is carried out in the compiler the call can be done without any synchronization locking.

In[14]:= |

Out[16]= |

Here a recursive call is set up. The compiler will still use the efficient instruction to make the nested call.

In[17]:= |

Out[18]= |

### InlineExternalDefinitions

The "InlineExternalDefinitions" option controls whether external definitions should be inlined into a compiled function.

This makes an external definition, held in the symbol val. This is used by Compile.

In[1]:= |

Out[2]= |

To understand what the generated compiled function does, it is useful to use the CompiledFunctionTools` package. First, it needs to be loaded.

In[3]:= |

Now, you can see how the compiled function works. It makes an external call to the function definition. However, it has processed the value to get the type correct. This is a consequence of the default setting of "InlineExternalDefinitions", which is Automatic.

In[4]:= |

Out[4]= |

If "InlineExternalDefinitions" is set to True, the external definition is inserted into the compiled function. This is not done by default because if the definition is changed, the compiled function will continue to use the old value.

In[5]:= |

Out[6]= |
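The setting can be sketched as follows (a hypothetical example; the definition and function body are assumptions):

```mathematica
(* the value of val is inserted into the compiled function at compile time *)
val = 4;
cf = Compile[{{x, _Real}}, x + val,
   CompilationOptions -> {"InlineExternalDefinitions" -> True}];
cf[1.]  (* → 5. *)
```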

If "InlineExternalDefinitions" is set to False, the external definition is not inserted into the compiled function and type information is not derived from the definition. Note how the result of the external evaluation is expected to be a real, whereas it should really be an integer.

In[7]:= |

Out[8]= |

The "InlineExternalDefinitions" option can be used together with "InlineCompiledFunctions". In this example, a call is made to another compiled function. Setting "InlineCompiledFunctions" to False avoids inlining the compiled function but causes an efficient instruction to be used to call from one compiled function to another.

In[9]:= |

Out[11]= |

## Runtime Options

RuntimeOptions is an option of Compile that specifies runtime settings for the compiled function. It takes a number of settings to control how overflow, underflow, and runtime errors should be handled, as well as whether messages should be issued. You can give overall settings to optimize either speed or quality.

The following shows the default setting, which catches machine integer overflow. The computation is then repeated using the *Mathematica* interpreter.

In[29]:= |

Out[30]= |

If you set the runtime options to be "Speed", there is no checking for machine integer overflow. The computation gets an incorrect result. If you knew that there was no possibility for machine integer overflow, then you might want to use this option.

In[31]:= |

Out[32]= |

You can make individual settings in the RuntimeOptions.

"CatchMachineOverflow" | False | whether real overflow should be caught as it happens |

"CatchMachineUnderflow" | False | whether real underflow should be caught as it happens |

"CatchMachineIntegerOverflow" | True | whether integer overflow should be caught |

"CompareWithTolerance" | True | whether comparisons should work similarly to SameQ |

"RuntimeErrorHandler" | Evaluate | a function to apply if there is a fatal runtime error executing the function |

"WarningMessages" | True | whether warning messages should be issued |

### CatchMachineOverflow

By default, machine overflow is not caught during a computation; consequently this example does not generate a runtime error.

In[12]:= |

Out[13]= |

If you turn on overflow checking, a runtime error is generated.

In[14]:= |

Out[15]= |
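The setting can be sketched as follows (a hypothetical example; the function body is an assumption):

```mathematica
(* catch real overflow as it happens, rather than at the end *)
cf = Compile[{{x, _Real}}, Exp[x],
   RuntimeOptions -> {"CatchMachineOverflow" -> True}];
```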

Note that if the compiler returns a result to *Mathematica* that has a machine overflow, this does create an error.

In[9]:= |

Out[10]= |

It does this because there is no other way to represent the error.

### CatchMachineUnderflow

By default, machine underflow is not caught and this computation returns zero.

In[16]:= |

Out[17]= |

If you turn on underflow checking, a runtime error is generated.

In[18]:= |

Out[19]= |

### CatchMachineIntegerOverflow

By default, machine integer overflow is caught and this generates a runtime error.

In[20]:= |

Out[21]= |

If you turn off machine integer overflow checking, no runtime error is generated and the result might be incorrect.

In[22]:= |

Out[23]= |

### RuntimeErrorHandler

The "RuntimeErrorHandler" setting is used when there is a runtime error. The following example throws an exception when a runtime error is encountered.

In[1]:= |

With no error, the compiled function works as normal.

In[1]:= |

Out[1]= |

If there is a runtime error, the special runtime error handler is called.

In[2]:= |

Out[2]= |

The default runtime error handler is just to repeat the computation using the *Mathematica* interpreter.
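A handler along the lines of the example above might be sketched as follows (a hypothetical example; the handler function and body are assumptions):

```mathematica
(* throw an exception instead of falling back to the interpreter *)
cf = Compile[{{x, _Real}}, Sqrt[x],
   RuntimeOptions -> {"RuntimeErrorHandler" -> (Throw[$Failed] &)}];
Catch[cf[-1.]]  (* the handler throws, so Catch receives $Failed *)
```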