This is documentation for an obsolete product.

Parallel Computing Toolkit — Examples

# Preparation

Make sure you have at least two kernels available.

# Problem Statement

This example shows you how to try out several methods to solve a problem in parallel, return the first successful result, and abort all remaining computations.

# Example: Finding Large Primes

We want to find a prime larger than k. We start one search process per processor and return the first prime found by any one of them.
This function searches for a prime starting at i0, in increments of d.
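A minimal sketch of such a search function (the name nextPrime and its argument convention are taken from the steps that follow; the original code is not reproduced here):

```mathematica
(* Search for a prime starting at i0, in increments of d. *)
nextPrime[i0_, d_] :=
  Module[{i = i0},
    While[!PrimeQ[i], i += d];
    i
  ]
```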

### Running the Search

We restrict ourselves to odd numbers; the search increment d is therefore 2n, where n is the number of processors available.
The initial values are k+1, k+3, ..., k+2n-1 (assuming k is even).
We are ready to start the search processes. (Activity will not commence until after you invoke WaitOne[].)
Now we wait for one of the processes to finish. res gives us the prime found and id identifies the process that found it. Note that the prime returned is not necessarily the smallest one larger than k.
You can now abort all the other processes.
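Assuming the toolkit's Queue[], WaitOne[], and ResetSlaves[] behave as described in the surrounding text (WaitOne[] is assumed to return the result, the process id, and the list of remaining processes), the steps above might look like this:

```mathematica
(* nextPrime must be known on the remote kernels; ExportEnvironment
   is assumed to transfer its definition *)
ExportEnvironment[{nextPrime}];

n = Length[$Slaves];   (* number of remote kernels *)
d = 2 n;               (* step size: odd numbers only *)
k = 10^10;             (* an even starting point *)

(* queue one search per kernel; With[] injects the start value
   into the held argument of Queue[] *)
ids = Table[With[{i0 = k + 2 i - 1}, Queue[nextPrime[i0, d]]], {i, n}];

{res, id, ids} = WaitOne[ids];  (* first prime found, and who found it *)
ResetSlaves[]                   (* abort the remaining searches *)
```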

# The ParallelTry Function

The auxiliary procedure parallelTry[tf,exprs...] submits all exprs to the remote kernels and waits until one of them returns a result other than tf. Any remaining computations are then aborted (using ResetSlaves[]). The reason for using an auxiliary procedure is that we can reuse it for other parallel evaluations, such as ParallelAnd and ParallelOr, which appear later in this document. CheckAbort[] absorbs any abort during the loop and thus makes sure that the cleanup code (ResetSlaves[]) is also run if you abort the computation.
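A hedged reconstruction of such a procedure, assuming Queue[] holds its argument and WaitOne[] returns {result, id, remainingIds}:

```mathematica
SetAttributes[parallelTry, HoldRest];

(* tf: the token that counts as "no result yet"; exprs: held expressions *)
parallelTry[tf_, exprs___] :=
  Module[{ids, res = tf, id},
    ids = List @@ Map[Queue, Hold[exprs]];   (* submit all jobs *)
    CheckAbort[
      While[res === tf && ids =!= {},
        {res, id, ids} = WaitOne[ids]],
      res = $Aborted];                       (* absorb a user abort *)
    ResetSlaves[];                           (* abort whatever still runs *)
    res
  ]
```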
The function ParallelTry[exprs...] submits the arguments to remote kernels. The first result other than $Failed is returned.
Note that it does not make much sense to give more expressions than there are remote kernels, unless you expect some of the calculations to return $Failed.
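With the auxiliary procedure described above in place, ParallelTry[] reduces to a one-liner (a sketch; the attribute keeps the arguments from evaluating locally):

```mathematica
SetAttributes[ParallelTry, HoldAll];
ParallelTry[exprs___] := parallelTry[$Failed, exprs]
```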

# Using the Function with Our Prime Example

## Step by Step

Here again is the code for finding a prime. It needs to be exported to all remote kernels.
We generate as many tasks as there are kernels available.
Here is the initial value for the search. It should be an even number.
The next two steps show one way to generate the required Hold[expr1,expr2,...] expression, which is used to protect the nextPrime function from local evaluation.
Now we pass the held expressions as arguments to ParallelTry[], by simply replacing the head Hold by ParallelTry.
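The steps above might be sketched as follows (ExportEnvironment and $Slaves are assumed toolkit forms; the starting value is an arbitrary choice):

```mathematica
ExportEnvironment[{nextPrime}];   (* make nextPrime known remotely *)
n = Length[$Slaves];              (* one task per kernel *)
k = 10^10;                        (* an even starting value *)

(* build Hold[nextPrime[...], nextPrime[...], ...] without evaluating;
   With[] injects the concrete start values into the held expressions *)
jobs = Table[With[{i0 = k + 2 i - 1, d = 2 n}, Hold[nextPrime[i0, d]]], {i, n}];
held = Flatten[Hold @@ jobs, 1, Hold];

(* replace the head Hold by ParallelTry to start the searches *)
ParallelTry @@ held
```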

## A Function for the Prime Search

Here is a function for the search that combines the previous steps. Note that it also declares nextPrime as a local function and cleans it up on all remote kernels after the computation.
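A sketch of such a wrapper (findPrime is a hypothetical name; exporting a Module-local symbol and the cleanup form RemoteEvaluate[Clear[...]] are assumptions about the toolkit):

```mathematica
findPrime[k_?EvenQ] :=
  Module[{nextPrime, n = Length[$Slaves], jobs, res},
    (* nextPrime is local to this function *)
    nextPrime[i0_, d_] := Module[{i = i0}, While[!PrimeQ[i], i += d]; i];
    ExportEnvironment[{nextPrime}];
    jobs = Table[With[{i0 = k + 2 i - 1, d = 2 n},
      Hold[nextPrime[i0, d]]], {i, n}];
    res = ParallelTry @@ Flatten[Hold @@ jobs, 1, Hold];
    RemoteEvaluate[Clear[nextPrime]];   (* clean up the remote kernels *)
    res
  ]
```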

# Finding Irreducible Polynomials

### Sequential Code

We want to find an irreducible polynomial modulo a prime p. This generates a random polynomial of a given degree mod p.
It is irreducible if it has no factors other than a numerical content; we must, however, also rule out a pure power of a single factor.
This loop keeps trying until it finds an irreducible one.
Here is the function that combines these steps.
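A sketch of these sequential steps; findOne is the name used later in this document, while randomPoly and irreducibleQ are hypothetical names for the generator and the predicate described above (Random[Integer, {0, p-1}] is the toolkit-era syntax for a random residue):

```mathematica
(* a random monic polynomial of degree d in x, with coefficients mod p *)
randomPoly[d_, p_, x_] :=
  x^d + Table[Random[Integer, {0, p - 1}], {d}] . x^Range[0, d - 1]

(* irreducible iff the factor list contains only the numerical content
   and one further factor, to the first power (this rules out pure powers) *)
irreducibleQ[poly_, p_] :=
  Module[{fl = FactorList[poly, Modulus -> p]},
    Length[fl] == 2 && fl[[2, 2]] == 1
  ]

(* keep trying random polynomials until one is irreducible *)
findOne[d_, p_, x_] :=
  Module[{poly},
    While[poly = randomPoly[d, p, x]; !irreducibleQ[poly, p]];
    poly
  ]
```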

### Parallel Algorithm

Use ParallelTry to find one irreducible polynomial by performing a random search on each remote kernel.
Generate a list of identical commands, one for each remote kernel. The commands can be identical because they involve randomness generated on the remote kernels.
Let them loose.
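A sketch of the parallel search, assuming each remote kernel has its own random state (randomPoly and irreducibleQ are the hypothetical helpers for the generator and predicate described in the sequential section):

```mathematica
ExportEnvironment[{randomPoly, irreducibleQ, findOne}];

(* one identical command per remote kernel; the random choices differ
   because the randomness is generated on the remote kernels *)
cmds = Table[Hold[findOne[4, 5, x]], {Length[$Slaves]}];
ParallelTry @@ Flatten[Hold @@ cmds, 1, Hold]
```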

### A Convenience

If you frequently use ParallelTry[] with identical expressions for all remote kernels, this helper function may be useful. It takes a single argument and duplicates it as often as is necessary (once for each available remote processor).
Now you can simply say the following.
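One way to write such a helper (tryAll is a hypothetical name; the text does not fix one):

```mathematica
(* duplicate a single held expression once per remote kernel *)
SetAttributes[tryAll, HoldAll];
tryAll[expr_] :=
  ParallelTry @@ Flatten[Hold @@ Table[Hold[expr], {Length[$Slaves]}], 1, Hold]
```

With it, the previous search becomes `tryAll[findOne[4, 5, x]]`.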

### Counting the Number of Trials

This instrumented version of findOne[] uses a shared variable to count the total number of polynomials tried before an irreducible one was found. Note the use of ntry++ rather than ntry=ntry+1 to avoid synchronization problems with several remote kernels accessing the variable at the same time.
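A sketch of the instrumented version; SharedVariables[] is assumed to be the toolkit's declaration form, and randomPoly/irreducibleQ stand for the generator and predicate described earlier:

```mathematica
SharedVariables[ntry];   (* ntry lives on the master kernel *)
ntry = 0;

findOne[d_, p_, x_] :=
  Module[{poly},
    While[ntry++;   (* atomic increment of the shared counter *)
      poly = randomPoly[d, p, x]; !irreducibleQ[poly, p]];
    poly
  ]
```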

# Parallel And and Or

There is one other class of evaluations whose result can often be determined without evaluating all expressions: parallel versions of And and Or. The value of And[e1,e2,...,en] is True only if all the ei evaluate to True. As soon as one of the ei evaluates to anything else, we can abort the remaining evaluations and return. Similar considerations apply to Or[e1,e2,...,en].

### The Code

ParallelAnd[] and ParallelOr[] are straightforward using the auxiliary procedure parallelTry[] developed earlier.
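Under the conventions of parallelTry[] described earlier (keep waiting while results equal the given token), the two functions might look like this:

```mathematica
SetAttributes[{ParallelAnd, ParallelOr}, HoldAll];

(* And: keep waiting while results are True; the first non-True
   result, if any, is returned and the rest are aborted *)
ParallelAnd[exprs___] := parallelTry[True, exprs]

(* Or: keep waiting while results are False *)
ParallelOr[exprs___] := parallelTry[False, exprs]
```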

## Example: Modular Polynomials

Here again is the predicate from the previous section to test the irreducibility of polynomials.
Here we test whether the polynomial x4+1 is irreducible modulo the given primes. Note how we generate the expressions inside Hold[] to prevent their (sequential) evaluation.
This result tells us that the polynomial is reducible modulo all these primes.
Evaluating all predicates gives us the individual results.
Note that ParallelEvaluate[And[e1,e2,...,en]] may also avoid evaluating all terms because the behaviour is built into And[]. The formulation with Replace[] prevents the (sequential) evaluation of the intermediate expression And[e1,e2,...,en].
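The example above might be sketched as follows (irreducibleQ is the hypothetical predicate name; the prime list is an arbitrary choice, and ParallelEvaluate is assumed to hold its argument as the Replace[] remark implies):

```mathematica
primes = {2, 3, 5, 7, 11, 13};
ExportEnvironment[{irreducibleQ}];

(* build Hold[irreducibleQ[...], ...]; Function[] inserts each prime
   into the expression before Hold[] freezes it *)
held = Flatten[Hold @@
  (Function[p, Hold[irreducibleQ[x^4 + 1, p]]] /@ primes), 1, Hold];

ParallelOr @@ held   (* False: x^4 + 1 is reducible mod every prime *)

(* evaluate all predicates; Replace[] turns the held expressions into
   ParallelEvaluate[And[...]] without evaluating And[...] sequentially *)
Replace[held, Hold[e___] :> ParallelEvaluate[And[e]]]
```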

# Comparing Methods of NMinimize

The NMinimize[] function in Mathematica supports several different methods to find minima of functions. It is often not clear which one will perform best on a given problem. This section shows how you can run all available methods in parallel on the same problem to compare their performance or identify the fastest one.

## Example Functions

Choose one function and the definition of the variables used, optionally including search ranges.
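One possible choice (an assumption; the original lists several candidates, and f and vars are hypothetical names for the function and variable list):

```mathematica
(* the Rosenbrock function, with a search region for each variable *)
f = 100 (y - x^2)^2 + (1 - x)^2;
vars = {{x, -5, 5}, {y, -5, 5}};
```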

## Try a Method

tryMethod runs NMinimize with the given method (a string or a list) and returns the name of the method, the time taken, and the result.
Method specifications can be any possible arguments of the Method option. The methods are described in the documentation of NMinimize[].
You can give method-specific parameters, too.
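A sketch of tryMethod, assuming f and vars name the chosen function and variable list:

```mathematica
(* run NMinimize with one method; return {method, time, result} *)
tryMethod[m_] :=
  Module[{time, res},
    {time, res} = Timing[NMinimize[f, vars, Method -> m]];
    {m, time, res}
  ]

tryMethod["DifferentialEvolution"]
tryMethod[{"SimulatedAnnealing", "PerturbationScale" -> 3}]
```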

## Compare Results

We run all methods in parallel to compare the results.
Note: Timings are meaningful only if all remote processors have the same speed.
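A sketch of the comparison, assuming Wait[] accepts a list of process ids and returns the corresponding results:

```mathematica
methods = {"NelderMead", "DifferentialEvolution",
  "SimulatedAnnealing", "RandomSearch"};
ExportEnvironment[{tryMethod, f, vars}];

(* queue one method per kernel and wait for all results *)
ids = Function[m, Queue[tryMethod[m]]] /@ methods;
TableForm[Wait[ids]]
```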

## Let the First Win

Here we use ParallelTry to find the fastest method and abort all other calculations. There is no point in including the Automatic method setting because it implies one of the other methods.
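Under the same assumptions as above, the race might be sketched as:

```mathematica
methods = {"NelderMead", "DifferentialEvolution",
  "SimulatedAnnealing", "RandomSearch"};
held = Flatten[Hold @@
  (Function[m, Hold[tryMethod[m]]] /@ methods), 1, Hold];

ParallelTry @@ held   (* first finisher wins; the rest are aborted *)
```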