Legacy Documentation

Parallel Computing Toolkit (2000)


Parallel Computing Toolkit — Examples

Advanced Use of Shared Variables

Preparation

Make sure you have at least two kernels available.
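The exact setup depends on your installation; the following is a minimal sketch, assuming the toolkit loads from the context Parallel` and that local slave kernels can be started with LaunchSlave. The debug package is loaded first only because tracing is used later in this example.

    Needs["Parallel`Debug`"]     (* optional; must be loaded before the toolkit if tracing is wanted *)
    Needs["Parallel`"]           (* load Parallel Computing Toolkit; context name assumed *)
    LaunchSlave["localhost"]     (* start two local slave kernels *)
    LaunchSlave["localhost"]
    Length[$Slaves]              (* should now be at least 2 *)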

Parallel Dynamic Programming

Introduction

The right side of a delayed definition for a shared downvalue is evaluated on the remote kernel that asked for the value, provided the definition itself was performed on a remote kernel.
Let us turn the dynamic programming example for Fibonacci numbers into a parallel program. Here is the sequential version.
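A sketch of the usual memoizing definition for the sequential version:

    fib[1] = fib[2] = 1;
    fib[n_] := fib[n] = fib[n - 1] + fib[n - 2]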
For the parallel version we have to declare fib as a shared downvalue.
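A sketch of the declaration, assuming SharedDownValues is the declaring function and that any previous local definitions are cleared first:

    Clear[fib];                 (* remove the local sequential definitions *)
    SharedDownValues[fib]       (* declare fib as a shared downvalue *)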
Now we make the definitions, but through a (single) remote kernel.
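A sketch of sending the definitions through a single remote kernel, assuming RemoteEvaluate[expr, kernel] evaluates expr on the given kernel:

    RemoteEvaluate[
      fib[1] = fib[2] = 1;
      fib[n_] := fib[n] = fib[n - 1] + fib[n - 2],
      First[$Slaves]
    ]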
The definitions are in effect performed on the master kernel, but the delayed definition contains special code that causes its right side to be evaluated on the remote kernel that asked for the value.
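A possible way to exercise the shared definition from several kernels, assuming ParallelMap distributes the evaluations over the available slave kernels:

    ParallelMap[fib, Range[20, 25]]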

Tracing a Calculation

To see what is going on, we should enable tracing of shared memory operations. Note that this is only possible if you loaded the debug package before Parallel Computing Toolkit.
Note that, despite dynamic programming, some values were computed more than once: several kernels may ask for a value that has not yet been stored and therefore compute it independently. Here are all additional definitions performed during the calculation.
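One way to inspect which values ended up being stored is to look at the accumulated downvalues on the master kernel:

    DownValues[fib]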

Callbacks and Shared Resources

This example shows how to manage a shared resource through shared downvalues rather than through synchronization and a shared variable. It is similar to the more complicated one in the Interval Minimum example. In that case we needed a shared priority queue; here we work with a simple first in/first out (FIFO) queue. The queue itself exists only in the master kernel; the slave kernels can enter items into the queue and remove the first item from the queue.

Master-Side Definitions

On the master kernel we maintain the queue. We can use the FIFO queue implementation that is part of Parallel Computing Toolkit.
These functions are used from the queue package: FIFOQueue[], EnQueue[], EmptyQ[], and DeQueue[].
The variable masterQueue holds the queue; the function newQueue[] initializes it.
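A sketch of these master-side definitions; the context name of the queue package is an assumption:

    Needs["Parallel`Queue`FIFO`"]               (* assumed context of the queue package *)
    newQueue[] := (masterQueue = FIFOQueue[])   (* create a fresh, empty queue *)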

Slave-Side Access

The slave kernels need these access functions. Note that none of them takes the queue itself as an argument. They all work implicitly on the master queue defined previously.
A selector sizeQ[] returns the number of items currently in the queue.
A function enQ[item] inserts the given item into the queue. Its definition ends in a semicolon so that no unnecessary result is sent back to the slave kernel that calls it.
A function deQ[] removes the first (oldest) item from the queue and returns it. If the queue is empty, it returns $Failed.
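A sketch of the three access functions, defined on the master kernel; the argument order EnQueue[queue, item] and the use of Normal[] for counting are assumptions:

    sizeQ[] := Length[Normal[masterQueue]]                           (* number of items in the queue *)
    enQ[item_] := (EnQueue[masterQueue, item];)                      (* trailing semicolon: no result is returned *)
    deQ[] := If[EmptyQ[masterQueue], $Failed, DeQueue[masterQueue]]  (* oldest item, or $Failed if empty *)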
To set up the slave kernels, we define these access functions as shared downvalues (and not as an exported environment!). The definitions themselves are not exported. Because the clients are not supposed to modify these definitions, we protect the symbols before we export them.
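A sketch of the sharing step; whether SharedDownValues accepts several symbols at once is not assumed, so it is called once per function:

    Protect[sizeQ, enQ, deQ];       (* clients must not modify the definitions *)
    SharedDownValues[sizeQ]
    SharedDownValues[enQ]
    SharedDownValues[deQ]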
The effect is that each time one of these functions is invoked in a slave kernel, a callback is made to the master where the code is evaluated and the shared queue manipulated accordingly. The result, if any, is then sent back to the kernel that requested it.

Example Use

We need at least two remote kernels.
The queue is initialized (and empty).
To get started, we can also insert items into the queue directly on the master kernel.
We can use Normal[] to look at the current queue contents.
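A sketch of these first steps on the master kernel; the item names a and b are arbitrary:

    newQueue[];                  (* initialize the queue *)
    EmptyQ[masterQueue]          (* True: the queue starts out empty *)
    enQ[a]; enQ[b];              (* enter two items directly on the master *)
    Normal[masterQueue]          (* look at the current contents *)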
A remote kernel can enter an item into the queue. It shows up in the single queue held on the master.
A kernel can remove the oldest item from the queue.
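A sketch of a single slave kernel using the shared access functions, assuming RemoteEvaluate[expr, kernel] targets one particular kernel:

    RemoteEvaluate[enQ[c], First[$Slaves]]     (* one slave enters an item *)
    Normal[masterQueue]                        (* it shows up in the master's queue *)
    RemoteEvaluate[deQ[], First[$Slaves]]      (* a slave removes the oldest item *)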
Here all remote kernels enter a value into the queue. Synchronization happens automatically. Note that no values are returned to the remote kernels.
But we can convince ourselves that the items made it into the queue.
Here all kernels remove an item from the queue. The order in which the remote kernels access the queue is unspecified; the fastest one gets the first item.
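A sketch of these collective steps; $ProcessID is used only because it differs from kernel to kernel, and RemoteEvaluate without a kernel argument is assumed to evaluate on all slaves:

    RemoteEvaluate[enQ[$ProcessID]]     (* every slave enters a value; nothing is sent back *)
    Normal[masterQueue]                 (* all items made it into the one master queue *)
    RemoteEvaluate[deQ[]]               (* every slave removes one item; the order is unspecified *)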
Once the queue is empty, deQ[] returns $Failed, which is often used as a stopping condition for the code on the remote kernels.
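A sketch of this stopping-condition idiom; process is a hypothetical placeholder for whatever is done with each item:

    RemoteEvaluate[
      Module[{item},
        While[(item = deQ[]) =!= $Failed,
          process[item]                 (* hypothetical per-item work *)
        ]
      ]
    ]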

The Dining Philosophers

The Dining Philosophers is a classic example of synchronization and access to shared resources. A number of philosophers sit at a round table with one fork between any two adjacent ones. When a philosopher gets hungry, he tries to grab the forks to his left and right, eats for a while, then puts back the forks and continues thinking. If one of the forks is not available, he gets angry and has to wait until the fork becomes available.

Code

The code for each philosopher is an infinite loop. The downvalues fork[1], fork[2], ... are used for the synchronization. Their value is set to i if the ith philosopher holds the respective fork and cleared when he puts it back.
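A sketch of what the philosopher code might look like; it assumes that fork has been declared as a shared downvalue, and it glosses over the fact that testing and grabbing a fork should really be one atomic operation:

    SharedDownValues[fork]      (* the fork[j] values live on the master kernel *)

    philo[i_, n_] :=
      Module[{left = i, right = Mod[i, n] + 1},
        While[True,
          Pause[Random[]];                           (* think for a while *)
          While[ValueQ[fork[left]], Pause[0.1]];     (* wait until the left fork is free *)
          fork[left] = i;                            (* grab it *)
          While[ValueQ[fork[right]], Pause[0.1]];    (* wait until the right fork is free *)
          fork[right] = i;                           (* grab it *)
          Pause[Random[]];                           (* eat *)
          fork[left] =.; fork[right] =.              (* put both forks back *)
        ]
      ]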

Sample Run (With an Interrupt)

Note that we cleared the local definition of philo[] after exporting it to all remote kernels; we can, therefore, set up the expressions to be evaluated remotely without having to protect them from local evaluation.
Now, we start the simulation. After it has been running for a while, the master kernel can be interrupted by choosing Abort Evaluation.
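A sketch of one way to start the simulation without blocking the master kernel; Send[kernel, expr] is assumed to submit an evaluation without waiting for its result:

    n = Length[$Slaves];
    exprs = Table[philo[i, n], {i, n}];    (* philo is cleared locally, so these stay unevaluated *)
    MapThread[Send, {$Slaves, exprs}]      (* one philosopher per slave kernel *)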
After the local abort, we should also abort the remote kernels.

With Tracing

To see the details of the synchronization, you can turn on tracing before running the previous simulation.
After the simulation, turn tracing off again.