Virtual Shared Memory
Special-purpose multiprocessing hardware comes in two types, shared memory and distributed memory. In a shared-memory machine, all processors have access to a common main memory. In a distributed-memory machine, each processor has its own main memory, and the processors are connected through a sophisticated network. A collection of networked PCs is also a kind of distributed-memory parallel machine.
Communication between processors is an important prerequisite for all but the most trivial parallel processing tasks. In a shared-memory machine, a processor can simply write a value into a particular memory location, and all other processors can read this value. In a distributed-memory machine, exchanging values of variables involves explicit communication over the network.
Virtual shared memory is a programming model that allows processors on a distributed-memory machine to be programmed as if they had shared memory. A software layer takes care of the necessary communication in a transparent way.
The Wolfram Language uses independent kernels as parallel processors. It is clear that these kernels do not share a common memory, even if they happen to reside on the same machine. However, the Wolfram Language provides functions that implement virtual shared memory for these remote kernels.
This is done with a simple programming model. If a variable a is shared, any kernel that reads the variable (simply by evaluating it) reads a common value that is maintained by the master kernel. Any kernel that changes the value of a, for example with an assignment a=val, modifies the single global copy of the variable, so that all other kernels that subsequently read the variable see its new value.
|SetSharedVariable[s1,s2,…]||declare the symbols si as shared variables|
|SetSharedFunction[f1,f2,…]||declare the symbols fi as shared functions or data types|
|$SharedVariables||the list of currently shared variables (wrapped in Hold)|
|$SharedFunctions||the list of currently shared functions (wrapped in Hold)|
|UnsetShared[s1,s2,…]||stop the sharing of the given variables or functions|
|UnsetShared[patt]||stop the sharing of all variables and functions whose names match the string pattern patt|
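As a minimal illustration of these functions (the variable name counter is illustrative):

```wolfram
LaunchKernels[];            (* start the parallel kernels *)

SetSharedVariable[counter];
counter = 0;                (* the single copy lives in the master kernel *)

(* every kernel updates the same shared variable; the += operation
   is forwarded to the master kernel *)
ParallelEvaluate[counter += 1];

counter             (* equals the number of parallel kernels *)
$SharedVariables    (* contains Hold[counter] *)
UnsetShared[counter]
```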
A variable s that has been declared shared with SetSharedVariable[s] exists only in the master (local) kernel. The following operations on a remote kernel are redefined so that they have the described effect.
For technical reasons, every shared variable must have a value. If the variable in the master kernel does not have a value, it is set to Null.
If a variable is Protected at the time you declare it as shared, remote kernels can only access the variable, but not change its value.
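For example, a read-only shared constant can be set up as follows (the name limit is illustrative):

```wolfram
limit = 100;
Protect[limit];             (* protect the variable before sharing it *)
SetSharedVariable[limit];

ParallelEvaluate[limit]       (* every remote kernel reads 100 *)
ParallelEvaluate[limit = 5]   (* assignment fails with a protection message;
                                 the value stays 100 *)
```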
A symbol f that has been declared shared with SetSharedFunction[f] exists only in the master (local) kernel. The following operations on a remote kernel are redefined so that they have the described effect.
|f[[i]]++,f[[i,j]]--,++f[[i]],--f[[i]]||the increment/decrement operation is performed in the master kernel (this operation is atomic and can be used for synchronization)|
For technical reasons, every expression of the form f[…] must have a value. If the expression f[…] in the master kernel does not evaluate, the result is set to Null.
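Because the increment is atomic, a shared variable holding a counter can hand out unique numbers to concurrently running kernels. A sketch, assuming the atomic part increment described above (the name ids is illustrative):

```wolfram
SetSharedVariable[ids];
ids = {0};    (* a one-element list, so ids[[1]]++ is the atomic part increment *)

(* each task atomically takes the current number and increments the counter *)
ParallelTable[ids[[1]]++, {8}]
(* a permutation of 0, ..., 7: every task receives a distinct number *)
```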
You can define shared functions, as in the following. Be sure that the symbol x does not have a value in either the remote kernels or the master kernel. The symbol x should not be a shared variable.
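A sketch of such a definition (the names f and x are illustrative; x must be value-free on all kernels, as noted above):

```wolfram
SetSharedFunction[f];

(* the definition is stored in the master kernel;
   x is a pattern variable and must not have a value anywhere *)
f[x_] = x^2;

(* each remote kernel calls the shared function; the call is
   handled by the master kernel *)
ParallelEvaluate[f[$KernelID]]
```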
If you make a delayed assignment on a remote kernel, the right side of the definition will be evaluated on the kernel where you use the function. Any immediate assignment is always evaluated on the master kernel.
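The difference can be seen with definitions whose right side depends on where it is evaluated; $KernelID identifies the evaluating kernel (0 on the master). A sketch, consistent with the behavior described above:

```wolfram
SetSharedFunction[g, h];

g[x_] := $KernelID   (* delayed: right side evaluates where g is used *)
h[x_] = $KernelID;   (* immediate: right side evaluated on the master *)

ParallelEvaluate[{g[1], h[1]}]
(* g reports each remote kernel's own ID; h always reports the master's, 0 *)
```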
If a function is Protected when you declare it as shared, remote kernels can only use it, but not change its definition.
In a situation where several concurrently running remote kernels access the same shared variable for reading and writing, there is no guarantee that the variable is not changed by another process between the time you read its old value and the time you write the new one. Any value that another process wrote in the meantime is silently overwritten.
The code inside the first argument of ParallelMap is the client code that is executed independently on the available remote kernels. The code reads the shared variable y, stores its value in a local variable a, performs some computations (here simulated with Pause), and then wants to increment the value of y by setting it to a+1. But by that time, the value of y is most likely no longer equal to a, because another process will have changed it.
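The lost updates can be reproduced with a shared variable y as just described (a sketch; Pause simulates the computation):

```wolfram
SetSharedVariable[y];
y = 0;

(* each task reads y, computes for a while, then writes back old value + 1 *)
ParallelMap[
  Module[{a}, a = y; Pause[RandomReal[0.1]]; y = a + 1] &,
  Range[20]];

y   (* typically well below 20: concurrent updates overwrite each other *)
```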
The code between reading the variable y and setting it to a new value is called a critical section. During its execution, no other process should read or write y. To reserve a critical section, a process can acquire an exclusive lock before entering the critical section and release the lock after leaving the critical section.
The Wolfram Language provides the function CriticalSection[lck,expr] to acquire the lock lck, evaluate expr, and then release the lock. While one process holds the lock, no other process can acquire it. The lock is released when the expression finishes evaluating.
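Protecting the read-modify-write of y described above with a lock makes the count come out right (a sketch; lck is an otherwise unused symbol serving as the lock):

```wolfram
SetSharedVariable[y];
y = 0;

ParallelMap[
  CriticalSection[{lck},
    Module[{a}, a = y; Pause[RandomReal[0.1]]; y = a + 1]] &,
  Range[20]];

y   (* now exactly 20: only one task at a time is inside the critical section *)
```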
Note that between attempts to acquire the lock (inside While) the process waits for a while. Otherwise, processes waiting to acquire a lock held by another process would put a heavy load onto the master kernel.
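A lock of this kind can be sketched using only the atomic increment described earlier (acquire and release are hypothetical helper names, not built-in functions):

```wolfram
SetSharedVariable[lock];
lock = {0};   (* 0 means free; the atomic increment acts as a test-and-set *)

acquire[] := While[lock[[1]]++ =!= 0,   (* nonzero: another process holds it *)
   lock[[1]]--;                         (* undo our claim *)
   Pause[0.01]]                         (* wait a while before retrying *)

release[] := (lock[[1]]--;)
```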
Locking slows down a computation because remote processes may have to wait for one another. In this example, the result is essentially sequential execution. You should keep critical sections as short as possible. If two processes each hold a lock and then try to acquire the other's lock, a deadlock occurs in which both processes wait forever.