# Parallel Evaluation

## Sending Commands to Remote Kernels

Recall that connections to remote kernels, as opened by LaunchKernels, are represented as kernel objects. See Connection Methods for details. The commands in this section take parallel kernels as arguments and use them to carry out computations.

### Low-Level Parallel Evaluation

ParallelEvaluate[cmd,kernel] | sends cmd for evaluation to the parallel kernel kernel, then waits for the result and returns |

ParallelEvaluate[cmd,{kernels}] | sends cmd for evaluation to the parallel kernels given, then waits for the results and returns them |

ParallelEvaluate[cmd] | sends cmd for evaluation to all parallel kernels and returns the list of results; equivalent to ParallelEvaluate[cmd,Kernels[]] |

Sending and receiving commands to and from remote kernels.

ParallelEvaluate has the attribute HoldFirst.

You cannot use ParallelEvaluate while a concurrent computation involving ParallelSubmit or WaitAll is in progress. See "Concurrency: Managing Parallel Processes" for details.

### Values of Variables

Values of variables defined on the local master kernel are usually not available to remote kernels. If a command you send for evaluation refers to a variable, it usually will not work as expected.

If you need variable values and definitions carried over to the remote kernels, use DistributeDefinitions or shared variables.
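As a minimal sketch (the variable name x is illustrative), assuming parallel kernels are already running:

```wolfram
x = 17;                      (* defined only on the master kernel *)
ParallelEvaluate[x]          (* remote kernels have no value for x: {x, x, …} *)
DistributeDefinitions[x];
ParallelEvaluate[x]          (* now every kernel returns 17: {17, 17, …} *)
```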

Iterators, such as those in Table and Do, behave the same way with respect to the iterator variable: its value on the master kernel is not available to the remote kernels. Therefore, a statement like the following will not behave as expected.
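A sketch of the pitfall: ParallelEvaluate holds its argument, so the iterator value assigned on the master kernel never reaches the remote kernel.

```wolfram
(* Table localizes i on the master kernel, but ParallelEvaluate *)
(* sends the held expression i^2, where i has no remote value *)
Table[ParallelEvaluate[i^2, First[Kernels[]]], {i, 3}]
(* gives {i^2, i^2, i^2} rather than {1, 4, 9} *)
```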

Pattern variables, constants, and pure function variables will work as expected on the remote kernel. Each of the following three examples will produce the expected result.
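Sketches of the three cases (the particular expressions are illustrative); with no kernel argument, each ParallelEvaluate returns one result per kernel:

```wolfram
ParallelEvaluate[Replace[5, y_ :> y^2]]     (* pattern variable: 25 on each kernel *)
With[{c = 10}, ParallelEvaluate[c^2]]       (* constant inserted by With before sending: 100 *)
ParallelEvaluate[Function[x, x + 1][41]]    (* pure-function variable: 42 *)
```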


## Parallel Evaluation of Expressions

ParallelCombine[f,h[e_{1},e_{2},…,e_{n}],comb] | evaluates f[h[e_{1},e_{2},…,e_{n}]] in parallel by distributing chunks f[h[e_{i},e_{i+1},…,e_{i+k}]] to all kernels and combining the results with comb |

ParallelCombine[f,h[e_{1},e_{2},…,e_{n}]] | the default combiner comb is h, if h has attribute Flat, and Join otherwise |

ParallelCombine[f,h[e_{1},e_{2},…,e_{n}],comb] breaks h[e_{1},e_{2},…,e_{n}] into pieces h[e_{i},e_{i+1},…,e_{i+k}], evaluates f[h[e_{i},e_{i+1},…,e_{i+k}]] in parallel, then combines the results r_{1},r_{2},…,r_{m} using comb[r_{1},r_{2},…,r_{m}]. ParallelCombine has the attribute HoldFirst, so that h[e_{1},e_{2},…,e_{n}] is not evaluated on the master kernel before the parallelization.

### ParallelCombine

ParallelCombine is a general and powerful command whose default argument values are suitable both for evaluating the elements of containers, such as lists, and for associative operations.

#### Evaluating List-like Containers

If the result of applying the function f to a list is again a list, ParallelCombine[f,h[e_{1},e_{2},…,e_{n}],comb] simply applies f to pieces of the input list and joins the partial results together.

The result is the same as that of Prime[{1,2,3,4,5,6,7,8,9}], but the computation is done in parallel.
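A sketch of this case: Prime is Listable, so applying it to a list of indices again gives a list, and the default combiner Join reassembles the partial results.

```wolfram
ParallelCombine[Prime, {1, 2, 3, 4, 5, 6, 7, 8, 9}]
(* {2, 3, 5, 7, 11, 13, 17, 19, 23} *)
```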

If the function is Identity, ParallelCombine simply evaluates the elements e_{i} in parallel.
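For instance (a trivial illustrative computation; ParallelCombine has HoldFirst, so the sums are evaluated on the parallel kernels, not on the master):

```wolfram
ParallelCombine[Identity, {1 + 1, 2 + 2, 3 + 3}]   (* {2, 4, 6} *)
```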

If the result of applying the function f to a list is *not* a list, a custom combiner has to be chosen.

The function Function[li,Count[li,_?OddQ]] counts the number of odd elements in a list. To find the total number of odd elements, add the partial results together.
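A sketch with Range[100] as the illustrative input list:

```wolfram
(* each chunk's count is an integer, not a list, so Join would fail; *)
(* Plus adds the partial counts together *)
ParallelCombine[Function[li, Count[li, _?OddQ]], Range[100], Plus]
(* 50 *)
```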

#### Evaluating Associative Operations

If the operation h in h[e_{1},e_{2},…,e_{n}] is associative (has attribute Flat), the identity

h[e_{1},e_{2},…,e_{n}] = h[h[e_{1},…,e_{i}],h[e_{i+1},…,e_{n}]]

holds; with the default combiner being h itself, the operation is parallelized in a natural way. Here all numbers are added in parallel.
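A minimal sketch of such a sum (the numbers are illustrative):

```wolfram
(* Plus is Flat, so the default combiner is Plus itself; *)
(* HoldFirst keeps the sum from being evaluated on the master kernel *)
ParallelCombine[Identity, 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8]   (* 36 *)
```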

### Parallel Mapping and Iterators

The commands in this section are fundamental to parallel programming in the Wolfram Language.

ParallelMap[f,h[e_{1},e_{2},…]] | evaluates h[f[e_{1}],f[e_{2}],…] in parallel |

ParallelTable[expr,{i,i_{0},i_{1},di},{j,j_{0},j_{1},dj},…] | builds Table[expr,{i,i_{0},i_{1},di},{j,j_{0},j_{1},dj},…] in parallel; parallelization occurs along the first (outermost) iterator {i,i_{0},i_{1},di} |

ParallelSum[…], ParallelProduct[…] | compute sums and products in parallel |

Parallel evaluation, mapping, and tables.

ParallelMap[f,h[e_{1},e_{2},…]] is a parallel version of f/@h[e_{1},e_{2},…] evaluating the individual f[e_{i}] in parallel rather than sequentially.
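Schematically (with undefined symbols f, a, b, c standing in for an actual function and arguments):

```wolfram
ParallelMap[f, {a, b, c}]   (* {f[a], f[b], f[c]}, each f[e_i] computed on a parallel kernel *)
```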

### Side Effects

Unless you use shared variables, the parallel evaluations performed are completely independent and cannot influence each other. Furthermore, any side effects, such as assignments to variables, that happen as part of evaluations will be lost. The only effect of a parallel evaluation is that its result is returned at the end.

### Examples

First, start several remote kernels.

The sine function is applied to the given arguments. Each computation takes place on a remote kernel.
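A sketch of this session, assuming the arguments mentioned below (0, π, and so on):

```wolfram
LaunchKernels[]                          (* start the default number of parallel kernels *)
ParallelMap[Sin, {0, Pi, 2 Pi, 3 Pi}]    (* {0, 0, 0, 0} *)
```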

This particular computation is almost certainly too trivial to benefit from parallel evaluation. The overhead required to send the expressions Sin[0], Sin[π], and so on to the remote kernels and to collect the results will be larger than the gain obtained from parallelization.

Factoring integers of the form 111…1 takes more time, so this computation can benefit from parallelization.
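One way to express this (the range of sizes is illustrative):

```wolfram
(* (10^i - 1)/9 is the integer written with i ones *)
ParallelMap[FactorInteger, Table[(10^i - 1)/9, {i, 15, 20}]]
```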

Alternatively, you can use ParallelTable. Here a list of the number of factors in 11…1/i is generated.
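A sketch of such a table, assuming the same repunits as above and counting distinct prime factors:

```wolfram
ParallelTable[PrimeNu[(10^i - 1)/9], {i, 15, 20}]
(* number of distinct prime factors of each repunit *)
```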

### Automatic Distribution of Definitions

Parallel commands such as ParallelTable will automatically distribute the values and functions needed, using effectively DistributeDefinitions.


This automatic distribution happens for any functions and variables you define interactively, within the same notebook (technically, for all symbols in the default context). Definitions from other contexts, such as functions from packages, are not distributed automatically.
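For instance (the definitions of f and n are illustrative and assumed to be made interactively in the default context):

```wolfram
f[x_] := x^2;  n = 5;
ParallelTable[f[i], {i, n}]   (* f and n are distributed automatically: {1, 4, 9, 16, 25} *)
```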

Lower-level functions, such as ParallelEvaluate, do not perform any automatic distribution of values.
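A sketch of the difference (g is an illustrative definition):

```wolfram
g[x_] := x + 1;
ParallelEvaluate[g[2]]         (* g is undefined on the remote kernels: {g[2], g[2], …} *)
DistributeDefinitions[g];
ParallelEvaluate[g[2]]         (* now {3, 3, …} *)
```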


### Automatic Parallelization

Parallelize[cmd[list,arguments…]] recognizes if cmd is a Wolfram Language function that operates on a list or other long expression in a way that can be easily parallelized and performs the parallelization automatically.

Not all uses of these commands can be parallelized. A message is generated and the evaluation is performed sequentially on the master kernel if necessary.
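Sketches of both outcomes (the particular expressions are illustrative):

```wolfram
Parallelize[Map[PrimeQ, Range[4]]]   (* recognized and parallelized: {False, True, True, False} *)
Parallelize[Print["hello"]]          (* not parallelizable: a message is issued, then it is evaluated sequentially *)
```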