This is documentation for Mathematica 8, which was
based on an earlier version of the Wolfram Language.

# ParallelCombine

ParallelCombine[f, h[e1, e2, …], comb] evaluates f[h[e1, e2, …]] in parallel by distributing parts of the computation to all parallel kernels and combining the partial results with comb. ParallelCombine[f, h[e1, e2, …]] is equivalent to ParallelCombine[f, h[e1, e2, …], h] if h has attribute Flat, and to ParallelCombine[f, h[e1, e2, …], Join] otherwise.
• ParallelCombine forms the expressions f[h[e1, …, ei]], f[h[ei+1, …, ej]], …, evaluates these on all available kernels, and combines the results with comb.
• The default combiner Join is appropriate for functions f such that the result of f[h[e1, e2, …]] has head h. This includes all functions with attribute Listable.
• For heads h with attribute Flat, the default combiner h effectively implements the associative law f[h[e1, e2]] == h[f[h[e1]], f[h[e2]]].
## Basic Examples

Apply f in parallel to chunks of a list (with 4 parallel kernels available):
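The original input cell is missing; a minimal reconstruction might look like this (the exact chunking depends on how many kernels are running):

```wolfram
LaunchKernels[4];            (* make 4 parallel kernels available *)
ParallelCombine[f, Range[8]]
(* the list is split into chunks, f is applied to each chunk on a
   subkernel, and the pieces are combined with the default combiner Join *)
```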
Show where each computation takes place:
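A sketch of such an input, using $KernelID to tag each piece with the kernel that evaluated it:

```wolfram
ParallelCombine[$KernelID -> # &, Range[8], List]
(* each chunk becomes a rule kernelID -> chunk; the combiner List
   collects the rules instead of joining them *)
```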
By default Join is used as a combiner function:
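With an undefined function the combining step stays visible, which shows the default combiner at work; a possible reconstruction:

```wolfram
ParallelCombine[f, Range[4]]
(* for an undefined f the result is left as a Join of the pieces,
   e.g. Join[f[{1}], f[{2}], f[{3}], f[{4}]] with 4 kernels *)
```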
Do a parallel filtering operation:
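One way to write such a filter (each kernel selects from its own chunk, and Join concatenates the survivors in order):

```wolfram
ParallelCombine[Select[#, PrimeQ] &, Range[30]]
(* → {2, 3, 5, 7, 11, 13, 17, 19, 23, 29} *)
```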
## Scope (9)
All Listable functions can be parallelized with ParallelCombine:
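For example, with the Listable function Sqrt (a reconstruction of the missing cell):

```wolfram
ParallelCombine[Sqrt, {1., 4., 9., 16., 25.}]
(* → {1., 2., 3., 4., 5.}: Sqrt threads over each chunk and
   Join concatenates the resulting sublists *)
```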
If the function is not Listable, use an explicit Map:
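A sketch with an undefined symbol g standing in for a non-Listable function:

```wolfram
ParallelCombine[Map[g, #] &, Range[6]]
(* → {g[1], g[2], g[3], g[4], g[5], g[6]}: mapping inside each
   chunk gives a list, so the default combiner Join applies *)
```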
Many functional programming constructs can be parallelized with ParallelCombine:
The result need not have the same length as the input:
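For instance, a selection can shrink each piece (a reconstruction of the missing cell):

```wolfram
ParallelCombine[Select[#, EvenQ] &, Range[10]]
(* → {2, 4, 6, 8, 10}: each piece may contribute fewer
   elements than it contains *)
```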
Evaluate the elements of a list in parallel:
The overall count of matching elements is equal to the sum of the partial counts:
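This is a parallel Count with Plus as the combiner; a possible form of the missing cell:

```wolfram
ParallelCombine[Count[#, _?PrimeQ] &, Range[100], Plus]
(* → 25: the partial counts from each chunk are added with Plus *)
```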
An element appears in a list if it appears in at least one of the pieces:
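So MemberQ parallelizes with Or as the combiner; a sketch:

```wolfram
ParallelCombine[MemberQ[#, 42] &, Range[100], Or]
(* → True: 42 appears in one of the pieces *)
```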
An element does not appear in a list if it appears in none of the pieces:
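Dually, FreeQ parallelizes with And as the combiner; a sketch:

```wolfram
ParallelCombine[FreeQ[#, 0] &, Range[100], And]
(* → True: 0 appears in none of the pieces *)
```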
Each subkernel performs a smaller number of additions and the combiner combines the results:
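A reconstruction of such a divide-and-conquer sum:

```wolfram
ParallelCombine[Total, Range[1000], Plus]
(* → 500500: each subkernel totals its own chunk and Plus
   adds the partial sums together *)
```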
Automatically pick the combiner for Flat functions:
Typical Flat functions:
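For a Flat head such as Plus, the head itself is picked as the combiner; a sketch with undefined symbols:

```wolfram
(* Plus has attribute Flat, so the combiner defaults to Plus itself *)
ParallelCombine[Identity, a + b + c + d]
(* → a + b + c + d: pieces such as a + b are evaluated on
   subkernels and recombined with Plus *)
```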
Listable functions of several arguments:
## Options (11)
Break the computation into the smallest possible subunits:
Break the computation into as many pieces as there are available kernels:
Break the computation into at most 2 evaluations per kernel for the entire job:
Break the computation into evaluations of at most 5 elements each:
The default option setting balances evaluation size and number of evaluations:
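The captions above correspond to settings of the Method option; a sketch of how they might be written:

```wolfram
ParallelCombine[Total, Range[100], Plus, Method -> "FinestGrained"]    (* smallest subunits *)
ParallelCombine[Total, Range[100], Plus, Method -> "CoarsestGrained"]  (* one piece per kernel *)
ParallelCombine[Total, Range[100], Plus, Method -> "EvaluationsPerKernel" -> 2]
ParallelCombine[Total, Range[100], Plus, Method -> "ItemsPerEvaluation" -> 5]
(* each returns 5050; the settings differ only in how the work is scheduled *)
```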
Visualize the number of evaluations per kernel and items per evaluation:
By default, definitions in the current context are distributed automatically:
Do not distribute any definitions of functions:
Distribute definitions for all symbols in all contexts appearing in a parallel computation:
Distribute only definitions in the given contexts:
Restore the value of the DistributedContexts option to its default:
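Sketches of the DistributedContexts settings described above (the context name MyPackage` is a placeholder):

```wolfram
ParallelCombine[f, Range[8], DistributedContexts -> None]           (* distribute no definitions *)
ParallelCombine[f, Range[8], DistributedContexts -> All]            (* all contexts *)
ParallelCombine[f, Range[8], DistributedContexts -> {"MyPackage`"}] (* only the given contexts *)
SetOptions[ParallelCombine, DistributedContexts :> $Context]        (* restore the default *)
```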
## Applications (3)
Reduce an associative expression in parallel:
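One such reduction uses Max both on the chunks and as the combiner, which is valid because Max is associative:

```wolfram
data = RandomReal[1, 10^6];
ParallelCombine[Max, data, Max]
(* each kernel reduces its chunk with Max; the partial maxima
   are then combined with Max *)
```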
Find out how a computation is distributed among all kernels:
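A sketch that reports how many items each kernel handled:

```wolfram
ParallelCombine[{$KernelID -> Length[#]} &, Range[100], Join]
(* yields rules kernelID -> number of items processed by that kernel *)
```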