ParallelCombine[f,h[e1,e2,…],comb] evaluates f[h[e1,e2,…]] in parallel by distributing parts of the computation to all parallel kernels and combining the partial results with comb.

ParallelCombine[f,h[e1,e2,…]]

is equivalent to ParallelCombine[f,h[e1,e2,…],h] if h has attribute Flat, and ParallelCombine[f,h[e1,e2,…],Join] otherwise.
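For example, the two default combiners can be sketched as follows (f is left as an undefined symbol here):

```wolfram
ParallelCombine[f, {a, b, c, d}]    (* List is not Flat: combiner defaults to Join *)
ParallelCombine[f, a + b + c + d]   (* Plus is Flat: combiner defaults to Plus *)
```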

Details and Options

  • ParallelCombine[f,h[e1,…,en],comb] forms expressions f[h[e1,…,ek]], f[h[ek+1,…]], …, f[h[…,en]], evaluates these on all available kernels, and combines the results ri with comb[r1,r2,…].
  • The default combiner Join is appropriate for functions f such that the result of f[h[e1,…,ek]] has head h. This includes all functions with attribute Listable.
  • For heads h with attribute Flat, the default combiner h effectively implements the associative law h[e1,…,en] = h[h[e1,…,ek],h[ek+1,…],…,h[…,en]].
  • With a compatible choice of comb, ParallelCombine[f,h[e1,e2,…],comb] is equivalent to f[h[e1,e2,…]].
  • If no kernels are available, ParallelCombine evaluates f[h[e1,e2,…]] normally.
  • ParallelCombine takes the same Method option as Parallelize. Possible settings include:
      "CoarsestGrained"            break the computation into as many pieces as there are available kernels
      "FinestGrained"              break the computation into the smallest possible subunits
      "EvaluationsPerKernel"->e    break the computation into at most e pieces per kernel
      "ItemsPerEvaluation"->m      break the computation into evaluations of at most m subunits each
      Automatic                    compromise between overhead and load balancing
  • ParallelCombine takes the same DistributedContexts option as Parallelize; the default value is DistributedContexts:>$DistributedContexts.
  • The ProgressReporting option specifies whether to report the progress of the parallel computation; the default value is ProgressReporting:>$ProgressReporting.



Basic Examples  (2)

Apply f in parallel to chunks of a list (with 4 parallel kernels available):
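A sketch of such a call (f is an undefined symbol here, and the chunk boundaries depend on the kernel configuration):

```wolfram
ParallelCombine[f, {e1, e2, e3, e4, e5, e6, e7, e8}]
(* with 4 kernels, effectively Join[f[{e1, e2}], f[{e3, e4}], f[{e5, e6}], f[{e7, e8}]] *)
```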

Show where each computation takes place:

By default Join is used as a combiner function:
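One way to see this (a sketch with a symbolic f):

```wolfram
(* these two calls are equivalent *)
ParallelCombine[f, {a, b, c, d}]
ParallelCombine[f, {a, b, c, d}, Join]
```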

Do a parallel filtering operation:
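A possible filtering example (the same result is obtained sequentially if no kernels are available):

```wolfram
ParallelCombine[Select[#, PrimeQ] &, Range[100]]
(* each kernel selects the primes in its own chunk, and Join
   concatenates the partial lists into the primes up to 100 *)
```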

Scope  (9)

Listable Functions  (1)

All Listable functions can be parallelized with ParallelCombine:
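For instance, Prime is Listable, so applying it to a chunk already yields a list, and the default Join combiner reassembles the full result:

```wolfram
ParallelCombine[Prime, Range[10]]
(* {2, 3, 5, 7, 11, 13, 17, 19, 23, 29} *)
```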

If the function is not Listable, use an explicit Map:
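A sketch with a hypothetical non-Listable function g:

```wolfram
(* g does not thread over lists, so map it over each chunk explicitly *)
ParallelCombine[Map[g, #] &, {a, b, c, d}]
(* equivalent to Map[g, {a, b, c, d}] *)
```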

Structure-Preserving Functions  (3)

Many functional programming constructs can be parallelized with ParallelCombine:

The result need not have the same length as the input:

Evaluate the elements of a list in parallel:

Reductions  (3)

The overall count of matching elements is equal to the sum of the partial counts:
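For example, counting with Plus as the combiner (a sketch):

```wolfram
ParallelCombine[Count[#, _?EvenQ] &, Range[100], Plus]
(* each kernel counts the even numbers in its chunk;
   Plus adds the partial counts, giving 50 *)
```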

An element appears in a list if it appears in at least one of the pieces:
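A sketch using Or as the combiner:

```wolfram
ParallelCombine[MemberQ[#, 17] &, Range[100], Or]
(* True, since at least one chunk contains 17 *)
```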

An element does not appear in a list if it appears in none of the pieces:

Associative Functions  (2)

Each subkernel performs a smaller number of additions and the combiner combines the results:

Automatically pick the combiner for Flat functions:
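For example, since Plus has the attribute Flat, the combiner defaults to Plus itself (a sketch):

```wolfram
ParallelCombine[Simplify, a + b + c + d]
(* Simplify is applied to subsums such as a + b and c + d on the
   subkernels, and the partial results are recombined with Plus *)
```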

Typical Flat functions:

Generalizations & Extensions  (1)

Listable functions of several arguments:

Options  (11)

Method  (6)

Break the computation into the smallest possible subunits:
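A sketch of this Method setting (f left symbolic; option placement as in Parallelize):

```wolfram
ParallelCombine[f, Range[8], Method -> "FinestGrained"]
(* every element becomes its own evaluation unit *)
```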

Break the computation into as many pieces as there are available kernels:

Break the computation into at most 2 evaluations per kernel for the entire job:

Break the computation into evaluations of at most 5 elements each:

The default option setting balances evaluation size and number of evaluations:

Visualize the number of evaluations per kernel and items per evaluation:

DistributedContexts  (5)

By default, definitions in the current context are distributed automatically:

Do not distribute any definitions of functions:
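A sketch (g stands for a function defined only on the master kernel):

```wolfram
ParallelCombine[Map[g, #] &, Range[4], DistributedContexts -> None]
(* the definitions of g are not sent to the subkernels,
   so g[1], g[2], ... come back unevaluated *)
```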

Distribute definitions for all symbols in all contexts appearing in a parallel computation:

Distribute only definitions in the given contexts:

Restore the value of the DistributedContexts option to its default:

Applications  (3)

Reduce an associative expression in parallel:

Find out how a computation is distributed among all kernels:

A parallel version of MapThread:
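One possible sketch (pMapThread is a hypothetical helper name, not a built-in):

```wolfram
(* chunk the transposed argument lists and apply f to each tuple *)
pMapThread[f_, lists_] := ParallelCombine[Apply[f, #, {1}] &, Transpose[lists]]

pMapThread[g, {{a, b, c}, {x, y, z}}]
(* {g[a, x], g[b, y], g[c, z]} *)
```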

Properties & Relations  (5)

An implementation of ParallelMap:
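A minimal sketch (pMap is a hypothetical name):

```wolfram
(* map f over each chunk; Join reassembles the mapped list *)
pMap[f_, list_List] := ParallelCombine[Map[f, #] &, list]

pMap[g, {a, b, c}]  (* same result as Map[g, {a, b, c}] *)
```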

For listable functions ParallelCombine and ParallelMap are equivalent:

Parallelize is often implemented in terms of ParallelCombine:

Parallel versions of many data-parallel commands can easily be written with ParallelCombine:

Functions defined interactively are automatically distributed to all kernels when needed:

Distribute definitions manually and disable automatic distribution:

The function is now evaluated on the parallel kernels:

Possible Issues  (2)

The combiner must be compatible with the type of the partial results:

The default combiner is Join, which is appropriate for list-like results:
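For example, Total of a chunk is a single number rather than a list, so Join cannot combine the partial results; Plus is a compatible combiner (a sketch):

```wolfram
ParallelCombine[Total, Range[100], Plus]
(* each kernel totals its chunk; Plus adds the partial totals: 5050 *)
```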

Functions may simplify short argument lists, but not longer ones:

Such simplification of partial expressions may make parallel evaluation impossible:

Prevent simplification of partial expressions and apply the desired function only at the end:

Wolfram Research (2008), ParallelCombine, Wolfram Language function (updated 2010).

