This is documentation for Mathematica 8, which was
based on an earlier version of the Wolfram Language.

ParallelMap

ParallelMap[f, expr]
applies f in parallel to each element on the first level in expr.
ParallelMap[f, expr, levelspec]
applies f in parallel to parts of expr specified by levelspec.
  • ParallelMap is a parallel version of Map, which automatically distributes different applications of f among different kernels and processors.
  • ParallelMap will give the same results as Map, except for side effects during the computation.
  • ParallelMap uses the same level specifications as Map. Not all level specifications can be parallelized.
  • If an instance of ParallelMap cannot be parallelized, it is evaluated using Map.
ParallelMap works like Map, but in parallel:
ParallelMap can work with any function:
Use explicit pure functions:
Functions defined interactively can immediately be used in parallel:
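The interactive example cells did not survive extraction. A minimal sketch of equivalent input for the cases listed above (assuming parallel kernels have been launched, e.g. with LaunchKernels[]):

```mathematica
(* ParallelMap works like Map, but distributes the applications of f *)
ParallelMap[f, {a, b, c, d}]          (* {f[a], f[b], f[c], f[d]} *)

(* any function works, including built-ins and explicit pure functions *)
ParallelMap[Prime, {10, 20, 30}]      (* {29, 71, 113} *)
ParallelMap[#^2 &, Range[5]]          (* {1, 4, 9, 16, 25} *)

(* functions defined interactively are distributed to the kernels automatically *)
square[x_] := x^2
ParallelMap[square, Range[4]]         (* {1, 4, 9, 16} *)
```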
Map at level 1 (default):
Map down to level 2:
Map at level 2:
Map down to level 3:
Map on all levels, starting at level 1:
Negative levels:
Positive and negative levels can be mixed:
Different heads at each level:
Not all levels can be parallelized:
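The code cells for these level specifications were lost in extraction; the following sketch illustrates them (the outputs follow Map's level semantics):

```mathematica
expr = {{a, b}, {c, d}};

ParallelMap[f, expr]           (* level 1: {f[{a, b}], f[{c, d}]} *)
ParallelMap[f, expr, 2]        (* down to level 2: {f[{f[a], f[b]}], f[{f[c], f[d]}]} *)
ParallelMap[f, expr, {2}]      (* level 2 only: {{f[a], f[b]}, {f[c], f[d]}} *)
ParallelMap[f, expr, {-1}]     (* negative level: maps onto the leaves *)
ParallelMap[f, expr, {1, -1}]  (* positive and negative levels mixed *)

(* a level specification that includes level 0 cannot be parallelized
   and falls back to a sequential Map *)
ParallelMap[f, expr, {0, 1}]
```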
ParallelMap can be used on expressions with any head:
Functions with attribute Listable are mapped automatically:
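A sketch of both generalizations:

```mathematica
(* ParallelMap works on expressions with any head, not just List *)
ParallelMap[f, h[a, b, c]]    (* h[f[a], f[b], f[c]] *)

(* Listable functions thread over lists on their own, without any Map *)
Sqrt[{1., 4., 9.}]            (* {1., 2., 3.} *)
```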
Break the computation into the smallest possible subunits:
Break the computation into as many pieces as there are available kernels:
Break the computation into at most 2 evaluations per kernel for the entire job:
Break the computation into evaluations of at most 5 elements each:
The default option setting balances evaluation size and number of evaluations:
Calculations with vastly differing runtimes should be parallelized as finely as possible:
A large number of simple calculations should be distributed into as few batches as possible:
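The granularity settings above correspond to the Method option of the parallel functions; the suboption names below follow the Mathematica 8 documentation:

```mathematica
data = Range[40];

ParallelMap[f, data, Method -> "FinestGrained"]    (* smallest possible subunits *)
ParallelMap[f, data, Method -> "CoarsestGrained"]  (* one batch per kernel *)
ParallelMap[f, data, Method -> "EvaluationsPerKernel" -> 2]  (* at most 2 evaluations per kernel *)
ParallelMap[f, data, Method -> "ItemsPerEvaluation" -> 5]    (* at most 5 elements per evaluation *)
ParallelMap[f, data]  (* default: balances evaluation size against number of evaluations *)
```

Fine-grained scheduling suits tasks with very different runtimes; coarse batches suit many cheap, uniform tasks, where per-evaluation communication would dominate.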
By default, definitions in the current context are distributed automatically:
Do not distribute any definitions of functions:
Distribute definitions for all symbols in all contexts appearing in a parallel computation:
Distribute only definitions in the given contexts:
Restore the value of the DistributedContexts option to its default:
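A sketch of the DistributedContexts settings described above (the package context name is illustrative):

```mathematica
ParallelMap[f, Range[10], DistributedContexts -> None]           (* distribute nothing *)
ParallelMap[f, Range[10], DistributedContexts -> All]            (* all contexts *)
ParallelMap[f, Range[10], DistributedContexts -> {"MyPackage`"}] (* only the given contexts *)

(* restore the default, which distributes the current context *)
SetOptions[ParallelMap, DistributedContexts :> $Context]
```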
Watch the results appear as they are found:
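One way to watch results arrive, sketched with a shared variable and Dynamic (the variable name and the test computed are illustrative):

```mathematica
SetSharedVariable[found]
found = {};
Dynamic[found]   (* this output updates as each result comes in *)
ParallelMap[(AppendTo[found, {#, PrimeQ[2^# - 1]}]) &, Range[500, 520]];
```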
Solve the arithmetic puzzle:
Write it in symbolic form:
Assign a single digit number to each letter:
And interpret each word as a number in base 10:
Automate the checking for a particular digit assignment:
To systematically solve this assignment, first get the list of letters:
We can solve the puzzle by considering all permutations of all subsets of eight digits:
Parallelize it:
Only the nontrivial solutions are usually considered:
We can also stop the search as soon as we find a nontrivial solution using ParallelTry:
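The puzzle itself was shown as an image and is not recoverable here; as an illustration, a cryptarithm of the same shape (SEND + MORE == MONEY, chosen only for this sketch) can be attacked exactly as the steps above describe:

```mathematica
letters = {s, e, n, d, m, o, r, y};

(* interpret each word as a base-10 number under a digit assignment;
   return the assignment if it solves the puzzle, otherwise $Failed *)
check[digits_List] :=
  With[{rule = Thread[letters -> digits]},
    If[FromDigits[{s, e, n, d} /. rule] + FromDigits[{m, o, r, e} /. rule] ==
       FromDigits[{m, o, n, e, y} /. rule], rule, $Failed]]

(* all permutations of all 8-element subsets of the ten digits *)
assignments = Permutations[Range[0, 9], {8}];

(* parallel exhaustive search *)
solutions = Select[ParallelMap[check, assignments], # =!= $Failed &];

(* or stop as soon as one kernel finds a solution *)
ParallelTry[check, assignments]
```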
The parallelization happens at the outermost level used:
Map can be parallelized automatically, effectively using ParallelMap:
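A sketch of both points: Parallelize recognizes Map and dispatches to ParallelMap, and only the outermost level is parallelized:

```mathematica
Parallelize[Map[f, {a, b, c, d}]]   (* same as ParallelMap[f, {a, b, c, d}] *)

(* the outer map is distributed; the inner Map runs inside each subtask *)
ParallelMap[Map[f, #] &, {{a, b}, {c, d}}]
```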
Show the effect of load balancing with tasks of known size:
Define a number of tasks with known runtimes:
Measure the time for parallel execution:
The speedup obtained (more is better):
Finest-grained scheduling gives better load balancing and higher speedup:
Scheduling large tasks first gives even better results:
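The load-balancing measurements can be sketched with Pause as a stand-in for tasks of known runtime (actual timings depend on the number of kernels):

```mathematica
times = {8, 1, 1, 1, 1, 1, 1, 1}/10.;   (* one long task among short ones *)

AbsoluteTiming[ParallelMap[Pause, times, Method -> "CoarsestGrained"]]
AbsoluteTiming[ParallelMap[Pause, times, Method -> "FinestGrained"]]

(* scheduling the large tasks first improves balancing further *)
AbsoluteTiming[ParallelMap[Pause, Sort[times, Greater], Method -> "FinestGrained"]]

(* speedup = sequential time / parallel time; more is better *)
Total[times]/First[AbsoluteTiming[ParallelMap[Pause, times]]]
```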
A function of several arguments can be mapped with MapThread:
Get a parallel version by using Parallelize:
MapIndexed passes the index of an element to the mapped function:
Get a parallel version by using Parallelize:
Scan does the same as Map, but without returning a result:
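Sketches of these related forms; Parallelize handles MapThread, MapIndexed, and Scan directly:

```mathematica
Parallelize[MapThread[f, {{a, b, c}, {x, y, z}}]]  (* {f[a, x], f[b, y], f[c, z]} *)
Parallelize[MapIndexed[f, {a, b, c}]]              (* {f[a, {1}], f[b, {2}], f[c, {3}]} *)
Parallelize[Scan[Print, {1, 2, 3}]]                (* side effects only; returns Null *)
```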
Functions defined interactively are automatically distributed to all kernels when needed:
Distribute definitions manually and disable automatic distribution:
For functions from a package, use ParallelNeeds rather than DistributeDefinitions:
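A sketch of manual distribution (the package name is illustrative):

```mathematica
g[x_] := x^2
DistributeDefinitions[g]                              (* push g to all kernels *)
SetOptions[ParallelMap, DistributedContexts -> None]  (* turn off automatic distribution *)

(* for package functions, load the package on all parallel kernels *)
ParallelNeeds["ComputationalGeometry`"]
```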
If a level specification prevents parallelization, ParallelMap evaluates like Map:
Side effects cannot be used in the function mapped in parallel:
Use a shared variable to support side effects:
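Without a shared variable, each parallel kernel modifies its own copy; SetSharedVariable synchronizes the value through the master kernel:

```mathematica
total = 0;
ParallelMap[(total += #) &, Range[10]];
total   (* still 0: each kernel incremented a private copy *)

SetSharedVariable[total]
total = 0;
ParallelMap[(total += #) &, Range[10]];
total   (* 55 *)
```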
A function that is not known on the parallel kernels may lead to sequential evaluation:
Define the function on all parallel kernels:
The function is now evaluated on the parallel kernels:
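A sketch, assuming automatic distribution is disabled so that g is unknown on the parallel kernels; ParallelEvaluate defines it everywhere (DistributeDefinitions would work as well):

```mathematica
ParallelEvaluate[g[x_] := x + 1];   (* define g on all parallel kernels *)
ParallelMap[g, Range[4]]            (* now evaluates in parallel: {2, 3, 4, 5} *)
```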
Definitions of functions in the current context are distributed automatically:
Definitions from contexts other than the default context are not distributed automatically:
Use DistributeDefinitions to distribute such definitions:
Alternatively, set the DistributedContexts option to include all contexts:
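A sketch with a function in a non-default context (the context name is illustrative):

```mathematica
MyContext`h[x_] := x^3   (* not in the current context, so not distributed *)

DistributeDefinitions[MyContext`h]   (* distribute it explicitly *)

(* or widen the option to cover all contexts *)
ParallelMap[MyContext`h, Range[4], DistributedContexts -> All]
```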
Trivial operations may take longer when parallelized:
Measure the minimum parallel communication overhead:
Compare with a simple calculation done on the master kernel itself:
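A sketch of measuring the overhead:

```mathematica
(* a trivial parallel map is dominated by communication overhead *)
AbsoluteTiming[ParallelMap[# + 1 &, Range[1000]];]

(* the same computation on the master kernel alone *)
AbsoluteTiming[Map[# + 1 &, Range[1000]];]
```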
Functions may simplify short argument lists, but not longer ones:
Such simplification of partial expressions may make parallel mapping impossible:
Prevent simplification of partial expressions and apply the desired function only at the end:
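A sketch of the issue with an illustrative head h that evaluates for short argument lists:

```mathematica
h[x_] := x
h[x_, y_] := x + y

(* splitting h[a, b, c, d] into shorter pieces such as h[a, b] would
   trigger these rules, so ParallelMap[f, h[a, b, c, d]] cannot be
   parallelized faithfully *)

(* workaround: map in parallel over a plain list, apply h only at the end *)
h @@ ParallelMap[f, List @@ h[a, b, c, d]]   (* h[f[a], f[b], f[c], f[d]] *)
```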
New in 7 | Last modified in 8