This is documentation for Mathematica 8, which was
based on an earlier version of the Wolfram Language.

Parallelize

Updated in 8
Parallelize[expr]
evaluates expr using automatic parallelization.
  • Parallelize[expr] automatically distributes different parts of the evaluation of expr among different available kernels and processors.
  • Parallelize[expr] normally gives the same result as evaluating expr, except for side effects during the computation.
  • The Method option for Parallelize specifies the parallelization method to use. Possible settings include:
"CoarsestGrained"break the computation into as many pieces as there are available kernels
"FinestGrained"break the computation into the smallest possible subunits
"EvaluationsPerKernel"->ebreak the computation into at most e pieces per kernel
"ItemsPerEvaluation"->mbreak the computation into evaluations of at most m subunits each
Automaticcompromise between overhead and load-balancing
  • Method->"CoarsestGrained" is suitable for computations involving many subunits, all of which take the same amount of time. It minimizes overhead, but does not provide any load balancing.
  • Method->"FinestGrained" is suitable for computations involving few subunits whose evaluations take different amounts of time. It leads to higher overhead, but maximizes load balancing.
  • The DistributedContexts option for Parallelize specifies which symbols appearing in expr have their definitions automatically distributed to all available kernels before the computation.
  • The default value is DistributedContexts:>$Context, which distributes definitions of all symbols in the current context, but does not distribute definitions of symbols from packages.
Map a function in parallel:
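A minimal sketch, using an undefined symbolic function f and placeholder list elements:
  Parallelize[Map[f, {a, b, c, d}]]
  (* {f[a], f[b], f[c], f[d]} *)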
Generate a table in parallel:
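A sketch with a simple squaring table (the table body is chosen here for illustration):
  Parallelize[Table[n^2, {n, 10}]]
  (* {1, 4, 9, 16, 25, 36, 49, 64, 81, 100} *)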
Functions defined interactively can immediately be used in parallel:
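A sketch, assuming a small function defined in the current session (the name square is hypothetical); its definition is distributed to the subkernels automatically:
  square[x_] := x^2
  Parallelize[Map[square, Range[5]]]
  (* {1, 4, 9, 16, 25} *)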
All listable functions with one argument will automatically parallelize when applied to a list:
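For instance, Prime is Listable, so applying it to a list parallelizes (sketch):
  Parallelize[Prime[Range[10]]]
  (* {2, 3, 5, 7, 11, 13, 17, 19, 23, 29} *)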
Implicitly defined lists:
Many functional programming constructs that preserve list structure parallelize:
The result need not have the same length as the input:
Without a function, Parallelize simply evaluates the elements in parallel:
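Sketches of the preceding two cases (the particular function and list are illustrative): a Select whose result is shorter than its input, and a plain list whose elements are evaluated in parallel:
  Parallelize[Select[Range[20], PrimeQ]]
  (* {2, 3, 5, 7, 11, 13, 17, 19} *)
  Parallelize[{1 + 1, 2 + 2, 3 + 3}]
  (* {2, 4, 6} *)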
Count the number of primes up to one million:
Check whether 93 occurs in a list of the first 100 primes:
Check whether a list is free of 5:
The argument does not have to be an explicit List:
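Sketches of these tests (the ranges are illustrative):
  Parallelize[Count[Range[10^6], _?PrimeQ]]
  (* 78498 *)
  Parallelize[MemberQ[Prime[Range[100]], 93]]
  (* False; 93 = 3*31 is not prime *)
  Parallelize[FreeQ[{1, 2, 3, 4, 6}, 5]]
  (* True *)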
Inner products automatically parallelize:
Outer products automatically parallelize:
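Sketches with symbolic f and g:
  Parallelize[Inner[f, {a, b}, {x, y}, g]]
  (* g[f[a, x], f[b, y]] *)
  Parallelize[Outer[f, {a, b}, {x, y}]]
  (* {{f[a, x], f[a, y]}, {f[b, x], f[b, y]}} *)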
Evaluate a table in parallel, with or without an iterator variable:
Generate an array in parallel:
Evaluate sums and products in parallel:
The evaluation of the function happens in parallel:
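Sketches of these constructs (the bodies are chosen for illustration):
  Parallelize[Table[i + j, {i, 2}, {j, 2}]]
  (* {{2, 3}, {3, 4}} *)
  Parallelize[Array[#^2 &, 5]]
  (* {1, 4, 9, 16, 25} *)
  Parallelize[Sum[i^2, {i, 100}]]
  (* 338350 *)
  Parallelize[Product[i, {i, 10}]]
  (* 3628800 *)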
Functions with the attribute Flat automatically parallelize:
Listable functions of several arguments:
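For example, GCD has the attribute Flat, and Power is Listable in both arguments (sketch):
  Parallelize[GCD[12, 18, 24, 30, 36]]
  (* 6 *)
  Parallelize[Power[{1, 2, 3, 4}, {2, 3, 2, 3}]]
  (* {1, 8, 9, 64} *)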
Only the right side of an assignment is parallelized:
Elements of a compound expression are parallelized one after the other:
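A sketch of both behaviors (the variable names are hypothetical):
  Parallelize[squares = Table[n^2, {n, 5}]]
  (* the Table on the right-hand side is evaluated in parallel; squares is assigned on the master kernel *)
  Parallelize[a = 1 + 1; b = 2 + 2; a + b]
  (* the three elements of the compound expression are parallelized in turn *)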
Break the computation into the smallest possible subunits:
Break the computation into as many pieces as there are available kernels:
Break the computation into at most 2 evaluations per kernel for the entire job:
Break the computation into evaluations of at most 5 elements each:
The default option setting balances evaluation size and number of evaluations:
Calculations with vastly differing runtimes should be parallelized as finely as possible:
A large number of simple calculations should be distributed into as few batches as possible:
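Sketches of these Method settings, applied to a symbolic Map (f and the range are placeholders):
  Parallelize[Map[f, Range[20]], Method -> "FinestGrained"]     (* smallest possible subunits *)
  Parallelize[Map[f, Range[20]], Method -> "CoarsestGrained"]   (* one piece per available kernel *)
  Parallelize[Map[f, Range[20]], Method -> "EvaluationsPerKernel" -> 2]
  Parallelize[Map[f, Range[20]], Method -> "ItemsPerEvaluation" -> 5]
  Parallelize[Map[f, Range[20]], Method -> Automatic]           (* default: balance size and number of evaluations *)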
By default, definitions in the current context are distributed automatically:
Do not distribute any definitions of functions:
Distribute definitions for all symbols in all contexts appearing in a parallel computation:
Distribute only definitions in the given contexts:
Restore the value of the DistributedContexts option to its default:
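Sketches of these settings (MyContext` is a hypothetical context name):
  Parallelize[Map[f, Range[10]], DistributedContexts -> None]            (* distribute no definitions *)
  Parallelize[Map[f, Range[10]], DistributedContexts -> All]             (* all contexts appearing in the computation *)
  Parallelize[Map[f, Range[10]], DistributedContexts -> {"MyContext`"}]  (* only the given contexts *)
  SetOptions[Parallelize, DistributedContexts :> $Context]               (* restore the default *)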
Search for Mersenne primes:
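A sketch of such a search over exponents up to 1000:
  Parallelize[Select[Range[1000], PrimeQ[2^# - 1] &]]
  (* {2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607} *)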
Watch the results appear as they are found:
Compute a whole table of visualizations:
Search a range in parallel for local minima:
Choose the best one:
Use a shared function to record timing results as they are generated:
Set up a dynamic bar chart with the timing results:
Run a series of calculations with vastly varying runtimes:
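A sketch of this pattern: the shared function record forwards each call to the master kernel, where the results list lives (the names and task lengths are assumptions):
  results = {};
  record[r_] := AppendTo[results, r];
  SetSharedFunction[record];
  Parallelize[Map[(record[{$KernelID, #}]; Pause[#]) &, RandomReal[{0, 1}, 8]]];
  results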
For data parallel functions, Parallelize is implemented in terms of ParallelCombine:
Parallel speedup can be measured with a calculation that takes a known amount of time:
Define a number of tasks with known runtimes:
The time for a sequential execution is the sum of the individual times:
Measure the speedup for parallel execution:
Finest-grained scheduling gives better load balancing and higher speedup:
Scheduling large tasks first gives even better results:
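A sketch using Pause to simulate tasks of known duration (the task lengths are assumptions; measured times depend on the number of kernels):
  tasks = {1, 4, 2, 6, 3, 5};                      (* runtimes in seconds *)
  Total[tasks]                                     (* sequential time: 21 *)
  AbsoluteTiming[Parallelize[Map[Pause, tasks]]]
  AbsoluteTiming[Parallelize[Map[Pause, tasks], Method -> "FinestGrained"]]
  AbsoluteTiming[Parallelize[Map[Pause, Sort[tasks, Greater]], Method -> "FinestGrained"]]  (* large tasks first *)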
Form an arithmetic expression with each operator chosen from +, -, ×, ÷:
Each list of arithmetic operations gives a simple calculation:
Evaluating it is easy:
Find all sequences of arithmetic operations that give 0:
Display the corresponding expressions:
Functions defined interactively are automatically distributed to all kernels when needed:
Distribute definitions manually and disable automatic distribution:
For functions from a package, use ParallelNeeds rather than DistributeDefinitions:
Set up a random number generator that is suitable for parallel use and initialize each kernel:
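Sketches of these setup steps (mySquare, MyPackage`, and the seeding scheme are assumptions):
  mySquare[x_] := x^2
  DistributeDefinitions[mySquare];
  Parallelize[Map[mySquare, Range[5]], DistributedContexts -> None]   (* {1, 4, 9, 16, 25} *)
  ParallelNeeds["MyPackage`"]                       (* load a package on all subkernels *)
  ParallelEvaluate[SeedRandom[1000 + $KernelID]]    (* give each kernel its own random seed *)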
Expressions that cannot be parallelized are evaluated normally:
Side effects cannot be used in the function mapped in parallel:
Use a shared variable to support side effects:
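A sketch with a shared list accumulating results from the subkernels (the name acc is hypothetical; the order of appends depends on scheduling):
  acc = {};
  SetSharedVariable[acc];
  Parallelize[Map[(AppendTo[acc, #^2]; #) &, Range[8]]];
  Sort[acc]
  (* {1, 4, 9, 16, 25, 36, 49, 64} *)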
If no subkernels are available, the result is computed on the master kernel:
If a function used is not distributed first, the result may still appear to be correct:
Only if the function is distributed is the result actually calculated on the available kernels:
Definitions of functions in the current context are distributed automatically:
Definitions from contexts other than the default context are not distributed automatically:
Use DistributeDefinitions to distribute such definitions:
Alternatively, set the DistributedContexts option to include all contexts:
Explicitly distribute the definition of a function:
Modify the definition:
The modified definition is automatically distributed:
Suppress the automatic distribution of definitions:
Symbols defined only on the subkernels are not distributed automatically:
The value of $DistributedContexts is not used in Parallelize:
Set the value of the DistributedContexts option of Parallelize:
Restore all settings to their default values:
Trivial operations may take longer when parallelized:
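A sketch comparing a cheap computation with and without Parallelize (timings will vary; the parallel version can be slower because of scheduling and data-transfer overhead):
  AbsoluteTiming[Table[i^2, {i, 10^5}];]
  AbsoluteTiming[Parallelize[Table[i^2, {i, 10^5}]];]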
Display nontrivial automata as they are found:
New in 7 | Last modified in 8