Optimization¶
In many contexts, performance can be improved significantly by making small optimizations on the Dask graph before calling the scheduler.
The dask.optimization module contains several functions to transform graphs in a variety of useful ways. In most cases, users won’t need to interact with these functions directly, as specialized subsets of these transforms are done automatically in the Dask collections (dask.array, dask.bag, and dask.dataframe). However, users working with custom graphs or computations may find that applying these methods results in substantial speedups.
In general, there are two goals when doing graph optimizations:

1. Simplify computation
2. Improve parallelism

Simplifying computation can be done on a graph level by removing unnecessary tasks (cull), or on a task level by replacing expensive operations with cheaper ones (RewriteRule).
Parallelism can be improved by reducing inter-task communication, whether by fusing many tasks into one (fuse), or by inlining cheap operations (inline, inline_functions).
Below, we show an example walking through the use of some of these to optimize a task graph.
Example¶
Suppose you had a custom Dask graph for doing a word counting task:
>>> def print_and_return(string):
... print(string)
... return string
>>> def format_str(count, val, nwords):
... return (f'word list has {count} occurrences of '
... f'{val}, out of {nwords} words')
>>> dsk = {'words': 'apple orange apple pear orange pear pear',
... 'nwords': (len, (str.split, 'words')),
... 'val1': 'orange',
... 'val2': 'apple',
... 'val3': 'pear',
... 'count1': (str.count, 'words', 'val1'),
... 'count2': (str.count, 'words', 'val2'),
... 'count3': (str.count, 'words', 'val3'),
... 'format1': (format_str, 'count1', 'val1', 'nwords'),
... 'format2': (format_str, 'count2', 'val2', 'nwords'),
... 'format3': (format_str, 'count3', 'val3', 'nwords'),
... 'print1': (print_and_return, 'format1'),
... 'print2': (print_and_return, 'format2'),
... 'print3': (print_and_return, 'format3')}
Here we are counting the occurrences of the words 'orange', 'apple', and 'pear' in the list of words, formatting an output string reporting the results, printing the output, and then returning the output string.
To perform the computation, we first remove unnecessary components from the graph using the cull function and then pass the Dask graph and the desired output keys to a scheduler get function:
>>> from dask.threaded import get
>>> from dask.optimization import cull
>>> outputs = ['print1', 'print2']
>>> dsk1, dependencies = cull(dsk, outputs) # remove unnecessary tasks from the graph
>>> results = get(dsk1, outputs)
word list has 2 occurrences of apple, out of 7 words
word list has 2 occurrences of orange, out of 7 words
As can be seen above, the scheduler computed only the requested outputs ('print3' was never computed). This is because we called the dask.optimization.cull function, which removes the unnecessary tasks from the graph.
Culling is part of the default optimization pass of almost all collections. Often you want to call it somewhat early to reduce the amount of work done in later steps:
>>> from dask.optimization import cull
>>> outputs = ['print1', 'print2']
>>> dsk1, dependencies = cull(dsk, outputs)
Looking at the task graph above, there are multiple accesses to constants such as 'val1' or 'val2' in the Dask graph. These can be inlined into the tasks to improve efficiency using the inline function. For example:
>>> from dask.optimization import inline
>>> dsk2 = inline(dsk1, dependencies=dependencies)
>>> results = get(dsk2, outputs)
word list has 2 occurrences of apple, out of 7 words
word list has 2 occurrences of orange, out of 7 words
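You can inspect the rewritten graph to see the effect; the exact representation may vary across Dask versions, but the constants should now appear directly inside the task tuples:
>>> dsk2['count1']  # doctest: +SKIP
(str.count, 'apple orange apple pear orange pear pear', 'orange')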
Now we have two sets of almost linear task chains. The only link between them is the word counting function. For cheap operations like this, the serialization cost may be larger than the actual computation, so it may be faster to do the computation more than once, rather than passing the results to all nodes. To perform this function inlining, the inline_functions function can be used:
>>> from dask.optimization import inline_functions
>>> dsk3 = inline_functions(dsk2, outputs, [len, str.split],
... dependencies=dependencies)
>>> results = get(dsk3, outputs)
word list has 2 occurrences of apple, out of 7 words
word list has 2 occurrences of orange, out of 7 words
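Inspecting the graph again shows that the shared 'nwords' task is gone: each chain now recomputes the cheap len and str.split calls inline (representation simplified; exact output may vary):
>>> dsk3['format1']  # doctest: +SKIP
(format_str, 'count1', 'orange', (len, (str.split, 'apple orange apple pear orange pear pear')))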
Now we have a set of purely linear tasks. We’d like to have the scheduler run all of these on the same worker to reduce data serialization between workers. One option is just to merge these linear chains into one big task using the fuse function:
>>> from dask.optimization import fuse
>>> dsk4, dependencies = fuse(dsk3)
>>> results = get(dsk4, outputs)
word list has 2 occurrences of apple, out of 7 words
word list has 2 occurrences of orange, out of 7 words
Putting it all together:
>>> def optimize_and_get(dsk, keys):
... dsk1, deps = cull(dsk, keys)
... dsk2 = inline(dsk1, dependencies=deps)
... dsk3 = inline_functions(dsk2, keys, [len, str.split],
... dependencies=deps)
... dsk4, deps = fuse(dsk3)
... return get(dsk4, keys)
>>> optimize_and_get(dsk, outputs)
word list has 2 occurrences of apple, out of 7 words
word list has 2 occurrences of orange, out of 7 words
In summary, the above operations accomplish the following:

1. Removed tasks unnecessary for the desired output using cull
2. Inlined constants using inline
3. Inlined cheap computations using inline_functions, improving parallelism
4. Fused linear tasks together to ensure they run on the same worker using fuse
As stated previously, these optimizations are already performed automatically in the Dask collections. Users not working with custom graphs or computations should rarely need to directly interact with them.
These are just a few of the optimizations provided in dask.optimization. For more information, see the API below.
Rewrite Rules¶
For context-based optimizations, dask.rewrite provides functionality for pattern matching and term rewriting. This is useful for replacing expensive computations with equivalent, cheaper computations. For example, Dask Array uses the rewrite functionality to replace series of array slicing operations with a more efficient single slice.
The interface to the rewrite system consists of two classes:

RewriteRule(lhs, rhs, vars)
    Given a left-hand-side (lhs), a right-hand-side (rhs), and a set of variables (vars), a rewrite rule declaratively encodes the following operation:

    lhs -> rhs if task matches lhs over variables

RuleSet(*rules)
    A collection of rewrite rules. The design of the RuleSet class allows for efficient “many-to-one” pattern matching, meaning that there is minimal overhead for rewriting with multiple rules in a rule set.
Example¶
Here we create two rewrite rules expressing the following mathematical transformations:
a + a -> 2*a
a * a -> a**2
where 'a' is a variable:
>>> from dask.rewrite import RewriteRule, RuleSet
>>> from operator import add, mul, pow
>>> variables = ('a',)
>>> rule1 = RewriteRule((add, 'a', 'a'), (mul, 'a', 2), variables)
>>> rule2 = RewriteRule((mul, 'a', 'a'), (pow, 'a', 2), variables)
>>> rs = RuleSet(rule1, rule2)
The RewriteRule objects describe the desired transformations in a declarative way, and the RuleSet builds an efficient automaton for applying those transformations. Rewriting can then be done using the rewrite method:
>>> rs.rewrite((add, 5, 5))
(mul, 5, 2)
>>> rs.rewrite((mul, 5, 5))
(pow, 5, 2)
>>> rs.rewrite((mul, (add, 3, 3), (add, 3, 3)))
(pow, (mul, 3, 2), 2)
The whole task is traversed by default. If you only want to apply a transform to the top level of the task, you can pass in strategy='top_level' as shown:
# Transforms whole task
>>> rs.rewrite((sum, [(add, 3, 3), (mul, 3, 3)]))
(sum, [(mul, 3, 2), (pow, 3, 2)])
# Only applies to top level, no transform occurs
>>> rs.rewrite((sum, [(add, 3, 3), (mul, 3, 3)]), strategy='top_level')
(sum, [(add, 3, 3), (mul, 3, 3)])
The rewriting system provides a powerful abstraction for transforming computations at a task level. Again, for many users, directly interacting with these transformations will be unnecessary.
Keyword Arguments¶
Some optimizations take optional keyword arguments. To pass keywords from the compute call down to the right optimization, prepend the keyword with the name of the optimization. For example, to send a keys= keyword argument to the fuse optimization from a compute call, use the fuse_keys= keyword:
def fuse(dsk, keys=None):
    ...

x.compute(fuse_keys=['x', 'y', 'z'])
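The fuse parameters can also be set globally through dask.config using the keys listed in the fuse definition below; a brief sketch, reusing x from above:

import dask

# Equivalent to tuning fuse's ave_width during optimization; the config
# key is documented in the fuse definition below.
with dask.config.set({"optimization.fuse.ave-width": 2}):
    x.compute()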
Customizing Optimization¶
Dask defines a default optimization strategy for each collection type (Array, Bag, DataFrame, Delayed). However, different applications may have different needs. To address this, you can construct your own custom optimization function and use it instead of the default. An optimization function takes in a task graph and a list of desired keys and returns a new task graph:
def my_optimize_function(dsk, keys):
    new_dsk = {...}
    return new_dsk
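For instance, a minimal sketch that simply combines the cull and fuse passes shown earlier (the function name is illustrative):

from dask.optimization import cull, fuse

def my_optimize_function(dsk, keys):
    # Drop tasks that are not needed to compute `keys`,
    # then merge linear chains into single tasks.
    dsk1, deps = cull(dsk, keys)
    dsk2, _ = fuse(dsk1, keys, dependencies=deps)
    return dsk2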
You can then register this optimization function against whichever collection type you prefer, and it will be used instead of the default scheme:
with dask.config.set(array_optimize=my_optimize_function):
    x, y = dask.compute(x, y)
You can register separate optimization functions for different collections, or you can register None if you do not want particular types of collections to be optimized:
with dask.config.set(array_optimize=my_optimize_function,
                     dataframe_optimize=None,
                     delayed_optimize=my_other_optimize_function):
    ...
You do not need to specify all collections. Collections will default to their standard optimization scheme (which is usually a good choice).
API¶

Top level optimizations

cull(dsk, keys)
    Return new dask with only the tasks required to calculate keys.
fuse(dsk[, keys, dependencies, ...])
    Fuse tasks that form reductions; more advanced than fuse_linear.
inline(dsk[, keys, inline_constants, ...])
    Return new dask with the given keys inlined with their values.
inline_functions(dsk, output[, fast_functions, ...])
    Inline cheap functions into larger operations.

Utility functions

functions_of(task)
    Set of functions contained within nested task.

Rewrite Rules

RewriteRule(lhs, rhs[, vars])
    A rewrite rule.
RuleSet(*rules)
    A set of rewrite rules.
Definitions¶

dask.optimization.cull(dsk, keys)[source]¶

Return new dask with only the tasks required to calculate keys. In other words, remove unnecessary tasks from dask. keys may be a single key or a list of keys.

Returns
- dsk: culled dask graph
- dependencies: dict mapping {key: [deps]}. Useful side effect to accelerate other optimizations, notably fuse.

Examples
>>> def inc(x):
...     return x + 1
>>> def add(x, y):
...     return x + y
>>> d = {'x': 1, 'y': (inc, 'x'), 'out': (add, 'x', 10)}
>>> dsk, dependencies = cull(d, 'out')
>>> dsk
{'out': (<function add at ...>, 'x', 10), 'x': 1}
>>> dependencies
{'out': ['x'], 'x': []}
dask.optimization.fuse(dsk, keys=None, dependencies=None, ave_width=Default.token, max_width=Default.token, max_height=Default.token, max_depth_new_edges=Default.token, rename_keys=Default.token, fuse_subgraphs=Default.token)[source]¶

Fuse tasks that form reductions; more advanced than fuse_linear.

This trades parallelism opportunities for faster scheduling by making tasks less granular. It can replace fuse_linear in optimization passes.

This optimization applies to all reductions (tasks that have at most one dependent), so it may be viewed as fusing “multiple input, single output” groups of tasks into a single task. There are many parameters to fine-tune the behavior, which are described below. ave_width is the natural parameter with which to compare parallelism to granularity, so it should always be specified. Reasonable values for other parameters will be determined using ave_width if necessary.

Parameters
- dsk: dict
  dask graph
- keys: list or set, optional
  Keys that must remain in the returned dask graph
- dependencies: dict, optional
  {key: [list-of-keys]}. Must be a list to provide a count of each key. This optional input often comes from cull.
- ave_width: float (default 1)
  Upper limit for width = num_nodes / height, a good measure of parallelizability. dask.config key: optimization.fuse.ave-width
- max_width: int (default infinite)
  Don’t fuse if total width is greater than this. dask.config key: optimization.fuse.max-width
- max_height: int or None (default None)
  Don’t fuse more than this many levels. Set to None to dynamically adjust to 1.5 + ave_width * log(ave_width + 1). dask.config key: optimization.fuse.max-height
- max_depth_new_edges: int or None (default None)
  Don’t fuse if new dependencies are added after this many levels. Set to None to dynamically adjust to ave_width * 1.5. dask.config key: optimization.fuse.max-depth-new-edges
- rename_keys: bool or func, optional (default True)
  Whether to rename the fused keys with default_fused_keys_renamer or not. Renaming fused keys can keep the graph more understandable and comprehensive, but it comes at the cost of additional processing. If False, then the top-most key will be used. For advanced usage, a function to create the new name is also accepted. dask.config key: optimization.fuse.rename-keys
- fuse_subgraphs: bool or None, optional (default None)
  Whether to fuse multiple tasks into SubgraphCallable objects. Set to None to let the default optimizer of individual dask collections decide. If no collection-specific default exists, None defaults to False. dask.config key: optimization.fuse.subgraphs

Returns
- dsk
  output graph with keys fused
- dependencies
  dict mapping dependencies after fusion. Useful side effect to accelerate other downstream optimizations.
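Examples
A minimal sketch on a hypothetical three-task chain; with rename_keys=False the fused task keeps the top-most key 'c'. The exact output depends on the parameters and Dask version:

>>> from dask.optimization import fuse
>>> inc = lambda x: x + 1
>>> double = lambda x: x * 2
>>> dsk = {'a': 1, 'b': (inc, 'a'), 'c': (double, 'b')}
>>> fused_dsk, deps = fuse(dsk, keys=['c'], rename_keys=False)  # doctest: +SKIP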
dask.optimization.inline(dsk, keys=None, inline_constants=True, dependencies=None)[source]¶

Return new dask with the given keys inlined with their values.

Inlines all constants if the inline_constants keyword is True. Note that the constant keys will remain in the graph; to remove them, follow inline with cull.

Examples
>>> def inc(x):
...     return x + 1
>>> def add(x, y):
...     return x + y
>>> d = {'x': 1, 'y': (inc, 'x'), 'z': (add, 'x', 'y')}
>>> inline(d)
{'x': 1, 'y': (<function inc at ...>, 1), 'z': (<function add at ...>, 1, 'y')}
>>> inline(d, keys='y')
{'x': 1, 'y': (<function inc at ...>, 1), 'z': (<function add at ...>, 1, (<function inc at ...>, 1))}
>>> inline(d, keys='y', inline_constants=False)
{'x': 1, 'y': (<function inc at ...>, 'x'), 'z': (<function add at ...>, 'x', (<function inc at ...>, 'x'))}
dask.optimization.inline_functions(dsk, output, fast_functions=None, inline_constants=False, dependencies=None)[source]¶

Inline cheap functions into larger operations.

Examples
>>> inc = lambda x: x + 1
>>> add = lambda x, y: x + y
>>> double = lambda x: x * 2
>>> dsk = {'out': (add, 'i', 'd'),
...        'i': (inc, 'x'),
...        'd': (double, 'y'),
...        'x': 1, 'y': 1}
>>> inline_functions(dsk, [], [inc])
{'out': (add, (inc, 'x'), 'd'), 'd': (double, 'y'), 'x': 1, 'y': 1}

Output keys are protected. In the example below, i is not inlined because it is marked as an output key.
>>> inline_functions(dsk, ['i', 'out'], [inc, double])
{'out': (add, 'i', (double, 'y')), 'i': (inc, 'x'), 'x': 1, 'y': 1}
dask.optimization.functions_of(task)[source]¶

Set of functions contained within nested task.

Examples
>>> inc = lambda x: x + 1
>>> add = lambda x, y: x + y
>>> mul = lambda x, y: x * y
>>> task = (add, (mul, 1, 2), (inc, 3))
>>> functions_of(task)
set([add, mul, inc])
dask.rewrite.RewriteRule(lhs, rhs, vars=())[source]¶

A rewrite rule.

Expresses lhs -> rhs, for variables vars.

Parameters
- lhs: task
  The left-hand-side of the rewrite rule.
- rhs: task or function
  The right-hand-side of the rewrite rule. If it's a task, variables in rhs will be replaced by terms in the subject that match the variables in lhs. If it's a function, the function will be called with a dict of such matches.
- vars: tuple, optional
  Tuple of variables found in the lhs. Variables can be represented as any hashable object; a good convention is to use strings. If there are no variables, this can be omitted.

Examples
Here's a RewriteRule to replace all nested calls to list, so that (list, (list, 'x')) is replaced with (list, 'x'), where 'x' is a variable.

>>> import dask.rewrite as dr
>>> lhs = (list, (list, 'x'))
>>> rhs = (list, 'x')
>>> variables = ('x',)
>>> rule = dr.RewriteRule(lhs, rhs, variables)

Here's a more complicated rule that uses a callable right-hand-side. A callable rhs takes in a dictionary mapping variables to their matching values. This rule replaces all occurrences of (list, 'x') with 'x' if 'x' is a list itself.

>>> lhs = (list, 'x')
>>> def repl_list(sd):
...     x = sd['x']
...     if isinstance(x, list):
...         return x
...     else:
...         return (list, x)
>>> rule = dr.RewriteRule(lhs, repl_list, variables)
dask.rewrite.RuleSet(*rules)[source]¶

A set of rewrite rules.

Forms a structure for fast rewriting over a set of rewrite rules. This allows for syntactic matching of terms to patterns for many patterns at the same time.

Examples
>>> import dask.rewrite as dr
>>> def f(*args): pass
>>> def g(*args): pass
>>> def h(*args): pass
>>> from operator import add

>>> rs = dr.RuleSet(
...     dr.RewriteRule((add, 'x', 0), 'x', ('x',)),
...     dr.RewriteRule((f, (g, 'x'), 'y'),
...                    (h, 'x', 'y'),
...                    ('x', 'y')))

>>> rs.rewrite((add, 2, 0))
2

>>> rs.rewrite((f, (g, 'a', 3)))
(<function h at ...>, 'a', 3)

>>> dsk = {'a': (add, 2, 0),
...        'b': (f, (g, 'a', 3))}

>>> from toolz import valmap
>>> valmap(rs.rewrite, dsk)
{'a': 2, 'b': (<function h at ...>, 'a', 3)}

Attributes
- rules: list
  A list of RewriteRules included in the RuleSet.