Overlapping Computations

Some array operations require communication of borders between neighboring blocks. Example operations include the following:

  • Convolve a filter across an image

  • Sliding sum/mean/max, …

  • Search for image motifs like a Gaussian blob that might span the border of a block

  • Evaluate a partial derivative

  • Play the game of Life

Dask Array supports these operations by creating a new array where each block is slightly expanded by the borders of its neighbors. This costs an excess copy and the communication of many small chunks, but allows localized functions to evaluate in an embarrassingly parallel manner.
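
For example, a three-point sliding maximum needs only a single element from each neighboring block. Here is a minimal sketch using the map_overlap method described below (the sliding_max helper is illustrative, not part of Dask):

>>> import numpy as np
>>> import dask.array as da

>>> x = da.from_array(np.array([0, 3, 1, 4, 1, 5, 9, 2, 6]), chunks=5)
>>> def sliding_max(block):
...     # maximum over each element and its two neighbors in the extended block
...     return np.maximum(block, np.maximum(np.roll(block, 1), np.roll(block, -1)))
>>> x.map_overlap(sliding_max, depth=1, boundary='nearest').compute()
array([3, 3, 4, 4, 5, 9, 9, 9, 6])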

The main API for these computations is the map_overlap method defined below:

dask.array.map_overlap(func, *args, depth=None, boundary=None, trim=True, align_arrays=True, allow_rechunk=True, **kwargs)[source]

Map a function over blocks of arrays with some overlap

We share neighboring zones between blocks of the array, map a function, and then trim away the neighboring strips. If depth is larger than any chunk along a particular axis, then the array is rechunked.

Note that this function will attempt to automatically determine the output array type before computing it. Please refer to the meta keyword argument in map_blocks if you expect that the function will not succeed when operating on 0-d arrays.

Parameters
func: function

The function to apply to each extended block. If multiple arrays are provided, then the function should expect to receive chunks of each array in the same order.

args: dask arrays
depth: int, tuple, dict or list, keyword only

The number of elements that each block should share with its neighbors. If a tuple or dict, this can be different per axis. If a list, each element must be an int, tuple, or dict defining the depth for the corresponding array in args. Asymmetric depths may be specified using a dict value of (-/+) tuples. Note that asymmetric depths are currently only supported when boundary is ‘none’. The default value is 0.

boundary: str, tuple, dict or list, keyword only

How to handle the boundaries. Values include ‘reflect’, ‘periodic’, ‘nearest’, ‘none’, or any constant value like 0 or np.nan. If a list, each element must be a str, tuple, or dict defining the boundary for the corresponding array in args. The default value is ‘reflect’.

trim: bool, keyword only

Whether or not to trim depth elements from each block after calling the map function. Set this to False if your mapping function already does this for you.

align_arrays: bool, keyword only

Whether or not to align chunks along equally sized dimensions when multiple arrays are provided. This allows for larger chunks in some arrays to be broken into smaller ones that match chunk sizes in other arrays such that they are compatible for block function mapping. If this is false, then an error will be thrown if arrays do not already have the same number of blocks in each dimension.

allow_rechunk: bool, keyword only

Allows rechunking; otherwise, chunk sizes need to match and core dimensions must consist of a single chunk.

**kwargs:

Other keyword arguments valid in map_blocks

Examples

>>> import numpy as np
>>> import dask.array as da
>>> x = np.array([1, 1, 2, 3, 3, 3, 2, 1, 1])
>>> x = da.from_array(x, chunks=5)
>>> def derivative(x):
...     return x - np.roll(x, 1)
>>> y = x.map_overlap(derivative, depth=1, boundary=0)
>>> y.compute()
array([ 1,  0,  1,  1,  0,  0, -1, -1,  0])
>>> x = np.arange(16).reshape((4, 4))
>>> d = da.from_array(x, chunks=(2, 2))
>>> d.map_overlap(lambda x: x + x.size, depth=1, boundary='reflect').compute()
array([[16, 17, 18, 19],
       [20, 21, 22, 23],
       [24, 25, 26, 27],
       [28, 29, 30, 31]])
>>> func = lambda x: x + x.size
>>> depth = {0: 1, 1: 1}
>>> boundary = {0: 'reflect', 1: 'none'}
>>> d.map_overlap(func, depth, boundary).compute()  
array([[12,  13,  14,  15],
       [16,  17,  18,  19],
       [20,  21,  22,  23],
       [24,  25,  26,  27]])

The da.map_overlap function can also accept multiple arrays.

>>> func = lambda x, y: x + y
>>> x = da.arange(8).reshape(2, 4).rechunk((1, 2))
>>> y = da.arange(4).rechunk(2)
>>> da.map_overlap(func, x, y, depth=1, boundary='reflect').compute() 
array([[ 0,  2,  4,  6],
       [ 4,  6,  8,  10]])

When multiple arrays are given, they do not need to have the same number of dimensions but they must broadcast together. Arrays are aligned block by block (just as in da.map_blocks) so the blocks must have a common chunk size. This common chunking is determined automatically as long as align_arrays is True.

>>> x = da.arange(8, chunks=4)
>>> y = da.arange(8, chunks=2)
>>> r = da.map_overlap(func, x, y, depth=1, boundary='reflect', align_arrays=True)
>>> len(r.to_delayed())
4
>>> da.map_overlap(func, x, y, depth=1, boundary='reflect', align_arrays=False).compute()
Traceback (most recent call last):
    ...
ValueError: Shapes do not align {'.0': {2, 4}}

Note also that this function is equivalent to map_blocks by default. A non-zero depth must be defined for any overlap to appear in the arrays provided to func.

>>> func = lambda x: x.sum()
>>> x = da.ones(10, dtype='int')
>>> block_args = dict(chunks=(), drop_axis=0)
>>> da.map_blocks(func, x, **block_args).compute()
10
>>> da.map_overlap(func, x, **block_args, boundary='reflect').compute()
10
>>> da.map_overlap(func, x, **block_args, depth=1, boundary='reflect').compute()
12

For functions that may not handle 0-d arrays, it’s also possible to specify meta with an empty array matching the type of the expected result. In the example below, func will result in an IndexError when computing meta:

>>> x = np.arange(16).reshape((4, 4))
>>> d = da.from_array(x, chunks=(2, 2))
>>> y = d.map_overlap(lambda x: x + x[2], depth=1, boundary='reflect', meta=np.array(()))
>>> y
dask.array<_trim, shape=(4, 4), dtype=float64, chunksize=(2, 2), chunktype=numpy.ndarray>
>>> y.compute()
array([[ 4,  6,  8, 10],
       [ 8, 10, 12, 14],
       [20, 22, 24, 26],
       [24, 26, 28, 30]])

Similarly, it’s possible to specify a non-NumPy array to meta:

>>> import cupy  
>>> x = cupy.arange(16).reshape((4, 4))  
>>> d = da.from_array(x, chunks=(2, 2))  
>>> y = d.map_overlap(lambda x: x + x[2], depth=1, boundary='reflect', meta=cupy.array(()))  
>>> y  
dask.array<_trim, shape=(4, 4), dtype=float64, chunksize=(2, 2), chunktype=cupy.ndarray>
>>> y.compute()  
array([[ 4,  6,  8, 10],
       [ 8, 10, 12, 14],
       [20, 22, 24, 26],
       [24, 26, 28, 30]])

Explanation

Consider two neighboring blocks in a Dask array:

[Figure: two neighboring blocks which do not overlap.]

We extend each block by trading thin nearby slices between arrays:

[Figure: two neighboring blocks with thin strips along their shared border, representing the data shared between them.]

We do this in all directions, including diagonal interactions, using the overlap function:

[Figure: a two-dimensional grid of blocks, each with thin strips around its borders representing data shared from its neighbors, including small corner pieces for data shared from diagonal neighbors.]

>>> import dask.array as da
>>> import numpy as np

>>> x = np.arange(64).reshape((8, 8))
>>> d = da.from_array(x, chunks=(4, 4))
>>> d.chunks
((4, 4), (4, 4))

>>> g = da.overlap.overlap(d, depth={0: 2, 1: 1},
...                       boundary={0: 100, 1: 'reflect'})
>>> g.chunks
((8, 8), (6, 6))

>>> np.array(g)
array([[100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100],
       [100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100],
       [  0,   0,   1,   2,   3,   4,   3,   4,   5,   6,   7,   7],
       [  8,   8,   9,  10,  11,  12,  11,  12,  13,  14,  15,  15],
       [ 16,  16,  17,  18,  19,  20,  19,  20,  21,  22,  23,  23],
       [ 24,  24,  25,  26,  27,  28,  27,  28,  29,  30,  31,  31],
       [ 32,  32,  33,  34,  35,  36,  35,  36,  37,  38,  39,  39],
       [ 40,  40,  41,  42,  43,  44,  43,  44,  45,  46,  47,  47],
       [ 16,  16,  17,  18,  19,  20,  19,  20,  21,  22,  23,  23],
       [ 24,  24,  25,  26,  27,  28,  27,  28,  29,  30,  31,  31],
       [ 32,  32,  33,  34,  35,  36,  35,  36,  37,  38,  39,  39],
       [ 40,  40,  41,  42,  43,  44,  43,  44,  45,  46,  47,  47],
       [ 48,  48,  49,  50,  51,  52,  51,  52,  53,  54,  55,  55],
       [ 56,  56,  57,  58,  59,  60,  59,  60,  61,  62,  63,  63],
       [100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100],
       [100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100]])

Boundaries

With respect to overlapping, you can specify how to handle the boundaries. Current policies include the following:

  • periodic - wrap borders around to the other side

  • reflect - reflect each border outwards

  • any-constant - pad the border with this value

An example boundary kind argument might look like the following:

{0: 'periodic',
 1: 'reflect',
 2: np.nan}
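
To see what a policy does at the array edges, here is a small sketch applying a periodic boundary to a one-dimensional array with two chunks and a depth of 1:

>>> b = da.arange(6, chunks=3)
>>> h = da.overlap.overlap(b, depth={0: 1}, boundary={0: 'periodic'})
>>> h.compute()
array([5, 0, 1, 2, 3, 2, 3, 4, 5, 0])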

Alternatively, you can use dask.array.pad() for other types of padding.

Map a function across blocks

Overlapping goes hand-in-hand with mapping a function across blocks. This function can now use the additional information copied over from the neighbors that is not stored locally in each block:

>>> from scipy.ndimage import gaussian_filter
>>> def func(block):
...    return gaussian_filter(block, sigma=1)

>>> filt = g.map_blocks(func)

While in this case we used a SciPy function, any function could have been used instead. This is a good interaction point with Numba.
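
For example, a kernel compiled with numba.njit can be mapped over the blocks in the same way (a minimal sketch, assuming Numba is installed; double_plus_one is an illustrative kernel, not part of Dask):

>>> import numba

>>> @numba.njit
... def double_plus_one(block):
...     # a compiled elementwise kernel; any function Numba can compile works here
...     return 2 * block + 1

>>> filt2 = g.map_blocks(double_plus_one)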

If your function does not preserve the shape of the block, then you will need to provide a chunks keyword argument. If your block size is regular, this argument can be a single block shape such as (1000, 1000). If your block sizes are irregular, it must be a tuple giving the full chunk structure, like ((1000, 700, 1000), (200, 300)).

>>> g.map_blocks(myfunc, chunks=(5, 5))
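
For example, downsampling every block of g above (which has chunks ((8, 8), (6, 6))) by a factor of two requires spelling out the full chunk structure of the result. This sketch is illustrative:

>>> halved = g.map_blocks(lambda block: block[::2, ::2],
...                       chunks=((4, 4), (3, 3)))
>>> halved.shape
(8, 6)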

If your function needs to know the location of the block on which it operates, you can give your function a keyword argument block_id:

def func(block, block_id=None):
    ...

This extra keyword argument will be given a tuple that provides the block location like (0, 0) for the upper-left block or (0, 1) for the block just to the right of that block.
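
For example, the following sketch (the tag helper is illustrative) fills each block with a value derived from its location:

>>> def tag(block, block_id=None):
...     if block_id is None:  # dtype/meta inference may call the function without a block_id
...         return block
...     return np.full_like(block, block_id[0] * 10 + block_id[1])
>>> da.zeros((4, 4), chunks=(2, 2), dtype='int').map_blocks(tag).compute()
array([[ 0,  0,  1,  1],
       [ 0,  0,  1,  1],
       [10, 10, 11, 11],
       [10, 10, 11, 11]])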

Trim Excess

After mapping a blocked function, you may want to trim off the borders from each block by the same amount by which they were expanded. The function trim_internal is useful here and takes the same depth argument given to overlap:

>>> x = da.ones((40, 40), chunks=(10, 10))
>>> x.chunks
((10, 10, 10, 10), (10, 10, 10, 10))

>>> y = da.overlap.trim_internal(x, {0: 2, 1: 1})
>>> y.chunks
((6, 6, 6, 6), (8, 8, 8, 8))
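
Trimming the overlapped array g from the Explanation above by the same depths recovers the original chunk structure of d:

>>> da.overlap.trim_internal(g, {0: 2, 1: 1}).chunks
((4, 4), (4, 4))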

Full Workflow

A typical overlapping workflow therefore combines overlap, map_blocks, and trim_internal:

>>> x = ...
>>> g = da.overlap.overlap(x, depth={0: 2, 1: 2},
...                       boundary={0: 'periodic', 1: 'periodic'})
>>> g2 = g.map_blocks(myfunc)
>>> result = da.overlap.trim_internal(g2, {0: 2, 1: 2})
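
A concrete, runnable version of this workflow might look like the following sketch, assuming SciPy is available; the Gaussian filter stands in for any block function:

>>> import dask.array as da
>>> from scipy.ndimage import gaussian_filter

>>> x = da.random.random((100, 100), chunks=(50, 50))
>>> g = da.overlap.overlap(x, depth={0: 2, 1: 2},
...                        boundary={0: 'periodic', 1: 'periodic'})
>>> g2 = g.map_blocks(lambda block: gaussian_filter(block, sigma=1))
>>> result = da.overlap.trim_internal(g2, {0: 2, 1: 2})
>>> result.shape
(100, 100)

The result has the same shape and chunks as x because trim_internal removes exactly the depth that overlap added; the map_overlap function described earlier performs these three steps in a single call.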