# Command Line

This is the most fundamental way to deploy Dask on multiple machines. In production environments, this process is often automated by some other resource manager. Hence, it is rare that people need to follow these instructions explicitly. Instead, these instructions are useful for IT professionals who may want to set up automated services to deploy Dask within their institution.

A dask.distributed network consists of one dask-scheduler process and several dask-worker processes that connect to that scheduler. These are normal Python processes that can be executed from the command line. We launch the dask-scheduler executable in one process and the dask-worker executable in several processes, possibly on different machines.

To accomplish this, launch dask-scheduler on one node:

```
$ dask-scheduler
Scheduler at:   tcp://192.0.0.100:8786
```

Then, launch dask-worker on the rest of the nodes, providing the address of the node that hosts dask-scheduler:

```
$ dask-worker tcp://192.0.0.100:8786
Start worker at:  tcp://192.0.0.1:12345
Registered to:    tcp://192.0.0.100:8786

$ dask-worker tcp://192.0.0.100:8786
Start worker at:  tcp://192.0.0.2:40483
Registered to:    tcp://192.0.0.100:8786

$ dask-worker tcp://192.0.0.100:8786
Start worker at:  tcp://192.0.0.3:27372
Registered to:    tcp://192.0.0.100:8786
```

The workers connect to the scheduler, which then sets up a long-running network connection back to the worker. The workers will learn the location of other workers from the scheduler.
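Once the scheduler and workers are running, any machine that can reach the scheduler can connect a client to the cluster. A minimal sketch in Python, assuming the scheduler address from the example above:

```python
# Connect to the running scheduler and inspect the cluster
from dask.distributed import Client

client = Client("tcp://192.0.0.100:8786")   # address printed by dask-scheduler
print(client.scheduler_info())              # scheduler and registered workers
```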

## Handling Ports

The scheduler and workers both need to accept TCP connections on an open port. By default, the scheduler binds to port 8786 and the worker binds to a random open port. If you are behind a firewall then you may have to open particular ports or tell Dask to listen on particular ports with the --port and --worker-port keywords.
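For example, the following pins each process to a known port so that firewall rules can be written ahead of time (the specific port numbers are arbitrary; --nanny-port is described in the worker options below):

```
$ dask-scheduler --port 8786
$ dask-worker tcp://192.0.0.100:8786 --worker-port 39000 --nanny-port 39001
```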

## Nanny Processes

Dask workers are run within a nanny process that monitors the worker process and restarts it if necessary.
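To run workers without the nanny, for example when an external supervisor already handles restarts, pass the --no-nanny flag described in the worker options below:

```
$ dask-worker tcp://192.0.0.100:8786 --no-nanny
```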

## Diagnostic Web Servers

Additionally, Dask schedulers and workers host interactive diagnostic web servers using Bokeh. These are optional, but generally useful to users. The diagnostic server on the scheduler is particularly valuable, and is served on port 8787 by default (configurable with the --dashboard-address keyword).
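For example, to serve the scheduler's dashboard on a different port (the number here is arbitrary):

```
$ dask-scheduler --dashboard-address :8788
```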

## Automated Tools

There are various mechanisms to deploy these executables on a cluster, ranging from manually SSH-ing into all of the machines to more automated systems like SGE/SLURM/Torque or Yarn/Mesos. Additionally, cluster SSH tools exist to send the same commands to many machines. We recommend searching online for “cluster ssh” or “cssh”.
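As a minimal sketch of the manual SSH approach (host names here are hypothetical, and in practice you would daemonize these processes rather than leave them in the foreground):

```
# Scheduler on one machine, then a worker on each of the others
$ ssh node0 dask-scheduler
$ for host in node1 node2 node3; do ssh "$host" dask-worker tcp://node0:8786; done
```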

## CLI Options

Note

The command line documentation here may differ depending on your installed version. We recommend referring to the output of dask-scheduler --help and dask-worker --help.

### dask-scheduler

Options

--host <host>

URI, IP or hostname of this server

--port <port>

Serving port

--interface <interface>

Preferred network interface like ‘eth0’ or ‘ib0’

--protocol <protocol>

Protocol like tcp, tls, or ucx

--tls-ca-file <tls_ca_file>

CA cert(s) file for TLS (in PEM format)

--tls-cert <tls_cert>

certificate file for TLS (in PEM format)

--tls-key <tls_key>

private key file for TLS (in PEM format)

--bokeh-port <bokeh_port>

Address on which to listen for diagnostics dashboard

Default: :8787
--dashboard, --no-dashboard

Launch the Dashboard [default: --dashboard]

--bokeh, --no-bokeh

Deprecated. See --dashboard/--no-dashboard.

--show, --no-show

Show web UI [default: --show]

--dashboard-prefix <dashboard_prefix>

Prefix for the dashboard app

Default: False
--pid-file <pid_file>

File to write the process PID

--scheduler-file <scheduler_file>

File to write connection information. This may be a good way to share connection information if your cluster is on a shared network file system (see the example after this option list).

--preload <preload>

Module that should be loaded by the scheduler process like “foo.bar” or “/path/to/foo.py”.

--idle-timeout <idle_timeout>

Time of inactivity after which to kill the scheduler

--version

Show the version and exit.

Arguments

PRELOAD

Optional argument(s)
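As an example of the --scheduler-file workflow noted above (the path is illustrative), the scheduler writes its connection information to a file on shared storage, workers read it, and a client can then connect without knowing the scheduler's address:

```
$ dask-scheduler --scheduler-file /path/to/scheduler.json
$ dask-worker --scheduler-file /path/to/scheduler.json
```

```python
from dask.distributed import Client

client = Client(scheduler_file="/path/to/scheduler.json")
```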

### dask-worker

Options

--tls-ca-file <tls_ca_file>

CA cert(s) file for TLS (in PEM format)

--tls-cert <tls_cert>

certificate file for TLS (in PEM format)

--tls-key <tls_key>

private key file for TLS (in PEM format)

--worker-port <worker_port>

Serving computation port, defaults to random. When creating multiple workers with --nprocs, a sequential range of worker ports may be used by specifying the first and last available ports like <first-port>:<last-port>. For example, --worker-port=3000:3026 will use ports 3000, 3001, …, 3025, 3026.

--nanny-port <nanny_port>

Serving nanny port, defaults to random. When creating multiple nannies with --nprocs, a sequential range of nanny ports may be used by specifying the first and last available ports like <first-port>:<last-port>. For example, --nanny-port=3000:3026 will use ports 3000, 3001, …, 3025, 3026.

--bokeh-port <bokeh_port>

Address on which to listen for diagnostics dashboard

--dashboard, --no-dashboard

Launch the Dashboard [default: --dashboard]

--bokeh, --no-bokeh

Deprecated. See --dashboard/--no-dashboard.

--listen-address <listen_address>

The address to which the worker binds. Example: tcp://0.0.0.0:9000

--contact-address <contact_address>

The address the worker advertises to the scheduler for communication with it and other workers. Example: tcp://127.0.0.1:9000

--host <host>

Serving host. Should be an ip address that is visible to the scheduler and other workers. See --listen-address and --contact-address if you need different listen and contact addresses. See --interface.

--interface <interface>

Network interface like ‘eth0’ or ‘ib0’

--protocol <protocol>

Protocol like tcp, tls, or ucx

--nprocs <nprocs>

Number of worker processes to launch. If negative, then (CPU_COUNT + 1 + nprocs) is used. Set to ‘auto’ to set nprocs and nthreads dynamically based on CPU_COUNT

Default: 1
--name <name>

A unique name for this worker like ‘worker-1’. If used with --nprocs then the process number will be appended like name-0, name-1, name-2, …

--memory-limit <memory_limit>

Bytes of memory per process that the worker can use. This can be:

- an integer (bytes); note 0 is a special case for no memory management
- a float (fraction of total system memory)
- a string (like 5GB or 5000M)
- ‘auto’ for automatically computing the memory limit

Default: auto
--reconnect, --no-reconnect

Reconnect to scheduler if disconnected [default: --reconnect]

--nanny, --no-nanny

Start workers in nanny process for management [default: --nanny]

--pid-file <pid_file>

File to write the process PID

--local-directory <local_directory>

Directory to place worker files

--resources <resources>

Resources for task constraints like “GPU=2 MEM=10e9”. Resources are applied separately to each worker process (only relevant when starting multiple worker processes with ‘--nprocs’).

--scheduler-file <scheduler_file>

Filename of JSON-encoded scheduler information. Use with dask-scheduler --scheduler-file

--death-timeout <death_timeout>

Seconds to wait for a scheduler before closing

--dashboard-prefix <dashboard_prefix>

Prefix for the dashboard

--lifetime <lifetime>

If provided, shut down the worker after this duration.

--lifetime-stagger <lifetime_stagger>

Random amount by which to stagger lifetime values

Default: 0 seconds
--worker-class <worker_class>

Worker class from which to instantiate workers.

--lifetime-restart, --no-lifetime-restart

Whether or not to restart the worker after the lifetime lapses. This assumes that you are using the --lifetime and --nanny keywords

Default: False

--preload <preload>

Module that should be loaded by each worker process like “foo.bar” or “/path/to/foo.py”

--preload-nanny <preload_nanny>

Module that should be loaded by each nanny like “foo.bar” or “/path/to/foo.py”

--version

Show the version and exit.

Arguments

SCHEDULER

Optional argument

PRELOAD

Optional argument(s)
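Putting several of the worker options above together, a typical invocation might look like the following (all values are illustrative):

```
$ dask-worker tcp://192.0.0.100:8786 \
    --nprocs 4 \
    --memory-limit 4GB \
    --worker-port 3000:3026 \
    --name alice \
    --local-directory /scratch/dask
```

With --nprocs 4 and --name alice, the individual worker processes are named alice-0 through alice-3, as described under --name.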