Local Cluster
For convenience you can start a local cluster from your Python session.
>>> from distributed import Client, LocalCluster
>>> cluster = LocalCluster()
LocalCluster("127.0.0.1:8786", workers=8, ncores=8)
>>> client = Client(cluster)
<Client: scheduler=127.0.0.1:8786 processes=8 cores=8>
You can dynamically scale this cluster up and down:
>>> worker = cluster.start_worker()
>>> cluster.stop_worker(worker)
Alternatively, a LocalCluster is made for you automatically if you create a Client with no arguments:
>>> from distributed import Client
>>> client = Client()
>>> client
<Client: scheduler=127.0.0.1:8786 processes=8 cores=8>
API
class distributed.deploy.local.LocalCluster(n_workers=None, threads_per_worker=None, processes=True, loop=None, start=None, ip=None, scheduler_port=0, silence_logs=30, diagnostics_port=8787, services=None, worker_services=None, service_kwargs=None, asynchronous=False, security=None, **worker_kwargs)

Create local Scheduler and Workers
This creates a “cluster” of a scheduler and workers running on the local machine.
Parameters: n_workers: int
Number of workers to start
processes: bool
Whether to use processes (True) or threads (False). Defaults to True
threads_per_worker: int
Number of threads per each worker
scheduler_port: int
Port of the scheduler. 8786 by default, use 0 to choose a random port
silence_logs: logging level
Level of logs to print out to stdout. logging.WARN by default. Use a falsey value like False or None for no change.
ip: string
IP address on which the scheduler will listen, defaults to only localhost
diagnostics_port: int
Port on which the Web Interface will be provided. 8787 by default, use 0 to choose a random port, None to disable it, or an (ip:port) tuple to listen on a different IP address than the scheduler.
asynchronous: bool (False by default)
Set to True if using this cluster within async/await functions or within Tornado gen.coroutines. This should remain False for normal use.
kwargs: dict
Extra worker arguments, will be passed to the Worker constructor.
service_kwargs: Dict[str, Dict]
Extra keywords to hand to the running services
security: Security
Examples
>>> c = LocalCluster()  # Create a local cluster with as many workers as cores
>>> c
LocalCluster("127.0.0.1:8786", workers=8, ncores=8)
>>> c = Client(c) # connect to local cluster
Add a new worker to the cluster
>>> w = c.start_worker(ncores=2)
Shut down the extra worker
>>> c.stop_worker(w)
Pass extra keyword arguments to Bokeh
>>> LocalCluster(service_kwargs={'bokeh': {'prefix': '/foo'}})
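Several of the keyword arguments listed above can be combined in one call. A minimal sketch (the particular values are illustrative, not recommendations):

>>> import logging
>>> c = LocalCluster(n_workers=2, threads_per_worker=4,
...                  diagnostics_port=None, silence_logs=logging.ERROR)
>>> client = Client(c)

Here the web interface is disabled and only errors are printed to stdout.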
scale_down(workers)

Remove workers from the cluster

Given a list of worker addresses this function should remove those workers from the cluster. This may require tracking which jobs are associated with which worker address.

This can be implemented either as a function or as a Tornado coroutine.
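A short sketch of calling this by hand, assuming you already know the addresses of the workers to remove (the address shown is made up, and depending on the version this may be a coroutine rather than a plain method):

>>> c = LocalCluster(n_workers=4)
>>> c.scale_down(['tcp://127.0.0.1:45623'])  # remove the worker at this (illustrative) address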
scale_up(n, **kwargs)

Bring the total count of workers up to n

This function/coroutine should bring the total number of workers up to the number n.

This can be implemented either as a function or as a Tornado coroutine.
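A corresponding sketch for growing the cluster to a target size (again, this is normally driven by deployment machinery rather than called by hand, and it may be a coroutine in some versions):

>>> c = LocalCluster(n_workers=2)
>>> c.scale_up(4)  # bring the total worker count up to 4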
start_worker(**kwargs)

Add a new worker to the running cluster
Parameters: port: int (optional)
Port on which to serve the worker, defaults to 0 or random
ncores: int (optional)
Number of threads to use. Defaults to number of logical cores
Returns: The created Worker or Nanny object. Can be discarded.
Examples
>>> c = LocalCluster()
>>> c.start_worker(ncores=2)
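The documented port keyword can be passed as well; a short sketch (the port number is arbitrary):

>>> c = LocalCluster()
>>> w = c.start_worker(ncores=2, port=9123)  # serve this worker on an explicit, illustrative port
>>> c.stop_worker(w)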