parsl.executors.IPyParallelExecutor

class parsl.executors.IPyParallelExecutor(
    provider=LocalProvider(
        channel=LocalChannel(
            envs={},
            script_dir=None,
            userhome='/home/docs/checkouts/readthedocs.org/user_builds/parsl/checkouts/0.9.0/docs'
        ),
        cmd_timeout=30,
        init_blocks=4,
        launcher=SingleNodeLauncher(),
        max_blocks=10,
        min_blocks=0,
        move_files=None,
        nodes_per_block=1,
        parallelism=1,
        walltime='00:15:00',
        worker_init=''
    ),
    label='ipp',
    working_dir=None,
    controller=Controller(
        interfaces=None,
        ipython_dir='~/.ipython',
        log=True,
        mode='auto',
        port=None,
        port_range=None,
        profile='default',
        public_ip=None,
        reuse=False
    ),
    container_image=None,
    engine_dir=None,
    storage_access=None,
    engine_debug_level=None,
    workers_per_node=1,
    managed=True
)

The IPython Parallel executor.

This executor uses IPython Parallel's pilot execution system to manage multiple processes running locally or remotely.
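For orientation, here is a minimal sketch of wiring this executor into a parsl configuration. It assumes the parsl 0.9-era imports (parsl.config.Config, parsl.executors.IPyParallelExecutor) and otherwise relies on the defaults shown in the signature above:

    import parsl
    from parsl.config import Config
    from parsl.executors import IPyParallelExecutor

    # Single local executor using the default LocalProvider and Controller.
    config = Config(
        executors=[
            IPyParallelExecutor(
                label='ipp',
                workers_per_node=1,
            )
        ]
    )
    parsl.load(config)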

Parameters:
  • provider (ExecutionProvider) – Provider to access computation resources. Can be one of EC2Provider, Cobalt, Condor, GoogleCloud, GridEngine, Jetstream, Local, Slurm, or Torque.
  • label (str) – Label for this executor instance.
  • controller (Controller) – Which Controller instance to use. Default is Controller().
  • workers_per_node (int) – Number of workers to be launched per node. Default is 1.
  • container_image (str) – Launch tasks in a container using this docker image. If set to None, no container is used. Default is None.
  • engine_dir (str) – Directory where engine logs and configuration files will be stored.
  • working_dir (str) – Directory where input data should be staged to.
  • storage_access (list of Staging) – Specifications for staging data to and from this executor remotely.
  • managed (bool) – If True, parsl will control dynamic scaling of this executor. Otherwise, scaling is managed by the user.
  • engine_debug_level (int | str) – Sets engine logging to the specified debug level. Choices: (0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL')
Note:

Some deficiencies with this executor are:

  1. IPython engines (ipengines) execute one task at a time. This means one engine per core is necessary to exploit the full parallelism of a node (see the sketch after this list).
  2. No notion of remaining walltime.
  3. Lack of throttling means tasks could be queued up on a worker.
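As a sketch of working around deficiency 1, workers_per_node can be set to the machine's core count so that one ipengine is launched per core. The label and provider settings here are illustrative choices, not defaults:

    import multiprocessing

    from parsl.executors import IPyParallelExecutor
    from parsl.providers import LocalProvider

    # One engine per core: each ipengine runs a single task at a time,
    # so matching workers_per_node to the core count exposes full
    # node-level parallelism.
    executor = IPyParallelExecutor(
        label='ipp_per_core',  # hypothetical label for illustration
        workers_per_node=multiprocessing.cpu_count(),
        provider=LocalProvider(init_blocks=1, max_blocks=1),
    )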
__init__(…) — takes the same arguments and defaults as the class signature above.

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__([provider, label, working_dir, …]) Initialize self.
compose_containerized_launch_cmd(filepath, …) Reads the json contents from filepath and uses that to compose the containerized engine launch command.
compose_launch_cmd(filepath, engine_dir, …) Reads the json contents from filepath and uses that to compose the engine launch command.
scale_in(blocks) Scale in the number of active blocks by the specified number.
scale_out([blocks]) Scale out the number of active blocks by the specified number (default 1).
shutdown([hub, targets, block]) Shut down the executor, including all workers and controllers.
start() Start the executor.
status() Return the status of the executor by probing the execution provider.
submit(*args, **kwargs) Submit work for execution on the IPython Parallel engines.
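In normal use, submit() is not called directly: parsl routes tasks to this executor when an app is pinned to its label, and parsl itself drives start(), scale_out(), and shutdown(). A minimal sketch, assuming the configuration from the earlier example has been loaded:

    import parsl
    from parsl import python_app

    @python_app(executors=['ipp'])  # 'ipp' is the executor label in the config
    def double(x):
        return x * 2

    fut = double(21)     # dispatched through IPyParallelExecutor.submit()
    print(fut.result())  # prints 42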

Attributes

connected_workers Number of engines currently connected to the controller.
outstanding Number of tasks submitted but not yet completed.
run_dir Path to the run directory.
scaling_enabled Specify if scaling is enabled.
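A short sketch of reading these attributes at runtime, assuming a loaded configuration containing an executor labelled 'ipp':

    import parsl

    dfk = parsl.dfk()              # the currently loaded DataFlowKernel
    ipp = dfk.executors['ipp']     # look up the executor by its label
    print(ipp.connected_workers)   # engines currently connected
    print(ipp.outstanding)         # tasks submitted but not yet completed
    print(ipp.run_dir)             # path to the run directory
    print(ipp.scaling_enabled)     # whether parsl may scale this executor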