parsl.providers.SlurmProvider

class parsl.providers.SlurmProvider(partition, channel=LocalChannel(), nodes_per_block=1, init_blocks=1, min_blocks=0, max_blocks=10, parallelism=1, walltime='00:10:00', scheduler_options='', worker_init='', cmd_timeout=10, exclusive=True, launcher=SingleNodeLauncher())[source]

Slurm Execution Provider

This provider uses sbatch to submit, squeue for status and scancel to cancel jobs. The sbatch script to be used is created from a template file in this same module.
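Parsl ships its own template inside this module; purely as an illustration of the template-filling approach (the field names and layout below are hypothetical, not Parsl's actual template), generating an sbatch script might look like:

```python
from string import Template

# Hypothetical sbatch template -- illustrative only; Parsl's real template
# lives in the parsl.providers.slurm module and differs in its fields.
SBATCH_TEMPLATE = Template("""#!/bin/bash
#SBATCH --job-name=$jobname
#SBATCH --partition=$partition
#SBATCH --nodes=$nodes_per_block
#SBATCH --time=$walltime
$scheduler_options

$worker_init
$user_script
""")

def make_sbatch_script(jobname, partition, nodes_per_block, walltime,
                       scheduler_options="", worker_init="", user_script=""):
    """Fill the template with per-block values, as the provider does."""
    return SBATCH_TEMPLATE.substitute(
        jobname=jobname,
        partition=partition,
        nodes_per_block=nodes_per_block,
        walltime=walltime,
        scheduler_options=scheduler_options,
        worker_init=worker_init,
        user_script=user_script,
    )

script = make_sbatch_script(
    jobname="parsl.block.0",
    partition="debug",
    nodes_per_block=1,
    walltime="00:10:00",
    worker_init="module load anaconda; source activate env",
    user_script="srun hostname",
)
print(script)
```

The filled script would then be handed to sbatch on the channel; squeue and scancel operate on the job id that sbatch returns.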

Parameters:
  • partition (str) – Slurm partition to request blocks from.
  • channel (Channel) – Channel for accessing this provider. Possible channels include LocalChannel (the default), SSHChannel, or SSHInteractiveLoginChannel.
  • nodes_per_block (int) – Nodes to provision per block.
  • init_blocks (int) – Number of blocks to provision at the start of the run. Default is 1.
  • min_blocks (int) – Minimum number of blocks to maintain.
  • max_blocks (int) – Maximum number of blocks to maintain.
  • parallelism (float) – Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive scaling where as many resources as possible are used; parallelism close to 0 represents the opposite situation in which as few resources as possible (i.e., min_blocks) are used.
  • walltime (str) – Walltime requested per block in HH:MM:SS.
  • scheduler_options (str) – String to prepend to the #SBATCH blocks in the submit script to the scheduler.
  • worker_init (str) – Command to be run before starting a worker, such as ‘module load Anaconda; source activate env’.
  • cmd_timeout (int) – Timeout, in seconds, for commands made to the scheduler.
  • exclusive (bool (Default = True)) – Requests nodes which are not shared with other running jobs.
  • launcher (Launcher) – Launcher for this provider. Possible launchers include SingleNodeLauncher (the default), SrunLauncher, or AprunLauncher.
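A minimal configuration sketch using this provider with a HighThroughputExecutor; the partition name, walltime, and worker_init value are placeholders for your own cluster's settings, not recommended defaults:

```python
# Configuration sketch -- partition, walltime, and worker_init are
# placeholders; substitute values appropriate to your Slurm cluster.
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.launchers import SrunLauncher
from parsl.providers import SlurmProvider

config = Config(
    executors=[
        HighThroughputExecutor(
            provider=SlurmProvider(
                partition='debug',       # placeholder partition name
                nodes_per_block=1,
                init_blocks=1,
                max_blocks=2,
                walltime='00:10:00',
                worker_init='module load anaconda; source activate env',
                launcher=SrunLauncher(),
            ),
        )
    ],
)
```

Passing this Config to parsl.load() lets the executor request blocks from the named partition as tasks arrive.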
__init__(partition, channel=LocalChannel(), nodes_per_block=1, init_blocks=1, min_blocks=0, max_blocks=10, parallelism=1, walltime='00:10:00', scheduler_options='', worker_init='', cmd_timeout=10, exclusive=True, launcher=SingleNodeLauncher())[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(partition[, channel, …]) Initialize self.
cancel(job_ids) Cancels the jobs specified by a list of job ids
execute_wait(cmd[, timeout]) Execute the command on the channel, wait for it to finish, and return its exit code, stdout, and stderr.
status(job_ids) Get the status of a list of jobs identified by the job identifiers returned from the submit request.
submit(command, blocksize, tasks_per_node[, …]) Submit the command as a Slurm job of blocksize parallel elements.
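Internally, status() runs squeue and translates Slurm state codes back into coarse job states, while jobs that have dropped out of the queue are treated as finished. A stdlib-only sketch of that translation step (the state table and output format here are illustrative, not Parsl's exact internals):

```python
# Illustrative mapping of Slurm state codes to coarse job states; Parsl
# keeps a similar translation table internally (exact contents may differ).
TRANSLATE_TABLE = {
    'PD': 'PENDING',
    'R': 'RUNNING',
    'CA': 'CANCELLED',
    'CF': 'PENDING',    # node(s) configuring
    'CG': 'RUNNING',    # completing
    'CD': 'COMPLETED',
    'F': 'FAILED',
    'TO': 'TIMEOUT',
}

def parse_squeue(output, job_ids):
    """Parse squeue output formatted as '<job_id> <state>' per line into a
    {job_id: state} dict; jobs absent from the output are assumed finished."""
    states = {job_id: 'COMPLETED' for job_id in job_ids}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) != 2:
            continue
        job_id, slurm_state = parts
        if job_id in states:
            states[job_id] = TRANSLATE_TABLE.get(slurm_state, 'UNKNOWN')
    return states

sample = "1234 R\n1235 PD\n"
result = parse_squeue(sample, ['1234', '1235', '1236'])
# result == {'1234': 'RUNNING', '1235': 'PENDING', '1236': 'COMPLETED'}
```

cancel() is simpler: it invokes scancel with the job ids and reports which cancellations were acknowledged.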

Attributes

current_capacity Returns the currently provisioned blocks.
scaling_enabled Indicates whether scaling is enabled; callers of ParslExecutors use this to differentiate between plain Executors and Executors wrapped in a resource provider.