parsl.providers.SlurmProvider

class parsl.providers.SlurmProvider(partition, channel=LocalChannel(), nodes_per_block=1, cores_per_node=None, mem_per_node=None, init_blocks=1, min_blocks=0, max_blocks=10, parallelism=1, walltime='00:10:00', scheduler_options='', worker_init='', cmd_timeout=10, exclusive=True, move_files=True, launcher=SingleNodeLauncher())

Slurm Execution Provider

This provider uses sbatch to submit jobs, squeue to poll their status, and scancel to cancel them. The sbatch submit script is generated from a template file in this module.

Parameters:
  • partition (str) – Slurm partition to request blocks from.
  • channel (Channel) – Channel for accessing this provider. Possible channels include LocalChannel (the default), SSHChannel, or SSHInteractiveLoginChannel.
  • nodes_per_block (int) – Nodes to provision per block.
  • cores_per_node (int) – Specify the number of cores to provision per node. If set to None, executors will assume all cores on the node are available for computation. Default is None.
  • mem_per_node (float) – Specify the real memory to provision per node in GB. If set to None, no explicit request is made to the scheduler. Default is None.
  • init_blocks (int) – Number of blocks to provision at the start of the run. Default is 1.
  • min_blocks (int) – Minimum number of blocks to maintain.
  • max_blocks (int) – Maximum number of blocks to maintain.
  • parallelism (float) – Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive scaling where as many resources as possible are used; parallelism close to 0 represents the opposite situation in which as few resources as possible (i.e., min_blocks) are used.
  • walltime (str) – Walltime requested per block in HH:MM:SS.
  • scheduler_options (str) – String to prepend to the #SBATCH block in the submit script to the scheduler, e.g., extra #SBATCH directives.
  • worker_init (str) – Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.
  • cmd_timeout (int) – Number of seconds to wait for Slurm commands (such as sbatch and squeue) to complete. Default is 10.
  • exclusive (bool) – If True (the default), request nodes that are not shared with other running jobs.
  • launcher (Launcher) – Launcher for this provider. Possible launchers include SingleNodeLauncher (the default), SrunLauncher, or AprunLauncher.
  • move_files (Optional[bool]) – Should files be moved? By default, Parsl will try to move files.
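
Putting the pieces together, here is a minimal sketch of a Parsl configuration that uses this provider behind a HighThroughputExecutor. The partition name ('debug'), constraint, worker environment, and block sizing are placeholders to adapt to your cluster.

    from parsl.config import Config
    from parsl.executors import HighThroughputExecutor
    from parsl.launchers import SrunLauncher
    from parsl.providers import SlurmProvider

    config = Config(
        executors=[
            HighThroughputExecutor(
                label='slurm_htex',
                provider=SlurmProvider(
                    'debug',                   # partition name (placeholder)
                    nodes_per_block=2,
                    init_blocks=1,
                    min_blocks=0,
                    max_blocks=4,
                    walltime='00:30:00',
                    # Extra #SBATCH directives, prepended to the generated script:
                    scheduler_options='#SBATCH --constraint=knl',
                    # Runs before each worker starts (placeholder environment):
                    worker_init='module load anaconda; source activate parsl_env',
                    launcher=SrunLauncher(),   # start workers with srun
                ),
            )
        ]
    )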

__init__(partition, channel=LocalChannel(), nodes_per_block=1, cores_per_node=None, mem_per_node=None, init_blocks=1, min_blocks=0, max_blocks=10, parallelism=1, walltime='00:10:00', scheduler_options='', worker_init='', cmd_timeout=10, exclusive=True, move_files=True, launcher=SingleNodeLauncher())

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(partition[, channel, …]) Initialize self.
cancel(job_ids) Cancels the jobs specified by a list of job IDs.
execute_wait(cmd[, timeout]) Execute a command on the provider's channel and wait for it to complete.
status(job_ids) Get the status of a list of jobs identified by the job identifiers returned from the submit request.
submit(command, tasks_per_node[, job_name]) Submit the command as a Slurm job.
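
For illustration, a hedged sketch of driving these methods by hand follows; in normal use Parsl's DataFlowKernel calls them through an executor and sets up script directories itself. The partition name, script directory, and command below are placeholders.

    import os
    from parsl.providers import SlurmProvider

    provider = SlurmProvider('debug', nodes_per_block=1, walltime='00:05:00')

    # Submit scripts are staged through a script directory; Parsl normally
    # configures this, so we set it explicitly for a standalone sketch.
    os.makedirs('/tmp/parsl_scripts', exist_ok=True)
    provider.script_dir = '/tmp/parsl_scripts'
    provider.channel.script_dir = '/tmp/parsl_scripts'

    # Submit one block running the given command; returns a Slurm job ID.
    job_id = provider.submit('sleep 60', tasks_per_node=1, job_name='parsl.demo')

    # One status entry per job ID (e.g. 'PENDING' or 'RUNNING').
    print(provider.status([job_id]))

    # Tear the block down; returns one boolean per job ID.
    provider.cancel([job_id])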

Attributes

current_capacity Returns the currently provisioned blocks.
label Provides the label for this provider.
scaling_enabled Indicates whether this provider supports dynamic scaling.