API Reference guide
Core
parsl.app.app.python_app | Decorator function for making Python apps.
parsl.app.app.bash_app | Decorator function for making bash apps.
parsl.app.app.join_app | Decorator function for making join apps.
parsl.dataflow.futures.AppFuture | An AppFuture wraps a sequence of Futures which may fail and be retried.
parsl.dataflow.dflow.DataFlowKernelLoader | Manage which DataFlowKernel is active.
parsl.dataflow.dependency_resolvers.DependencyResolver | A DependencyResolver describes how app dependencies can be resolved.
parsl.dataflow.dependency_resolvers.DEEP_DEPENDENCY_RESOLVER | A DependencyResolver describes how app dependencies can be resolved.
parsl.dataflow.dependency_resolvers.SHALLOW_DEPENDENCY_RESOLVER | A DependencyResolver describes how app dependencies can be resolved.
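As a brief illustration of how these pieces fit together, here is a minimal sketch (the app names, bodies, and inputs are illustrative, not part of the API):

    import parsl
    from parsl import python_app, join_app
    from parsl.config import Config
    from parsl.executors import ThreadPoolExecutor

    # Load a minimal configuration; apps submitted after this run through
    # the active DataFlowKernel managed by DataFlowKernelLoader.
    parsl.load(Config(executors=[ThreadPoolExecutor()]))

    @python_app
    def double(x):
        return x * 2

    @join_app
    def double_all(xs):
        # A join app returns futures; the outer AppFuture resolves
        # once all of the inner futures complete.
        return [double(x) for x in xs]

    # double(3) returns an AppFuture immediately; .result() blocks.
    print(double(3).result())           # 6
    print(double_all([1, 2]).result())  # [2, 4]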
Configuration
parsl.config.Config | Specification of Parsl configuration options.
parsl.set_stream_logger | Add a stream log handler.
parsl.set_file_logger | Add a file log handler.
parsl.addresses.address_by_hostname | Returns the hostname of the local host.
parsl.addresses.address_by_interface | Returns the IP address of the given interface name, e.g. 'eth0'.
parsl.addresses.address_by_query | Finds an address for the local host by querying ipify.
parsl.addresses.address_by_route | Finds an address for the local host by querying the local routing table for the route to Google DNS.
parsl.addresses.get_all_addresses | Uses a combination of methods to determine possible addresses.
parsl.addresses.get_any_address | Uses a combination of methods to find any address of the local machine.
parsl.utils.get_all_checkpoints | Finds the checkpoints from all runs in the rundir.
parsl.utils.get_last_checkpoint | Finds the checkpoint from the last run, if one exists.
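A configuration sketch combining several of these helpers; the executor label is an illustrative choice:

    import parsl
    from parsl.config import Config
    from parsl.executors import HighThroughputExecutor
    from parsl.addresses import address_by_hostname
    from parsl.utils import get_all_checkpoints

    # Route Parsl's logs to stderr while debugging.
    parsl.set_stream_logger()

    config = Config(
        executors=[
            HighThroughputExecutor(
                label="htex_local",
                address=address_by_hostname(),
            ),
        ],
        # Resume memoized results from every previous run in the rundir.
        checkpoint_files=get_all_checkpoints(),
        checkpoint_mode="task_exit",
    )
    parsl.load(config)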
Channels
Channels are deprecated in Parsl. See issue 3515 for further discussion.
Data management
parsl.app.futures.DataFuture | A DataFuture points at an AppFuture.
parsl.data_provider.data_manager.DataManager | The DataManager is responsible for transferring input and output data.
parsl.data_provider.staging.Staging | This class defines the interface for file staging providers.
parsl.data_provider.files.File | The Parsl File Class.
parsl.data_provider.ftp.FTPSeparateTaskStaging | Performs FTP staging as a separate Parsl-level task.
parsl.data_provider.ftp.FTPInTaskStaging | Performs FTP staging as a wrapper around the application task.
parsl.data_provider.globus.GlobusStaging | Specification for accessing data on a remote executor via Globus.
parsl.data_provider.http.HTTPSeparateTaskStaging | A staging provider that performs HTTP and HTTPS staging as a separate Parsl-level task.
parsl.data_provider.http.HTTPInTaskStaging | A staging provider that performs HTTP and HTTPS staging in a wrapper around each task.
parsl.data_provider.rsync.RSyncStaging | This staging provider executes rsync on worker nodes to stage in files from a remote location.
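A sketch of file staging from the app's point of view; the URL is a hypothetical placeholder, and a loaded configuration whose executor carries an HTTP staging provider is assumed:

    from parsl import python_app
    from parsl.data_provider.files import File

    @python_app
    def line_count(inputs=()):
        # The staging provider transfers the remote file before this body
        # runs, so .filepath resolves to a local path.
        with open(inputs[0].filepath) as f:
            return sum(1 for _ in f)

    remote = File("https://www.example.com/data.txt")  # hypothetical URL
    print(line_count(inputs=[remote]).result())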
Executors
parsl.executors.base.ParslExecutor | Executors are abstractions that represent available compute resources to which you could submit arbitrary App tasks.
parsl.executors.status_handling.BlockProviderExecutor | A base class for executors which scale using blocks.
parsl.executors.ThreadPoolExecutor | A thread-based executor.
parsl.executors.HighThroughputExecutor | Executor designed for cluster-scale execution.
parsl.executors.MPIExecutor | A version of the HighThroughputExecutor specialized for running MPI applications.
parsl.executors.WorkQueueExecutor | Executor that uses the Work Queue batch system.
parsl.executors.taskvine.TaskVineExecutor | Executor that uses the TaskVine dynamic workflow system.
parsl.executors.FluxExecutor | Executor that uses Flux to schedule and run jobs.
parsl.executors.GlobusComputeExecutor | Executor designed for executing heterogeneous tasks.
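For example, a single Config can carry several executors, and an app can be pinned to one by label (the labels here are illustrative):

    from parsl import python_app
    from parsl.config import Config
    from parsl.executors import ThreadPoolExecutor, HighThroughputExecutor

    # Two executors in one config; apps choose by label.
    config = Config(
        executors=[
            ThreadPoolExecutor(label="local_threads", max_threads=4),
            HighThroughputExecutor(label="htex"),
        ]
    )

    @python_app(executors=["local_threads"])
    def tiny(x):
        # Runs only on the thread pool executor.
        return x + 1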
Manager Selectors
parsl.executors.high_throughput.manager_selector.RandomManagerSelector | Returns a shuffled list of interesting_managers.
parsl.executors.high_throughput.manager_selector.BlockIdManagerSelector | Returns an interesting_managers list sorted by block ID.
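A sketch of plugging a selector into the HighThroughputExecutor, assuming its manager_selector keyword argument:

    from parsl.executors import HighThroughputExecutor
    from parsl.executors.high_throughput.manager_selector import (
        RandomManagerSelector,
    )

    # manager_selector is assumed here to be the HighThroughputExecutor
    # parameter that accepts these selector objects.
    htex = HighThroughputExecutor(
        label="htex",
        manager_selector=RandomManagerSelector(),
    )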
Launchers
parsl.launchers.base.Launcher | Launchers are basically wrappers for user-submitted scripts as they are submitted to a specific execution resource.
parsl.launchers.SimpleLauncher | Does no wrapping.
parsl.launchers.SingleNodeLauncher | Worker launcher that wraps the user's command with the framework to launch multiple command invocations in parallel.
parsl.launchers.SrunLauncher | Worker launcher that wraps the user's command with the Srun launch framework to launch multiple command invocations in parallel on a single job allocation.
parsl.launchers.AprunLauncher | Worker launcher that wraps the user's command with the Aprun launch framework to launch multiple command invocations in parallel on a single job allocation.
parsl.launchers.SrunMPILauncher | Launches as many workers as MPI tasks to be executed concurrently within a block.
parsl.launchers.GnuParallelLauncher | Worker launcher that wraps the user's command with the framework to launch multiple command invocations via GNU parallel sshlogin.
parsl.launchers.MpiExecLauncher | Worker launcher that wraps the user's command with the framework to launch multiple command invocations via mpiexec.
parsl.launchers.MpiRunLauncher | Worker launcher that wraps the user's command with the framework to launch multiple command invocations via mpirun.
parsl.launchers.JsrunLauncher | Worker launcher that wraps the user's command with the Jsrun launch framework to launch multiple command invocations in parallel on a single job allocation.
parsl.launchers.WrappedLauncher | Wraps the command by prepending commands before a user's command.
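For instance, pairing a launcher with a provider (the node count is a placeholder):

    from parsl.providers import SlurmProvider
    from parsl.launchers import SrunLauncher

    # The launcher wraps the worker command inside the batch script that
    # the provider submits; here srun fans it out across the allocation.
    provider = SlurmProvider(
        nodes_per_block=2,
        launcher=SrunLauncher(),
    )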
Providers
parsl.providers.AWSProvider | A provider for using Amazon Elastic Compute Cloud (EC2) resources.
parsl.providers.CondorProvider | HTCondor execution provider.
parsl.providers.GoogleCloudProvider | A provider for using resources from the Google Compute Engine.
parsl.providers.GridEngineProvider | A provider for the Grid Engine scheduler.
parsl.providers.LocalProvider | Local execution provider.
parsl.providers.LSFProvider | LSF execution provider.
parsl.providers.SlurmProvider | Slurm execution provider.
parsl.providers.TorqueProvider | Torque execution provider.
parsl.providers.KubernetesProvider | Kubernetes execution provider.
parsl.providers.PBSProProvider | PBS Pro execution provider.
parsl.providers.base.ExecutionProvider | Execution providers are responsible for managing execution resources that have a Local Resource Manager (LRM).
parsl.providers.cluster_provider.ClusterProvider | This class defines behavior common to all cluster/supercomputer-style scheduler systems.
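A representative provider configuration for a Slurm cluster; the partition name, block sizes, and walltime are site-specific placeholders:

    from parsl.config import Config
    from parsl.executors import HighThroughputExecutor
    from parsl.providers import SlurmProvider
    from parsl.launchers import SrunLauncher

    config = Config(
        executors=[
            HighThroughputExecutor(
                label="slurm_htex",
                provider=SlurmProvider(
                    partition="debug",        # site-specific
                    nodes_per_block=1,
                    init_blocks=1,
                    max_blocks=4,
                    walltime="00:30:00",
                    launcher=SrunLauncher(),
                ),
            )
        ]
    )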
Batch jobs
parsl.jobs.states.JobState | Defines a set of states that a job can be in.
parsl.jobs.states.JobStatus | Encapsulates a job state together with other details.
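A small sketch of how these types relate:

    from parsl.jobs.states import JobState, JobStatus, TERMINAL_STATES

    # COMPLETED is terminal: the job will never move to another state.
    status = JobStatus(JobState.COMPLETED)
    print(status.state in TERMINAL_STATES)  # True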
Exceptions
parsl.app.errors.AppBadFormatting | An error raised during formatting of a bash function.
parsl.app.errors.AppException | An error raised during execution of an app.
parsl.app.errors.AppTimeout | An error raised during execution of an app when it exceeds its allotted walltime.
parsl.app.errors.BadStdStreamFile | Error raised due to bad filepaths specified for STDOUT/STDERR.
parsl.app.errors.BashAppNoReturn | Bash app returned no string.
parsl.app.errors.BashExitFailure | A non-zero exit code returned from a @bash_app.
parsl.app.errors.MissingOutputs | Error raised at the end of app execution due to missing output files.
parsl.errors.ParslError | Base class for all exceptions.
parsl.errors.ConfigurationError | Raised when a component receives an invalid configuration.
parsl.errors.OptionalModuleMissing | Error raised when a required module is missing for an optional/extra component.
parsl.executors.errors.ExecutorError | Base class for executor-related exceptions.
parsl.executors.errors.ScalingFailed | Scaling failed due to an error in the execution provider.
parsl.executors.errors.BadMessage | Mangled, poorly formatted, or unsupported message received.
parsl.dataflow.errors.DataflowException | Base class for all dataflow exceptions.
parsl.dataflow.errors.BadCheckpoint | Error raised at the end of app execution due to missing output files.
parsl.dataflow.errors.DependencyError | Error raised if an app cannot run because there was an error in a dependency.
parsl.dataflow.errors.JoinError | Error raised if apps joining into a join_app raise exceptions.
parsl.launchers.errors.BadLauncher | Error raised when an object of an inappropriate type is supplied as a Launcher.
parsl.providers.errors.ExecutionProviderException | Base class for all provider exceptions; only to be raised when a more specific error is not available.
parsl.providers.errors.ScaleOutFailed | Scale-out failed in the submit phase on the provider side.
parsl.providers.errors.SchedulerMissingArgs | Error raised when the template used to compose the submit script to the local resource manager is missing required arguments.
parsl.providers.errors.ScriptPathError | Error raised when the submit script cannot be written to its script path.
parsl.executors.high_throughput.errors.WorkerLost | Exception raised when a worker is lost.
parsl.executors.high_throughput.interchange.ManagerLost | Task lost due to manager loss.
parsl.serialize.errors.DeserializationError | Failure at deserialization of results/exceptions from remote workers.
parsl.serialize.errors.SerializationError | Failure to serialize task objects.
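A sketch of catching one of these exceptions from an app future, assuming a configuration is already loaded (the app itself is illustrative):

    from parsl import bash_app
    from parsl.app.errors import BashExitFailure

    @bash_app
    def fail(stderr="fail.err"):
        return "exit 5"

    try:
        # Exceptions raised during execution surface when the
        # AppFuture's result is requested.
        fail().result()
    except BashExitFailure:
        print("bash app exited with a non-zero code")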
Internal
parsl.app.app.AppBase | This is the base class that defines the two external-facing functions that an App must define.
parsl.app.python.PythonApp | Extends AppBase to cover the Python App.
parsl.dataflow.dflow.DataFlowKernel | The DataFlowKernel adds dependency awareness to an existing executor.
parsl.dataflow.memoization.id_for_memo | This should return a byte sequence which identifies the supplied value for memoization purposes: for any two calls of id_for_memo, the byte sequence should be the same when the "same" value is supplied, and different otherwise.
parsl.dataflow.memoization.Memoizer | Memoizer is responsible for ensuring that identical work is not repeated.
parsl.jobs.states.TERMINAL_STATES | States from which we will never move to another state, because the job has either definitively completed or failed.
parsl.dataflow.states.States | Enumerates the states a Parsl task may be in.
parsl.dataflow.taskrecord.TaskRecord | This stores most information about a Parsl task.
parsl.jobs.strategy.Strategy | Scaling strategy.
parsl.utils.Timer | This class will make a callback periodically, with a period specified by the interval parameter.
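As a sketch of the memoization machinery from the user side: the cache flag is the public entry point, while the Memoizer and id_for_memo operate underneath.

    from parsl import python_app

    # The Memoizer hashes each argument (via id_for_memo) to decide
    # whether an identical invocation has already run.
    @python_app(cache=True)
    def expensive(x):
        return x ** 2

    a = expensive(4)  # runs the task
    b = expensive(4)  # memoized: reuses the first result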
Task Vine configuration
parsl.executors.taskvine.TaskVineManagerConfig | Configuration of a TaskVine manager.
parsl.executors.taskvine.TaskVineFactoryConfig | Configuration of a TaskVine factory.
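A hedged configuration sketch; the port is an arbitrary illustrative value, and manager_config is assumed to be the TaskVineExecutor parameter that accepts these settings:

    from parsl.config import Config
    from parsl.executors.taskvine import TaskVineExecutor, TaskVineManagerConfig

    config = Config(
        executors=[
            TaskVineExecutor(
                label="taskvine",
                # manager_config assumed; port 9123 is illustrative.
                manager_config=TaskVineManagerConfig(port=9123),
            )
        ]
    )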