parsl.dataflow.taskrecord.TaskRecord
- class parsl.dataflow.taskrecord.TaskRecord
This stores most information about a Parsl task.
Methods
- __init__(*args, **kwargs)
- clear()
- copy()
- fromkeys([value]): Create a new dictionary with keys from iterable and values set to value.
- get(key[, default]): Return the value for key if key is in the dictionary, else default.
- items()
- keys()
- pop(k[, d]): If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem(): Remove and return a (key, value) pair as a 2-tuple.
- setdefault(key[, default]): Insert key with a value of default if key is not in the dictionary.
- update([E, ]**F): If E is present and has a .keys() method, then for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
- values()
Attributes
- The DataFlowKernel which is managing this task.
- The Future which was returned to the user when an app was invoked.
- When a task has been launched on an executor, stores the Future returned by that executor.
- The name of the executor which this task will be/is being/was executed on.
- Should this task be checkpointed?
- The hash used for checkpointing and memoisation.
- This lock is used to ensure that task launch only happens once.
- Should this task be memoized?
- Dictionary containing relevant info for a task execution.
- Is this a join_app?
- If this is a join app and the python body has executed, then this contains the Future or list of Futures that the join app will join.
- Restricts access to end-of-join behavior to ensure that joins only complete once, even if several joining Futures complete close together in time.
- Event tracing span for this task.
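All of the methods listed above are inherited from dict, so a TaskRecord is read and updated with ordinary mapping operations. A minimal sketch follows; real records are created and filled in by the DataFlowKernel, not by user code, and the field values shown are placeholders:

```python
from parsl.dataflow.taskrecord import TaskRecord

# Real TaskRecords are built internally by the DataFlowKernel; this
# hand-built record only illustrates plain dict-style access.
record = TaskRecord(hashsum=None, resource_specification={})

print(record.get("hashsum"))      # None until all relevant dependencies complete
record["hashsum"] = "0123abcd"    # placeholder value
print(record.get("exec_fu"))      # None: nothing has been launched on an executor
record.setdefault("resource_specification", {})
```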
- dfk: dflow.DataFlowKernel
The DataFlowKernel which is managing this task.
- exec_fu: Future | None
When a task has been launched on an executor, stores the Future returned by that executor.
- hashsum: str | None
The hash used for checkpointing and memoisation. This is not known until at least all relevant dependencies have completed, and will be None before that.
- join_lock: threading.Lock
Restricts access to end-of-join behavior to ensure that joins only complete once, even if several joining Futures complete close together in time.
- joins: None | Future | List[Future]
If this is a join app and the python body has executed, then this contains the Future or list of Futures that the join app will join.
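For context, here is a sketch of a join app whose Python body returns other apps' Futures; the app names are illustrative, not part of Parsl:

```python
from parsl.app.app import python_app, join_app

@python_app
def add(x, y):
    return x + y

@join_app
def fan_in(n):
    # A join app's body returns a Future or a list of Futures; the join
    # app's own AppFuture completes only after all of these have completed.
    return [add(i, i) for i in range(n)]
```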
- resource_specification: Dict[str, Any]
Dictionary containing relevant information for a task's execution, such as the resources to allocate and the execution mode, to the extent that the chosen executor supports them.
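How this dictionary is populated is executor-dependent. With executors that honour resource specifications (for example the Work Queue executor), it is typically supplied via the parsl_resource_specification keyword argument of an app; the keys and values below are placeholders in that style:

```python
from parsl import python_app

@python_app
def double(x, parsl_resource_specification={'cores': 1, 'memory': 100, 'disk': 100}):
    # cores / memory (MB) / disk (MB) are Work Queue-style keys; other
    # executors may ignore or reject a resource specification entirely.
    return x * 2
```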
- task_launch_lock: threading.Lock
This lock is used to ensure that a task is launched only once. A task may be launched from whichever thread observes that its dependencies have completed; when dependencies complete on several threads at nearly the same time, this lock prevents the resulting race from launching the task more than once.
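This is not Parsl's actual launch code, but an illustrative sketch of the pattern such a lock supports: dependency callbacks may arrive on several threads, and the lock plus a launched flag guarantee that the launch body runs at most once.

```python
import threading

def submit_to_executor(task):
    print("launching", task)          # stand-in for real executor submission

task_launch_lock = threading.Lock()
launched = False

def launch_if_ready(task):
    # Called from each dependency's completion callback, possibly from
    # several threads at nearly the same time.
    global launched
    with task_launch_lock:
        if launched:
            return                    # another thread already launched the task
        launched = True
    submit_to_executor(task)

# Even if several callbacks race, the task is submitted exactly once.
threads = [threading.Thread(target=launch_if_ready, args=("task-0",)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```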
- time_invoked: datetime.datetime | None
- time_returned: datetime.datetime | None
- try_time_launched: datetime.datetime | None
- try_time_returned: datetime.datetime | None
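These four timestamps support simple bookkeeping. For example, a hypothetical helper (not part of Parsl) that reports how long the most recent try took, returning None until the DataFlowKernel has recorded both try timestamps:

```python
from datetime import timedelta
from typing import Optional

def try_duration(record) -> Optional[timedelta]:
    # Hypothetical helper, not part of Parsl.
    launched = record.get("try_time_launched")
    returned = record.get("try_time_returned")
    if launched is None or returned is None:
        return None
    return returned - launched
```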