parsl.dataflow.taskrecord.TaskRecord

class parsl.dataflow.taskrecord.TaskRecord[source]

This stores most information about a Parsl task.
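Because TaskRecord is a typed dictionary, a task record is an ordinary dict at runtime, keyed by the attribute names listed on this page. A minimal sketch, with illustrative placeholder values rather than real Parsl state:

```python
# Hedged sketch: a TaskRecord-shaped dict using field names from this page.
# The values are placeholders, not state produced by a running DataFlowKernel.
record = {
    "id": 0,
    "try_id": 0,
    "func_name": "my_app",
    "executor": "htex_local",
    "fail_count": 0,
    "hashsum": None,  # not known until relevant dependencies have completed
}

# Fields are read with normal dict access:
assert record["func_name"] == "my_app"
assert record.get("hashsum") is None
```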

__init__(*args, **kwargs)[source]

Methods

__init__(*args, **kwargs)

clear()

copy()

fromkeys([value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

items()

keys()

pop(k[,d])

Remove the specified key and return the corresponding value. If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem()

Remove and return a (key, value) pair as a 2-tuple.

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default.

update([E, ]**F)

If E is present and has a .keys() method, this does: for k in E: D[k] = E[k]. If E is present but lacks a .keys() method, this does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
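The three update() paths described above can be demonstrated on a plain dict (a TaskRecord behaves the same way, being dict-based):

```python
d = {"a": 1}

# E has a .keys() method (it is a mapping): for k in E: D[k] = E[k]
d.update({"b": 2})

# E lacks .keys() (an iterable of key/value pairs): for k, v in E: D[k] = v
d.update([("c", 3)])

# Keyword arguments are applied last: for k in F: D[k] = F[k]
d.update(c=30, d=4)

assert d == {"a": 1, "b": 2, "c": 30, "d": 4}
```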

values()

Attributes

dfk

The DataFlowKernel which is managing this task.

func_name

status

depends

app_fu

The Future which was returned to the user when an app was invoked.

exec_fu

When a task has been launched on an executor, stores the Future returned by that executor.

executor

The name of the executor which this task will be/is being/was executed on.

retries_left

fail_count

fail_cost

fail_history

checkpoint

Should this task be checkpointed?

hashsum

The hash used for checkpointing and memoisation.

task_launch_lock

This lock is used to ensure that task launch only happens once.

func

fn_hash

args

kwargs

time_invoked

time_returned

try_time_launched

try_time_returned

memoize

Should this task be memoized?

ignore_for_cache

from_memo

id

try_id

resource_specification

Dictionary containing relevant info for a task execution.

join

Is this a join_app?

joins

If this is a join app and the python body has executed, then this contains the Future or list of Futures that the join app will join.

join_lock

Restricts access to end-of-join behavior to ensure that joins only complete once, even if several joining Futures complete close together in time.

span

Event tracing span for this task.

app_fu: AppFuture[source]

The Future which was returned to the user when an app was invoked.

args: Sequence[Any][source]
checkpoint: bool[source]

Should this task be checkpointed?

depends: List[Future][source]
dfk: dflow.DataFlowKernel[source]

The DataFlowKernel which is managing this task.

exec_fu: Future | None[source]

When a task has been launched on an executor, stores the Future returned by that executor.

executor: str[source]

The name of the executor which this task will be/is being/was executed on.

fail_cost: float[source]
fail_count: int[source]
fail_history: List[str][source]
fn_hash: str[source]
from_memo: bool | None[source]
func: Callable[source]
func_name: str[source]
hashsum: str | None[source]

The hash used for checkpointing and memoisation. This is not known until at least all relevant dependencies have completed, and will be None before that.
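As a hedged illustration of hashsum's role, the sketch below computes a stable digest over a function name and its arguments, so that identical invocations map to the same key for memoization or checkpoint lookup. This is only in the spirit of Parsl's hashing; the actual algorithm (in Parsl's memoization machinery) differs in detail.

```python
import hashlib
import pickle

def task_hash(func_name, args, kwargs):
    # Illustrative only: serialize the invocation deterministically
    # (kwargs sorted by key) and digest it. Not Parsl's real algorithm.
    payload = pickle.dumps((func_name, args, sorted(kwargs.items())))
    return hashlib.sha256(payload).hexdigest()

h1 = task_hash("my_app", (1, 2), {"x": 3})
h2 = task_hash("my_app", (1, 2), {"x": 3})
assert h1 == h2                                   # same invocation, same hash
assert h1 != task_hash("my_app", (1, 3), {"x": 3})  # different args differ
```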

id: int[source]
ignore_for_cache: Sequence[str][source]
join: bool[source]

Is this a join_app?

join_lock: threading.Lock[source]

Restricts access to end-of-join behavior to ensure that joins only complete once, even if several joining Futures complete close together in time.

joins: None | Future | List[Future][source]

If this is a join app and the python body has executed, then this contains the Future or list of Futures that the join app will join.

kwargs: Dict[str, Any][source]
memoize: bool[source]

Should this task be memoized?

resource_specification: Dict[str, Any][source]

Dictionary containing relevant info for a task execution. Includes resources to allocate and execution mode as a given executor permits.
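As a hedged example, some executors (the Work Queue executor, for instance) document resource keys such as cores, memory, and disk; exactly which keys are honored, and in what units, is executor-specific:

```python
# Illustrative resource_specification; which keys a given executor honors
# is executor-specific (these follow the Work Queue executor's convention).
resource_specification = {
    "cores": 2,      # CPU cores to allocate
    "memory": 1024,  # memory in MB
    "disk": 512,     # disk in MB
}
```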

retries_left: int[source]
span: Span[source]

Event tracing span for this task.

status: States[source]
task_launch_lock: threading.Lock[source]

This lock is used to ensure that task launch only happens once. A task can be launched by dependencies completing from arbitrary threads, and a race condition would exist when dependencies complete in multiple threads very close together in time, which this lock prevents.
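The launch-once pattern this lock enables can be sketched as follows: several dependency callbacks may race to launch the task from different threads, but the lock plus a launched flag ensure only the first proceeds. This mirrors the described intent, not Parsl's actual launch code.

```python
import threading

class Task:
    # Hypothetical stand-in for a task record's launch state.
    def __init__(self):
        self.task_launch_lock = threading.Lock()
        self.launched = False
        self.launch_count = 0

    def maybe_launch(self):
        # Only the first caller past the lock performs the launch.
        with self.task_launch_lock:
            if self.launched:
                return
            self.launched = True
            self.launch_count += 1

task = Task()
threads = [threading.Thread(target=task.maybe_launch) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert task.launch_count == 1  # launched exactly once despite 8 racing threads
```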

time_invoked: datetime.datetime | None[source]
time_returned: datetime.datetime | None[source]
try_id: int[source]
try_time_launched: datetime.datetime | None[source]
try_time_returned: datetime.datetime | None[source]