parsl.dataflow.taskrecord.TaskRecord

class parsl.dataflow.taskrecord.TaskRecord(**kwargs)[source]

This stores most of the information about a Parsl task.
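
TaskRecord is defined as a TypedDict, so fields are read with ordinary mapping syntax rather than attribute access. Below is a minimal sketch of inspecting a live record, assuming that parsl.dfk().tasks maps task ids to TaskRecord instances and that AppFuture.tid exposes the task id (records may be garbage-collected from this table once a task finishes)::

    import parsl
    from parsl import python_app
    from parsl.configs.local_threads import config

    parsl.load(config)

    @python_app
    def double(x):
        return x * 2

    fut = double(21)

    # TaskRecord fields are read like dictionary keys.
    # Completed records may already have been removed from the table,
    # so use .get() rather than indexing.
    record = parsl.dfk().tasks.get(fut.tid)
    if record is not None:
        print(record['id'], record['func_name'], record['status'])

    print(fut.result())  # 42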

__init__(*args, **kwargs)[source]

Methods

__init__(*args, **kwargs)

clear()

copy()

fromkeys(iterable[, value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

items()

keys()

pop(k[,d])

Remove the specified key and return the corresponding value; if the key is not found, d is returned if given, otherwise a KeyError is raised.

popitem()

Remove and return a (key, value) pair as a 2-tuple; raise a KeyError if the dictionary is empty.

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary.

update([E, ]**F)

If E is present and has a .keys() method, this does: for k in E: D[k] = E[k]. If E is present but lacks a .keys() method, it does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k]. (See the example after this list.)

values()
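
TaskRecord inherits the standard dict behaviour summarised above. A quick, Parsl-independent illustration of the update() semantics (plain dict, illustrative keys only)::

    d = {'fail_count': 0}
    d.update({'fail_count': 1})        # E has .keys():   for k in E: d[k] = E[k]
    d.update([('retries_left', 2)])    # E lacks .keys(): for k, v in E: d[k] = v
    d.update(try_id=0)                 # keyword form:    for k in F: d[k] = F[k]
    print(d)   # {'fail_count': 1, 'retries_left': 2, 'try_id': 0}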

Attributes

func_name

status

depends

app_fu

The Future which was returned to the user when an app was invoked.

exec_fu

When a task has been launched on an executor, stores the Future returned by that executor.

executor

The name of the executor on which this task will be, is being, or was executed.

retries_left

fail_count

fail_cost

fail_history

checkpoint

Should this task be checkpointed?

hashsum

The hash used for checkpointing and memoisation.

task_launch_lock

This lock is used to ensure that task launch only happens once.

func

fn_hash

args

kwargs

time_invoked

time_returned

try_time_launched

try_time_returned

memoize

Should this task be memoized?

ignore_for_cache

from_memo

id

try_id

resource_specification

join

Is this a join_app?

joins

If this is a join app and the python body has executed, then this contains the Future that the join app will join.
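
As an illustration of the join and joins fields: a join app's Python body returns a Future rather than a plain value. A minimal sketch, assuming the standard parsl.join_app decorator and the local-threads example configuration::

    import parsl
    from parsl import python_app, join_app
    from parsl.configs.local_threads import config

    parsl.load(config)

    @python_app
    def add(x, y):
        return x + y

    @join_app
    def sum_pair(x, y):
        # The body returns a Future; the TaskRecord for this task has
        # join=True and, once the body has run, joins set to this Future.
        return add(x, y)

    print(sum_pair(1, 2).result())  # 3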

app_fu: AppFuture[source]

The Future which was returned to the user when an app was invoked.

args: Sequence[Any][source]
checkpoint: bool[source]

Should this task be checkpointed?

depends: List[Future][source]
exec_fu: Optional[Future][source]

When a task has been launched on an executor, stores the Future returned by that executor.

executor: str[source]

The name of the executor on which this task will be, is being, or was executed.

fail_cost: float[source]
fail_count: int[source]
fail_history: List[str][source]
fn_hash: str[source]
from_memo: Optional[bool][source]
func: Callable[source]
func_name: str[source]
hashsum: Optional[str][source]

The hash used for checkpointing and memoisation. This is not known until at least all relevant dependencies have completed, and will be None before that.

id: int[source]
ignore_for_cache: Sequence[str][source]
join: bool[source]

Is this a join_app?

joins: Optional[Future][source]

If this is a join app and the python body has executed, then this contains the Future that the join app will join.

kwargs: Dict[str, Any][source]
memoize: bool[source]

Should this task be memoized?

resource_specification: Dict[str, Any][source]
retries_left: int[source]
status: States[source]
task_launch_lock: threading.Lock[source]

This lock is used to ensure that a task is launched only once. A task may be launched from whichever thread sees its final dependency complete; when several dependencies complete in different threads at almost the same time, more than one thread could attempt the launch, and this lock prevents that race. (A standalone sketch of this pattern follows the attribute list below.)

time_invoked: Optional[datetime.datetime][source]
time_returned: Optional[datetime.datetime][source]
try_id: int[source]
try_time_launched: Optional[datetime.datetime][source]
try_time_returned: Optional[datetime.datetime][source]
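
A simplified, Parsl-independent sketch of the launch-once pattern that task_launch_lock supports, as referenced above: several dependency-completion callbacks may race to launch the same task from different threads, and a lock guarding a launched flag ensures only one of them succeeds. The Task class and maybe_launch name here are illustrative, not Parsl's actual launch code::

    import threading

    class Task:
        def __init__(self):
            self.task_launch_lock = threading.Lock()
            self.launched = False

        def maybe_launch(self):
            # Called from dependency-completion callbacks, possibly on
            # several threads at almost the same time.
            with self.task_launch_lock:
                if self.launched:
                    return  # another callback already launched this task
                self.launched = True
                print("launching task exactly once")

    task = Task()
    threads = [threading.Thread(target=task.maybe_launch) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()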