parsl.dataflow.memoization.id_for_memo

parsl.dataflow.memoization.id_for_memo(obj, output_ref=False)
parsl.dataflow.memoization.id_for_memo(obj: None, output_ref=False)
parsl.dataflow.memoization.id_for_memo(obj: float, output_ref=False)
parsl.dataflow.memoization.id_for_memo(obj: int, output_ref=False)
parsl.dataflow.memoization.id_for_memo(obj: str, output_ref=False)
parsl.dataflow.memoization.id_for_memo(denormalized_list: list, output_ref=False)
parsl.dataflow.memoization.id_for_memo(denormalized_tuple: tuple, output_ref=False)
parsl.dataflow.memoization.id_for_memo(denormalized_dict: dict, output_ref=False)
parsl.dataflow.memoization.id_for_memo(function: function, output_ref=False)

This should return a byte sequence which identifies the supplied value for memoization purposes: for any two calls of id_for_memo, the byte sequence should be the same when the “same” value is supplied, and different otherwise.

“same” is in quotes above because sameness is not as straightforward as serialising out the content.

For example, for two dicts x, y:

x = {"a": 3, "b": 4}
y = {"b": 4, "a": 3}

then x == y, but their serialisations are not equal, and some other functions of x and y are not equal: list(x.keys()) != list(y.keys()).
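A quick demonstration of this, using Python's pickle as the serialiser (any byte-level serialisation that preserves dict insertion order shows the same effect):

```python
import pickle

x = {"a": 3, "b": 4}
y = {"b": 4, "a": 3}

assert x == y                               # equal as dicts
assert pickle.dumps(x) != pickle.dumps(y)   # but serialised bytes differ (insertion order)
assert list(x.keys()) != list(y.keys())     # and key order differs too
```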

id_for_memo is invoked with output_ref=True when the parameter is an output reference (a value in the outputs=[] parameter of an app invocation).

Memo hashing might be different for such parameters: for example, a user might choose to hash input File content so that changing the content of an input file invalidates memoization. This does not make sense to do for output files: there is no meaningful content stored where an output filename points at memoization time.
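The overload list above reflects a single-dispatch style, where each supported type registers its own hasher. A minimal, self-contained sketch of that pattern follows; it is not Parsl's actual implementation, and the FakeFile type (and its content-vs-path hashing policy) is purely illustrative of how output_ref might be used:

```python
import pickle
from functools import singledispatch


@singledispatch
def id_for_memo(obj, output_ref=False):
    raise ValueError(f"unsupported type for memoization: {type(obj)}")


@id_for_memo.register(str)
@id_for_memo.register(int)
def id_for_memo_pickle(obj, output_ref=False):
    # Simple immutable values: the pickled bytes identify the value.
    return pickle.dumps(obj)


@id_for_memo.register(dict)
def id_for_memo_dict(d, output_ref=False):
    # Normalise by sorting keys, so equal dicts with different
    # insertion orders produce the same byte sequence.
    pairs = [(id_for_memo(k), id_for_memo(d[k], output_ref=output_ref))
             for k in sorted(d.keys())]
    return pickle.dumps(pairs)


class FakeFile:  # hypothetical stand-in for a file-like app parameter
    def __init__(self, path):
        self.path = path


@id_for_memo.register(FakeFile)
def id_for_memo_file(f, output_ref=False):
    if output_ref:
        # An output file has no meaningful content at memoization
        # time: hash only the path.
        return pickle.dumps(f.path)
    # An input file: hash path and content, so editing the file
    # invalidates the memo entry.
    with open(f.path, "rb") as fh:
        return pickle.dumps((f.path, fh.read()))
```

With this sketch, `id_for_memo({"a": 1, "b": 2})` and `id_for_memo({"b": 2, "a": 1})` yield identical byte sequences, while the FakeFile hasher changes behaviour depending on output_ref.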