Changelog

Parsl 0.5.1

Released on May 15th, 2018.

New functionality

Bug Fixes

  • Usage tracking with certain missing network configurations causes a 20s startup delay issue#220
  • Checkpoints will not reload from a run that was Ctrl-C’ed issue#232
  • Race condition in task checkpointing issue#234
  • task_exit checkpointing repeatedly truncates checkpoint file during run issue#230
  • Make dfk.cleanup() not cause kernel to restart with Jupyter on Mac issue#212
  • Fix automatic IPP controller creation on OS X issue#206
  • Passing Files breaks over IPP issue#200
  • repr call after AppException instantiation raises AttributeError issue#197
  • Allow DataFuture to be initialized with a str file object issue#185
  • Error for Globus transfer failure issue#162

Parsl 0.5.0

Released on Apr 16th, 2018.

New functionality

  • Support for Globus file transfers issue#71

    Caution

    This feature is available from Parsl v0.5.0 in an experimental state.
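
    A sketch of the intended usage; the globus:// URL form and the File import
    path follow later Parsl documentation and are assumptions for v0.5.0:

    from parsl.data_provider.files import File

    # Hypothetical Globus endpoint UUID and remote path
    inp = File('globus://aabbccdd-1234-5678-9abc-def012345678/data/input.txt')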

  • PathLike behavior for Files issue#174
    • Files behave like strings here:

    from parsl.data_provider.files import File  # import path per current Parsl docs

    myfile = File("hello.txt")
    f = open(myfile, 'r')
    
  • Automatic checkpointing modes issue#106

    config = {
        "globals": {
            "lazyErrors": True,
            "memoize": True,
            "checkpointMode": "dfk_exit"
        }
    }
    
  • Support for containers with Docker issue#45

    localDockerIPP = {
        "sites": [
            {"site": "Local_IPP",
             "auth": {"channel": None},
             "execution": {
                 "executor": "ipp",
                 "container": {
                     "type": "docker",      # <----- Specify Docker
                     "image": "app1_v0.1",  # <----- Specify Docker image
                 },
                 "provider": "local",
                 "block": {
                     "initBlocks": 2,  # Start with 2 workers
                 },
             }
            }],
        "globals": {"lazyErrors": True}
    }
    
    Caution

    This feature is available from Parsl v0.5.0 in an experimental state.
    
  • Cleaner logging issue#85
    • Logs are now written by default to runinfo/RUN_ID/parsl.log.
    • INFO log lines are more readable and compact
  • Local configs are now packaged issue#96

    from parsl.configs.local import localThreads
    from parsl.configs.local import localIPP
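
    These packaged configs can be passed straight to the DFK. A minimal sketch;
    the DataFlowKernel import path is an assumption:

    from parsl.dataflow.dflow import DataFlowKernel
    from parsl.configs.local import localThreads

    # Instantiate the DFK from a packaged local-threads config
    dfk = DataFlowKernel(config=localThreads)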
    

Bug Fixes

  • Passing Files over IPP broken issue#200
  • Fix DataFuture.__repr__ for default instantiation issue#164
  • Results added to appCache before retries exhausted issue#130
  • Missing documentation added for Multisite and Error handling issue#116
  • TypeError raised when a bad stdout/stderr path is provided. issue#104
  • Race condition in DFK issue#102
  • Cobalt provider broken on Cooley (ALCF) issue#101
  • No blocks provisioned if parallelism/blocks = 0 issue#97
  • Checkpoint restart assumes rundir issue#95
  • Logger continues after cleanup is called issue#93

Parsl 0.4.1

Released on Feb 23rd, 2018.

New functionality

  • GoogleCloud provider support via libsubmit
  • GridEngine provider support via libsubmit

Bug Fixes

  • Cobalt provider issues with job state issue#101
  • Parsl updates config inadvertently issue#98
  • No blocks provisioned if parallelism/blocks = 0 issue#97
  • Checkpoint restart assumes rundir issue#95
  • Logger continues after cleanup is called issue#93
  • Error checkpointing when no cache enabled issue#92
  • Several fixes to libsubmit.

Parsl 0.4.0

Here are the major changes included in the Parsl 0.4.0 release.

New functionality

  • Elastic scaling in response to workflow pressure. issue#46 Options minBlocks, maxBlocks, and parallelism now work and control workflow execution.

    Documented in: Elasticity
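
    A sketch of how these options might appear in a site's block specification;
    the nesting and values below are illustrative, following the Docker example
    earlier in these notes:

    "block": {
        "initBlocks": 1,      # blocks provisioned at start
        "minBlocks": 0,       # scale down to zero when idle
        "maxBlocks": 10,      # upper bound under load
        "parallelism": 0.75   # how aggressively to add blocks (0 to 1)
    }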

  • Multisite support, enables targeting apps within a single workflow to different sites issue#48

    @App('python', dfk, sites=['SITE1', 'SITE2'])
    def my_app(...):
       ...
    
  • Anonymized usage tracking added. issue#34

    Documented in: Usage Statistics Collection

  • AppCaching and Checkpointing issue#43

    # Set cache=True to enable appCaching
    @App('python', dfk, cache=True)
    def my_app(...):
        ...
    
    
    # To checkpoint a workflow:
    dfk.checkpoint()
    

    Documented in: Checkpointing, AppCaching

  • Parsl now creates a new directory under ./runinfo/ with an incrementing number per workflow invocation

  • Troubleshooting guide and more documentation

  • PEP8 conformance tests added to Travis testing issue#72

Bug Fixes

  • Missing documentation from libsubmit was added back issue#41
  • Fixes for script_dir | scriptDir inconsistencies issue#64
    • We now use scriptDir exclusively.
  • Fix for caching not working on jupyter notebooks issue#90
  • Config defaults module failure when part of the option set is provided issue#74
  • Fixes for network errors with usage_tracking issue#70
  • PEP8 conformance of code and tests with limited exclusions issue#72
  • Doc bug in recommending max_workers instead of maxThreads issue#73

Parsl 0.3.1

This is a point release with mostly minor features and several bug fixes.

  • Fixes for remote side handling
  • Support for specifying IPythonDir for IPP controllers
  • Several tests added that test provider launcher functionality from libsubmit
  • This upgrade will also push the libsubmit requirement from 0.2.4 -> 0.2.5.

Several critical fixes from libsubmit are brought in:

  • Several fixes and improvements to Condor from @annawoodard.
  • Support for Torque scheduler
  • Provider script output paths are fixed
  • Increased walltimes to deal with slow scheduler systems
  • Srun launcher for slurm systems
  • SSH channels now support file_pull() method
    While files are not automatically staged, the channels provide support for bi-directional file transport.

Parsl 0.3.0

Here are the major changes that are included in the Parsl 0.3.0 release.

New functionality

  • Arguments to the DFK have changed:

    # Old: pass a single executor object
    dfk(executor_obj)

    # New: pass a list of executors
    dfk(executors=[list_of_executors])

    # Alternatively, pass the config from which the DFK will
    # instantiate resources
    dfk(config=config_dict)

  • Execution providers have been restructured to a separate repo: libsubmit

  • Bash apps have changed to return the command-line string rather than assigning it to the special keyword cmd_line. Please refer to RFC #37 for more details. This is a non-backward-compatible change.
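
    For example, a post-0.3.0 bash app returns the command line to run; a minimal
    sketch (the app body and stdout path are illustrative):

    @App('bash', dfk)
    def echo_hello(stdout='echo.out'):
        # The returned string is executed as the command line
        return 'echo "Hello World"'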

  • Output files from apps are now made available as an attribute of the AppFuture. Please refer to #26 for more details. This is a non-backward-compatible change.

    # This is the pre 0.3.0 style
    app_fu, [file1, file2] = make_files(x, y, outputs=['f1.txt', 'f2.txt'])
    
    # This is the style that will be followed going forward.
    app_fu = make_files(x, y, outputs=['f1.txt', 'f2.txt'])
    [file1, file2] = app_fu.outputs
    
  • DFK init now supports auto-start of IPP controllers

  • Support for channels via libsubmit. Channels enable execution of commands from execution providers either locally or remotely via SSH.
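
    In the dict config style shown elsewhere in these notes, the channel is chosen
    in a site's auth block. A sketch; key names other than "channel" are
    assumptions:

    "auth": {
        "channel": "ssh",                 # run provider commands over SSH
        "hostname": "login.cluster.edu",  # hypothetical remote host
        "username": "myuser",
        "scriptDir": "/home/myuser/parsl_scripts"
    }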

  • Bash apps now support timeouts.
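
    A sketch of a timed-out bash app; the walltime parameter name and its
    placement on the decorator are assumptions for this release:

    @App('bash', dfk, walltime=60)
    def sleeper():
        # Expected to raise a timeout error if the command exceeds 60 seconds
        return 'sleep 120'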

  • Support for cobalt execution provider.

Bug fixes

  • Futures have inconsistent behavior in bash app fn body #35
  • Parsl dflow structure missing dependency information #30

Parsl 0.2.0

Here are the major changes that are included in the Parsl 0.2.0 release.

New functionality

  • Support for execution via the IPythonParallel executor, enabling distributed execution.
  • Generic executors
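
    A minimal sketch of swapping executors; the module paths and class names
    follow later Parsl releases and are assumptions for 0.2.0:

    from parsl import App
    from parsl.dataflow.dflow import DataFlowKernel
    from parsl.executors.threads import ThreadPoolExecutor
    # from parsl.executors.ipp import IPyParallelExecutor  # distributed execution over IPythonParallel

    # The DFK accepts an executor object directly (pre-0.3.0 call style)
    dfk = DataFlowKernel(ThreadPoolExecutor())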

Parsl 0.1.0

Here are the major changes that are included in the Parsl 0.1.0 release.

New functionality

  • Support for Bash and Python apps
  • Support for chaining of apps via futures handled by the DataFlowKernel.
  • Support for execution over threads.
  • Arbitrary DAGs can be constructed and executed asynchronously.
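
    A minimal sketch of chaining apps via futures; imports and the DFK call style
    follow later releases and are assumptions for 0.1.0:

    from parsl import App
    from parsl.dataflow.dflow import DataFlowKernel
    from parsl.executors.threads import ThreadPoolExecutor

    dfk = DataFlowKernel(ThreadPoolExecutor())

    @App('python', dfk)
    def double(x):
        return x * 2

    @App('python', dfk)
    def add(a, b):
        return a + b

    # Passing an AppFuture as an argument records a dependency in the DAG;
    # add() runs only after both double() calls complete.
    total = add(double(2), double(3))
    print(total.result())  # prints 10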

Bug Fixes

  • Initial release, no listed bugs.