Common Workflow Language reference implementation
==================================================================
|Linux Status| |Windows Status| |Coverage Status| |Downloads|

.. |Linux Status| image:: https://img.shields.io/travis/common-workflow-language/cwltool/main.svg?label=Linux%20builds
   :target: https://travis-ci.org/common-workflow-language/cwltool

.. |Windows Status| image:: https://img.shields.io/appveyor/ci/mr-c/cwltool/main.svg?label=Windows%20builds
   :target: https://ci.appveyor.com/project/mr-c/cwltool

.. |Coverage Status| image:: https://img.shields.io/codecov/c/github/common-workflow-language/cwltool.svg
   :target: https://codecov.io/gh/common-workflow-language/cwltool

.. |Downloads| image:: https://pepy.tech/badge/cwltool/month
   :target: https://pepy.tech/project/cwltool
This is the reference implementation of the Common Workflow Language (CWL).  It
is intended to be feature complete, to provide comprehensive validation of CWL
files, and to provide other tools related to working with CWL.

This is written and tested for Python_ 3.6, 3.7, 3.8, and 3.9.
The reference implementation consists of two packages.  The ``cwltool`` package
is the primary Python module containing the reference implementation in the
``cwltool`` module and console executable by the same name.

The ``cwlref-runner`` package is optional and provides an additional entry
point under the alias ``cwl-runner``, which is the implementation-agnostic name
for the default CWL interpreter installed on a host.

``cwltool`` is provided by the CWL project, a member project of `Software
Freedom Conservancy`_ and our `many contributors`_.
``cwltool`` packages
^^^^^^^^^^^^^^^^^^^^

Your operating system may offer cwltool directly.  For Debian, Ubuntu, and
similar Linux distributions, try

.. code:: bash

   sudo apt-get install cwltool
If you are running macOS or another UNIX and you want to use packages prepared
by the conda-forge project, then please follow the install instructions for
conda-forge_ (if you haven't already) and then

.. code:: bash

   conda install -c conda-forge cwltool
All of the above methods of installing ``cwltool`` use packages that might
contain bugs already fixed in newer versions, or might be missing features that
you desire.  If the packaged version of ``cwltool`` available to you is too
old, then we recommend installing using ``pip`` and ``venv``:

.. code:: bash

   python3 -m venv env      # Create a virtual environment named 'env' in the current directory
   source env/bin/activate  # Activate environment before installing cwltool
Then install the latest ``cwlref-runner`` package from PyPI (which will install
the latest ``cwltool`` package as well):

.. code:: bash

   pip install cwlref-runner
If installing alongside another CWL implementation (like ``toil-cwl-runner`` or
``arvados-cwl-runner``) then instead run:

.. code:: bash

   pip install cwltool
MS Windows users
^^^^^^^^^^^^^^^^

1. Install "Windows Subsystem for Linux 2" (WSL2) and `Docker Desktop`_.
2. Install Debian from the `Microsoft Store`_.
3. Set Debian as your default WSL 2 distro: ``wsl --set-default debian``.
4. Launch Debian and follow the Linux installation instructions above
   (``apt-get install cwltool`` or use the ``venv`` method).
``cwltool`` development version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Or you can skip the direct ``pip`` commands above and install the latest
development version of ``cwltool``:

.. code:: bash

   git clone https://github.com/common-workflow-language/cwltool.git  # clone (copy) the cwltool git repository
   cd cwltool         # Change to source directory that git clone just downloaded
   pip install .      # Installs cwltool from source
   cwltool --version  # Check if the installation works correctly
Remember, if co-installing multiple CWL implementations, then you need to
maintain which implementation ``cwl-runner`` points to via a symbolic file
system link or `another facility`_.
Recommended Software
^^^^^^^^^^^^^^^^^^^^

You may also want to have the following installed:

- node.js_
- Docker, udocker, or Singularity (optional)

Without these, some examples in the CWL tutorials at
http://www.commonwl.org/user_guide/ may not work.
Simple command::

   cwl-runner [tool-or-workflow-description] [input-job-settings]
Or if you have multiple CWL implementations installed and you want to override
the default cwl-runner, then use::

   cwltool [tool-or-workflow-description] [input-job-settings]
You can set cwltool options in the environment with ``CWLTOOL_OPTIONS``; these
will be inserted at the beginning of the command line::

   export CWLTOOL_OPTIONS="--debug"
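The prepending behavior can be sketched in a few lines of Python.  This is an
illustration of the documented behavior only, not cwltool's actual code; the
``effective_args`` helper is hypothetical.

.. code:: python

   import shlex


   def effective_args(argv, environ):
       # Options from CWLTOOL_OPTIONS are split like a shell command line
       # and inserted before the user's own arguments.
       extra = shlex.split(environ.get("CWLTOOL_OPTIONS", ""))
       return extra + argv


   print(effective_args(["my-wf.cwl", "my-job.json"],
                        {"CWLTOOL_OPTIONS": "--debug"}))
   # ['--debug', 'my-wf.cwl', 'my-job.json']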
``boot2docker`` runs Docker inside a virtual machine, and it only mounts
``Users`` on it.  The default behavior of CWL is to create temporary
directories under e.g. ``/Var`` which is not accessible to Docker containers.

To run CWL successfully with ``boot2docker``, you need to set the
``--tmpdir-prefix`` and ``--tmp-outdir-prefix`` options to point somewhere
under ``/Users``::

    $ cwl-runner --tmp-outdir-prefix=/Users/username/project --tmpdir-prefix=/Users/username/project wc-tool.cwl wc-job.json
Some shared computing environments don't support Docker software containers for
technical or policy reasons.  As a workaround, the CWL reference runner
supports using alternative ``docker`` implementations on Linux with the
``--user-space-docker-cmd`` option.

One such "user space" friendly docker replacement is ``udocker``:
https://github.com/indigo-dc/udocker

udocker installation: https://github.com/indigo-dc/udocker/blob/master/doc/installation_manual.md#22-install-from-udockertools-tarball

Run ``cwltool`` just as you normally would, but with the new option, e.g. from
the conformance tests:

.. code:: bash

   cwltool --user-space-docker-cmd=udocker https://raw.githubusercontent.com/common-workflow-language/common-workflow-language/main/v1.0/v1.0/test-cwl-out2.cwl https://github.com/common-workflow-language/common-workflow-language/raw/main/v1.0/v1.0/empty.json
``cwltool`` can also use Singularity_ version 2.6.1 or later as a Docker
container runtime.  ``cwltool`` with Singularity will run software containers
specified in ``DockerRequirement`` and therefore works with Docker images only;
native Singularity images are not supported.  To use Singularity as the Docker
container runtime, provide the ``--singularity`` command line option to
``cwltool``.  With Singularity, ``cwltool`` can pass all CWL v1.0 conformance
tests, except those involving Docker container ENTRYPOINTs.

Example:

.. code:: bash

   cwltool --singularity https://raw.githubusercontent.com/common-workflow-language/common-workflow-language/main/v1.0/v1.0/cat3-tool-mediumcut.cwl https://github.com/common-workflow-language/common-workflow-language/blob/main/v1.0/v1.0/cat-job.json
``cwltool`` can run tool and workflow descriptions on both local and remote
systems via its support for HTTP[S] URLs.

Input job files and Workflow steps (via the ``run`` directive) can reference
CWL documents using absolute or relative local filesystem paths.  If a relative
path is referenced and that document isn't found in the current directory, then
the following locations will be searched:
http://www.commonwl.org/v1.0/CommandLineTool.html#Discovering_CWL_documents_on_a_local_filesystem
You can also use ``cwldep`` to manage dependencies on external tools and
workflows.
Sometimes a workflow needs additional requirements to run in a particular environment or with a particular dataset. To avoid the need to modify the underlying workflow, cwltool supports requirement "overrides".
The format of the "overrides" object is a mapping of item identifier (workflow, workflow step, or command line tool) to the process requirements that should be applied.
.. code:: yaml

   cwltool:overrides:
     echo.cwl:
       requirements:
         EnvVarRequirement:
           envDef:
             MESSAGE: override_value
Overrides can be specified either on the command line or as part of the job
input document.  Workflow steps are identified using the name of the workflow
file followed by the step name as a document fragment identifier "#id".
Override identifiers are relative to the top-level workflow document.
.. code:: bash

   cwltool --overrides overrides.yml my-tool.cwl my-job.yml
.. code:: yaml

   input_parameter1: value1
   input_parameter2: value2
   cwltool:overrides:
     workflow.cwl#step1:
       requirements:
         EnvVarRequirement:
           envDef:
             MESSAGE: override_value
.. code:: bash

   cwltool my-tool.cwl my-job-with-overrides.yml
Use ``--pack`` to combine a workflow made up of multiple files into a single
compound document.  This operation takes all the CWL files referenced by a
workflow and builds a new CWL document with all Process objects
(CommandLineTool and Workflow) in a list in the ``$graph`` field.  Cross
references (such as ``run:`` and ``source:`` fields) are updated to internal
references within the new packed document.  The top-level workflow is named
``#main``.

.. code:: bash

   cwltool --pack my-wf.cwl > my-packed-wf.cwl
You can run a partial workflow with the ``--target`` (``-t``) option.  This
takes the name of an output parameter, workflow step, or input parameter in the
top-level workflow.  You may provide multiple targets.

.. code:: bash

   cwltool --target step3 my-wf.cwl

If a target is an output parameter, it will run only the steps that contribute
to that output.  If a target is a workflow step, it will run the workflow
starting from that step.  If a target is an input parameter, it will run only
the steps that are connected to that input.
Use ``--print-targets`` to get a listing of the targets of a workflow.  To see
exactly which steps will run, use ``--print-subgraph`` with ``--target`` to get
a printout of the workflow subgraph for the selected targets.

.. code:: bash

   cwltool --print-targets my-wf.cwl
   cwltool --target step3 --print-subgraph my-wf.cwl > my-wf-starting-from-step3.cwl
The ``--print-dot`` option will print a file suitable for the Graphviz ``dot``
program.  Here is a bash one-liner to generate a Scalable Vector Graphics (SVG)
file:

.. code:: bash

   cwltool --print-dot my-wf.cwl | dot -Tsvg > my-wf.svg
CWL documents can be expressed as RDF triple graphs.
.. code:: bash

   cwltool --print-rdf --rdf-serializer=turtle mywf.cwl
CWL tools may be decorated with ``SoftwareRequirement`` hints that cwltool may
in turn use to resolve to packages in various package managers or dependency
management systems such as `Environment Modules`__.

Utilizing ``SoftwareRequirement`` hints with cwltool requires an optional
dependency; for this reason, be sure to specify the ``deps`` modifier when
installing cwltool.  For instance::

    $ pip install 'cwltool[deps]'
Installing cwltool in this fashion enables several new command line options.
The most general of these options is
``--beta-dependency-resolvers-configuration``.  This option allows one to
specify a dependency resolver's configuration file.  This file may be specified
as either XML or YAML and very simply describes various plugins to enable to
"resolve" ``SoftwareRequirement`` dependencies.

To discuss some of these plugins and how to configure them, first consider the
following ``hint`` definition for an example CWL tool.

.. code:: yaml

   SoftwareRequirement:
     packages:
     - package: seqtk
       version:
       - r93
Now imagine deploying cwltool on a cluster with Software Modules installed and
that a ``seqtk`` module is available at version ``r93``.  This means cluster
users likely won't have the binary ``seqtk`` on their ``PATH`` by default, but
after sourcing this module with the command ``modulecmd sh load seqtk/r93``,
``seqtk`` is available on the ``PATH``.  A simple dependency resolvers
configuration file, called ``dependency-resolvers-conf.yml`` for instance, that
would enable cwltool to source the correct module environment before executing
the above tool would simply be:

.. code:: yaml

   - type: modules
The outer list indicates that one plugin is being enabled; the plugin
parameters are defined as a dictionary for this one list item.  There is only
one required parameter for the plugin above: ``type``, which defines the plugin
type.  This parameter is required for all plugins.  The available plugins and
the parameters available for each are documented (incompletely) `here`__.
Unfortunately, this documentation is in the context of Galaxy tool
``requirements`` instead of CWL ``SoftwareRequirements``, but the concepts map
fairly directly.
cwltool is distributed with an example of such a seqtk tool and a sample
corresponding job.  It can be executed from the cwltool root directory using a
dependency resolvers configuration file such as the above one with the
command::

    cwltool --beta-dependency-resolvers-configuration /path/to/dependency-resolvers-conf.yml \
        tests/seqtk_seq.cwl \
        tests/seqtk_seq_job.json
This example demonstrates both that cwltool can leverage existing software
installations and also handle workflows with dependencies on different versions
of the same software and libraries.  However, the above example does require an
existing module setup, so it is impossible to test this example "out of the
box" with cwltool.  For a more isolated test that demonstrates all the same
concepts, the resolver plugin type ``galaxy_packages`` can be used.

"Galaxy packages" are a lighter-weight alternative to Environment Modules that
are really just defined by a way to lay out directories into packages and
versions to find little scripts that are sourced to modify the environment.
They have been used for years in the Galaxy community to adapt Galaxy tools to
cluster environments, but they require neither knowledge of Galaxy nor any
special tools to set up.  These should work just fine for CWL tools.
The cwltool source code repository's test directory is set up with a very
simple directory that defines a set of "Galaxy packages" (but really just
defines one package named ``random-lines``).  The directory layout is simply::

    tests/test_deps_env/
      random-lines/
        1.0/
          env.sh
If the ``galaxy_packages`` plugin is enabled and pointed at the
``tests/test_deps_env`` directory in cwltool's root, and a
``SoftwareRequirement`` such as the following is encountered:

.. code:: yaml

   hints:
     SoftwareRequirement:
       packages:
       - package: 'random-lines'
         version:
         - '1.0'
Then cwltool will simply find that ``env.sh`` file and source it before
executing the corresponding tool.  That ``env.sh`` script is only responsible
for modifying the job's ``PATH`` to add the required binaries.
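For illustration, such an ``env.sh`` could be as simple as the following; the
install path shown here is made up for the example.

.. code:: bash

   # Hypothetical contents of an env.sh for a "Galaxy package": prepend this
   # package's (made-up) binary directory to the PATH so the tool can be found.
   export PATH="/opt/galaxy_packages/random-lines/1.0/bin:$PATH"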
This is a full example that works, since resolving "Galaxy packages" has no
external requirements.  Try it out by executing the following command from
cwltool's root directory::

    cwltool --beta-dependency-resolvers-configuration tests/test_deps_env_resolvers_conf.yml \
        tests/random_lines.cwl \
        tests/random_lines_job.json
The resolvers configuration file in the above example was simply:

.. code:: yaml

   - type: galaxy_packages
     base_path: ./tests/test_deps_env
It is possible that the ``SoftwareRequirements`` in a given CWL tool will not
match the module names for a given cluster.  Such requirements can be re-mapped
to specific deployed packages and/or versions using another file specified
using the resolver plugin parameter ``mapping_files``.  We will demonstrate
this using ``galaxy_packages``, but the concepts apply equally well to
Environment Modules or Conda packages (described below), for instance.

So consider the resolvers configuration file
(``tests/test_deps_env_resolvers_conf_rewrite.yml``):

.. code:: yaml

   - type: galaxy_packages
     base_path: ./tests/test_deps_env
     mapping_files: ./tests/test_deps_mapping.yml
And the corresponding mapping configuration file
(``tests/test_deps_mapping.yml``):

.. code:: yaml

   - from:
       name: randomLines
       version: 1.0.0-rc1
     to:
       name: random-lines
       version: '1.0'
This is saying that if cwltool encounters a requirement of ``randomLines`` at
version ``1.0.0-rc1`` in a tool, it should rewrite it to our specific plugin as
``random-lines`` at version ``1.0``.  cwltool has such a test tool called
``random_lines_mapping.cwl`` that contains such a source
``SoftwareRequirement``.  To try out this example with mapping, execute the
following command from the cwltool root directory::

    cwltool --beta-dependency-resolvers-configuration tests/test_deps_env_resolvers_conf_rewrite.yml \
        tests/random_lines_mapping.cwl \
        tests/random_lines_job.json
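The effect of a mapping file can be sketched in a few lines of Python.  This is
an illustration of the rewrite rule described above, not cwltool's internal
code; the ``remap`` helper and the data literal are hypothetical.

.. code:: python

   # Each mapping entry rewrites a (name, version) requirement "from" the
   # tool's spelling "to" the locally deployed package.
   MAPPING = [
       {"from": {"name": "randomLines", "version": "1.0.0-rc1"},
        "to": {"name": "random-lines", "version": "1.0"}},
   ]


   def remap(name, version):
       for entry in MAPPING:
           if entry["from"] == {"name": name, "version": version}:
               return entry["to"]["name"], entry["to"]["version"]
       return name, version  # unmapped requirements pass through unchanged


   print(remap("randomLines", "1.0.0-rc1"))
   # ('random-lines', '1.0')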
The previous examples demonstrated leveraging existing infrastructure to
provide requirements for CWL tools.  If instead a real package manager is used,
cwltool has the opportunity to install requirements as needed.  While initial
support for Homebrew/Linuxbrew plugins is available, the most developed such
plugin is for the Conda__ package manager.  Conda has the nice properties of
allowing multiple versions of a package to be installed simultaneously, not
requiring elevated permissions to install Conda itself or packages using Conda,
and being cross-platform.  For these reasons, cwltool may run as a normal user,
install its own Conda environment, and manage multiple versions of Conda
packages on both Linux and macOS.
The Conda plugin can be endlessly configured, but a sensible set of defaults
that has proven a powerful stack for dependency management within the Galaxy
tool development ecosystem can be enabled by simply passing cwltool the
``--beta-conda-dependencies`` flag.

With this, we can use the seqtk example above without Docker and without any
externally managed services; cwltool should install everything it needs and
create an environment for the tool.  Try it out with the following command::

    cwltool --beta-conda-dependencies tests/seqtk_seq.cwl tests/seqtk_seq_job.json
The CWL specification allows URIs to be attached to ``SoftwareRequirements``
that allow disambiguation of package names.  If the mapping files described
above allow deployers to adapt tools to their infrastructure, this mechanism
allows tools to adapt their requirements to multiple package managers.  To
demonstrate this within the context of the seqtk example, we can simply break
the package name we use and then specify a specific Conda package as follows:

.. code:: yaml

   hints:
     SoftwareRequirement:
       packages:
       - package: seqtk_seq
         version:
         - '1.2'
         specs:
         - https://anaconda.org/bioconda/seqtk
         - https://packages.debian.org/sid/seqtk

The example can be executed using the command::

    cwltool --beta-conda-dependencies tests/seqtk_seq_wrong_name.cwl tests/seqtk_seq_job.json
The plugin framework for managing the resolution of these software requirements
is maintained as part of `galaxy-tool-util`__, a small, portable subset of the
Galaxy project.  More information on configuration and implementation can be
found at the following links:

- `Dependency Resolvers in Galaxy`__
- `Conda for [Galaxy] Tool Dependencies`__
- `Mapping Files - Implementation`__
- `Specifications - Implementation`__
- `Initial cwltool Integration Pull Request`__
Cwltool can launch tools directly from `GA4GH Tool Registry API`_ endpoints.

By default, cwltool searches https://dockstore.org/ .  Use
``--add-tool-registry`` to add other registries to the search path.

For example::

    cwltool quay.io/collaboratory/dockstore-tool-bamstats:develop test.json

and (defaults to latest when a version is not specified)::

    cwltool quay.io/collaboratory/dockstore-tool-bamstats test.json

For this example, grab the test.json (and input file) from
https://github.com/CancerCollaboratory/dockstore-tool-bamstats::

    wget https://dockstore.org/api/api/ga4gh/v2/tools/quay.io%2Fbriandoconnor%2Fdockstore-tool-bamstats/versions/develop/PLAIN-CWL/descriptor/test.json
    wget https://github.com/CancerCollaboratory/dockstore-tool-bamstats/raw/develop/rna.SRR948778.bam

.. _`GA4GH Tool Registry API`: https://github.com/ga4gh/tool-registry-schemas
Cwltool supports an extension to the CWL spec,
``http://commonwl.org/cwltool#MPIRequirement``.  When the tool definition has
this in its ``requirements``/``hints`` section, and cwltool has been run with
``--enable-ext``, then the tool's command line will be extended with the
commands needed to launch it with ``mpirun`` or similar.  You can specify the
number of processes to start as either a literal integer or an expression (that
will result in an integer).  For example::

    #!/usr/bin/env cwl-runner
    cwlVersion: v1.1
    class: CommandLineTool
    $namespaces:
      cwltool: "http://commonwl.org/cwltool#"
    requirements:
      cwltool:MPIRequirement:
        processes: $(inputs.nproc)
    inputs:
      nproc:
        type: int
Interaction with containers: the MPIRequirement currently prepends its commands
to the front of the command line that is constructed.  If you wish to run a
containerised application in parallel, for simple use cases this does work with
Singularity, depending upon the platform setup.  However, this combination
should be considered "alpha", so please do report any issues you have!  This
does not work with Docker at the moment.  (More precisely, you get ``n`` copies
of the same single process image run at the same time that cannot communicate
with each other.)
The host-specific parameters are configured in a simple YAML file (specified
with the ``--mpi-config-file`` flag).  The allowed keys are given in the
following table; all are optional.

+----------------+------------------+----------+------------------------------+
| Key            | Type             | Default  | Description                  |
+================+==================+==========+==============================+
| runner         | str              | "mpirun" | The primary command to use.  |
+----------------+------------------+----------+------------------------------+
| nproc_flag     | str              | "-n"     | Flag to set number of        |
|                |                  |          | processes to start.          |
+----------------+------------------+----------+------------------------------+
| default_nproc  | int              | 1        | Default number of processes. |
+----------------+------------------+----------+------------------------------+
| extra_flags    | List[str]        | []       | A list of any other flags to |
|                |                  |          | be added to the runner's     |
|                |                  |          | command line before the      |
|                |                  |          | baseCommand.                 |
+----------------+------------------+----------+------------------------------+
| env_pass       | List[str]        | []       | A list of environment        |
|                |                  |          | variables that should be     |
|                |                  |          | passed from the host         |
|                |                  |          | environment through to the   |
|                |                  |          | tool (e.g. giving the        |
|                |                  |          | nodelist as set by your      |
|                |                  |          | scheduler).                  |
+----------------+------------------+----------+------------------------------+
| env_pass_regex | List[str]        | []       | A list of python regular     |
|                |                  |          | expressions that will be     |
|                |                  |          | matched against the host's   |
|                |                  |          | environment. Those that      |
|                |                  |          | match will be passed         |
|                |                  |          | through.                     |
+----------------+------------------+----------+------------------------------+
| env_set        | Mapping[str,str] | {}       | A dictionary whose keys are  |
|                |                  |          | the environment variables to |
|                |                  |          | set and the values being the |
|                |                  |          | values.                      |
+----------------+------------------+----------+------------------------------+
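As an illustration, a hypothetical ``--mpi-config-file`` for a site whose
launcher is ``mpiexec`` might look like the following; all of the values shown
are made up for the example.

.. code:: yaml

   # Hypothetical MPI configuration: use mpiexec with its -np flag, and pass
   # the scheduler's nodelist variable through to the tool.
   runner: mpiexec
   nproc_flag: "-np"
   default_nproc: 2
   env_pass:
   - SLURM_NODELIST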
Running tests locally
=====================

To run the basic tests (``/tests``) after installing ``cwltool``, execute the
following:

.. code:: bash

   pip install -rtest-requirements.txt
   py.test --ignore cwltool/schemas/ --pyarg cwltool
To run various tests in all supported Python environments, we use tox_.  To run
the test suite in all supported Python environments, first download the
complete code repository (see the ``git clone`` instructions above) and then
run the following in the terminal: ``pip install tox; tox``.

A list of all environments can be seen via ``tox --listenvs``, and a specific
test environment can be run via ``tox -e <env name>``.  Additionally, a
specific test can be run using this format:
``tox -e py36-unit -- tests/test_examples.py::TestParamMatching``
The GitHub repository for the CWL specifications contains a script that tests a
CWL implementation against a wide array of valid CWL files using the cwltest_
program.

Instructions for running these tests can be found in the Common Workflow
Language Specification repository at
https://github.com/common-workflow-language/common-workflow-language/blob/main/CONFORMANCE_TESTS.md
Add

.. code:: python

   import cwltool

to your script.
The easiest way to use cwltool to run a tool or workflow from Python is to use
a Factory:

.. code:: python

   import cwltool.factory
   fac = cwltool.factory.Factory()

   echo = fac.make("echo.cwl")
   result = echo(inp="foo")

   # result["out"] == "foo"
Technical outline of how cwltool works internally, for maintainers.

#. Use ``load_tool()`` to load the document.

   #. Fetches the document from file or URL
   #. Applies preprocessing (syntax/identifier expansion and normalization)
   #. Validates the document based on cwlVersion
   #. If necessary, updates the document to the latest spec
   #. Constructs a Process object using the ``make_tool()`` callback.  This
      yields a CommandLineTool, Workflow, or ExpressionTool.  For workflows,
      this recursively constructs each workflow step.
   #. To construct custom types for CommandLineTool, Workflow, or
      ExpressionTool, provide a custom ``make_tool()``

#. Iterate on the ``job()`` method of the Process object to get back runnable
   jobs.
   #. ``job()`` is a generator method (uses the Python iterator protocol)
   #. Each time the ``job()`` method is invoked in an iteration, it returns
      one of: a runnable item (an object with a ``run()`` method), ``None``
      (indicating there is currently no work ready to run), or end of
      iteration (indicating the process is complete.)
   #. Invoke the runnable item by calling ``run()``.  This runs the tool and
      gets output.
   #. Output of a process is reported by an output callback.
   #. ``job()`` may be iterated over multiple times.  It will yield all the
      work that is currently ready to run and then yield None.
#. ``Workflow`` objects create corresponding ``WorkflowJob`` and
   ``WorkflowJobStep`` objects to hold the workflow state for the duration of
   the job invocation.

   #. The WorkflowJob iterates over each WorkflowJobStep and determines if the
      inputs of the step are ready.
   #. When a step is ready, it constructs an input object for that step and
      iterates on the ``job()`` method of the workflow job step.
   #. Each runnable item is yielded back up to the top-level run loop.
   #. When a step job completes and receives an output callback, the job
      outputs are assigned to the output of the workflow step.
   #. When all steps are complete, the intermediate files are moved to a final
      workflow output, intermediate directories are deleted, and the output
      callback for the workflow is called.
#. ``CommandLineTool`` job() objects yield a single runnable object.

   #. The CommandLineTool ``job()`` method calls ``make_job_runner()`` to
      create a ``CommandLineJob`` object.
   #. The job method configures the CommandLineJob object by setting public
      attributes.
   #. The job method iterates over file and directory inputs to the
      CommandLineTool and creates a "path map".
   #. Files are mapped from their "resolved" location to a "target" path where
      they will appear at tool invocation (for example, a location inside a
      Docker container.)  The target paths are used on the command line.
   #. Files are staged to target paths using either Docker volume binds (when
      using containers) or symlinks (if not).  This staging step enables files
      to be logically rearranged or renamed independent of their source
      layout.
   #. The ``run()`` method of CommandLineJob executes the command line tool or
      Docker container, waits for it to complete, collects output, and makes
      the output callback.
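The ``job()`` iterator protocol described above can be sketched with toy
classes.  This is a minimal illustration of the protocol only, not cwltool's
real classes; ``ToyTool`` and ``ToyRunnable`` are made up for the example.

.. code:: python

   # A toy Process whose job() generator follows the protocol described above:
   # it yields runnable items (objects with a run() method), and the caller
   # invokes run(), which reports output via the output callback.
   class ToyRunnable:
       def __init__(self, value, output_callback):
           self.value = value
           self.output_callback = output_callback

       def run(self):
           # "Run the tool" and report its output via the output callback.
           self.output_callback({"out": self.value})


   class ToyTool:
       def job(self, joborder, output_callback):
           for value in joborder["inputs"]:
               yield ToyRunnable(value, output_callback)


   results = []
   for runnable in ToyTool().job({"inputs": ["a", "b"]}, results.append):
       if runnable is not None:  # None would mean "nothing ready yet"
           runnable.run()

   print(results)
   # [{'out': 'a'}, {'out': 'b'}]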
The following functions can be passed to main() to override or augment the listed behaviors.
executor ::

    executor(tool, job_order_object, runtimeContext, logger)
    (Process, Dict[Text, Any], RuntimeContext) -> Tuple[Dict[Text, Any], Text]
An implementation of the top-level workflow execution loop; it should
synchronously run a process object to completion and return the output object.
versionfunc ::

    versionfunc()
    () -> Text

Return the version string.
logger_handler ::

    logger_handler
    logging.Handler
Handler object for logging.
The following functions can be set in LoadingContext to override or augment the listed behaviors.
fetcher_constructor ::

    fetcher_constructor(cache, session)
    (Dict[unicode, unicode], requests.sessions.Session) -> Fetcher
Construct a Fetcher object with the supplied cache and HTTP session.
resolver ::

    resolver(document_loader, document)
    (Loader, Union[Text, dict[Text, Any]]) -> Text
Resolve a relative document identifier to an absolute one which can be fetched.
The following functions can be set in RuntimeContext to override or augment the listed behaviors.
construct_tool_object ::

    construct_tool_object(toolpath_object, loadingContext)
    (MutableMapping[Text, Any], LoadingContext) -> Process

Hook to construct a Process object (e.g. CommandLineTool) from a document.
select_resources ::

    selectResources(request)
    (Dict[str, int], RuntimeContext) -> Dict[Text, int]
Take a resource request and turn it into a concrete resource assignment.
make_fs_access ::

    make_fs_access(basedir)
    (Text) -> StdFsAccess
Return a file system access object.
In addition, when providing custom subclasses of Process objects, you can override the following methods:
CommandLineTool.make_job_runner ::

    make_job_runner(RuntimeContext)
    (RuntimeContext) -> Type[JobBase]
Create and return a job runner object (this implements concrete execution of a command line tool).
Workflow.make_workflow_step ::

    make_workflow_step(toolpath_object, pos, loadingContext, parentworkflowProv)
    (Dict[Text, Any], int, LoadingContext, Optional[ProvenanceProfile]) -> WorkflowStep
Create and return a workflow step object.