reV pipeline
Execute multiple steps in an analysis pipeline.
The general structure for calling this CLI command is given below (add --help
to print help info to the terminal).
reV pipeline [OPTIONS]
Options
- -c, --config_file <config_file>
Path to the pipeline configuration file. This argument can be left out, but one and only one file with "pipeline" in the name should exist in the directory and contain the config information. Below is a sample template config in JSON format:

```json
{
    "pipeline": [
        {"bespoke": "./config_bespoke.json"},
        {"generation": "./config_generation.json"},
        {"econ": "./config_econ.json"},
        {"collect": "./config_collect.json"},
        {"multi-year": "./config_multi_year.json"},
        {"supply-curve-aggregation": "./config_supply_curve_aggregation.json"},
        {"supply-curve": "./config_supply_curve.json"},
        {"rep-profiles": "./config_rep_profiles.json"},
        {"hybrids": "./config_hybrids.json"},
        {"nrwal": "./config_nrwal.json"},
        {"qa-qc": "./config_qa_qc.json"},
        {"script": "./config_script.json"}
    ],
    "logging": {
        "log_file": null,
        "log_level": "INFO"
    }
}
```
The same config in YAML format:

```yaml
pipeline:
  - bespoke: ./config_bespoke.json
  - generation: ./config_generation.json
  - econ: ./config_econ.json
  - collect: ./config_collect.json
  - multi-year: ./config_multi_year.json
  - supply-curve-aggregation: ./config_supply_curve_aggregation.json
  - supply-curve: ./config_supply_curve.json
  - rep-profiles: ./config_rep_profiles.json
  - hybrids: ./config_hybrids.json
  - nrwal: ./config_nrwal.json
  - qa-qc: ./config_qa_qc.json
  - script: ./config_script.json
logging:
  log_file: null
  log_level: INFO
```
And in TOML format:

```toml
[[pipeline]]
bespoke = "./config_bespoke.json"

[[pipeline]]
generation = "./config_generation.json"

[[pipeline]]
econ = "./config_econ.json"

[[pipeline]]
collect = "./config_collect.json"

[[pipeline]]
multi-year = "./config_multi_year.json"

[[pipeline]]
supply-curve-aggregation = "./config_supply_curve_aggregation.json"

[[pipeline]]
supply-curve = "./config_supply_curve.json"

[[pipeline]]
rep-profiles = "./config_rep_profiles.json"

[[pipeline]]
hybrids = "./config_hybrids.json"

[[pipeline]]
nrwal = "./config_nrwal.json"

[[pipeline]]
qa-qc = "./config_qa_qc.json"

[[pipeline]]
script = "./config_script.json"

[logging]
log_level = "INFO"
```
Parameters
- pipeline : list of dicts
A list of dictionaries, where each dictionary represents one step in the pipeline. Each dictionary should take one of two configurations:
- A single key-value pair, where the key is the name of the CLI command to run and the value is the path to a config file containing the configuration for that command.
- Exactly two key-value pairs, where one key is "command", whose value is the name of the command to execute, and the other key is a _unique_ user-defined name for the pipeline step, whose value is the path to a config file containing the configuration for the command specified by the "command" key. This configuration allows users to include duplicate commands in their pipeline execution.
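For example, the second configuration can be used to run the collect command twice as two distinct pipeline steps. The step names and config file paths below are illustrative, not prescribed by reV:

```json
{
    "pipeline": [
        {"generation": "./config_generation.json"},
        {
            "command": "collect",
            "collect-onshore": "./config_collect_onshore.json"
        },
        {
            "command": "collect",
            "collect-offshore": "./config_collect_offshore.json"
        }
    ]
}
```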
- logging : dict, optional
Dictionary of keyword-argument pairs to pass to init_logger. This initializes logging for the submission portion of the pipeline. Note, however, that each step (command) also records the submission-step log output to a common "project" log file, so this input is only needed if you want a different (lower) level of verbosity than the log_level specified in the config for the pipeline step being executed.
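For example, to make the submission portion of the pipeline less verbose than the step-level configs, the logging input might be set as follows (values illustrative):

```json
"logging": {
    "log_file": null,
    "log_level": "WARNING"
}
```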
- --cancel
Flag to cancel all jobs associated with a given pipeline.
- --monitor
Flag to monitor pipeline jobs continuously. Default is not to monitor (kick off jobs and exit).
- -r, --recursive
Flag to recursively submit pipelines, starting from the current directory and checking every sub-directory therein. The -c option is completely ignored if you use this option. Instead, the code checks every sub-directory for exactly one file with the word "pipeline" in its name. If found, that file is assumed to be the pipeline config and is used to kick off the pipeline. Otherwise, the directory is skipped.
- --background
Flag to monitor pipeline jobs continuously in the background. Note that the stdout/stderr will not be captured, but you can set a pipeline ‘log_file’ to capture logs.
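The sub-directory scan performed by the --recursive option can be sketched in plain shell. This is an illustrative re-implementation of the selection rule described above, not the actual reV code, and for brevity it checks a single directory level rather than recursing:

```shell
# Set up hypothetical run directories to scan.
mkdir -p demo/run_a demo/run_b demo/run_c
touch demo/run_a/config_pipeline.json                # one match   -> submitted
touch demo/run_b/notes.txt                           # no match    -> skipped
touch demo/run_c/config_pipeline.json \
      demo/run_c/old_pipeline.json                   # two matches -> skipped

# A directory qualifies only if it contains exactly one file
# with "pipeline" in its name.
for d in demo/*/; do
    n=$(find "$d" -maxdepth 1 -name '*pipeline*' | wc -l)
    if [ "$n" -eq 1 ]; then
        echo "submit: $d"
    else
        echo "skip: $d"
    fi
done
# prints:
#   submit: demo/run_a/
#   skip: demo/run_b/
#   skip: demo/run_c/
```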