CLI Reference

torc

torc commands

torc [OPTIONS] COMMAND [ARGS]...

Options

-c, --console-level <console_level>

Console log level.

-f, --file-level <file_level>

File log level. Set to ‘trace’ for increased verbosity.

-k, --workflow-key <workflow_key>

Workflow key, required for many commands. User will be prompted if it is missing unless --no-prompts is set.

-n, --no-prompts

Disable all user prompts.

Default:

False

-F, --output-format <output_format>

Output format for get/list commands. Not all commands support all formats.

Options:

text | csv | json

--timings <timings>

Enable tracking of function timings.

Options:

true | false

-U, --user <user>

Username

Default:

'runner'

-u, --database-url <database_url>

Database URL. Ex: http://localhost:8529/_db/workflows/torc-service

--version

Show the version and exit.

Environment variables

TORC_WORKFLOW_KEY

Provide a default for -k

TORC_DATABASE_URL

Provide a default for -u
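
For example (the workflow key and URL are placeholders), global options go before the subcommand:
$ torc -u http://localhost:8529/_db/workflows/torc-service -k 91388876 -F json jobs list
Equivalently, set the environment variable once and omit -k:
$ export TORC_WORKFLOW_KEY=91388876
$ torc -F json jobs list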

collections

Collections commands

torc collections [OPTIONS] COMMAND [ARGS]...

join

Perform a join of collections from a pre-defined configuration. Refer to the show-join-configurations command for details.

Examples:
1. Show jobs and results in a table.
$ torc collections join job-results
2. Show jobs and results in JSON format.
$ torc -F json collections join job-results
3. Show only results with a return code of 0.
$ torc -F json collections join job-results -f return_code=0
torc collections join [OPTIONS] {compute-node-executed-jobs|
                      compute-node-utilization|job-blocks|job-needs-file|
                      job-produces-file|job-requirements|job-results|
                      job-schedulers|job-process-utilization|
                      job-consumes-data|job-stores-data}

Options

-f, --filters <filters>

Filter the values according to each key=value pair on the primary collection.

-l, --limit <limit>

Limit the output to this number of jobs.

-s, --skip <skip>

Skip this number of jobs.

Arguments

NAME

Required argument

join-by-edge

Join a collection with one or more other collections connected by an edge.

Examples:
1. Show jobs and results in a table.
$ torc collections join-by-edge jobs returned
2. Show jobs and results in JSON format.
$ torc -F json collections join-by-edge jobs returned
torc collections join-by-edge [OPTIONS] COLLECTION EDGE

Options

-f, --filters <filters>

Filter the values according to each key=value pair on the primary collection.

--outbound, --inbound

Inbound or outbound edge.

Default:

True

-l, --limit <limit>

Limit the output to this number of jobs.

-s, --skip <skip>

Skip this number of jobs.

-x, --exclude-from <exclude_from>

Exclude this base column name on the from side. Accepts multiple values.

-y, --exclude-to <exclude_to>

Exclude this base column name on the to side. Accepts multiple values.

Arguments

COLLECTION

Required argument

EDGE

Required argument

list

List workflow collections.

torc collections list [OPTIONS]

Options

-r, --raw

List raw names

Default:

False

show-join-configurations

Show the pre-defined configurations for use in the join command.

torc collections show-join-configurations [OPTIONS]

compute-nodes

Compute node commands

torc compute-nodes [OPTIONS] COMMAND [ARGS]...

list

List all compute nodes that participated in a workflow.

torc compute-nodes list [OPTIONS]

list-resource-stats

Show resource statistics from a workflow run.

Examples:
1. List resource stats from all compute nodes in tables by resource type.
$ torc compute-nodes list-resource-stats
2. List resource stats from all compute nodes in JSON format, as one array keyed by ‘stats’.
$ torc -F json compute-nodes list-resource-stats
torc compute-nodes list-resource-stats [OPTIONS]

Options

-x, --exclude-process

Exclude job process stats (show compute node stats only).

Default:

False

config

Config commands

torc config [OPTIONS] COMMAND [ARGS]...

create

Create a local torc runtime configuration file.

torc config create [OPTIONS]

Options

-F, --output-format <output_format>

Output format for get/list commands. Not all commands support all formats.

Default:

'text'

Options:

text | json

-d, --directory <directory>

Directory in which to store the config file.

Default:

PosixPath('/home/runner')

-f, --filter-workflows-by-user, --no-filter-workflows-by-user

Whether to filter workflows by the current user

Default:

True

-k, --workflow-key <workflow_key>

Workflow key. User will be prompted if it is missing unless --no-prompts is set.

--timings, --no-timings

Enable tracking of function timings.

Default:

False

-u, --database-url <database_url>

Database URL. Note the database name in this example: http://localhost:8529/_db/database_name/torc-service

--console-level <console_level>

Console log level.

Default:

'info'

--file-level <file_level>

File log level. Set to ‘trace’ for increased verbosity.

Default:

'debug'
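
For example, a sketch that writes a config file pointing at a placeholder database URL, disables filtering workflows by user, and raises file-log verbosity:
$ torc config create -u http://localhost:8529/_db/workflows/torc-service --no-filter-workflows-by-user --file-level trace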

events

Event commands

torc events [OPTIONS] COMMAND [ARGS]...

get-latest-event-timestamp

Return the timestamp of the latest event.

torc events get-latest-event-timestamp [OPTIONS]

list

List all events in a workflow.

Examples:
1. List all events.
$ torc -k 91388876 events list
2. List only events with a category of job.
$ torc -k 91388876 events list -f category=job
torc events list [OPTIONS]

Options

-A, --after-timestamp-ms <after_timestamp_ms>

Only return events that occurred after this timestamp, expressed as milliseconds since the epoch in UTC.

-a, --after-datetime <after_datetime>

Only return events that occurred after this local datetime (format = YYYY-MM-DD HH:MM:SS.ddd).

-f, --filters <filters>

Filter the values according to each key=value pair. Only ‘category’ is supported.

-l, --limit <limit>

Limit the output to this number of jobs.

-s, --skip <skip>

Skip this number of jobs.

monitor

Monitor events.

torc events monitor [OPTIONS]

Options

-c, --category <category>

Filter events by this category.

-d, --duration <duration>

Duration in seconds to monitor. Default is forever.

-p, --poll-interval <poll_interval>

Poll interval in seconds. Please be mindful of impacts to the database.
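
For example, to monitor job-category events for ten minutes, polling every 30 seconds (illustrative values):
$ torc events monitor -c job -d 600 -p 30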

export

Export commands

torc export [OPTIONS] COMMAND [ARGS]...

sqlite

Export workflows stored in the database to a SQLite file. By default, export all workflows. Limit the output tables by passing specific workflow keys as positional arguments.

torc export sqlite [OPTIONS] [WORKFLOW_KEYS]...

Options

-F, --filename <filename>

SQLite filename

Default:

'workflow.sqlite'

-f, --force

Overwrite file if it exists.

Default:

False

Arguments

WORKFLOW_KEYS

Optional argument(s)
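
For example, to export two workflows (placeholder keys) to a custom filename, overwriting any existing file:
$ torc export sqlite -F my_workflows.sqlite -f 91388876 91388877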

files

File commands

torc files [OPTIONS] COMMAND [ARGS]...

add

Add a file to the workflow.

torc files add [OPTIONS]

Options

-n, --name <name>

File name

-p, --path <path>

Required Path of file
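
For example (name and path are illustrative):
$ torc files add -n file1 -p /projects/my-project/data/file1.csv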

delete

Delete one or more files by key.

torc files delete [OPTIONS] [FILE_KEYS]...

Arguments

FILE_KEYS

Optional argument(s)

delete-all

Delete all files in the workflow.

torc files delete-all [OPTIONS]

list

List all files in a workflow.

Examples:
1. List all files in a table.
$ torc files list
2. List only files with name=file1
$ torc files list -f name=file1
3. List all files in JSON format.
$ torc -F json files list
torc files list [OPTIONS]

Options

-f, --filters <filters>

Filter the values according to each key=value pair.

--sort-by <sort_by>

Sort results by this column.

--reverse-sort

Reverse the sort order if --sort-by is set.

Default:

False

graphs

Graph commands

torc graphs [OPTIONS] COMMAND [ARGS]...

plot

Make a plot from an exported graph.

Example:
$ torc graphs plot job_job_dependencies
torc graphs plot [OPTIONS] [[job_job_dependencies|job_file_dependencies|
                 job_user_data_dependencies]]...

Options

-k, --keep-dot-file

Keep the intermediate DOT file

Default:

False

-o, --output <output>

Output directory

Default:

'output'

Arguments

NAMES

Optional argument(s)

plot-xgmml

Make a plot from an XGMML graph file exported with arangoexport.

Example:
$ torc graphs plot-xgmml export/job-blocks.xgmml
torc graphs plot-xgmml [OPTIONS] GRAPH_FILE

Options

-k, --keep-dot-file

Keep the intermediate DOT file

Default:

False

Arguments

GRAPH_FILE

Required argument

hpc

HPC commands

torc hpc [OPTIONS] COMMAND [ARGS]...

slurm

Slurm commands

torc hpc slurm [OPTIONS] COMMAND [ARGS]...

add-config

Add a Slurm config to the database.

torc hpc slurm add-config [OPTIONS]

Options

-N, --name <name>

Required Name of config

-a, --account <account>

Required HPC account

-g, --gres <gres>

Request nodes that have at least this number of GPUs. Ex: ‘gpu:2’

-m, --mem <mem>

Request nodes that have at least this amount of memory. Ex: ‘180G’

-n, --nodes <nodes>

Number of nodes to use for each job

Default:

1

-p, --partition <partition>

HPC partition. Default is determined by the scheduler

-q, --qos <qos>

Controls priority of the jobs.

Default:

'normal'

-t, --tmp <tmp>

Request nodes that have at least this amount of scratch storage space.

-w, --walltime <walltime>

Slurm job walltime.

Default:

'04:00:00'

-e, --extra <extra>

Add extra Slurm parameters, for example --extra='--reservation=my-reservation'.
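
For example, a sketch of a config for short debug runs; the name, account, partition, and walltime are illustrative:
$ torc hpc slurm add-config -N debug -a my-account -p debug -w 01:00:00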

list-configs

Show the current Slurm configs in the database.

torc hpc slurm list-configs [OPTIONS]

modify-config

Modify a Slurm config in the database.

torc hpc slurm modify-config [OPTIONS] SLURM_CONFIG_KEY

Options

-N, --name <name>

Name of config

-a, --account <account>

HPC account

-g, --gres <gres>

Request nodes that have at least this number of GPUs. Ex: ‘gpu:2’

-m, --mem <mem>

Request nodes that have at least this amount of memory. Ex: ‘180G’

-n, --nodes <nodes>

Number of nodes to use for each job

-p, --partition <partition>

HPC partition. Default is determined by the scheduler

-q, --qos <qos>

Controls priority of the jobs.

-t, --tmp <tmp>

Request nodes that have at least this amount of scratch storage space.

-w, --walltime <walltime>

Slurm job walltime.

Arguments

SLURM_CONFIG_KEY

Required argument

recommend-nodes

Recommend compute nodes to schedule.

torc hpc slurm recommend-nodes [OPTIONS]

Options

-c, --num-cpus <num_cpus>

Number of CPUs per node

Default:

104

-m, --memory-gb <memory_gb>

Amount of memory in GB per node

Default:

240

-s, --scheduler-config-key <scheduler_config_key>

Limit output to jobs assigned this scheduler config key.

run-jobs

Run workflow jobs on a Slurm compute node.

torc hpc slurm run-jobs [OPTIONS]

Options

-c, --cpu-affinity-cpus-per-job <cpu_affinity_cpus_per_job>

Enable CPU affinity for this number of CPUs per job.

-m, --max-parallel-jobs <max_parallel_jobs>

Maximum number of parallel jobs. Default is to use resource availability.

-o, --output <output>

Output directory

Default:

'output'

-p, --poll-interval <poll_interval>

Poll interval for job completions

Default:

60

--is-subtask

Set this flag if this is a subtask and multiple workers are running on one node.

Default:

False

-w, --wait-for-healthy-database-minutes <wait_for_healthy_database_minutes>

Wait this number of minutes if the database is offline.

Default:

0

schedule-nodes

Schedule nodes with Slurm to run jobs.

torc hpc slurm schedule-nodes [OPTIONS]

Options

-c, --cpu-affinity-cpus-per-job <cpu_affinity_cpus_per_job>

Enable CPU affinity for this number of CPUs per job.

-j, --job-prefix <job_prefix>

Prefix for HPC job names

Default:

'worker'

-k, --keep-submission-scripts

Keep Slurm submission scripts on the filesystem.

Default:

False

-m, --max-parallel-jobs <max_parallel_jobs>

Maximum number of parallel jobs. Default is to use resource availability.

-n, --num-hpc-jobs <num_hpc_jobs>

Required Number of HPC jobs to schedule

-o, --output <output>

Output directory for compute nodes

Default:

'output'

-p, --poll-interval <poll_interval>

Poll interval for job completions

Default:

60

-s, --scheduler-config-key <scheduler_config_key>

SlurmScheduler config key. Auto-selected if possible.

-S, --start-one-worker-per-node

Start a torc worker on each compute node. The default behavior starts a worker on the first compute node but no others. That defers control of the nodes to the user job. Setting this flag means that every compute node in the allocation will run jobs concurrently. This flag has no effect if each Slurm allocation has one compute node (default).
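
For example, to schedule two Slurm allocations with a placeholder scheduler config key:
$ torc hpc slurm schedule-nodes -n 2 -s 92012942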

jobs

Job commands

torc jobs [OPTIONS] COMMAND [ARGS]...

add

Add a job to the workflow.

torc jobs add [OPTIONS]

Options

--cancel-on-blocking-job-failure, --no-cancel-on-blocking-job-failure

Cancel the job if a blocking job fails.

Default:

True

-c, --command <command>

Required Command to run

-k, --key <key>

Job key. Default is to auto-generate

-n, --name <name>

Job name
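
For example (the job name and command are illustrative):
$ torc jobs add -n preprocess -c "python preprocess.py"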

assign-resource-requirements

Assign resource requirements to one or more jobs.

torc jobs assign-resource-requirements [OPTIONS] RESOURCE_REQUIREMENTS_KEY
                                       [JOB_KEYS]...

Arguments

RESOURCE_REQUIREMENTS_KEY

Required argument

JOB_KEYS

Optional argument(s)
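
For example, with placeholder keys, to assign one requirements definition to two jobs:
$ torc jobs assign-resource-requirements 92012950 96117190 96117191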

delete

Delete one or more jobs by key.

torc jobs delete [OPTIONS] [JOB_KEYS]...

Arguments

JOB_KEYS

Optional argument(s)

delete-all

Delete all jobs in the workflow.

torc jobs delete-all [OPTIONS]

disable

Set the status of one or more jobs to disabled.

torc jobs disable [OPTIONS] [JOB_KEYS]...

Arguments

JOB_KEYS

Optional argument(s)

list

List all jobs in a workflow.

Examples:
1. List all jobs in a table.
$ torc jobs list
2. List only jobs with run_id=1 and status=done.
$ torc jobs list -f run_id=1 -f status=done
3. List all jobs in JSON format.
$ torc -F json jobs list
torc jobs list [OPTIONS]

Options

-f, --filters <filters>

Filter the values according to each key=value pair.

-x, --exclude <exclude>

Exclude this column name. Accepts multiple values.

-l, --limit <limit>

Limit the output to this number of jobs.

-s, --skip <skip>

Skip this number of jobs.

--sort-by <sort_by>

Sort results by this column.

--reverse-sort

Reverse the sort order if --sort-by is set.

Default:

False

list-process-stats

List per-job process resource statistics from a workflow run.

Examples:
1. List stats for all jobs in a table.
$ torc jobs list-process-stats
2. List all stats in JSON format.
$ torc -F json jobs list-process-stats
torc jobs list-process-stats [OPTIONS]

Options

-l, --limit <limit>

Limit the output to this number of jobs.

-s, --skip <skip>

Skip this number of jobs.

list-user-data

List all user data stored or consumed for a job.

torc jobs list-user-data [OPTIONS] JOB_KEY

Options

--stores, --consumes

List data that is either stored by the job or consumed by the job.

Default:

True

Arguments

JOB_KEY

Required argument

reset-status

Reset the status of one or more jobs.

torc jobs reset-status [OPTIONS] [JOB_KEYS]...

Arguments

JOB_KEYS

Optional argument(s)

run

Run workflow jobs on the current system.

torc jobs run [OPTIONS]

Options

-c, --cpu-affinity-cpus-per-job <cpu_affinity_cpus_per_job>

Enable CPU affinity for this number of CPUs per job.

-m, --max-parallel-jobs <max_parallel_jobs>

Maximum number of parallel jobs. Default is to use resource availability.

-o, --output <output>

Output directory

Default:

'output'

-p, --poll-interval <poll_interval>

Poll interval for job completions

Default:

10

-s, --scheduler-config-id <scheduler_config_id>

Only run jobs with this scheduler config id.

-t, --time-limit <time_limit>

Time limit in ISO 8601 duration format (like ‘P0DT24H’). Defaults to no limit.

-w, --wait-for-healthy-database-minutes <wait_for_healthy_database_minutes>

Wait this number of minutes if the database is offline. Applies only to the initial connection.

Default:

0
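
For example, to run at most four jobs in parallel with a 24-hour time limit (illustrative values):
$ torc jobs run -m 4 -t P0DT24H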

run-function

Run a function on one set of inputs stored in the workflow database. This command is intended to be called only by the torc worker application as part of a mapped-function workflow.

torc jobs run-function [OPTIONS]

run-postprocess

Run a postprocess function on the results of a mapped-function workflow.

torc jobs run-postprocess [OPTIONS]

reports

Report commands

torc reports [OPTIONS] COMMAND [ARGS]...

results

Report information about job results and log files.

torc reports results [OPTIONS] [JOB_KEYS]...

Options

-o, --output <output>

Output directory

Default:

'output'

-r, --run-id <run_id>

Enter one or more run IDs to limit output to specific runs. Default is to show all.

Arguments

JOB_KEYS

Optional argument(s)

resource-requirements

Job resource requirements commands

torc resource-requirements [OPTIONS] COMMAND [ARGS]...

add

Add a resource requirements definition to the workflow.

torc resource-requirements add [OPTIONS]

Options

-n, --name <name>

Resource requirements name

-c, --num-cpus <num_cpus>

Number of CPUs required by a job

Default:

1

-m, --memory <memory>

Amount of memory required by a job, such as ‘20g’

Default:

'1m'

-r, --runtime <runtime>

ISO 8601 encoding for job runtime

Default:

'P0DT1M'

-N, --num-nodes <num_nodes>

Number of compute nodes required by a job

Default:

1

-a, --apply-to-all-jobs

Apply these requirements to all jobs in the workflow.

Default:

False
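
For example, a sketch that defines a medium tier (illustrative name and sizes) and applies it to all jobs:
$ torc resource-requirements add -n medium -c 4 -m 20g -r P0DT4H -a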

delete

Delete one or more resource requirements by key.

torc resource-requirements delete [OPTIONS] [RESOURCE_REQUIREMENT_KEYS]...

Arguments

RESOURCE_REQUIREMENT_KEYS

Optional argument(s)

delete-all

Delete all resource_requirements in the workflow.

torc resource-requirements delete-all [OPTIONS]

list

List all resource_requirements in a workflow.

Examples:
1. List all resource_requirements in a table.
$ torc resource-requirements list
2. List only resource_requirements with num_cpus=4.
$ torc resource-requirements list -f num_cpus=4
3. List all resource_requirements in JSON format.
$ torc -F json resource-requirements list
torc resource-requirements list [OPTIONS]

Options

-f, --filters <filters>

Filter the values according to each key=value pair.

-l, --limit <limit>

Limit the output to this number of resource_requirements.

-s, --skip <skip>

Skip this number of resource_requirements.

--sort-by <sort_by>

Sort results by this column.

--reverse-sort

Reverse the sort order if --sort-by is set.

Default:

False

modify

Modify a resource requirements definition.

torc resource-requirements modify [OPTIONS] RESOURCE_REQUIREMENTS_KEY

Options

-n, --name <name>

Resource requirements name

-c, --num-cpus <num_cpus>

Number of CPUs required by a job

-m, --memory <memory>

Amount of memory required by a job, such as ‘20g’

-r, --runtime <runtime>

ISO 8601 encoding for job runtime

-N, --num-nodes <num_nodes>

Number of compute nodes required by a job

Arguments

RESOURCE_REQUIREMENTS_KEY

Required argument

results

Result commands

torc results [OPTIONS] COMMAND [ARGS]...

delete

Delete all results for one or more workflows.

torc results delete [OPTIONS]

list

List all results in a workflow.

Examples:
1. List all results in a table.
$ torc results list
2. List only results with a return_code of 1.
$ torc results list -f return_code=1
3. List the latest result for each job.
$ torc results list --latest-only
4. List all results in JSON format.
$ torc -F json results list
torc results list [OPTIONS]

Options

-f, --filters <filters>

Filter the values according to each key=value pair.

-L, --latest-only

Limit output to the latest result for each job.

Default:

False

-l, --limit <limit>

Limit the output to this number of jobs.

-s, --skip <skip>

Skip this number of jobs.

-x, --exclude-job-names

Exclude job names from the output. Set this flag if you need to deserialize the objects into Result classes or to speed up the query.

Default:

False

--sort-by <sort_by>

Sort results by this column.

--reverse-sort

Reverse the sort order if --sort-by is set.

Default:

False

stats

Stats commands

torc stats [OPTIONS] COMMAND [ARGS]...

concatenate-process

Concatenate job process stats from all compute nodes into one file. output_dir must be the directory that contains the existing .sqlite files.

torc stats concatenate-process [OPTIONS] OUTPUT_DIR

Arguments

OUTPUT_DIR

Required argument
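
For example, if the compute nodes wrote their stats files to ./output:
$ torc stats concatenate-process output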

tui

Start a terminal-based management console.

torc tui [OPTIONS]

user-data

User data commands

torc user-data [OPTIONS] COMMAND [ARGS]...

add

Add user data to the workflow. It can be a placeholder or it can contain data.

Examples:
1. Add a placeholder for data that will be stored by one job and consumed by other jobs.
$ torc user-data add -n output_data_1 -s 96117190 -c 96117191 -c 96117192
2. Add user data containing a JSON5-encoded object.
$ torc user-data add -n output_data_1 -d "{key1: 'val1', key2: 'val2'}"
torc user-data add [OPTIONS]

Options

-d, --data <data>

Object encoded in a JSON5 string.

--ephemeral, --not-ephemeral

Whether the data is ephemeral and should be cleared on every run of the workflow.

Default:

False

-n, --name <name>

User data name

-s, --stores <stores>

Key of job that will store the data.

-c, --consumes <consumes>

Key of job or jobs that will consume the data. Accepts multiple.

delete

Delete one or more user_data objects by key.

torc user-data delete [OPTIONS] [USER_DATA_KEYS]...

Arguments

USER_DATA_KEYS

Optional argument(s)

delete-all

Delete all user_data objects in the workflow.

torc user-data delete-all [OPTIONS]

get

Get one user_data object by key.

torc user-data get [OPTIONS] KEY

Arguments

KEY

Required argument

list

List all user data in a workflow.

torc user-data list [OPTIONS]

Options

-f, --filters <filters>

Filter the values according to each key=value pair.

-l, --limit <limit>

Limit the output to this number of items.

-s, --skip <skip>

Skip this number of items.
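
For example, to list only entries with a placeholder name:
$ torc user-data list -f name=output_data_1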

modify

Modify user data.

torc user-data modify [OPTIONS] USER_DATA_KEY

Options

-n, --name <name>

User data name

-d, --data <data>

Object encoded in a JSON5 string

--ephemeral, --not-ephemeral

Whether the data is ephemeral and should be cleared on every run of the workflow.

Arguments

USER_DATA_KEY

Required argument

workflows

Workflow commands

torc workflows [OPTIONS] COMMAND [ARGS]...

add-jobs-from-commands-file

Add jobs to a workflow from a text file containing job CLI commands.

torc workflows add-jobs-from-commands-file [OPTIONS] FILENAME

Options

-c, --cpus-per-job <cpus_per_job>

Number of CPUs required for each job.

Default:

1

-m, --memory-per-job <memory_per_job>

Amount of memory required for each job. Use ‘100m’ for 100 MB, ‘1g’ for 1 GB, etc.

Default:

'1m'

-r, --runtime-per-job <runtime_per_job>

Runtime required for each job in ISO8601 format. Example: P0DT1H is one hour.

Default:

'P0DT1m'

Arguments

FILENAME

Required argument
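
For example, given a file commands.txt with one shell command per line (filename and requirements are illustrative):
$ torc workflows add-jobs-from-commands-file -c 2 -m 2g -r P0DT8H commands.txt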

cancel

Cancel one or more workflows.

torc workflows cancel [OPTIONS] [WORKFLOW_KEYS]...

Arguments

WORKFLOW_KEYS

Optional argument(s)

create

Create a new workflow.

torc workflows create [OPTIONS]

Options

-d, --description <description>

Workflow description

-k, --key <key>

Workflow key. Default is to auto-generate

-n, --name <name>

Workflow name
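
For example (name and description are illustrative):
$ torc workflows create -n my-workflow -d "My example workflow"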

create-from-commands-file

Create a workflow from a text file containing job CLI commands.

torc workflows create-from-commands-file [OPTIONS] FILENAME

Options

-c, --cpus-per-job <cpus_per_job>

Number of CPUs required for each job.

Default:

1

-d, --description <description>

Workflow description

-k, --key <key>

Workflow key. Default is to auto-generate

-m, --memory-per-job <memory_per_job>

Amount of memory required for each job. Use ‘100m’ for 100 MB, ‘1g’ for 1 GB, etc.

Default:

'1m'

-n, --name <name>

Workflow name

-r, --runtime-per-job <runtime_per_job>

Runtime required for each job in ISO8601 format. Example: P0DT1H is one hour.

Default:

'P0DT1m'

Arguments

FILENAME

Required argument
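
For example, to create a named workflow from commands.txt with illustrative per-job requirements:
$ torc workflows create-from-commands-file -n my-workflow -c 4 -m 10g -r P0DT12H commands.txt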

create-from-json-file

Create a workflow from a JSON/JSON5 file.

torc workflows create-from-json-file [OPTIONS] FILENAME

Arguments

FILENAME

Required argument

delete

Delete one or more workflows by key.

torc workflows delete [OPTIONS] [WORKFLOW_KEYS]...

Arguments

WORKFLOW_KEYS

Optional argument(s)

delete-all

Delete all workflows stored by the user.

torc workflows delete-all [OPTIONS]

example

Show the example workflow.

torc workflows example [OPTIONS]

list

List all workflows stored by the user.

Examples:
1. List all workflows for the current user in a table.
$ torc workflows list
2. List all workflows in JSON format.
$ torc -F json workflows list
3. List only archived workflows.
$ torc workflows list --only-archived
4. List all workflows for all users, including archived workflows.
$ torc workflows list --all-users --include-archived
torc workflows list [OPTIONS]

Options

-A, --only-archived

List only workflows that have been archived.

Default:

False

-i, --include-archived

Include archived workflows in the list.

Default:

False

-a, --all-users

List workflows for all users. Default is only for the current user.

-f, --filters <filters>

Filter the values according to each key=value pair.

--sort-by <sort_by>

Sort results by this column.

--reverse-sort

Reverse the sort order if --sort-by is set.

Default:

False

list-scheduler-configs

List the scheduler configs in the database.

torc workflows list-scheduler-configs [OPTIONS]

modify

Modify the workflow parameters.

torc workflows modify [OPTIONS] WORKFLOW_KEY

Options

-a, --archive <archive>

Set to ‘true’ to archive the workflow or ‘false’ to un-archive it.

-d, --description <description>

Workflow description

-n, --name <name>

Workflow name

Arguments

WORKFLOW_KEY

Required argument
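
For example, to archive a workflow (placeholder key):
$ torc workflows modify -a true 91388876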

process-auto-tune-resource-requirements-results

Process the results of the first round of auto-tuning resource requirements.

torc workflows process-auto-tune-resource-requirements-results 
    [OPTIONS]

reset-status

Reset the status of the workflow(s) and all jobs.

torc workflows reset-status [OPTIONS] [WORKFLOW_KEYS]...

Options

-f, --failed-only

Only reset the status of failed jobs.

Default:

False

-r, --restart

Send the ‘workflows restart’ command after resetting status.

Default:

False

Arguments

WORKFLOW_KEYS

Optional argument(s)
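
For example, to reset only failed jobs and restart immediately (placeholder key):
$ torc workflows reset-status -f -r 91388876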

restart

Restart the workflow defined in the database specified by the URL. Resets all jobs with a status of canceled, submitted, submitted_pending, and terminated. Does not affect jobs with a status of done unless an input file has changed.

torc workflows restart [OPTIONS]

Options

-d, --dry-run

Perform a dry run. Show status changes but do not change any database values.

Default:

False

-i, --ignore-missing-data

Ignore checks for missing files and user data documents.

Default:

False

--only-uninitialized

Only initialize jobs with a status of uninitialized.

Default:

False
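
For example, to preview the status changes without modifying the database:
$ torc workflows restart -d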

set-compute-node-parameters

Set parameters that control how the torc worker app behaves on compute nodes. Run ‘torc workflows show-config’ to see the current values.

torc workflows set-compute-node-parameters [OPTIONS]

Options

-e, --expiration-buffer <expiration_buffer>

Set the number of seconds before the expiration time at which torc will terminate jobs.

-h, --wait-for-healthy-db <wait_for_healthy_db>

Set the number of minutes that torc will tolerate an offline database.

-i, --ignore-workflow-completion <ignore_workflow_completion>

Set to ‘true’ to cause torc to ignore workflow completions and hold onto compute node allocations indefinitely. Useful for debugging failed jobs. Set to ‘false’ to revert to the default behavior.

-w, --wait-for-new-jobs <wait_for_new_jobs>

Set the number of seconds that torc will wait for new jobs before exiting. Does not apply if the workflow is complete.
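
For example, a sketch that holds compute node allocations after workflow completion (useful for debugging) and tolerates a 30-minute database outage:
$ torc workflows set-compute-node-parameters --ignore-workflow-completion true --wait-for-healthy-db 30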

show

Show the workflow.

torc workflows show [OPTIONS]

Options

--sanitize, --no-sanitize

Remove all database fields from workflow objects.

Default:

True

show-config

Show the workflow config.

torc workflows show-config [OPTIONS]

show-status

Show the workflow status.

torc workflows show-status [OPTIONS]

start

Start the workflow defined in the database specified by the URL.

torc workflows start [OPTIONS]

Options

-a, --auto-tune-resource-requirements

Set up the workflow such that only one job from each resource group runs in the first round. Upon completion, torc will examine the actual resource utilization of those jobs and apply the results to the resource requirements definitions. When the jobs finish, run ‘torc workflows process-auto-tune-resource-requirements-results’ to update the requirements.

Default:

False

-i, --ignore-missing-data

Ignore checks for missing files and user data documents.

Default:

False
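
For example, to start the workflow with auto-tuning enabled:
$ torc workflows start -a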

template

Show the workflow template.

torc workflows template [OPTIONS]