reV nrwal

Execute the nrwal step from a config file.

reV NRWAL analysis runs reV data through the NRWAL compute library. Everything in this module operates at the spatiotemporal resolution of the reV generation output file (usually the wind or solar resource resolution, but it could also be the supply curve resolution if run after the representative-profiles step).

The general structure for calling this CLI command is given below (add --help to print help info to the terminal).

reV nrwal [OPTIONS]
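For example, assuming a config file named config_nrwal.json in the current directory (the filename is illustrative), a typical invocation would be:

reV nrwal -c ./config_nrwal.json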

Options

-c, --config_file <config_file>

Required. Path to the nrwal configuration file. Below is a sample template config:

{
    "execution_control": {
        "option": "local",
        "allocation": "[REQUIRED IF ON HPC]",
        "walltime": "[REQUIRED IF ON HPC]",
        "qos": "normal",
        "memory": null,
        "queue": null,
        "feature": null,
        "conda_env": null,
        "module": null,
        "sh_script": null
    },
    "log_directory": "./logs",
    "log_level": "INFO",
    "gen_fpath": "[REQUIRED]",
    "site_data": "[REQUIRED]",
    "sam_files": "[REQUIRED]",
    "nrwal_configs": "[REQUIRED]",
    "output_request": "[REQUIRED]",
    "save_raw": true,
    "meta_gid_col": "gid",
    "site_meta_cols": null,
    "csv_output": false
}

Parameters

execution_control : dict

Dictionary containing execution control arguments. Allowed arguments are:

option:

({‘local’, ‘kestrel’, ‘eagle’, ‘awspc’, ‘slurm’, ‘peregrine’}) Hardware run option. Determines the type of job scheduler to use as well as the base AU cost. The “slurm” option is a catchall for HPC systems that use the SLURM scheduler and should only be used if the desired hardware is not listed above. If “local”, no other HPC-specific keys are required in execution_control (they are ignored if provided).

allocation:

(str) HPC project (allocation) handle.

walltime:

(int) Node walltime request in hours.

qos:

(str, optional) Quality-of-service specifier. For Kestrel users: This should be one of {‘standby’, ‘normal’, ‘high’}. Note that ‘high’ priority doubles the AU cost. By default, "normal".

memory:

(int, optional) Node memory max limit (in GB). By default, None, which uses the scheduler’s default memory limit. For Kestrel users: If you would like to use the full node memory, leave this argument unspecified (or set to None) if you are running on standard nodes. However, if you would like to use the bigmem nodes, you must specify the full upper limit of memory you would like for your job, otherwise you will be limited to the standard node memory size (250GB).

queue:

(str, optional; PBS ONLY) HPC queue to submit job to. Examples include: ‘debug’, ‘short’, ‘batch’, ‘batch-h’, ‘long’, etc. By default, None, which uses “test_queue”.

feature:

(str, optional) Additional flags for SLURM job (e.g. “-p debug”). By default, None, which does not specify any additional flags.

conda_env:

(str, optional) Name of conda environment to activate. By default, None, which does not load any environments.

module:

(str, optional) Module to load. By default, None, which does not load any modules.

sh_script:

(str, optional) Extra shell script to run before command call. By default, None, which does not run any scripts.

Only the option key is required for local execution. For execution on the HPC, the allocation and walltime keys are also required. All other options are populated with default values, as seen above.
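For illustration, a minimal execution_control block for an HPC submission might look like the following (the allocation name and walltime value are placeholders):

"execution_control": {
    "option": "kestrel",
    "allocation": "your_allocation_name",
    "walltime": 4
}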

log_directory : str

Path to directory where logs should be written. Path can be relative and does not have to exist on disk (it will be created if missing). By default, "./logs".

log_level : {“DEBUG”, “INFO”, “WARNING”, “ERROR”}

String representation of desired logger verbosity. Suitable options are DEBUG (most verbose), INFO (moderately verbose), WARNING (only log warnings and errors), and ERROR (only log errors). By default, "INFO".

gen_fpath : str

Full filepath to HDF5 file with reV generation or rep_profiles output. Anything in the output_request input is added to and/or manipulated within this file.

Note

If executing reV from the command line, this input can also be "PIPELINE" to parse the output of one of the previous pipeline steps and use it as input to this call. However, note that duplicate executions of reV commands prior to this one within the pipeline may invalidate this parsing, meaning the gen_fpath input will have to be specified manually.
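For example, within a pipeline, the config entry can simply be:

"gen_fpath": "PIPELINE"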

site_data : str | pd.DataFrame

Site-specific input data for the NRWAL calculation. If this input is a string, it should be a path that points to a CSV file. Otherwise, this input should be a DataFrame with pre-extracted site data. Rows in this table are matched to sites in the gen_fpath meta data by joining this table’s gid column to the meta data column specified by meta_gid_col. A config column must also be provided that corresponds to the nrwal_configs input. Only sites with a gid in this file’s gid column will be run through NRWAL.
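For illustration, a minimal site_data CSV might look like the following (the gid values and the extra depth column are placeholders; depth is only meaningful if your NRWAL equations reference it):

gid,config,depth
12,offshore,24.5
13,offshore,31.0
14,onshore,0.0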

sam_files : dict | str

A dictionary mapping SAM input configuration ID(s) to SAM configuration(s). Keys are the SAM config ID(s) which correspond to the keys in the nrwal_configs input. Values for each key are either a path to a corresponding SAM config file or a full dictionary of SAM config inputs. For example:

sam_files = {
    "default": "/path/to/default/sam.json",
    "onshore": "/path/to/onshore/sam_config.yaml",
    "offshore": {
        "sam_key_1": "sam_value_1",
        "sam_key_2": "sam_value_2",
        ...
    },
    ...
}

This input can also be a string pointing to a single SAM config file. In this case, the config column of the CSV points input should be set to None or left out completely. See the documentation for the reV SAM class (e.g. reV.SAM.generation.WindPower, reV.SAM.generation.PvWattsv8, reV.SAM.generation.Geothermal, etc.) for documentation on the allowed and/or required SAM config file inputs.
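Within the JSON config file for this command, the same kind of mapping is expressed as plain JSON (the config IDs and paths below are placeholders):

"sam_files": {
    "onshore": "/path/to/onshore/sam_config.json",
    "offshore": "/path/to/offshore/sam_config.json"
}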

nrwal_configs : dict

A dictionary mapping SAM input configuration ID(s) to NRWAL configuration(s). Keys are the SAM config ID(s) which correspond to the keys in the sam_files input. Values for each key are either a path to a corresponding NRWAL YAML or JSON config file or a full dictionary of NRWAL config inputs. For example:

nrwal_configs = {
    "default": "/path/to/default/nrwal.json",
    "onshore": "/path/to/onshore/nrwal_config.yaml",
    "offshore": {
        "nrwal_key_1": "nrwal_value_1",
        "nrwal_key_2": "nrwal_value_2",
        ...
    },
    ...
}

output_request : list | tuple

List of output dataset names to be written to the gen_fpath file. Any key from the NRWAL configs or any of the inputs (site_data or sam_files) is available to be exported as an output dataset. If you want to manipulate a dataset like cf_mean from gen_fpath and also include it in the output_request, you should set save_raw=True and use cf_mean_raw as the input in the NRWAL equations (see the sketch below). This allows you to define an equation in the NRWAL configs for a manipulated cf_mean output that can be included in the output_request list.
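As an illustrative sketch of this workflow (the dataset names and equation are placeholders, and the exact equation syntax depends on your NRWAL setup), the reV config might request:

"save_raw": true,
"output_request": ["cf_mean", "total_losses"]

while the corresponding NRWAL config defines the manipulated output in terms of the archived raw dataset:

"cf_mean": "cf_mean_raw * (1 - total_losses)"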

save_raw : bool, optional

Flag to save an initial (“raw”) copy of input datasets from gen_fpath that are also part of the output_request. For example, if you request cf_mean in output_request but also manipulate the cf_mean dataset in the NRWAL equations, the original cf_mean will be archived under the cf_mean_raw dataset in gen_fpath. By default, True.

meta_gid_col : str, optional

Column label in the source meta data from gen_fpath that contains the unique gid identifier. This will be joined to the site_data gid column. By default, "gid".
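For example, if gen_fpath is a supply-curve-resolution file whose meta data uses a different gid column, this might be set to something like the following (the column name is illustrative):

"meta_gid_col": "sc_gid"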

site_meta_cols : list | tuple, optional

Column labels from site_data to be added to the meta data table in gen_fpath. If None, only the columns in DEFAULT_META_COLS will be added. Any columns requested via this input will be considered in addition to the DEFAULT_META_COLS. By default, None.
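For example, to carry two hypothetical site_data columns into the gen_fpath meta data in addition to the defaults:

"site_meta_cols": ["depth", "dist_to_port"]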

csv_output : bool, optional

Option to write the H5 file meta data plus all requested outputs to a CSV file instead of storing them in the HDF5 file directly. This can be useful if the same HDF5 file is used for multiple sets of NRWAL runs. Note that all requested output datasets must be 1-dimensional in order to fit within the CSV output.

Important

This option is not compatible with save_raw=True. If you set csv_output=True, then the save_raw option is forced to be False. Therefore, make sure that you do not have any references to “input_dataset_name_raw” in your NRWAL config. If you need to manipulate an input dataset, save it to a different output name in the NRWAL config or manually add an “input_dataset_name_raw” dataset to your generation HDF5 file before running NRWAL.

By default, False.

Note that you may remove any keys with a null value if you do not intend to update them yourself.