reV econ

Execute the econ step from a config file.

reV econ analysis runs SAM econ calculations, typically to compute LCOE (using PySAM.Lcoefcr.Lcoefcr), though PySAM.Singleowner.Singleowner or PySAM.Windbos.Windbos calculations can also be performed simply by requesting outputs from those computation modules. See the keys of Econ.OPTIONS for all available econ outputs. Econ computations rely on an input generation (i.e. capacity factor) profile. You can request reV to run the analysis for one or more “sites”, which correspond to the meta indices in the generation data.

The general structure for calling this CLI command is given below (add --help to print help info to the terminal).

reV econ [OPTIONS]
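For example, to point the command at a config file named config_econ.json (a placeholder name), run:

reV econ -c config_econ.json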

Options

-c, --config_file <config_file>

Required. Path to the econ configuration file. Below is a sample template config:

{
    "execution_control": {
        "option": "local",
        "allocation": "[REQUIRED IF ON HPC]",
        "walltime": "[REQUIRED IF ON HPC]",
        "qos": "normal",
        "memory": null,
        "nodes": 1,
        "queue": null,
        "feature": null,
        "conda_env": null,
        "module": null,
        "sh_script": null,
        "num_test_nodes": null,
        "max_workers": 1,
        "sites_per_worker": 100,
        "memory_utilization_limit": 0.4,
        "timeout": 1800,
        "pool_size": null
    },
    "log_directory": "./logs",
    "log_level": "INFO",
    "project_points": "[REQUIRED]",
    "sam_files": "[REQUIRED]",
    "cf_file": "[REQUIRED]",
    "site_data": null,
    "output_request": [
        "lcoe_fcr"
    ],
    "append": false,
    "analysis_years": null
}

Parameters

execution_control : dict

Dictionary containing execution control arguments. Allowed arguments are:

option:

({‘local’, ‘kestrel’, ‘eagle’, ‘awspc’, ‘slurm’, ‘peregrine’}) Hardware run option. Determines the type of job scheduler to use as well as the base AU cost. The “slurm” option is a catchall for HPC systems that use the SLURM scheduler and should only be used if the desired hardware is not listed above. If “local”, no other HPC-specific keys are required in execution_control (they are ignored if provided).

allocation:

(str) HPC project (allocation) handle.

walltime:

(int) Node walltime request in hours.

qos:

(str, optional) Quality-of-service specifier. For Kestrel users: This should be one of {‘standby’, ‘normal’, ‘high’}. Note that ‘high’ priority doubles the AU cost. By default, "normal".

memory:

(int, optional) Node memory max limit (in GB). By default, None, which uses the scheduler’s default memory limit. For Kestrel users: If you would like to use the full node memory, leave this argument unspecified (or set to None) if you are running on standard nodes. However, if you would like to use the bigmem nodes, you must specify the full upper limit of memory you would like for your job, otherwise you will be limited to the standard node memory size (250GB).

nodes:

(int, optional) Number of nodes to split the project points across. Note that the total number of requested nodes for a job may be larger than this value if the command splits across other inputs. Default is 1.

max_workers:

(int, optional) Number of local workers to run on. By default, 1.

sites_per_worker:

(int, optional) Number of sites to run in series on a worker. None defaults to the resource file chunk size. By default, None.

memory_utilization_limit:

(float, optional) Memory utilization limit (fractional). Must be a value between 0 and 1. This input sets how many site results will be stored in-memory at any given time before flushing to disk. By default, 0.4.

timeout:

(int, optional) Number of seconds to wait for parallel run iteration to complete before returning zeros. By default, 1800 seconds.

pool_size:

(int, optional) Number of futures to submit to a single process pool for parallel futures. If None, the pool size is set to os.cpu_count() * 2. By default, None.

queue:

(str, optional; PBS ONLY) HPC queue to submit job to. Examples include: ‘debug’, ‘short’, ‘batch’, ‘batch-h’, ‘long’, etc. By default, None, which uses “test_queue”.

feature:

(str, optional) Additional flags for SLURM job (e.g. “-p debug”). By default, None, which does not specify any additional flags.

conda_env:

(str, optional) Name of conda environment to activate. By default, None, which does not load any environments.

module:

(str, optional) Module to load. By default, None, which does not load any modules.

sh_script:

(str, optional) Extra shell script to run before command call. By default, None, which does not run any scripts.

num_test_nodes:

(int, optional) Number of nodes to submit before terminating the submission process. This can be used to test a new submission configuration without submitting all nodes (i.e. only running a handful to ensure the inputs are specified correctly and the outputs look reasonable). By default, None, which submits all node jobs.

Only the option key is required for local execution. For execution on the HPC, the allocation and walltime keys are also required. All other options are populated with default values, as seen above.
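For example, a minimal execution_control block for a run on Kestrel might look like the following (the allocation handle “myproject” and the 4-hour walltime are placeholders):

{
    "execution_control": {
        "option": "kestrel",
        "allocation": "myproject",
        "walltime": 4
    }
}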

log_directory : str

Path to directory where logs should be written. Path can be relative and does not have to exist on disk (it will be created if missing). By default, "./logs".

log_level : {“DEBUG”, “INFO”, “WARNING”, “ERROR”}

String representation of desired logger verbosity. Suitable options are DEBUG (most verbose), INFO (moderately verbose), WARNING (only log warnings and errors), and ERROR (only log errors). By default, "INFO".

project_points : int | list | tuple | str | dict | pd.DataFrame | slice

Input specifying which sites to process. A single integer representing the GID of a site may be specified to evaluate reV at a single location. A list or tuple of integers (or slice) representing the GIDs of multiple sites can be specified to evaluate reV at multiple specific locations. A string pointing to a project points CSV file may also be specified. Typically, the CSV contains the following columns:

  • gid: Integer specifying the generation GID of each site.

  • config: Key in the sam_files input dictionary (see below) corresponding to the SAM configuration to use for each particular site. This value can also be None (or left out completely) if you specify only a single SAM configuration file as the sam_files input.

  • capital_cost_multiplier: This is an optional multiplier input that, if included, will be used to regionally scale the capital_cost input in the SAM config. If you include this column in your CSV, you do not need to specify capital_cost, unless you would like that value to vary regionally and independently of the multiplier (i.e. the multiplier will still be applied on top of the capital_cost input).

The CSV file may also contain other site-specific inputs by including a column named after a config keyword (e.g. a column called wind_turbine_rotor_diameter may be included to specify a site-specific turbine diameter for each location). Columns that do not correspond to a config key may also be included, but they will be ignored. A DataFrame following the same guidelines as the CSV input (or a dictionary that can be used to initialize such a DataFrame) may be used for this input as well.
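For instance, a small project points input covering three sites, written as a dictionary that could initialize such a DataFrame (the GID values and config keys are illustrative):

{
    "gid": [10, 11, 12],
    "config": ["default", "default", "onshore"]
}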

sam_files : dict | str

A dictionary mapping SAM input configuration ID(s) to SAM configuration(s). Keys are the SAM config ID(s) which correspond to the config column in the project points CSV. Values for each key are either a path to a corresponding SAM config file or a full dictionary of SAM config inputs. For example:

sam_files = {
    "default": "/path/to/default/sam.json",
    "onshore": "/path/to/onshore/sam_config.yaml",
    "offshore": {
        "sam_key_1": "sam_value_1",
        "sam_key_2": "sam_value_2",
        ...
    },
    ...
}

This input can also be a string pointing to a single SAM config file. In this case, the config column of the CSV points input should be set to None or left out completely. See the documentation for the reV SAM class (e.g. reV.SAM.generation.WindPower, reV.SAM.generation.PvWattsv8, reV.SAM.generation.Geothermal, etc.) for documentation on the allowed and/or required SAM config file inputs.
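For example, if all sites share one SAM configuration, this input reduces to a single path (shown here as a placeholder):

sam_files = "/path/to/default/sam.json"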

cf_file : str

Path to a reV generation output file containing a capacity factor output.

Note

If executing reV from the command line, this path can contain brackets {} that will be filled in by the analysis_years input. Alternatively, this input can be set to "PIPELINE" to parse the output of the previous step (reV generation) and use it as input to this call. However, note that duplicate executions of reV generation within the pipeline may invalidate this parsing, meaning the cf_file input will have to be specified manually.
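For example, the following (hypothetical) pair of inputs would run the econ analysis for two years, reading capacity factors from ./cf_2012.h5 and ./cf_2013.h5:

{
    "cf_file": "./cf_{}.h5",
    "analysis_years": [2012, 2013]
}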

site_data : str | pd.DataFrame, optional

Site-specific input data for SAM calculation. If this input is a string, it should be a path that points to a CSV file. Otherwise, this input should be a DataFrame with pre-extracted site data. Rows in this table should match the input sites via a gid column. The rest of the columns should match configuration input keys that will take site-specific values. Note that some or all site-specific inputs can be specified via the project_points input table instead. If None, no site-specific data is considered. By default, None.
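As an illustration, a site_data table that varies the fixed charge rate by site, shown as a dictionary that could initialize such a DataFrame (the fixed_charge_rate key and values are hypothetical choices):

{
    "gid": [10, 11, 12],
    "fixed_charge_rate": [0.096, 0.096, 0.088]
}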

output_request : list | tuple, optional

List of output variables requested from SAM. Can be any of the parameters in the “Outputs” group of the PySAM module (e.g. PySAM.Lcoefcr.Lcoefcr.Outputs, PySAM.Singleowner.Singleowner.Outputs, PySAM.Windbos.Windbos.Outputs, etc.) being executed. This list can also include a select number of SAM config/resource parameters to include in the output: any key in any of the output attribute JSON files may be requested. Time-series profiles requested via this input are output in UTC. By default, ('lcoe_fcr',).
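For example, to request a few additional econ outputs alongside LCOE (assuming these keys are available in Econ.OPTIONS for your SAM configuration):

"output_request": [
    "lcoe_fcr",
    "fixed_charge_rate",
    "capital_cost"
]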

append : bool

Option to append econ datasets to source cf_file. By default, False.

analysis_years : int | list, optional

A single year or list of years to perform analysis for. These years will be used to fill in any brackets {} in the cf_file input. If None, the cf_file input is assumed to be the full path to the single capacity factor file to be processed. By default, None.

Note that you may remove any keys with a null value if you do not intend to update them yourself.