reVX
reVX command line interface.
reVX [OPTIONS] COMMAND [ARGS]...
Options
- --version
Show the version and exit.
- -v, --verbose
Flag to turn on debug logging. Default is not verbose.
add-reeds-cols
Execute the add-reeds-cols step from a config file.
This command adds columns like "cnty_fips", "state", "county", "nrel_region", "eos_mult", and "reg_mult". It also allows you to add extra columns from H5 or JSON files.
The general structure for calling this CLI command is given below (add --help to print help info to the terminal).
reVX add-reeds-cols [OPTIONS]
Options
- -c, --config_file <config_file>
Required Path to the add-reeds-cols configuration file. Below is a sample template config in JSON format:

```json
{
    "execution_control": {
        "option": "local",
        "allocation": "[REQUIRED IF ON HPC]",
        "walltime": "[REQUIRED IF ON HPC]",
        "qos": "normal",
        "memory": null,
        "queue": null,
        "feature": null,
        "conda_env": null,
        "module": null,
        "sh_script": null,
        "num_test_nodes": null
    },
    "log_directory": "./logs",
    "log_level": "INFO",
    "supply_curve_fpath": "[REQUIRED]",
    "out_fp": null,
    "capacity_col": "capacity",
    "extra_data": null,
    "merge_col": "sc_point_gid",
    "filter_out_zero_capacity": true,
    "rename_mapping": null,
    "regions": "https://www2.census.gov/geo/tiger/TIGER2021/COUNTY/tl_2021_us_county.zip"
}
```

The same template in YAML format:

```yaml
execution_control:
  option: local
  allocation: '[REQUIRED IF ON HPC]'
  walltime: '[REQUIRED IF ON HPC]'
  qos: normal
  memory: null
  queue: null
  feature: null
  conda_env: null
  module: null
  sh_script: null
  num_test_nodes: null
log_directory: ./logs
log_level: INFO
supply_curve_fpath: '[REQUIRED]'
out_fp: null
capacity_col: capacity
extra_data: null
merge_col: sc_point_gid
filter_out_zero_capacity: true
rename_mapping: null
regions: https://www2.census.gov/geo/tiger/TIGER2021/COUNTY/tl_2021_us_county.zip
```

And in TOML format (TOML has no null value, so keys with null defaults are simply omitted):

```toml
log_directory = "./logs"
log_level = "INFO"
supply_curve_fpath = "[REQUIRED]"
capacity_col = "capacity"
merge_col = "sc_point_gid"
filter_out_zero_capacity = true
regions = "https://www2.census.gov/geo/tiger/TIGER2021/COUNTY/tl_2021_us_county.zip"

[execution_control]
option = "local"
allocation = "[REQUIRED IF ON HPC]"
walltime = "[REQUIRED IF ON HPC]"
qos = "normal"
```
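Once a config file like the one above is saved to disk, the step can be invoked locally. A minimal sketch (the config file name is a placeholder):

```bash
# Hypothetical file name; any path to a valid add-reeds-cols config works.
reVX add-reeds-cols -c ./config_add_reeds_cols.json
```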
Parameters
- execution_control : dict
  Dictionary containing execution control arguments. Allowed arguments are:
  - option: ({'local', 'kestrel', 'eagle', 'awspc', 'slurm', 'peregrine'}) Hardware run option. Determines the type of job scheduler to use as well as the base AU cost. The "slurm" option is a catchall for HPC systems that use the SLURM scheduler and should only be used if the desired hardware is not listed above. If "local", no other HPC-specific keys are required in execution_control (they are ignored if provided).
  - allocation: (str) HPC project (allocation) handle.
  - walltime: (int) Node walltime request in hours.
  - qos: (str, optional) Quality-of-service specifier. For Kestrel users: This should be one of {'standby', 'normal', 'high'}. Note that 'high' priority doubles the AU cost. By default, "normal".
  - memory: (int, optional) Node memory max limit (in GB). By default, None, which uses the scheduler's default memory limit. For Kestrel users: If you would like to use the full node memory, leave this argument unspecified (or set to None) if you are running on standard nodes. However, if you would like to use the bigmem nodes, you must specify the full upper limit of memory you would like for your job, otherwise you will be limited to the standard node memory size (250GB).
  - queue: (str, optional; PBS ONLY) HPC queue to submit job to. Examples include: 'debug', 'short', 'batch', 'batch-h', 'long', etc. By default, None, which uses "test_queue".
  - feature: (str, optional) Additional flags for SLURM job (e.g. "-p debug"). By default, None, which does not specify any additional flags.
  - conda_env: (str, optional) Name of conda environment to activate. By default, None, which does not load any environments.
  - module: (str, optional) Module to load. By default, None, which does not load any modules.
  - sh_script: (str, optional) Extra shell script to run before command call. By default, None, which does not run any scripts.
  - num_test_nodes: (str, optional) Number of nodes to submit before terminating the submission process. This can be used to test a new submission configuration without submitting all nodes (i.e. only running a handful to ensure the inputs are specified correctly and the outputs look reasonable). By default, None, which submits all node jobs.
Only the option key is required for local execution. For execution on the HPC, the allocation and walltime keys are also required. All other options are populated with default values, as seen above.
- log_directory : str
  Path to directory where logs should be written. Path can be relative and does not have to exist on disk (it will be created if missing). By default, "./logs".
- log_level : {"DEBUG", "INFO", "WARNING", "ERROR"}
  String representation of desired logger verbosity. Suitable options are DEBUG (most verbose), INFO (moderately verbose), WARNING (only log warnings and errors), and ERROR (only log errors). By default, "INFO".
- supply_curve_fpath : str
  Path to input supply curve. Should have standard reV supply curve output columns (e.g. latitude, longitude, capacity, sc_point_gid, etc.). If running from CLI, this can be a list of supply curve paths.
- out_fp : str, optional
  Path to output file for supply curve with new columns. If None, the supply curve will be overwritten (i.e. the data will be written to supply_curve_fpath). If running from CLI, this can be a list of output paths (length must match length of supply_curve_fpath). By default, None.
- capacity_col : str, optional
  Name of capacity column. This is used to filter out sites with zero capacity, if that option is selected. By default, "capacity".
- extra_data : list of dicts, optional
  A list of dictionaries, where each dictionary contains two keys. The first key is "source", and its value must either be a dictionary of field: value pairs or a path to the extra data being extracted. The latter must be a path pointing to an HDF5 or JSON file (i.e. it must end in ".h5" or ".json"). The second key is "dsets", and it points to a list of dataset names to extract from source. For JSON and dictionary data extraction, the values of the datasets must either be scalars or must match the length of the input data frame. For HDF5 data, the datasets must be 1D datasets, and they will be merged with the input data frame on merge_col (column must be in the HDF5 file meta). See the example after this parameter list. By default, None.
- merge_col : str, optional
  Name of column used to merge the data in the input supply curve with the data in the HDF5 file if extra_data is specified. Note that this column must be present in both the input supply curve as well as the HDF5 file meta. By default, "sc_point_gid".
- filter_out_zero_capacity : bool, optional
  Flag to filter out sites with zero capacity. By default, True.
- rename_mapping : dict, optional
  Optional mapping of old column names to new column names. This mapping will be used to rename the columns in the supply curve towards the end of the procedure (after all extra columns except eos_mult and reg_mult have been added). By default, None (no renaming).
- regions : str, optional
  Path to a regions shapefile containing county geometries labeled with county FIPS values. The default value pulls the data from www2.census.gov.

Note that you may remove any keys with a null value if you do not intend to update them yourself.
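As an illustration of the extra_data format, the following hedged config snippet adds two columns from a hypothetical HDF5 file and one constant column from an inline dictionary (all file, dataset, and field names are placeholders):

```json
"extra_data": [
    {"source": "./hypothetical_extra_data.h5", "dsets": ["dset_a", "dset_b"]},
    {"source": {"scenario": "reference"}, "dsets": ["scenario"]}
]
```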
correct-forecast
Bias correct and blend (if requested) forecasts using actuals:
- Bias correct forecast data using the bias correction factor: total actual generation / total forecasted generation
- Blend fcst_perc of forecast generation with (1 - fcst_perc) of actuals generation
reVX correct-forecast [OPTIONS]
Options
- -fcst, --fcst_h5 <fcst_h5>
Required Path to forecast .h5 file
- -fdset, --fcst_dset <fcst_dset>
Required Dataset to correct
- -out, --out_h5 <out_h5>
Required Output path for corrected .h5 file
- -actuals, --actuals_h5 <actuals_h5>
Path to actuals .h5 file, by default None
- -adset, --actuals_dset <actuals_dset>
Actuals dataset, by default None
- -perc, --fcst_perc <fcst_perc>
Percentage of forecast to use for blending, by default None
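A hedged example invocation that corrects a forecast against actuals and blends 80% corrected forecast with 20% actuals (all paths and the cf_profile dataset name are placeholders):

```bash
# File names and dataset names below are placeholders, not shipped examples.
reVX correct-forecast -fcst ./forecast_2012.h5 -fdset cf_profile \
    -out ./forecast_2012_corrected.h5 \
    -actuals ./actuals_2012.h5 -adset cf_profile -perc 0.8
```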
exclusions
Extract from or create exclusions .h5 file
reVX exclusions [OPTIONS] COMMAND [ARGS]...
Options
- -h5, --excl_h5 <excl_h5>
Required Path to .h5 file containing or to contain exclusion layers
layers-from-h5
Extract layers from an exclusions .h5 file and save them to disk as GeoTIFFs
reVX exclusions layers-from-h5 [OPTIONS]
Options
- -o, --out_dir <out_dir>
Required Output directory to save layers into
- -l, --layers <layers>
List of layers to extract; if None, extract all
- -hsds, --hsds
Extract layers from HSDS
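A minimal sketch of extracting layers, assuming the -l option may be repeated once per layer (layer names and paths are placeholders):

```bash
# Extract two hypothetical layers from an exclusions file to GeoTIFFs.
# Repeating -l per layer is an assumption; see layers-from-h5 --help.
reVX exclusions -h5 ./exclusions.h5 layers-from-h5 -o ./layer_tiffs \
    -l layer_a -l layer_b
```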
layers-to-h5
Add layers to exclusions .h5 file
reVX exclusions layers-to-h5 [OPTIONS]
Options
- -l, --layers <layers>
Required .json file containing a mapping of layer names to GeoTIFFs. The JSON can also contain layer descriptions and/or scale factors
- -ct, --check_tiff
Flag to check tiff profile, CRS, and shape against exclusion .h5 profile, CRS, and shape
- -sb, --setbacks
Flag to convert setbacks to exclusion layers
- -dtp, --distance_to_ports
Flag to convert distances to ports to exclusion layers
- -tatol, --transform_atol <transform_atol>
Absolute tolerance parameter when comparing geotiff transform data.
- Default: 0.01
- -r, --purge
Remove existing .h5 file before loading layers
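As a sketch of the workflow, the layers .json might contain a simple mapping of layer names to GeoTIFF paths (names and paths below are hypothetical; consult the reVX documentation for the exact schema for descriptions and scale factors):

```json
{
    "layer_a": "./layer_tiffs/layer_a.tif",
    "layer_b": "./layer_tiffs/layer_b.tif"
}
```

With that file saved as ./layers.json, a call along these lines should load the layers into the exclusions file:

```bash
# -ct verifies each GeoTIFF profile, CRS, and shape before it is written.
reVX exclusions -h5 ./exclusions.h5 layers-to-h5 -l ./layers.json -ct
```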
mask
Compute an exclusion mask locally
reVX exclusions mask [OPTIONS]
Options
- -ed, --excl_dict_fpath <excl_dict_fpath>
Required Path to JSON file containing the "excl_dict" key, which points to the exclusion dictionary defining the mask that should be generated. A typical reV aggregation config satisfies this requirement. If this file also contains an "excl_fpath" key, the value from the file will override the --excl_h5 CLI argument input.
- -o, --out <out>
Required Output name. If this string value ends in “.tif” or “.tiff”, this input is assumed to be a path to an output tiff file, and the mask will be written to that destination. Otherwise, this input is assumed to be the name of the layer in the exclusion file to write the mask to.
- -ma, --min_area <min_area>
Minimum required contiguous area in sq-km.
- -k, --kernel <kernel>
Contiguous filter method to use on final exclusion.
- Default: 'queen'
- -hsds, --hsds
Flag to use h5pyd to handle .h5 domain hosted on AWS behind HSDS
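A hedged end-to-end sketch: the JSON below defines an excl_dict using reV-style layer mask arguments (the layer names and threshold values are hypothetical):

```json
{
    "excl_dict": {
        "layer_a": {"exclude_values": [1]},
        "layer_b": {"include_range": [0, 5]}
    }
}
```

```bash
# Write the resulting mask to a GeoTIFF, keeping only contiguous areas of at
# least 0.5 sq-km under queen-neighbor connectivity.
reVX exclusions -h5 ./exclusions.h5 mask -ed ./excl_dict.json -o ./mask.tif \
    -ma 0.5 -k queen
```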
extract-output-year
Extract all datasets for a given year from a multi-year output file
reVX extract-output-year [OPTIONS]
Options
- -src, --my_fpath <my_fpath>
Required Path to multi-year output .h5 file
- -out, --out_fpath <out_fpath>
Required Path to output .h5 file
- -yr, --year <year>
Year to extract; if None, the year is parsed from out_fpath
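For example (file names are placeholders):

```bash
# Pull all 2012 datasets out of a multi-year output file; -yr may be omitted
# if the year can be parsed from the output file name.
reVX extract-output-year -src ./outputs_multi-year.h5 -out ./outputs_2012.h5 -yr 2012
```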
region-classifier
Region Classifier - Used to classify meta points with a label from a shapefile
reVX region-classifier [OPTIONS]
Options
- -mp, --meta_path <meta_path>
Required Path to meta CSV file or resource .h5 file containing lat/lon points
- -rp, --regions_path <regions_path>
Required Path to regions shapefile containing labeled geometries
- -rl, --regions_label <regions_label>
Attribute to use as label in the regions shapefile
- -o, --fout <fout>
Required Output CSV file path for labeled meta CSV file
- -f, --force
Force outlier classification by assigning the nearest region to points that fall outside all region geometries.
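A minimal sketch, assuming a hypothetical meta CSV of points and a regions shapefile with a state_name attribute:

```bash
# Label each point in meta.csv with the state_name of its containing polygon;
# -f assigns the nearest region to points that fall outside all polygons.
reVX region-classifier -mp ./meta.csv -rp ./states.shp -rl state_name \
    -o ./meta_labeled.csv -f
```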