dwind.mp
Provides the MultiProcess class for running a model on NREL's Kestrel HPC system.
Classes

MultiProcess – Multiprocessing interface for running batch jobs via SLURM.

class dwind.mp.MultiProcess(location, sector, scenario, year, env, n_nodes, memory, walltime, allocation, feature, repository, model_config, stdout_path=None, dir_out=None)

Multiprocessing interface for running batch jobs via SLURM.

Parameters:
location (str) – The state name, or an underscore-separated string of “state_county”.
sector (str) – One of “fom” (front of meter) or “btm” (behind the meter).
scenario (str) – An underscore-separated string for the scenario to be run.
year (int) – The year-basis for the scenario.
env (str | Path) – The path to the dwind Python environment that should be used to run the model.
n_nodes (int) – Number of nodes to request from the HPC when running an sbatch job.
memory (int) – Node memory, in GB.
walltime (int) – Node walltime request, in hours.
allocation (str) – The HPC project (allocation) handle that will be charged for running the analysis.
feature (str) – Additional flags for the SLURM job, using formatting such as --qos=high or --depend=[state:job_id].
model_config (str) – The full file path and name of the model configuration file.
stdout_path (str | Path | None, optional) – The path where all stdout logs should be written, by default None. When None, “dir_out / logs” is used.
dir_out (_type_, optional) – The path to save the chunked results files, by default the current working directory (Path.cwd()).
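As a rough sketch of how the signature above might be filled in, the following example constructs a MultiProcess instance with keyword arguments. All values shown (the location, scenario name, paths, node counts, and allocation handle) are illustrative placeholder assumptions rather than defaults or recommendations, and the repository argument is interpreted here as the path to a local dwind checkout, which is not described in the parameter list above.

from pathlib import Path

from dwind.mp import MultiProcess

# All values below are illustrative placeholders (assumptions), not defaults.
mp = MultiProcess(
    location="colorado_larimer",               # "state" or "state_county"
    sector="btm",                              # "fom" or "btm"
    scenario="baseline_2025",                  # underscore-separated scenario name
    year=2035,                                 # year-basis for the scenario
    env=Path("/projects/dwind/env"),           # dwind Python environment to run the model
    n_nodes=2,                                 # nodes requested for the sbatch job
    memory=240,                                # node memory, in GB
    walltime=4,                                # node walltime request, in hours
    allocation="dwind",                        # HPC project handle to charge
    feature="--qos=high",                      # additional SLURM flags
    repository=Path("/projects/dwind/dwind"),  # assumed: path to the dwind repository
    model_config="/projects/dwind/configs/model_config.toml",
    stdout_path=None,                          # defaults to dir_out / "logs"
    dir_out=Path.cwd(),                        # where chunked results files are saved
)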