dGen Code Base

The dGen source code is located within the python/ directory. This page documents the classes, functions, attributes, and methods used by dGen. It is useful for debugging errors, or for examining how different elements interact in case a customized run is desired.

Submodules

python.config module

python.decorators module

Module of accessory decorators, mainly for logging purposes.

class python.decorators.fid(i)[source]

Bases: object

class python.decorators.fn_info(info, logger=None, tab_level=0)[source]

Bases: object

class python.decorators.fn_timer(logger=None, verbose=True, tab_level=0, prefix='')[source]

Bases: object

Decorator class for profiling the run-time of functions.

python.decorators.shared(f)[source]
python.decorators.unshared(f)[source]

python.dgen_model module

Distributed Generation Market Demand Model (dGen) - Open Source Release National Renewable Energy Lab

This is the main module of the dGen Model. Running this module requires a properly installed environment with applicable scenario files.

python.dgen_model.main(mode=None, resume_year=None, endyear=None, ReEDS_inputs=None)[source]

Compute the economic adoption of distributed generation resources on an agent-level basis. Model output is saved to the /runs directory within the dGen directory, as well as to the “agent_outputs” table within the new schema created upon each model run.

python.data_functions module

Functions for pulling data

python.data_functions.create_model_years(start_year, end_year, increment=2)[source]

Return a list of model years from the specified start year through the end year, stepping by the given increment (2-year steps by default).

Parameters
  • start_year ('int') – starting year of the model (e.g. 2014)

  • end_year ('int') – ending year of the model (e.g. 2050)

Returns

model_years – List of model years from the specified start year through the end year, in 2-year time steps by default.

Return type

‘list’
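The behavior described above can be sketched in a few lines; this is a minimal illustration (assuming the end year is inclusive when it falls on an increment boundary), not necessarily the exact dGen implementation:

```python
def create_model_years(start_year, end_year, increment=2):
    # Inclusive of end_year when it lands on an increment boundary
    return list(range(start_year, end_year + 1, increment))

# create_model_years(2014, 2020) -> [2014, 2016, 2018, 2020]
```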

python.data_functions.create_output_schema(pg_conn_string, role, suffix, scenario_list, source_schema='diffusion_template', include_data=True)[source]

Creates the output schema in the database.

Parameters
  • pg_conn_string ('string') – Connection string for the PostgreSQL database

  • role ('string') – Owner of schema

  • suffix ('string') – String marking the time the model is kicked off. Appended to the schema name to act as a unique identifier

  • source_schema ('SQL schema') – Schema to be used as template for the output schema

  • include_data ('bool') – If True, includes data from the diffusion_shared schema. Default is True

Returns

dest_schema – Output schema that will house the final results

Return type

‘SQL schema’

python.data_functions.create_scenario_results_folder(input_scenario, scen_name, scenario_names, out_dir, dup_n=0)[source]

Creates scenario results directories

Parameters
  • input_scenario ('directory') – Scenario inputs pulled from excel file within diffusion/inputs_scenarios folder

  • scen_name ('string') – Scenario Name

  • scenario_names ('list') – List of scenario names

  • out_dir ('directory') – Output directory for scenario subfolders

  • dup_n ('int') – Number to track duplicate scenarios in scenario_names. Default is 0 unless otherwise specified.

Returns

  • out_scen_path (‘directory’) – Path for the scenario subfolders to send results

  • scenario_names – Populated list of scenario names

  • dup_n (‘int’) – Number to track duplicate scenarios, stepped up by 1 from original value if there is a duplicate

python.data_functions.create_tech_subfolders(out_scen_path, techs, out_subfolders)[source]

Creates subfolders for results of each specified technology

Parameters
  • out_scen_path ('directory') – Path for the scenario folder to send results

  • techs ('string') – Technology type

  • out_subfolders ('dict') – Dictionary of empty subfolder paths for solar

Returns

out_subfolders – Dictionary with subfolder paths for solar

Return type

‘dict’
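A minimal sketch of the folder-creation logic described above (the dict-of-lists shape for out_subfolders is an assumption for illustration; the actual dGen implementation may track paths differently):

```python
import os

def create_tech_subfolders(out_scen_path, techs, out_subfolders):
    """Create a results subfolder per technology and record its path."""
    for tech in techs:
        tech_path = os.path.join(out_scen_path, tech)
        os.makedirs(tech_path, exist_ok=True)  # no error if folder exists
        out_subfolders.setdefault(tech, []).append(tech_path)
    return out_subfolders
```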

python.data_functions.drop_output_schema(pg_conn_string, schema, delete_output_schema)[source]

Deletes the output schema from the database if delete_output_schema is True.

Parameters
  • pg_conn_string ('string') – Connection string for the PostgreSQL database

  • schema ('SQL schema') – Schema that will be deleted

  • delete_output_schema ('bool') – If set to True in config.py, deletes output schema

python.data_functions.get_agent_file_scenario(con, schema)[source]
python.data_functions.get_annual_inflation(con, schema)[source]

Return the inflation rate set in the input sheet. Constant for all years & sectors.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – diffusion_shared.input_main_market_inflation

Returns

df.values[0][0] – Float object that represents the inflation rate (e.g. 0.025 which corresponds to 2.5%).

Return type

‘float’

python.data_functions.get_bass_params(con, schema)[source]

Return the bass diffusion parameters to use in the model from table view in postgres.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema in which the sectors exist

Returns

bass_df – Pandas DataFrame of state abbreviation, p, q, teq_yr1 (time equivalency), sector abbreviation, and the technology.

Return type

‘pd.df’

python.data_functions.get_input_scenarios()[source]

Returns a list of the input scenario excel files specified in the input_scenarios directory.

Returns

scenarios – a list of the input scenario excel files specified in the input_scenarios directory.

Return type

‘list’
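Scanning the input_scenarios directory can be sketched as below; the directory argument and the Excel-extension filter (including skipping Excel lock files beginning with "~$") are assumptions for illustration:

```python
import glob
import os

def get_input_scenarios(input_dir='input_scenarios'):
    """List the Excel scenario files in the input scenarios directory."""
    return [f for f in glob.glob(os.path.join(input_dir, '*.xls*'))
            if not os.path.basename(f).startswith('~$')]
```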

python.data_functions.get_itc_incentives(con, schema)[source]

Return the Investment Tax Credit incentives to use in the model from table view in postgres.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema in which the sectors exist

Returns

itc_options – Pandas DataFrame of ITC financial incentives.

Return type

‘pd.df’

python.data_functions.get_load_growth(con, schema)[source]

Return rate load growth values applied to electricity load.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema produced when model is run

Returns

df – Pandas DataFrame with year, county_id, sector_abbr, nerc_region_abbr, load_multiplier as columns.

Return type

‘pd.df’

python.data_functions.get_max_market_share(con, schema)[source]

Return the max market share from the database, select the curve based on scenario_options, and interpolate to a tenth of a year. Use the passed parameters to determine ownership type.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema string for the technology, e.g. diffusion_solar

Returns

max_market_share – Pandas DataFrame to join on the main df to determine max market share; keys are sector & payback period.

Return type

‘pd.df’

python.data_functions.get_nem_state(con, schema)[source]

Returns net metering data for states with available data. Note, many states don’t have net metering, and the data in diffusion_shared.nem_state_limits_2019 may be incomplete or out of date.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema produced when model is run

Returns

df – Pandas DataFrame with net metering data.

Return type

‘pd.df’

python.data_functions.get_nem_state_by_sector(con, schema)[source]

Returns net metering data for states by sector with available data. Note, many states don’t have net metering, and the data in diffusion_shared.nem_scenario_bau_2019 may be incomplete or out of date.

Special handling of DC: System size is unknown until bill calculator runs and differing compensation styles can potentially result in different optimal system sizes. Here we assume only res customers (assumed system_size_kw < 100) are eligible for full retail net metering; com/ind (assumed system_size_kw >= 100) only eligible for net billing.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema produced when model is run

Returns

df – Pandas DataFrame with net metering data.

Return type

‘pd.df’

python.data_functions.get_nem_utility_by_sector(con, schema)[source]

Returns net metering data for utilities by sector with available data. Note, many utilities don’t have net metering, and the data in diffusion_shared.nem_scenario_bau_by_utility_2019 may be incomplete or out of date.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema produced when model is run

Returns

df – Pandas DataFrame with net metering data.

Return type

‘pd.df’

python.data_functions.get_rate_escalations(con, schema)[source]

Return rate escalation multipliers from database. Escalations are filtered and applied in calc_economics, resulting in an average real compounding rate growth. This rate is then used to calculate cash flows.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema produced when model is run

Returns

rate_escalations – Pandas DataFrame with county_id, sector, year, escalation_factor, and source as columns.

Return type

‘pd.df’

python.data_functions.get_scenario_options(cur, schema, pg_params)[source]

Pull scenario options from the database and log the user running the scenario.

python.data_functions.get_sectors(cur, schema)[source]

Return the sectors to model from table view in postgres.

Parameters
  • cur ('SQL cursor') – Cursor

  • schema ('SQL schema') – Schema in which the sectors exist

Returns

sectors – Dictionary of sectors to be modeled in table view in postgres

Return type

‘dict’

python.data_functions.get_selected_scenario(con, schema)[source]

Returns the net metering scenario selected in the input sheet. Note, net metering data and/or scenarios may be incomplete or out of date.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema produced when model is run

Returns

df – Pandas DataFrame with net metering data.

Return type

‘pd.df’

python.data_functions.get_state_incentives(con)[source]

Return the state incentives to use in the model from table view in postgres.

Parameters

con ('SQL connection') – Connection

Returns

state_incentives – Pandas DataFrame of state financial incentives.

Return type

‘pd.df’

python.data_functions.get_state_to_model(con, schema)[source]

Returns the region to model as specified in the input sheet. Note, selecting an ISO will select the proper geographies (counties and/or states) in import_agent_file() in ‘input_data_functions.py’. Selecting the United States (national run) will return every state, excluding Alaska and Hawaii but including D.C., as a list.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema produced when model is run

Returns

state_to_model – List of states to model.

Return type

‘list’

python.data_functions.get_technologies(con, schema)[source]

Return the technologies to model from table view in postgres.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema in which the technologies exist

Returns

techs – List of technologies to be modeled in table view in postgres

Return type

‘list’

python.data_functions.get_technology_costs_solar(con, schema)[source]

Return technology costs for solar.

Parameters
  • con ('SQL connection') – Connection

  • schema ('SQL schema') – Schema produced when model is run

Returns

df – Pandas DataFrame with year, sector_abbr, system_capex_per_kw, system_om_per_kw, system_variable_om_per_kw as columns.

Return type

‘pd.df’

python.data_functions.make_output_directory_path(suffix)[source]

Creates and returns a directory named ‘results’ with the timestamp of the model run appended. Note, this directory stores metadata associated with a model run; the results of the model run are in the ‘agent_outputs’ table within the schema created with each run in the database.

python.data_functions.summarize_scenario(scenario_settings, model_settings)[source]

Log high-level scenario settings.

python.input_data_functions module

python.input_data_functions.deprec_schedule(df)[source]

Takes depreciation schedule and sorts table fields by depreciation year

Parameters

df ('pd.df') – Dataframe to be sorted by sector.

Returns

output – Dataframe of depreciation schedule sorted by year

Return type

‘pd.df’

python.input_data_functions.df_to_psql(df, engine, schema, owner, name, if_exists='replace', append_transformations=False)[source]

Uploads dataframe to database

Parameters
  • df ('pd.df') – Dataframe to upload to database

  • engine ('SQL engine') – SQL engine to interpret the SQL query

  • schema ('SQL schema') – Schema in which to upload df

  • owner ('string') – Owner of schema

  • name ('string') – Name to be given to table that is uploaded

  • if_exists ('replace or append') – If the table already exists, 'replace' overwrites it and 'append' appends to it.

  • append_transformations ('bool') – Undocumented. Default is False

Returns

df – Dataframe that was uploaded to database

Return type

‘pd.df’
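The upload itself reduces to pandas `DataFrame.to_sql` against a SQLAlchemy engine or DBAPI connection; a minimal sketch (omitting the owner/role grant and transformation handling that the full function performs):

```python
import pandas as pd

def df_to_psql(df, engine, schema, name, if_exists='replace'):
    """Minimal sketch: upload df to schema.name, replacing or appending."""
    df.to_sql(name, engine, schema=schema, if_exists=if_exists, index=False)
    return df
```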

python.input_data_functions.get_psql_table_fields(engine, schema, name)[source]

Creates numpy array of columns from specified schema and table

Parameters
  • engine ('SQL engine') – SQL engine to interpret the SQL query

  • schema ('SQL schema') – SQL schema to pull table from

  • name ('string') – Name of the table from which fields are retrieved

Returns

numpy array – Numpy array of columns

Return type

‘np.array’

python.input_data_functions.get_scenario_settings(schema, con)[source]

Creates dataframe of default scenario settings from input_main_scenario_options table

Parameters
  • schema ('SQL schema') – Schema in which to look for the scenario settings

  • con ('SQL connection') – SQL connection to connect to database

Returns

df – Dataframe of default scenario settings

Return type

‘pd.df’

python.input_data_functions.get_userdefined_scenario_settings(schema, table_name, con)[source]

Creates dataframe of user created scenario settings

Parameters
  • schema ('SQL schema') – Schema in which to look for the scenario settings

  • con ('SQL connection') – SQL connection to connect to database

Returns

df – Dataframe of user created scenario settings

Return type

‘pd.df’

python.input_data_functions.import_agent_file(scenario_settings, con, cur, engine, model_settings, agent_file_status, input_name)[source]

Generates new agents or uses pre-generated agents from provided .pkl file

Parameters
  • scenario_settings ('SQL schema') – Schema of the scenario settings

  • con ('SQL connection') – SQL connection to connect to database

  • cur ('SQL cursor') – Cursor

  • engine ('SQL engine') – SQL engine to interpret the SQL query

  • model_settings ('object') – Model settings that apply to all scenarios

  • agent_file_status ('attribute') – Attribute that describes whether to use pre-generated agent file or create new

  • input_name ('string') – .pkl file name substring of the pre-generated agent table

Returns

solar_agents – Instance of Agents class with either user pre-generated or new data

Return type

‘Class’

python.input_data_functions.import_table(scenario_settings, con, engine, role, input_name, csv_import_function=None)[source]

Imports table from csv given the name of the csv

Parameters
  • scenario_settings ('SQL schema') – Schema in which to look for the scenario settings

  • con ('SQL connection') – SQL connection to connect to database

  • engine ('SQL engine') – SQL engine to interpret the SQL query

  • role ('string') – Owner of schema

  • input_name ('string') – Name of the csv file that should be imported

  • csv_import_function ('function') – Specific function to import and munge csv

Returns

df – Dataframe of the table that was imported

Return type

‘pd.df’

python.input_data_functions.melt_year(parameter_name)[source]

Returns a function to melt dataframe’s columns of years and parameter values to the row axis

Parameters

parameter_name ('string') – Name of the parameter value in the dataframe.

Returns

function – Function that melts years and parameter value to row axis

Return type

‘function’
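The closure pattern described above can be sketched as follows; this illustration assumes every column in the incoming dataframe is a year column, whereas the actual dGen implementation may preserve id columns:

```python
import pandas as pd

def melt_year(parameter_name):
    """Return a function that melts year columns into (year, value) rows."""
    def melt(df):
        out = df.melt(var_name='year', value_name=parameter_name)
        out['year'] = out['year'].astype(int)
        return out
    return melt
```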

python.input_data_functions.process_elec_price_trajectories(elec_price_traj)[source]

Returns the trajectory of the change in electricity prices over time with 2018 as the base year

Parameters

elec_price_traj ('pd.df') – Dataframe of electricity prices by year and ReEDS BA

Returns

elec_price_change_traj – Dataframe of annual electricity price change factors from base year

Return type

‘pd.df’
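Normalizing a price trajectory to base-year multipliers can be sketched as below; the column names ('ba', 'year', 'price') and output column are assumptions for illustration, not the actual dGen schema:

```python
import pandas as pd

def process_elec_price_trajectories(elec_price_traj, base_year=2018):
    """Convert prices to multipliers relative to each BA's base-year price."""
    df = elec_price_traj.copy()
    base = df.loc[df['year'] == base_year].set_index('ba')['price']
    df['elec_price_multiplier'] = df['price'] / df['ba'].map(base)
    return df
```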

python.input_data_functions.process_load_growth(load_growth)[source]

Returns the trajectory of the load growth rates over time relative to a base year of 2014

Parameters

load_growth ('pd.df') – Dataframe of annual load growth rates

Returns

load_growth_change_traj – Dataframe of annual load growth rates relative to base year

Return type

‘pd.df’

python.input_data_functions.process_wholesale_elec_prices(wholesale_elec_price_traj)[source]

Returns the trajectory of the change in wholesale electricity prices over time

Parameters

wholesale_elec_price_traj ('pd.df') – Dataframe of wholesale electricity prices by year and ReEDS BA

Returns

wholesale_elec_price_change_traj – Dataframe of annual electricity price change factors from base year

Return type

‘pd.df’

python.input_data_functions.stacked_sectors(df)[source]

Takes dataframe and sorts table fields by sector

Parameters

df ('pd.df') – Dataframe to be sorted by sector.

Returns

output – Dataframe of the table that was imported and split by sector

Return type

‘pd.df’

python.diffusion_functions_elec module

Name: diffusion_functions. Purpose: Contains functions to calculate the diffusion of distributed generation:

  1. Determine maximum market size as a function of payback time;

  2. Parameterize Bass diffusion curve with diffusion rates (p, q) set by payback time;

  3. Determine the current stage (equivalent time) of diffusion based on the existing market and current economics;

  4. Calculate new market share by stepping forward on diffusion curve.

python.diffusion_functions_elec.bass_diffusion(df)[source]

Calculate the fraction of the population that diffuses into the max_market_share. Note that this is different from the fraction of the population that will adopt, which is the max market share.

IN: p,q - numpy arrays - Bass diffusion parameters

t - numpy array - Number of years since diffusion began

OUT: new_adopt_fraction - numpy array - fraction of overall population

that will adopt the technology
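The IN/OUT description above matches the standard Bass cumulative adoption curve, F(t) = (1 - e^{-(p+q)t}) / (1 + (q/p) e^{-(p+q)t}); a minimal sketch under that assumption:

```python
import numpy as np

def bass_diffusion(p, q, t):
    """Cumulative Bass adoption fraction F(t).

    Elsewhere in the model this fraction is multiplied by max_market_share
    to yield a market share.
    """
    e = np.exp(-1 * (p + q) * t)
    return (1 - e) / (1 + (q / p) * e)
```

F(0) is 0, and F(t) rises monotonically toward 1 as t grows, which is the shape the diffusion step walks along.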

python.diffusion_functions_elec.calc_diffusion(df, cur, con, techs, choose_tech, sectors, schema, is_first_year, bass_params, override_p_value=None, override_q_value=None, override_teq_yr1_value=None)[source]

Calculate the fraction of overall population that have adopted the technology in the current period.

Parameters

df (pandas.DataFrame) – Input dataframe with the following attributes:

df.payback_period

Payback period in years.

Type

numpy.ndarray

df.max_market_share

Maximum market share as decimal percentage.

Type

numpy.ndarray

df.current_market_share

Current market share as decimal percentage.

Type

numpy.ndarray

Parameters

is_first_year (bool) – If True, the new equivalent time (teq2) is equal to the original teq_yr1 plus the increment defined in teq. Otherwise, teq2 is equal to teq plus 2 years.

Returns

The fraction of overall population that have adopted the technology

Return type

numpy.ndarray

Note

  1. This does not specify the actual new adoption fraction without knowing adoption in the previous period.

  2. The relative economic attractiveness controls the p, q value in the Bass diffusion model.

  3. The current assumption is that only payback and MBS are being used, that pp is bounded [0-30] and MBS is bounded [0-120].

python.diffusion_functions_elec.calc_diffusion_market_share(df, is_first_year)[source]

Calculate the fraction of overall population that have adopted (diffused into the max market share) the technology in the current period. Note that this does not specify the actual new adoption fraction without knowing adoption in the previous period.

Parameters

df (pandas.DataFrame) –

df.p

Bass diffusion parameter defining the coefficient of innovation.

Type

numpy.ndarray

df.q

Bass diffusion parameter defining the coefficient of imitation.

Type

numpy.ndarray

df.t

Number of years since the diffusion model began.

Type

numpy.ndarray

Returns

Input dataframe with new_adopt_fraction column added. new_adopt_fraction represents the proportion of the overall population that will adopt the technology.

Return type

DataFrame

Note

This is different from the fraction of population that will adopt, which is the max market share.

python.diffusion_functions_elec.calc_diffusion_solar(df, is_first_year, bass_params, year, override_p_value=None, override_q_value=None, override_teq_yr1_value=None)[source]

Calculates the market share (ms) added in the solve year. Market share must be less than the max market share (mms), except when the initial ms is greater than the calculated mms; in that case, no diffusion is allowed until mms > ms. Also, ms is not allowed to decrease if economics deteriorate. Using the calculated market share, relevant quantities are updated.

Parameters
  • df (pandas.DataFrame) – Input dataframe.

  • is_first_year (bool) – Passed to diffusion_functions.calc_diffusion_market_share() to determine the increment of teq

  • bass_params (pandas.DataFrame) – DataFrame generally derived from settings.get_bass_params(), includes the following attributes: control_reg_id, country_abbr, sector_abbr, state_id, p, q, teq_yr1, tech.

  • override_p_value ('float', optional) – Value with which to override the Bass diffusion p coefficient of innovation.

  • override_q_value ('float', optional) – Value with which to override the Bass diffusion q coefficient of imitation.

  • override_teq_yr1_value (float, optional) – Value to override bass diffusion teq_yr1 value representing the number of years since diffusion began for the first year of observation.

Returns

Dataframe contains market_last_year column to inform diffusion in next year.

Return type

pandas.DataFrame

python.diffusion_functions_elec.calc_equiv_time(df)[source]

Calculate the “equivalent time” on the diffusion curve. This defines the gradient of adoption.

Parameters

df (pandas.DataFrame) –

df.msly

Market share last year [at end of the previous solve] as decimal

Type

numpy.ndarray

df.mms

Maximum market share as a decimal percentage.

Type

numpy.ndarray

df.p

Bass diffusion parameter defining the coefficient of innovation.

Type

numpy.ndarray

df.q

Bass diffusion parameter defining the coefficient of imitation.

Type

numpy.ndarray

Returns

Input dataframe with teq column added. teq is the equivalent number of years after diffusion started on the diffusion curve

Return type

pandas.DataFrame
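Assuming the Bass curve F(t) = (1 - e^{-(p+q)t}) / (1 + (q/p) e^{-(p+q)t}), the "equivalent time" is the inversion of that curve at the achieved fraction f = msly / mms, i.e. teq = -ln((1 - f) / (1 + f q/p)) / (p + q). A sketch under that assumption:

```python
import numpy as np

def calc_equiv_time(msly, mms, p, q):
    """Invert the Bass curve: years of diffusion needed to reach f = msly/mms."""
    f = np.where(mms > 0, msly / mms, 0.0)  # achieved fraction of max share
    return -np.log((1 - f) / (1 + f * (q / p))) / (p + q)
```

Round-tripping through the forward Bass formula recovers the original t, which is how teq positions each agent on the curve before the next solve steps it forward.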

python.diffusion_functions_elec.set_bass_param(df, bass_params, override_p_value, override_q_value, override_teq_yr1_value)[source]

Set the p & q parameters which define the Bass diffusion curve. p is the coefficient of innovation, external influence or advertising effect. q is the coefficient of imitation, internal influence or word-of-mouth effect.

IN: scaled_metric_value – numpy array – scaled value of economic attractiveness [0-1]

OUT: p, q – numpy arrays – Bass diffusion parameters

python.financial_functions module

python.financial_functions.calc_financial_performance(dataframe)[source]

Function to calculate the payback period and join it on the agent dataframe

Parameters

dataframe ("pd.df") – Agent dataframe

Returns

dataframe – Agent dataframe with the payback period joined on

Return type

‘pd.df’

python.financial_functions.calc_max_market_share(dataframe, max_market_share_df)[source]

Calculates the maximum market share available for each agent.

Parameters

dataframe (pandas.DataFrame) – Input dataframe with the following attribute:

metric_value
Type

float

Parameters

max_market_share_df (pandas.DataFrame) – Set by settings.ScenarioSettings.get_max_marketshare().

Returns

Input DataFrame with max_market_share and metric columns joined on.

Return type

pandas.DataFrame

python.financial_functions.calc_payback_vectorized(cfs, tech_lifetime)[source]

Calculate the payback period in years for a given cash flow. Payback is defined as the first year where cumulative cash flows are positive. Cash flows that do not result in payback are given a period of 30.1

Parameters
  • cfs ("numpy.ndarray") – Annual cash flows of investment, where 0th index refers to 0th year of investment

  • tech_lifetime ("numpy.ndarray") – Number of years to assume for technology lifetime

Returns

pp_final – Payback period in years

Return type

‘numpy.ndarray’
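The rule above (payback = first year cumulative cash flows turn positive, sentinel 30.1 when they never do) can be sketched with a cumulative sum; this simplified version returns whole years, whereas the actual implementation may interpolate a fractional year within the payback year:

```python
import numpy as np

def calc_payback_vectorized(cfs, tech_lifetime=30):
    """Payback = first year cumulative cash flow turns positive; 30.1 if never."""
    cum = np.cumsum(cfs, axis=-1)          # running total of cash flows
    paid = cum > 0
    years = np.argmax(paid, axis=-1).astype(float)  # index of first True
    never = ~paid.any(axis=-1)             # rows that never reach payback
    return np.where(never, 30.1, years)
```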

python.financial_functions.calc_system_performance(kw, pv, utilityrate, loan, batt, costs, agent, rate_switch_table, en_batt=True, batt_simple_dispatch=0)[source]

Executes Battwatts, Utilityrate5, and Cashloan PySAM modules with system sizes (kw) as input

Parameters
  • kw ('float') – Capacity (in kW)

  • pv ('dict') – Dictionary with generation_hourly and consumption_hourly

  • utilityrate ('PySAM.Utilityrate5') – PySAM Utilityrate5 module

  • loan ('PySAM.Cashloan') – PySAM Cashloan module

  • batt ('PySAM.Battwatts') – PySAM Battwatts module

  • costs ('dict') – Dictionary with system costs

  • agent ('pd.Series') – Series with agent attributes

  • rate_switch_table ('pd.DataFrame') – Details on how rates will switch with DG/storage adoption

  • en_batt ('bool') – Enable battery

  • batt_simple_dispatch ('int') – batt.Battery.batt_simple_dispatch

    • batt_simple_dispatch = 0 (peak shaving look ahead)

    • batt_simple_dispatch = 1 (peak shaving look behind)

Returns

-loan.Outputs.npv – The negative net present value of system + storage, to be minimized for system sizing

Return type

‘float’

python.financial_functions.calc_system_size_and_performance(agent, sectors, rate_switch_table=None)[source]

Calculate the optimal system and battery size and generation profile, and resulting bill savings and financial metrics.

Parameters

agent ('pd.df') – individual agent object.

Returns

agent – Adds several features to the agent dataframe:

  • agent_id

  • system_kw - system capacity selected by agent

  • batt_kw - battery capacity selected by agent

  • batt_kwh - battery energy capacity

  • npv - net present value of system + storage

  • cash_flow - array of annual cash flows from system adoption

  • batt_dispatch_profile - array of hourly battery dispatch

  • annual_energy_production_kwh - annual energy production (kwh) of system

  • naep - normalized annual energy production (kwh/kW) of system

  • capacity_factor - annual capacity factor

  • first_year_elec_bill_with_system - first year electricity bill with adopted system ($/yr)

  • first_year_elec_bill_savings - first year electricity bill savings with adopted system ($/yr)

  • first_year_elec_bill_savings_frac - fraction of savings on electricity bill in first year of system adoption

  • max_system_kw - maximum system size allowed as constrained by roof size or not exceeding annual consumption

  • first_year_elec_bill_without_system - first year electricity bill without adopted system ($/yr)

  • avg_elec_price_cents_per_kwh - first year electricity price (c/kwh)

  • cbi - ndarray of capacity-based incentives applicable to agent

  • ibi - ndarray of investment-based incentives applicable to agent

  • pbi - ndarray of performance-based incentives applicable to agent

  • cash_incentives - ndarray of cash-based incentives applicable to agent

  • export_tariff_result - summary of structure of retail tariff applied to agent

Return type

‘pd.df’

python.financial_functions.check_incentive_constraints(incentive_data, incentive_value, system_cost)[source]
python.financial_functions.check_minmax(value, min_, max_)[source]
python.financial_functions.eqn_builder(method, incentive_info, info_params, default_params, additional_data)[source]
python.financial_functions.eqn_flat_rate(incentive_info, info_params, default_params, additional_params)[source]
python.financial_functions.eqn_linear_decay_to_zero(incentive_info, info_params, default_params, additional_params)[source]
python.financial_functions.get_expiration(end_date, current_year, timesteps_per_year)[source]
python.financial_functions.process_incentives(loan, kw, batt_kw, batt_kwh, generation_hourly, agent)[source]
python.financial_functions.process_tariff(utilityrate, tariff_dict, net_billing_sell_rate)[source]

Instantiate the utilityrate5 PySAM model and process the agent’s rate json object to conform with PySAM input formatting.

Parameters
  • utilityrate ('PySAM.Utilityrate5') – Utilityrate5 module instance

  • tariff_dict ('dict') – The agent’s rate json object

  • net_billing_sell_rate ('float') – Sell rate applied to exports under net billing

Returns

utilityrate

Return type

‘PySAM.Utilityrate5’

python.settings module

class python.settings.ModelSettings[source]

Bases: object

Class containing the model settings parameters

model_init
Type

float

cdate
Type

str

out_dir
Type

str

input_agent_dir
Type

str

input_data_dir
Type

str

start_year
Type

int

input_scenarios
Type

list

pg_params_file
Type

str

role
Type

str

pg_params
Type

dict

pg_engine_params
Type

dict

pg_conn_string
Type

str

pg_engine_string
Type

str

pg_params_log
Type

str

model_path
Type

bool

local_cores
Type

int

pg_procs
Type

int

delete_output_schema
Type

bool

dynamic_system_sizing
Type

bool

add_config(config)[source]
get(attr)[source]
set(attr, value)[source]
set_pg_params(pg_params_file)[source]
validate()[source]
validate_property(property_name)[source]
class python.settings.ScenarioSettings[source]

Bases: object

Storage of all scenario specific inputs

add_scenario_options(scenario_options)[source]
get(attr)[source]
set(attr, value)[source]
set_tech_mode()[source]
validate()[source]
validate_property(property_name)[source]
python.settings.check_type(obj, expected_type)[source]
python.settings.init_model_settings()[source]

Initialize the ModelSettings object; this controls settings that apply to all scenarios to be executed.

python.settings.init_scenario_settings(scenario_file, model_settings, con, cur)[source]

Load scenario-specific data and configure output settings.

python.tariff_functions module

Deprecated. Nullified by new PySAM code and will be taken out in Beta release.

class python.tariff_functions.Export_Tariff(full_retail_nem=True, prices=array([[0.0]]), levels=array([[0.0]]), periods_8760=array([0, 0, 0, ..., 0, 0, 0]), period_tou_n=1)[source]

Bases: object

Structure of compensation for exported generation. Currently only two styles: full-retail NEM, and instantaneous TOU energy value.

set_constant_sell_price(price)[source]
class python.tariff_functions.Tariff(start_day=6, urdb_id=None, json_file_name=None, dict_obj=None, api_key=None)[source]

Bases: object

Tariff Attributes:

  • urdb_id: id for the utility rate database (US, not international)

  • eia_id: The EIA-assigned ID number for the utility associated with this tariff

  • name: tariff name

  • utility: Name of the utility this tariff is associated with

  • fixed_charge: Fixed monthly charge in $/mo

  • peak_kW_capacity_max: The annual maximum kW of demand that a customer can have and still be on this tariff

  • peak_kW_capacity_min: The annual minimum kW of demand that a customer can have and still be on this tariff

  • kWh_useage_max: The maximum kWh of average monthly consumption that a customer can have and still be on this tariff

  • kWh_useage_min: The minimum kWh of average monthly consumption that a customer can have and still be on this tariff

  • sector: residential, commercial, or industrial

  • comments: comments from the URDB

  • description: tariff description from the URDB

  • source: URI for the source of the tariff

  • uri: link to the URDB page

  • voltage_category: secondary, primary, transmission

  • d_flat_exists: Boolean of whether there is a flat (not TOU) demand charge component. Flat demand is also called monthly or seasonal demand.

  • d_flat_n: Number of unique flat demand period constructions. Does NOT correspond to the width of the d_flat_x constructs.

  • d_flat_prices: The prices of each tier/period combination for flat demand. Rows are tiers, columns are months. Differs from TOU, where columns are periods.

  • d_flat_levels: The limit (total kW) of each tier/period combination for flat demand. Rows are tiers, columns are months. Differs from TOU, where columns are periods.

  • d_tou_exists: Boolean of whether there is a TOU (not flat) demand charge component

  • d_tou_n: Number of unique TOU demand periods. Minimum of 1, since no-charge periods still count as a period.

  • d_tou_prices: The prices of each tier/period combination for TOU demand. Rows are tiers, columns are periods.

  • d_tou_levels: The limit (total kW) of each tier/period combination for TOU demand. Rows are tiers, columns are periods.

  • e_exists: Boolean of whether there is an energy charge component

  • e_tou_exists: Boolean of whether there is a TOU energy charge component

  • e_n: Number of unique energy periods. Minimum of 1, since no-charge periods still count as a period.

  • e_prices: The prices of each tier/period combination for energy. Rows are tiers, columns are periods.

  • e_levels: The limit (total kWh) of each tier/period combination for energy. Rows are tiers, columns are periods.

  • e_wkday_12by24: 12-by-24 period definition for weekday energy. Rows are months, columns are hours.

  • e_wkend_12by24: 12-by-24 period definition for weekend energy. Rows are months, columns are hours.

  • d_wkday_12by24: 12-by-24 period definition for weekday demand. Rows are months, columns are hours.

  • d_wkend_12by24: 12-by-24 period definition for weekend demand. Rows are months, columns are hours.

  • d_tou_8760

  • e_tou_8760

  • e_prices_no_tier

  • e_max_difference: The maximum energy price differential within any single day

  • energy_rate_unit: kWh or kWh/day – for guiding the bill calculations later

  • demand_rate_unit: kW or kW/day – for guiding the bill calculations later

define_d_flat(d_flat_levels, d_flat_prices)[source]
define_d_tou(d_wkday_12by24, d_wkend_12by24, d_tou_levels, d_tou_prices)[source]
define_e(e_wkday_12by24, e_wkend_12by24, e_levels, e_prices)[source]
identify_max_demand_charge()[source]
write_json(json_file_name)[source]
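
The 12-by-24 schedules consumed by define_e() and define_d_tou() above can be sketched in plain Python. This is an illustrative sketch, not dGen code: make_12by24 and its arguments are hypothetical names, and the real model likely passes numpy arrays rather than nested lists.

```python
# Illustrative sketch (not dGen code): build a 12-by-24 period schedule of
# the kind passed to define_e() / define_d_tou(). Rows are months, columns
# are hours; each cell holds a period index (0 = off-peak here).

def make_12by24(peak_hours, peak_period=1, off_period=0):
    """Mark the given hours of every month as the peak period."""
    schedule = []
    for month in range(12):
        row = [peak_period if hour in peak_hours else off_period
               for hour in range(24)]
        schedule.append(row)
    return schedule

# Weekday schedule with a 2pm-7pm peak; weekends are all off-peak.
wkday_12by24 = make_12by24(peak_hours=set(range(14, 19)))
wkend_12by24 = make_12by24(peak_hours=set())
```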
python.tariff_functions.bill_calculator(load_profile, tariff, export_tariff)[source]

Deprecated. Superseded by the new PySAM code, but kept for reference.

Parameters
  • load_profile ('numpy.ndarray') – hourly load profile of the customer.

  • tariff ('Tariff') – instance of the Tariff class.

  • export_tariff – tariff governing compensation for exported energy.
python.tariff_functions.build_8760_from_12by24s(wkday_12by24, wkend_12by24, start_day=6)[source]

Construct a long-format (8760) hourly array from a weekday and a weekend 12by24.

Parameters
  • wkday_12by24 (numpy.ndarray) –

  • wkend_12by24 (numpy.ndarray) –

  • start_day (int) – Start day of 6 (default) equates to a Sunday.

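The expansion from two 12-by-24 schedules to a single 8760-hour vector can be sketched as follows. This is an illustrative sketch of the idea, not the dGen implementation (which operates on numpy arrays); build_8760 and DAYS_IN_MONTH are names introduced here.

```python
# Sketch: expand weekday/weekend 12-by-24 schedules into an 8760 vector.
# start_day=6 means January 1st falls on a Sunday, matching the documented
# default (0 = Monday ... 6 = Sunday; days 5-6 use the weekend schedule).

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def build_8760(wkday_12by24, wkend_12by24, start_day=6):
    hours = []
    day_of_week = start_day
    for month, n_days in enumerate(DAYS_IN_MONTH):
        for _ in range(n_days):
            source = wkend_12by24 if day_of_week >= 5 else wkday_12by24
            hours.extend(source[month])            # 24 hours for this day
            day_of_week = (day_of_week + 1) % 7
    return hours

# All-ones weekdays and all-zeros weekends make the expansion easy to check.
wkday = [[1] * 24 for _ in range(12)]
wkend = [[0] * 24 for _ in range(12)]
profile = build_8760(wkday, wkend)
```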
python.tariff_functions.design_tariff_for_portfolio(agent_df, avg_rev, peak_hour_indicies, summer_month_indicies, rev_f_d, rev_f_e, rev_f_fixed)[source]

Builds a tariff that would extract a given $/kWh from a portfolio of customers.

Parameters
  • agent_df ('pd.DataFrame') – agents as loaded from the agent pkl file. Must include agent_df.load_profile ('numpy.ndarray') and agent_df.weight ('numpy.ndarray').

  • avg_rev ('float') – $/kWh that the tariff would extract from the given portfolio of customers.

  • peak_hour_indicies ('list') – list of indices corresponding to the peak demand hours. Assumes peak hours are the same between demand and energy.

  • summer_month_indicies ('list') – list of indices corresponding to the summer months. Assumes peak hours only occur during the summer.

  • rev_f_d ('list') – revenue structure for demand charges. Format is [fraction of total revenue, fraction that comes from tou charges, fraction that comes from flat charges] ex: [0.4875, 0.5, 0.5].

  • rev_f_e ('list') – revenue structure for energy charges. Format is [fraction of total revenue, fraction that comes from off-peak hours, fraction that comes from on-peak hours] ex: [0.4875, 0.20, 0.8].

  • rev_f_fixed ('list') – [fraction of revenue from fixed monthly charges]. ex: [0.025].

Note

  1. Peak hours are the same between demand and energy.

  2. Peak hours only occur during the summer.

Returns

tariff – an object instantiated from the Tariff class.

Return type

‘Tariff’

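The example revenue structures above are chosen so that the three top-level shares partition total revenue. A quick arithmetic check of those documented example values:

```python
# Check that the documented example fractions partition total revenue:
# the first element of each structure is that charge type's revenue share.
rev_f_d = [0.4875, 0.5, 0.5]    # demand: [share, tou fraction, flat fraction]
rev_f_e = [0.4875, 0.20, 0.8]   # energy: [share, off-peak fraction, on-peak fraction]
rev_f_fixed = [0.025]           # fixed monthly charges: [share]

total_share = rev_f_d[0] + rev_f_e[0] + rev_f_fixed[0]  # 0.4875 + 0.4875 + 0.025 = 1.0
within_d = rev_f_d[1] + rev_f_d[2]                      # tou + flat fractions = 1.0
within_e = rev_f_e[1] + rev_f_e[2]                      # off-peak + on-peak = 1.0
```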
python.tariff_functions.download_tariffs_from_urdb(api_key, sector=None, utility=None, print_progress=False)[source]

API request for URDB rates. Each user should get their own URDB API key: http://en.openei.org/services/api/signup/ Sectors: Residential, Commercial, Industrial, Lighting

Parameters
  • api_key ('str') – the user's URDB API key.

  • sector ('str', optional) – one of Residential, Commercial, Industrial, or Lighting.

  • utility ('str', optional) – restrict results to a single utility.

  • print_progress ('bool') – whether to print download progress. Default is False.

Returns

Dataframe of URDB rates.

Return type

pandas.DataFrame

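A request URL for such a download can be assembled with the standard library alone. The endpoint and parameter names below are assumptions based on the public OpenEI API signup page linked above, not taken from the dGen source; build_urdb_url is a hypothetical helper.

```python
# Sketch of assembling a URDB request URL. Endpoint and parameter names
# are assumptions for illustration, not the exact dGen implementation.
from urllib.parse import urlencode

def build_urdb_url(api_key, sector=None, utility=None,
                   base="https://api.openei.org/utility_rates"):
    params = {"version": 7, "format": "json", "api_key": api_key}
    if sector is not None:
        params["sector"] = sector   # Residential, Commercial, Industrial, Lighting
    if utility is not None:
        params["utility"] = utility
    return base + "?" + urlencode(params)

url = build_urdb_url("DEMO_KEY", sector="Residential")
```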
python.tariff_functions.filter_tariff_df(tariff_df, keyword_list=None, keyword_list_file=None, demand_units_to_exclude=['hp', 'kVA', 'kW daily', 'hp daily', 'kVA daily'], remove_expired=True)[source]

Filter tariffs based on inclusion (e.g. keywords) or exclusion (e.g. demand units).

Parameters
  • tariff_df ('pandas.DataFrame') – dataframe of URDB tariffs created by download_tariffs_from_urdb().

  • keyword_list ('list of str', optional) – list of keywords to search for in the rate structure.

  • keyword_list_file ('str') – filepath to a .txt file containing keywords to search for.

  • demand_units_to_exclude ('list of str') – exclude rates from the URDB database if their demand units are contained in this list. Default values are hp, kVA, kW daily, hp daily, kVA daily.

  • remove_expired ('bool') – exclude expired rates. Default is True.

python.tariff_functions.load_config_params(config_file_name)[source]

Each user should fill in a config_template.json file.

python.tariff_functions.tiered_calc_vec(values, levels, prices)[source]

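The tier logic behind a function like tiered_calc_vec can be sketched for a single value: each unit of consumption is billed at the price of the tier it falls into, where levels are cumulative upper bounds. This is an illustrative scalar sketch, not the vectorized dGen implementation.

```python
# Sketch of a tiered charge calculation (scalar form). `levels` are the
# cumulative upper bounds of each tier; `prices` are the per-unit rates.

def tiered_calc(value, levels, prices):
    cost, lower = 0.0, 0.0
    for level, price in zip(levels, prices):
        in_tier = max(0.0, min(value, level) - lower)  # units billed in this tier
        cost += in_tier * price
        lower = level
    return cost

# 700 kWh billed at $0.10 for the first 500 kWh and $0.15 thereafter:
# 500 * 0.10 + 200 * 0.15 = $80.
cost = tiered_calc(700, levels=[500, 10_000], prices=[0.10, 0.15])
```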
python.utility_functions module

class python.utility_functions.Timer[source]

Bases: object

python.utility_functions.code_profiler(out_dir)[source]
python.utility_functions.current_datetime(format='%Y_%m_%d_%Hh%Mm%Ss')[source]
python.utility_functions.get_epoch_time()[source]
python.utility_functions.get_formatted_time()[source]
python.utility_functions.get_logger(log_file_path=None)[source]

Creates and configures the logger object used for model logging, optionally writing to the specified log file.

Parameters

log_file_path ('str', optional) – path of the file to write log output to.

Returns

logger – logger object for logging

Return type

‘logging.Logger’

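A minimal sketch of what a get_logger-style helper does with the standard logging module, assuming (as the signature suggests) that a file handler is attached when a path is given and console output is used otherwise. The function and logger names here are illustrative.

```python
# Sketch of a get_logger-style helper built on the stdlib logging module.
import logging
import sys

def get_logger_sketch(log_file_path=None):
    logger = logging.getLogger("dgen_sketch")
    logger.setLevel(logging.DEBUG)
    if log_file_path is not None:
        handler = logging.FileHandler(log_file_path)  # log to a file
    else:
        handler = logging.StreamHandler(sys.stdout)   # log to the console
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```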
python.utility_functions.get_pg_engine_params(json_file)[source]
python.utility_functions.get_pg_params(json_file)[source]

Takes the path to the json file specifying database connection information and returns formatted information.

Parameters

json_file ('str') – The path to the json file specifying database connection information.

Returns

  • pg_params (‘json’) – postgres database connection parameters.

  • pg_conn_str (‘str’) – Formatted connection string

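The shape of such a helper can be sketched with the standard library: read the json file, then interpolate its fields into a connection string. The key names and string format below are assumptions for illustration, not the exact dGen schema.

```python
# Sketch of a get_pg_params-style helper: load connection info from json
# and build a formatted connection string. Key names are assumed.
import json
import os
import tempfile

def get_pg_params_sketch(json_file):
    with open(json_file) as f:
        pg_params = json.load(f)
    pg_conn_str = ("host={host} port={port} dbname={dbname} "
                   "user={user} password={password}").format(**pg_params)
    return pg_params, pg_conn_str

# Example: write a throwaway config file and parse it back.
cfg = {"host": "127.0.0.1", "port": 5432, "dbname": "dgen_db",
       "user": "postgres", "password": "postgres"}
path = os.path.join(tempfile.mkdtemp(), "pg_params.json")
with open(path, "w") as f:
    json.dump(cfg, f)
params, conn_str = get_pg_params_sketch(path)
```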
python.utility_functions.make_con(connection_string, role, async_=False)[source]

Returns the psql connection and cursor objects to be used with functions that query from the database.

Parameters
  • connection_string ('SQL connection') – Connection string. e.g. “postgresql+psycopg2://postgres:postgres@127.0.0.1:5432/dgen_db”.

  • role ('str') – Database role. ‘postgres’ should be the default role name for the open source codebase.

Returns

  • con (‘SQL connection’) – Postgres Database Connection.

  • cur (‘SQL cursor’) – Postgres Database Cursor.

python.utility_functions.make_engine(pg_engine_con)[source]
python.utility_functions.parse_command_args(argv)[source]

Function to parse the command line arguments.

Parameters

argv ('list') – command line arguments:

  • -h – print usage help: ‘dg_model.py -i <Initiate Model?> -y <year>’

  • -i – initiate the model for 2010 and quit

  • -y <year> or --year=<year> – resume the model solve in the passed year

Returns

  • init_model (‘bool’) – Initialize the model.

  • resume_year (‘float’) – The year the model should resume.

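The documented flags map naturally onto the stdlib getopt module. This is a sketch of how such parsing could look, not the dGen source; the function name and the choice of getopt are assumptions.

```python
# Sketch of parsing the documented flags with getopt: -i initializes the
# model, -y / --year= resumes the solve in a given year.
import getopt

def parse_command_args_sketch(argv):
    init_model, resume_year = False, None
    opts, _ = getopt.getopt(argv, "hiy:", ["year="])
    for opt, arg in opts:
        if opt == "-h":
            print("dg_model.py -i <Initiate Model?> -y <year>")
        elif opt == "-i":
            init_model = True
        elif opt in ("-y", "--year"):
            resume_year = float(arg)
    return init_model, resume_year
```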
python.utility_functions.pylist_2_pglist(l)[source]
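
Judging by its name, pylist_2_pglist converts a Python list into a comma-separated string suitable for interpolation into SQL. One plausible one-line implementation, offered only as an illustrative sketch:

```python
# One plausible pylist_2_pglist-style helper (illustrative sketch, not the
# dGen source): strip the brackets from the list's repr.
def pylist_2_pglist_sketch(l):
    return str(l)[1:-1]  # "[2014, 2016, 2018]" -> "2014, 2016, 2018"

years_sql = pylist_2_pglist_sketch([2014, 2016, 2018])
```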
python.utility_functions.shutdown_log(logger)[source]
python.utility_functions.wait(conn)[source]

Module contents