Build

SLiDE._map_scheme (Method)
_map_scheme(df, dfmap, on)

Internal support for SLiDE.map_scheme to avoid confusion over DataFrame inputs.

Arguments

  • df::DataFrame of column(s) to scale
  • dfmap::DataFrame of mapping columns
  • on::Symbol or on::Array{Symbol,1}: columns in df that will be mapped

Returns

  • from::Symbol or from::Array{Symbol,1}: dfmap columns that overlap with df
  • to::Symbol or to::Array{Symbol,1}: dfmap columns that do not overlap with df
  • on::Symbol or on::Array{Symbol,1}: columns in df that will be mapped
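The overlap logic described above can be sketched as follows; the helper name is hypothetical, and this is an assumed illustration, not SLiDE's implementation:

```julia
using DataFrames

# Hypothetical sketch: from = dfmap columns shared with df; to = the rest.
function map_scheme_sketch(df::DataFrame, dfmap::DataFrame)
    from = intersect(propertynames(dfmap), propertynames(df))
    to = setdiff(propertynames(dfmap), from)
    return from, to
end

df = DataFrame(yr=[2012], s=["min"], value=[1.0])
dfmap = DataFrame(s=["min"], aggr=["eint"])
map_scheme_sketch(df, dfmap)    # ([:s], [:aggr])
```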
SLiDE.compound_for! (Method)
compound_for!(x::T, lst::AbstractArray) where T <: Scale
compound_for!(x::T, lst::AbstractArray, df::DataFrame) where T <: Scale

This function compounds the information in Scale for parameters scaled over multiple indices simultaneously. This is relevant for parameters such as sectoral output, $ys_{yr,r,ss,gg}$, and intermediate demand, $id_{yr,r,gg,ss}$, that depend on both goods and sectors.

Arguments

  • x::T where T <: Scale, scaling information over one index (ex: aa -> a), with x.on set to the target scaling indices (ex: x.on = [:s,:g] when compounding to scale ys_{yr,r,s,g})
  • lst::AbstractArray, the complete list of disaggregate-level values in the scaling DataFrame.
  • df::DataFrame, DataFrame that will ultimately be scaled. If given, x.data will be extended using SLiDE.map_year, to ensure that it is fully defined over all years. This is required, for example, when using detail-level BEA data (collected every 5 years) to disaggregate summary-level data (collected annually).

Returns

  • x::T where T <: Scale, $\delta_{c,aa \rightarrow a, bb \rightarrow b} = \delta_{c,aa \rightarrow a} \cdot \delta_{c, bb \rightarrow b}$ where $c$ (x.constant) represents the index/ices included in, but not changed by, the scaling process, and $aa$,$bb$ (x.from) and $a$,$b$ (x.to) represent the value(s) of the scaled index/ices before and after scaling.

    x.data does not include (a,b) combinations that result in one-to-one mapping.

The specifics of this calculation depend on the Scale subtype input argument.

compound_for!(x::Mapping, lst::AbstractArray)

Here, all ($aa\rightarrow a$,$bb\rightarrow b$) pairs that do not result in one-to-one mapping are included.

compound_for!(x::Weighting, lst::AbstractArray)

Here, assume x.direction = disaggregate, since aggregation does not require multiplication by a weighting factor. Consider the case of sharing across both goods and sectors at once: $g$, $s$ represent disaggregate-level goods and sectors, and $gg$, $ss$ represent aggregate-level goods and sectors.

This function generates a DataFrame with these sharing parameters through the following process:

  1. Multiply shares for all ($gg\rightarrow g$,$ss\rightarrow s$) combinations.
  2. Address the case of when aggregate-level goods and sectors are the same ($gg=ss$):
    • If $g = s$, sum all of the share values.
    • If $g\neq s$, drop these values.

Example

These two examples are taken from slices of the Weighting and Mapping DataTypes compounded to scale sectoral supply, ys0(yr,r,s,g), when scaling the model parameters during the first step of the EEM build stream, executed by SLiDE.scale_sector.

First, summary-level parameters must be disaggregated to a hybrid of summary- and detail-level data.

julia> lst = ["col_min", "ele_uti", "min", "oil", "uti"];

julia> df = read_file(joinpath(SLIDE_DIR,"docs","src","assets","data","compound_for-weighting.csv"))
4×4 DataFrame
│ Row │ yr    │ summary │ detail  │ value    │
│     │ Int64 │ String  │ String  │ Float64  │
├─────┼───────┼─────────┼─────────┼──────────┤
│ 1   │ 2012  │ min     │ col_min │ 0.419384 │
│ 2   │ 2012  │ min     │ min     │ 0.580616 │
│ 3   │ 2012  │ uti     │ ele_uti │ 0.715143 │
│ 4   │ 2012  │ uti     │ uti     │ 0.284857 │

julia> weighting = Weighting(data=df, constant=[:yr], from=:summary, to=:detail, on=[:s,:g], direction=:disaggregate);

julia> SLiDE.compound_for!(weighting, lst)
Weighting(20×6 DataFrame
│ Row │ yr     │ summary_s │ summary_g │ detail_s │ detail_g │ value    │
│     │ Int64? │ String?   │ String?   │ String?  │ String?  │ Float64  │
├─────┼────────┼───────────┼───────────┼──────────┼──────────┼──────────┤
│ 1   │ 2012   │ min       │ min       │ col_min  │ col_min  │ 0.419384 │
│ 2   │ 2012   │ min       │ min       │ min      │ min      │ 0.580616 │
│ 3   │ 2012   │ min       │ oil       │ col_min  │ oil      │ 0.419384 │
│ 4   │ 2012   │ min       │ oil       │ min      │ oil      │ 0.580616 │
│ 5   │ 2012   │ min       │ uti       │ col_min  │ ele_uti  │ 0.29992  │
│ 6   │ 2012   │ min       │ uti       │ col_min  │ uti      │ 0.119465 │
│ 7   │ 2012   │ min       │ uti       │ min      │ ele_uti  │ 0.415223 │
⋮
│ 13  │ 2012   │ uti       │ min       │ ele_uti  │ col_min  │ 0.29992  │
│ 14  │ 2012   │ uti       │ min       │ ele_uti  │ min      │ 0.415223 │
│ 15  │ 2012   │ uti       │ min       │ uti      │ col_min  │ 0.119465 │
│ 16  │ 2012   │ uti       │ min       │ uti      │ min      │ 0.165393 │
│ 17  │ 2012   │ uti       │ oil       │ ele_uti  │ oil      │ 0.715143 │
│ 18  │ 2012   │ uti       │ oil       │ uti      │ oil      │ 0.284857 │
│ 19  │ 2012   │ uti       │ uti       │ ele_uti  │ ele_uti  │ 0.715143 │
│ 20  │ 2012   │ uti       │ uti       │ uti      │ uti      │ 0.284857 │, [:yr], [:summary_s, :summary_g], [:detail_s, :detail_g], [:s, :g], :disaggregate)
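As a check on step 1 above, each compounded share is the product of its two one-index shares. For example, row 5 pairs (min→col_min) with (uti→ele_uti):

```julia
# Row 5: share(min → col_min) × share(uti → ele_uti)
0.419384 * 0.715143    # ≈ 0.29992
```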

Next, these hybrid-level parameters must be aggregated in accordance with the scheme required for the EEM.

julia> df = read_file(joinpath(SLIDE_DIR,"docs","src","assets","data","compound_for-mapping.csv"))
4×2 DataFrame
│ Row │ aggr   │ disagg  │
│     │ String │ String  │
├─────┼────────┼─────────┤
│ 1   │ col    │ col_min │
│ 2   │ eint   │ min     │
│ 3   │ eint   │ uti     │
│ 4   │ ele    │ ele_uti │

julia> mapping = Mapping(data=df, from=:disagg, to=:aggr, on=[:s,:g], direction=:aggregate);

julia> SLiDE.compound_for!(mapping, lst)
Mapping(24×4 DataFrame
│ Row │ disagg_s │ disagg_g │ aggr_s │ aggr_g │
│     │ String   │ String   │ String │ String │
├─────┼──────────┼──────────┼────────┼────────┤
│ 1   │ col_min  │ col_min  │ col    │ col    │
│ 2   │ col_min  │ ele_uti  │ col    │ ele    │
│ 3   │ col_min  │ min      │ col    │ eint   │
│ 4   │ col_min  │ oil      │ col    │ oil    │
│ 5   │ col_min  │ uti      │ col    │ eint   │
│ 6   │ ele_uti  │ col_min  │ ele    │ col    │
│ 7   │ ele_uti  │ ele_uti  │ ele    │ ele    │
⋮
│ 17  │ oil      │ ele_uti  │ oil    │ ele    │
│ 18  │ oil      │ min      │ oil    │ eint   │
│ 19  │ oil      │ uti      │ oil    │ eint   │
│ 20  │ uti      │ col_min  │ eint   │ col    │
│ 21  │ uti      │ ele_uti  │ eint   │ ele    │
│ 22  │ uti      │ min      │ eint   │ eint   │
│ 23  │ uti      │ oil      │ eint   │ oil    │
│ 24  │ uti      │ uti      │ eint   │ eint   │, [:disagg_s, :disagg_g], [:aggr_s, :aggr_g], [:s, :g], :aggregate)
SLiDE.compound_for (Method)
compound_for(x::T, lst::AbstractArray, df::DataFrame)
compound_for(x::T, lst::AbstractArray)
SLiDE.filter_for! (Method)
filter_for!(weighting::Weighting, lst::AbstractArray)
filter_for!(mapping::Mapping, weighting::Weighting)

filter_for!(mapping::Mapping, weighting::Weighting, lst::AbstractArray)
filter_for!(weighting::Weighting, mapping::Mapping, lst::AbstractArray)
SLiDE.find_sector (Method)

Arguments

  • idx::AbstractArray: list of columns that might contain good/sector indices, OR
  • df::DataFrame: DataFrame whose columns to search for goods/sectors

Returns

  • idx::Array{Symbol,1}: input columns that overlap with [:g,:s] in the order in which they're given
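The order-preserving overlap can be sketched with Base.intersect (an assumed equivalent, not necessarily SLiDE's implementation):

```julia
# intersect preserves the order of its first argument,
# so the result follows the column order of idx.
idx = [:yr, :r, :s, :g]
intersect(idx, [:g, :s])    # [:s, :g]
```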
SLiDE.has_scheme (Method)

This function returns true if all scaling parameters have been set to a defined parameter.

SLiDE.list_unique (Method)
list_unique(df::DataFrame)
list_unique(df::DataFrame, idx::AbstractArray)
list_unique(df::DataFrame, idx::Symbol)

This function returns a list of all unique elements across multiple DataFrame columns.
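A minimal sketch of this behavior, using assumed example columns:

```julia
using DataFrames

# Stack the columns of interest, then deduplicate across all of them.
df = DataFrame(s=["min", "uti"], g=["uti", "oil"])
unique(vcat(df.s, df.g))    # ["min", "uti", "oil"]
```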

SLiDE.map_direction (Method)
map_direction(df::DataFrame)
map_direction(x::T) where T <: Scale

This function returns a Tuple of DataFrame columns in the order (aggregate, disaggregate). This is determined from the number of unique entries in each column under the assumption that the aggregate-level will have fewer unique entries.

df = DataFrame(s="cng", src=["cru","gas"])
SLiDE.map_direction(df)

# output

(:s, :src)
SLiDE.map_identity (Method)
map_identity(x::T, lst::AbstractArray) where T<:Scale

This function adds one-to-one mapping to the data field in Mapping or Weighting so that the entirety of lst is included in the mapping.

Returns

  • df::DataFrame with lst completely mapped.
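A sketch of the identity-padding step, with assumed column names (not SLiDE's implementation):

```julia
using DataFrames

# Pad the mapping so every element of lst appears in the :from column.
dfmap = DataFrame(from=["col_min"], to=["col"])
lst = ["col_min", "oil", "uti"]
unmapped = setdiff(lst, dfmap.from)                    # elements not yet mapped
append!(dfmap, DataFrame(from=unmapped, to=unmapped))  # map each to itself
```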
SLiDE.map_scheme (Method)
map_scheme(x::T...)
map_scheme(x::T, df::DataFrame)

This function sets the direction field for Mapping and Weighting types based on overlap between input parameters.

SLiDE.scale_with (Method)
scale_with(df::DataFrame, x::Weighting)

This function maps df: x.from -> x.to, multiplying by any associated share specified in x.data. For a parameter $\bar{z}$,

\[\begin{aligned} \bar{z}_{c,a} = \sum_{aa} \left( \bar{z}_{c,aa} \cdot \tilde{\delta}_{c,aa \rightarrow a} \right) \end{aligned}\]

where $c$ (x.constant) represents the index/ices included in, but not changed by, the scaling process, and $aa$ (x.from) and $a$ (x.to) represent the value(s) of the scaled index/ices before and after scaling.
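The weighted disaggregation above can be sketched as a join-and-multiply, with assumed column names (a sketch, not SLiDE's implementation):

```julia
using DataFrames

df = DataFrame(yr=[2012], summary=["min"], value=[10.0])
share = DataFrame(yr=[2012, 2012], summary=["min", "min"],
                  detail=["col_min", "min"], share=[0.419384, 0.580616])

# Join on the constant (:yr) and :from (:summary) indices,
# then multiply each value by its share.
out = innerjoin(df, share, on=[:yr, :summary])
out.value .*= out.share
select!(out, :yr, :detail, :value)    # shares sum to one, so value sums back to 10.0
```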

scale_with(df::DataFrame, x::Mapping)

This function scales a parameter in df according to the map in x.data. For a parameter $\bar{z}$,

\[\bar{z}_{c,a} = \left(\bar{z}_{c,aa} \circ map_{aa\rightarrow a} \right)\]

where $c$ (x.constant) represents the index/ices included in, but not changed by, the scaling process, and $aa$ (x.from) and $a$ (x.to) represent the value(s) of the scaled index/ices before and after scaling.

For each method, if x.direction = disaggregate, all disaggregate-level entries will remain equal to their aggregate-level value. If x.direction = aggregate,

\[\bar{z}_{c,a} = \sum_{aa} \bar{z}_{c,aa}\]
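Aggregation can be sketched as a map-then-sum, with assumed column names (a sketch, not SLiDE's implementation):

```julia
using DataFrames

df = DataFrame(yr=[2012, 2012], disagg=["min", "uti"], value=[1.0, 2.0])
dfmap = DataFrame(disagg=["min", "uti"], aggr=["eint", "eint"])

# Map each disaggregate index to its aggregate, then sum within groups.
out = innerjoin(df, dfmap, on=:disagg)
out = combine(groupby(out, [:yr, :aggr]), :value => sum => :value)
```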

SLiDE.set_scheme! (Method)
set_scheme!(mapping::Mapping)

Define Mapping and/or Weighting fields from and to if direction is already defined.

set_scheme!(mapping::Mapping, weighting::Weighting)
set_scheme!(weighting::Weighting, mapping::Mapping)

Defines Mapping and/or Weighting fields from and to.
