phygnn.layers.handlers.Layers
- class Layers(n_features, n_labels=1, hidden_layers=None, input_layer=None, output_layer=None)[source]
Bases:
HiddenLayers
Class to handle TensorFlow layers
- Parameters:
n_features (int) – Number of features (inputs) to train the model on
n_labels (int, optional) – Number of labels (outputs) to the model, by default 1
hidden_layers (list | None, optional) – List of dictionaries of key word arguments for each hidden layer in the NN. Dense linear layers can be input with their activations or separately for more explicit control over the layer ordering. For example, this is a valid input for hidden_layers that will yield 8 hidden layers (10 layers including input+output):
- [{'units': 64, 'activation': 'relu', 'dropout': 0.01},
{'units': 64}, {'batch_normalization': {'axis': -1}}, {'activation': 'relu'}, {'dropout': 0.01}, {'class': 'Flatten'}]
by default None which will lead to a single linear layer
input_layer (None | bool | dict) – Input layer specification. Can be a dictionary similar to hidden_layers specifying a dense / conv / lstm layer. Will default to a keras InputLayer with input shape = n_features. Can be False if the input layer will be included in the hidden_layers input.
output_layer (None | bool | list | dict) – Output layer specification. Can be a list/dict similar to the hidden_layers input specifying a dense layer with activation. For example, for a classification problem with a single output, output_layer should be [{'units': 1}, {'activation': 'sigmoid'}]. This defaults to a single dense layer with no activation (best for regression problems). Can be False if the output layer will be included in the hidden_layers input.
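The hidden_layers example above can be checked with a short sketch (illustrative only, not phygnn code): assuming each recognized key in an entry expands to exactly one keras layer, the six entries yield the eight hidden layers stated in the docstring.

```python
# Illustrative sketch (not phygnn code): count how many keras layers a
# hidden_layers spec expands to, assuming each recognized key in an entry
# maps to exactly one layer, as described above.
LAYER_KEYS = ('units', 'activation', 'dropout', 'batch_normalization', 'class')

def count_expanded_layers(hidden_layers):
    """Count the keras layers a hidden_layers spec expands to (sketch)."""
    return sum(sum(key in LAYER_KEYS for key in entry)
               for entry in hidden_layers)

spec = [
    {'units': 64, 'activation': 'relu', 'dropout': 0.01},  # 3 layers
    {'units': 64},                                         # 1 layer
    {'batch_normalization': {'axis': -1}},                 # 1 layer
    {'activation': 'relu'},                                # 1 layer
    {'dropout': 0.01},                                     # 1 layer
    {'class': 'Flatten'},                                  # 1 layer
]
assert count_expanded_layers(spec) == 8  # 10 total with input + output
```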
Methods
add_layer
(layer_kwargs)Add a hidden layer to the DNN.
add_layer_by_class
(class_name, **kwargs)Add a new layer by the class name, either from phygnn.layers.custom_layers or tf.keras.layers
add_skip_layer
(name)Add a skip layer, looking for a prior skip connection start point if already in the layer list.
compile
(model, n_features[, n_labels, ...])Build all layers needed for model
parse_repeats
(hidden_layers)Parse repeat layers.
Attributes
bias_weights
Get a list of the NN bias weights (tensors)
hidden_layer_kwargs
List of dictionaries of key word arguments for each hidden layer in the NN.
input_layer_kwargs
Dictionary of key word arguments for the input layer.
kernel_weights
Get a list of the NN kernel weights (tensors)
layers
TensorFlow keras layers
output_layer_kwargs
Dictionary of key word arguments for the output layer.
skip_layers
Get a dictionary of unique SkipConnection objects in the layers list keyed by SkipConnection name.
weights
Get a list of layer weights for gradient calculations.
- property input_layer_kwargs
Dictionary of key word arguments for the input layer. This is a copy of the input_layer input arg that can be used to reconstruct the network.
- Returns:
dict
- add_layer(layer_kwargs)
Add a hidden layer to the DNN.
- Parameters:
layer_kwargs (dict) – Dictionary of key word arguments for the layer. For example, any of the following are valid inputs:
{'units': 64, 'activation': 'relu', 'dropout': 0.05}
{'units': 64, 'name': 'relu1'}
{'activation': 'relu'}
{'batch_normalization': {'axis': -1}}
{'dropout': 0.1}
- add_layer_by_class(class_name, **kwargs)
Add a new layer by the class name, either from phygnn.layers.custom_layers or tf.keras.layers
- Parameters:
class_name (str) – Class name from phygnn.layers.custom_layers or tf.keras.layers
kwargs (dict) – Key word arguments to initialize the class.
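The lookup order can be pictured with a minimal sketch. The two registries below are hypothetical stand-ins; in phygnn the real modules are phygnn.layers.custom_layers and tf.keras.layers, and only the consult-custom-layers-first order is taken from the description above.

```python
# Sketch of the class-name lookup described above. The registries are
# hypothetical stand-ins for phygnn.layers.custom_layers and
# tf.keras.layers; the fallback logic is illustrative.
def resolve_layer_class(class_name, custom_registry, keras_registry):
    """Return the layer class for class_name, trying custom layers first."""
    if class_name in custom_registry:
        return custom_registry[class_name]
    if class_name in keras_registry:
        return keras_registry[class_name]
    raise KeyError(f'Could not find layer class: {class_name}')

custom = {'SkipConnection': type('SkipConnection', (), {})}
keras = {'Dense': type('Dense', (), {}), 'Flatten': type('Flatten', (), {})}
assert resolve_layer_class('Dense', custom, keras) is keras['Dense']
assert resolve_layer_class('SkipConnection', custom, keras) is custom['SkipConnection']
```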
- add_skip_layer(name)
Add a skip layer, looking for a prior skip connection start point if already in the layer list.
- Parameters:
name (str) – Unique string identifier of the skip connection. The skip endpoint should have the same name.
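As a sketch of how paired skip names might appear in a hidden_layers spec: 'SkipConnection' is a class in phygnn.layers.custom_layers, but this exact spec is illustrative (an assumption), not taken from the docstring. The point shown is the one stated above: the start point and endpoint share one unique name.

```python
# Hypothetical hidden_layers spec (illustrative): two entries with the
# same name mark the start point and endpoint of one skip connection.
spec = [
    {'units': 64, 'activation': 'relu'},
    {'class': 'SkipConnection', 'name': 'a'},  # skip start point
    {'units': 64, 'activation': 'relu'},
    {'class': 'SkipConnection', 'name': 'a'},  # skip endpoint: same name
]
skip_names = [entry['name'] for entry in spec
              if entry.get('class') == 'SkipConnection']
assert skip_names.count('a') == 2  # start + end share the identifier
```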
- property bias_weights
Get a list of the NN bias weights (tensors)
(can be used for bias regularization).
Does not include input layer or dropout layers. Does include the output layer.
- Returns:
list
- property hidden_layer_kwargs
List of dictionaries of key word arguments for each hidden layer in the NN. This is a copy of the hidden_layers input arg that can be used to reconstruct the network.
- Returns:
list
- property kernel_weights
Get a list of the NN kernel weights (tensors)
(can be used for kernel regularization).
Does not include input layer or dropout layers. Does include the output layer.
- Returns:
list
- property layers
TensorFlow keras layers
- Returns:
list
- property output_layer_kwargs
Dictionary of key word arguments for the output layer. This is a copy of the output_layer input arg that can be used to reconstruct the network.
- Returns:
dict
- static parse_repeats(hidden_layers)
Parse repeat layers. Must have "repeat" and "n" to repeat one or more layers.
- Parameters:
hidden_layers (list) – Hidden layer kwargs including possibly entries with {'n': 2, 'repeat': [{…}, {…}]} that will duplicate the list sub entry n times.
- Returns:
hidden_layers (list) – Hidden layer kwargs exploded for 'repeat' entries.
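The expansion can be sketched in plain Python. This is an illustrative re-implementation of the behavior described above, not the phygnn source:

```python
def parse_repeats_sketch(hidden_layers):
    """Expand {'n': N, 'repeat': [...]} entries N times (illustrative
    re-implementation of the documented behavior, not the phygnn source)."""
    out = []
    for entry in hidden_layers:
        if 'repeat' in entry and 'n' in entry:
            # Duplicate the repeat sub-list n times, in order.
            out.extend(list(entry['repeat']) * entry['n'])
        else:
            out.append(entry)
    return out

spec = [{'units': 64},
        {'n': 2, 'repeat': [{'units': 32}, {'activation': 'relu'}]}]
assert parse_repeats_sketch(spec) == [
    {'units': 64},
    {'units': 32}, {'activation': 'relu'},
    {'units': 32}, {'activation': 'relu'},
]
```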
- property skip_layers
Get a dictionary of unique SkipConnection objects in the layers list keyed by SkipConnection name.
- Returns:
dict
- property weights
Get a list of layer weights for gradient calculations.
- Returns:
list
- classmethod compile(model, n_features, n_labels=1, hidden_layers=None, input_layer=None, output_layer=None)[source]
Build all layers needed for model
- Parameters:
model (tensorflow.keras.Sequential) – Model to add layers to
n_features (int) – Number of features (inputs) to train the model on
n_labels (int, optional) – Number of labels (outputs) to the model, by default 1
hidden_layers (list | None, optional) – List of dictionaries of key word arguments for each hidden layer in the NN. Dense linear layers can be input with their activations or separately for more explicit control over the layer ordering. For example, this is a valid input for hidden_layers that will yield 7 hidden layers (9 layers total):
- [{'units': 64, 'activation': 'relu', 'dropout': 0.01},
{'units': 64}, {'batch_normalization': {'axis': -1}}, {'activation': 'relu'}, {'dropout': 0.01}]
by default None which will lead to a single linear layer
input_layer (None | bool | InputLayer) – Keras input layer. Will default to an InputLayer with input shape = n_features. Can be False if the input layer will be included in the hidden_layers input.
output_layer (None | bool | list | dict) – Output layer specification. Can be a list/dict similar to the hidden_layers input specifying a dense layer with activation. For example, for a classification problem with a single output, output_layer should be [{'units': 1}, {'activation': 'sigmoid'}]. This defaults to a single dense layer with no activation (best for regression problems). Can be False if the output layer will be included in the hidden_layers input.
- Returns:
model (tensorflow.keras.Sequential) – Model with layers added