Generative Adversarial Imitation Learning on Re-routing & N/W reconfiguration

cps_train_gail.evaluate_policy(cpenv, model)

Evaluates the learned policy by running episodes in the environment and averaging the outcomes.

Parameters
  • cpenv (gym.Env) – The cyber-physical RL environment

  • model (torch.nn.Module) – Trained policy network model

Returns

average episode length, average reward

Return type

tuple (float, float)
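A minimal sketch of what such an evaluation loop typically looks like. `ToyEnv` and the greedy policy below are hypothetical stand-ins for `cpenv` and `model`; the real function operates on the cyber-physical environment and a trained torch policy network.

```python
class ToyEnv:
    """Minimal Gym-style environment used only for illustration."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0                      # dummy observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 0 else 0.0
        done = self.t >= self.horizon
        return 0.0, reward, done, {}


def evaluate_policy(env, policy, n_episodes=5):
    """Roll out the policy and return (avg episode length, avg reward)."""
    lengths, rewards = [], []
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        ep_len, ep_rew = 0, 0.0
        while not done:
            action = policy(obs)        # the real code would query the network
            obs, r, done, _ = env.step(action)
            ep_len += 1
            ep_rew += r
        lengths.append(ep_len)
        rewards.append(ep_rew)
    return sum(lengths) / n_episodes, sum(rewards) / n_episodes


avg_len, avg_rew = evaluate_policy(ToyEnv(), lambda obs: 0)
# → avg_len == 10.0, avg_rew == 10.0
```

The two averages correspond to the documented return values: mean episode length and mean episode reward.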

cps_train_gail.train_and_evaluate(comp_zones, exp_tajectories, channel_bws, router_qlimits, policy_net_train_len, gail_train_len)

For each combination of channel bandwidth, router queue limit, and number of expert demonstrations, trains and tests the policy and saves the results, rewards, and policy network.

Parameters
  • comp_zones (dict) – Cyber Physical mapping information.

  • exp_tajectories (list) – List of expert-demonstration step counts considered for GAIL training

  • channel_bws (list) – List of channel bandwidth values considered in the communication network

  • router_qlimits (list) – List of router queue upper bounds considered in the network

  • policy_net_train_len (int) – Number of samples used to train the policy network that seeds the GAIL generator as the initial policy

  • gail_train_len (int) – Number of samples used to train the GAIL network

Returns

Nothing

Return type

None
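A sketch of the parameter sweep this function likely performs: iterate over every combination of bandwidth, queue limit, and expert-trajectory budget, then train and evaluate for each. `train_gail` and `eval_gail` are hypothetical stubs for the real training and evaluation calls, and `comp_zones` is omitted for brevity; the real function also persists results and network weights to disk.

```python
import itertools


def train_gail(bw, qlimit, n_traj, policy_steps, gail_steps):
    # Stub: the real code would pretrain the policy network for
    # policy_steps samples, then run GAIL for gail_steps samples.
    return {"bw": bw, "qlimit": qlimit, "n_traj": n_traj}


def eval_gail(model):
    # Stub: the real code would roll out the trained policy.
    return 10.0, 1.0                    # (avg episode length, avg reward)


def train_and_evaluate(exp_tajectories, channel_bws, router_qlimits,
                       policy_net_train_len, gail_train_len):
    """Grid-sweep all configurations; collect one result per combination."""
    results = {}
    for bw, qlim, n_traj in itertools.product(channel_bws,
                                              router_qlimits,
                                              exp_tajectories):
        model = train_gail(bw, qlim, n_traj,
                           policy_net_train_len, gail_train_len)
        avg_len, avg_rew = eval_gail(model)
        results[(bw, qlim, n_traj)] = {"avg_len": avg_len,
                                       "avg_reward": avg_rew}
    return results


# 2 bandwidths x 2 queue limits x 1 trajectory budget = 4 configurations
results = train_and_evaluate([50], [500, 1000], [10, 20], 2048, 100000)
```

Because the real function saves everything to disk instead of returning it, its documented return value is None.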