Behavioral Cloning on Network Reconfiguration
Classes and Functions
- phy_train_bc.evaluate_policy(env, model)
Evaluates the learned policy by running evaluation episodes in the environment; a rollout sketch follows this entry.
- Parameters
env (Gym.Env) – The OpenDSS RL environment
model (torch.nn.Module) – Trained policy network model
- Returns
average episode length, average reward
- Return type
tuple (float, float)
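
The reference above states what evaluate_policy returns but not how the rollout works. Below is a minimal sketch of such an evaluation loop, assuming a discrete action space, greedy action selection, and the classic Gym step API; the name evaluate_policy_sketch and the n_episodes default are illustrative, not the documented implementation.

```python
import gym
import torch


def evaluate_policy_sketch(env: gym.Env, model: torch.nn.Module,
                           n_episodes: int = 10):
    """Roll the policy out for a few episodes and average the results."""
    lengths, rewards = [], []
    for _ in range(n_episodes):
        obs, done, steps, total = env.reset(), False, 0, 0.0
        while not done:
            with torch.no_grad():
                # Assumption: the network maps an observation to action logits.
                logits = model(torch.as_tensor(obs, dtype=torch.float32))
            action = int(torch.argmax(logits).item())
            obs, reward, done, _ = env.step(action)
            total += reward
            steps += 1
        lengths.append(steps)
        rewards.append(total)
    # Matches the documented return: average episode length, average reward.
    return sum(lengths) / n_episodes, sum(rewards) / n_episodes
```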
- phy_train_bc.train_and_evaluate(env, bc_train_epoch_lens, exp_trajectory_len)
For each combination of channel bandwidth, router queue limit, and expert demonstration budget, trains and tests the policy, then saves the results, rewards, and policy network; a sketch of the underlying BC training step follows this entry.
- Parameters
env (Gym.Env) – The OpenDSS RL environment
bc_train_epoch_lens (list) – List of epoch counts for which the behavioral cloning agent is trained
exp_trajectory_len (int) – The number of expert demonstration steps used for behavioral cloning (BC) training
- Returns
Nothing
- Return type
None
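
train_and_evaluate sweeps network configurations and training budgets, but the core behavioral cloning step it relies on is ordinary supervised learning on expert (state, action) pairs. The sketch below illustrates that step; the MLP architecture, Adam optimizer, learning rate, and cross-entropy loss are assumptions rather than the documented implementation.

```python
import torch
from torch import nn


def fit_bc_policy(expert_obs: torch.Tensor, expert_actions: torch.Tensor,
                  n_epochs: int, n_actions: int) -> nn.Module:
    """Behavioral cloning: fit a policy to expert (state, action) pairs.

    expert_obs: float tensor of shape (N, obs_dim).
    expert_actions: long tensor of shape (N,) holding action indices.
    """
    # Assumption: a small MLP classifier over discrete actions.
    model = nn.Sequential(
        nn.Linear(expert_obs.shape[1], 64),
        nn.ReLU(),
        nn.Linear(64, n_actions),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(n_epochs):
        optimizer.zero_grad()
        # Supervised objective: predict the expert's action in each state.
        loss = loss_fn(model(expert_obs), expert_actions)
        loss.backward()
        optimizer.step()
    return model
```

In a sweep like the one documented above, each (bandwidth, queue limit, epoch count) combination would train such a policy on an expert trajectory of exp_trajectory_len steps and then score it with evaluate_policy.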