Welcome to DSS-SimPy-RL’s documentation!
This project develops a discrete-event simulation environment of a communication network for reinforcement learning, built with SimPy. The environment is further extended to cyber-physical simulation by integrating OpenDSS, providing a playground for cyber-resilient distribution grid control. This co-simulation RL environment is lightweight, enabling faster simulations and the generation of large-scale datasets. In the current work, two Markov Decision Process (MDP) models are developed: one for re-routing-based restoration in the communication network and one for network-reconfiguration-based restoration in the feeder network.
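To make the setup concrete, the sketch below shows a Gym-style interaction loop of the kind such an environment typically exposes. The class name and its observation/reward structure are illustrative placeholders, not the project's actual API; see the environment pages linked below for the real interfaces.

```python
import random

class CoSimEnvSketch:
    """Placeholder standing in for the SimPy/OpenDSS co-simulation environment."""

    def reset(self):
        # Return an initial observation of the joint cyber-physical state.
        return {"cyber": [0.0, 0.0], "physical": [1.0]}

    def step(self, action):
        # Apply a re-routing or reconfiguration action, advance the
        # discrete-event simulation, and return (obs, reward, done, info).
        obs = {"cyber": [random.random()], "physical": [random.random()]}
        reward = random.random()       # e.g., restored load or delivered traffic
        done = random.random() < 0.1   # episode ends when restoration completes
        return obs, reward, done, {}

env = CoSimEnvSketch()
obs, done = env.reset(), False
while not done:
    action = 0                         # a trained policy would act here
    obs, reward, done, info = env.step(action)
```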
This cyber-physical RL environment is then used to learn an Adaptive Resilience Metric through Inverse Reinforcement Learning (IRL).
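As an illustration of the IRL idea, the sketch below recovers a reward as a linear function of resilience-related features so that expert demonstrations score higher than alternative policies. This is a generic max-margin-style toy example with made-up feature values, not the project's implementation; the IRL pages linked below cover the methods actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature expectations (e.g., restored load, path latency, packet loss).
mu_expert = np.array([0.9, 0.2, 0.8])    # averaged over expert demonstrations
mu_policies = rng.random((5, 3))         # averaged over candidate policies

# Max-margin-style update: push the reward weights toward the direction
# separating the expert from the best-scoring candidate policy.
w = np.zeros(3)
for _ in range(50):
    best = mu_policies[np.argmax(mu_policies @ w)]
    w += 0.1 * (mu_expert - best)        # subgradient step on the margin
    w /= max(np.linalg.norm(w), 1e-8)    # keep the weight vector bounded

def resilience_reward(features):
    """Learned linear reward: higher means more expert-like (resilient)."""
    return features @ w
```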
- OpenDSS RL Env
- OpenDSS Backend APIs
- SimPy Cyber RL Env
- SimPy Cyber RL Env Test
- OpenDSS RL Env Test
- SimPyDSS RL Env Test
- OpenDSS RL Env GUI
- SimPy RL Env GUI
- Graph-based Resilience Metrics
- OpenDSS: Generate Expert Demonstrations
- SimPy: Generate Expert Demonstrations
- SimPyDSS: Generate Expert Demonstrations
- Behavioral Cloning on Re-routing
- Behavioral Cloning on Network Reconfiguration
- Adversarial IRL on Re-routing
- Adversarial IRL on Network Reconfiguration
- Adversarial IRL on Re-routing & Network Reconfiguration
- GAIL on Network Reconfiguration
- Generative Adversarial Imitation Learning on Re-routing & Network Reconfiguration
- DAgger Imitation Learning on Network Reconfiguration
- Reward Network Visualization for Re-routing
- Reward Network Visualization for Critical Load Restoration
- Bayesian Inverse RL
- Max Margin IRL using linear function approximator
- Behavioral Cloning
- DAgger Imitation Learning
- Generative Adversarial Imitation Learning (GAIL)
- Adversarial IRL
- Deep Q Network Learning
- Prioritized Experience Replay DQN