Deep Reinforcement Learning-Based Control for Real-Time Hybrid Simulation of Civil Structures

Real-time hybrid simulation (RTHS) is a cyber-physical technique that studies the dynamic behavior of a system by combining physical and numerical components coupled through a boundary-condition enforcer. In structural engineering, the numerical components are subjected to environmental loads, and the resulting dynamic displacements are imposed on the physical substructure through an actuator. However, the dynamics of the coupling between components and the complexity of the system lead to synchronization challenges that affect the accuracy of the simulation, so tracking controllers are required to ensure its fidelity. This paper studies deep reinforcement learning (DRL) as a novel alternative for designing the tracking controller. Three controllers are designed: a DRL agent combined with conventional time-delay compensation, a conventional feedback controller combined with a DRL agent, and a DRL agent with a complex neural network architecture. The proposed approaches are tested on a virtual RTHS benchmark problem, and the results are compared with an optimized controller consisting of a proportional-integral-derivative controller with phase-lead compensation. The results show that DRL can address the synchronization challenges of RTHS with a model-free approach and simple neural network architectures. This study is a critical step toward model-free control methodologies that can transform and further develop the RTHS method. The proposed methodology can be used to address important challenges in RTHS, including nonlinearities and uncertainties of the physical substructure, complex boundary conditions, and computational efficiency when physical substructures with complex dynamics are present.
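To illustrate the underlying idea, the sketch below frames the RTHS actuator-tracking task as a reinforcement-learning environment: the observation contains the reference displacement, the measured displacement, and the tracking error; the action is the actuator command; and the reward penalizes the synchronization error. This is a minimal sketch under stated assumptions, not the benchmark problem or the controllers studied in the paper; the first-order actuator model, the pure time delay, all gains, time constants, and signal choices are hypothetical, and a trained DRL actor would replace the proportional placeholder policy.

```python
# Illustrative sketch only (not the authors' implementation): framing RTHS
# actuator tracking as a reinforcement-learning environment. The plant is a
# hypothetical first-order actuator with a pure time delay; all parameter
# values below are assumptions chosen for demonstration.

import numpy as np
from collections import deque


class RTHSTrackingEnv:
    """Toy environment: the agent outputs an actuator command so that the
    measured displacement tracks the reference displacement signal."""

    def __init__(self, dt=0.001, tau=0.02, delay_steps=10):
        self.dt = dt                    # simulation time step [s] (assumed)
        self.tau = tau                  # actuator time constant [s] (assumed)
        self.delay_steps = delay_steps  # pure transport delay in steps (assumed)
        self.reset()

    def reset(self):
        self.t = 0.0
        self.x = 0.0                                   # measured displacement
        self.buffer = deque([0.0] * self.delay_steps,  # delayed command buffer
                            maxlen=self.delay_steps)
        return self._obs()

    def _reference(self, t):
        # Hypothetical reference: two sinusoids standing in for the displacement
        # demanded by the numerical substructure.
        return 0.01 * np.sin(2 * np.pi * 1.0 * t) + 0.005 * np.sin(2 * np.pi * 3.0 * t)

    def _obs(self):
        r = self._reference(self.t)
        return np.array([r, self.x, r - self.x], dtype=np.float32)

    def step(self, command):
        # Apply the command issued delay_steps ago through first-order dynamics.
        delayed_cmd = self.buffer[0]
        self.buffer.append(float(command))
        self.x += self.dt / self.tau * (delayed_cmd - self.x)
        self.t += self.dt

        r = self._reference(self.t)
        tracking_error = r - self.x
        reward = -tracking_error ** 2     # reward penalizes synchronization error
        done = self.t >= 10.0
        return self._obs(), reward, done, {}


if __name__ == "__main__":
    # Placeholder policy (reference plus proportional correction) standing in
    # for a trained DRL actor network.
    env = RTHSTrackingEnv()
    obs, total_reward = env.reset(), 0.0
    done = False
    while not done:
        action = obs[0] + 5.0 * obs[2]    # obs[2] is the tracking error
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    print(f"episode return: {total_reward:.4f}")
```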

Files

  1. Intl_J_Robust_Nonlinear_-_2025_-_Felipe_Niño_-_Deep_Reinforcement_Learning_Based_Control_for_Real_Time_Hybrid_Simulation-1.pdf

Metadata

Work Title: Deep Reinforcement Learning-Based Control for Real-Time Hybrid Simulation of Civil Structures
Access: Open Access
Creators:
  1. Andrés Felipe Niño
  2. Alejandro Palacio-Betancur
  3. Piedad Miranda Chiquito
  4. Juan David Amaya
  5. Christian E Silva
  6. Mariantonieta Gutierrez Soto
  7. Luis Felipe Giraldo
License: CC BY-NC 4.0 (Attribution-NonCommercial)
Work Type: Article
Publisher: International Journal of Robust and Nonlinear Control
Publication Date: January 25, 2025
Publisher Identifier (DOI): https://doi.org/10.1002/rnc.7824
Deposited: June 20, 2025
