
Traffic-signal control reinforcement learning approach for continuous-time Markov games

Anáhuac Author(s)
Román Aragón-Gómez
Year of publication
2020
Journal or Publisher
Engineering Applications of Artificial Intelligence

Abstract 
Traffic-Signal Control (TSC) models have evolved from simple pre-timed, isolated indications to more complex actuated and coordinated TSC models for highways, railroads, and other infrastructures. However, existing TSC models cannot always handle problems such as over-saturation, delays caused by incidents, and congestion due to weather conditions, which is why this remains an open area of research. An important challenge is to propose a TSC solution for multiple intersections that adapts traffic-signal timing to real-time traffic conditions.
This paper introduces a novel Reinforcement Learning (RL) approach for solving the Traffic-Signal Control problem for multiple intersections using Continuous-Time Markov Games (CTMG). The RL model is based on a temporal difference method. For estimating the transition rates of the Markov model, we use non-degenerate randomized Markov laws, so that the resulting chain is shown to be ergodic and to visit all states infinitely often, using all the controls in every state. Our reinforcement learning model assumes complete information. The transition rates are estimated as the number of transitions observed over an interval of time divided by the total holding time. The rewards are estimated as the arithmetic mean of the observed rewards. We consider a non-cooperative game model for solving the multiple-intersections problem. For computing the Nash equilibrium, we employ an iterative proximal gradient method. As our final contribution, we present a numerical example that validates our model and concretely measures the benefits of the TSC model.
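
The abstract describes counting-based estimators for the Markov model: each transition rate as the number of observed transitions divided by the total holding time, and each reward as the arithmetic mean of the observed rewards. The sketch below illustrates these two estimators on a generic trajectory of (state, action, holding time, next state, reward) samples; the function name and data layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def estimate_rates_and_rewards(trajectory):
    """trajectory: iterable of (state, action, holding_time, next_state, reward) samples."""
    holding_time = {}      # total sojourn time accumulated per (state, action)
    transition_count = {}  # number of observed jumps (state, action) -> next_state
    reward_samples = {}    # rewards observed per (state, action)

    for s, a, tau, s_next, r in trajectory:
        holding_time[(s, a)] = holding_time.get((s, a), 0.0) + tau
        transition_count[(s, a, s_next)] = transition_count.get((s, a, s_next), 0) + 1
        reward_samples.setdefault((s, a), []).append(r)

    # Transition-rate estimate: observed jump count divided by total holding time.
    rates = {
        (s, a, s_next): n / holding_time[(s, a)]
        for (s, a, s_next), n in transition_count.items()
    }
    # Reward estimate: arithmetic mean of the observed rewards.
    rewards = {sa: float(np.mean(rs)) for sa, rs in reward_samples.items()}
    return rates, rewards
```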
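
The Nash equilibrium of the non-cooperative game is computed with an iterative proximal gradient method. As a hedged illustration only, the sketch below runs a generic simultaneous projected-gradient iteration, with the proximal step taken as Euclidean projection onto each player's strategy simplex, on a toy bimatrix game; it is not the paper's CTMG formulation, and the function names and payoff matrices are hypothetical.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex
    (the proximal operator of the simplex indicator function)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def proximal_gradient_nash(A, B, steps=5000, lr=0.01):
    """Simultaneous proximal-gradient iteration for a two-player matrix game:
    each player takes a gradient-ascent step on its own expected payoff and
    is projected back onto its mixed-strategy simplex."""
    x = np.ones(A.shape[0]) / A.shape[0]   # row player's mixed strategy
    y = np.ones(A.shape[1]) / A.shape[1]   # column player's mixed strategy
    for _ in range(steps):
        x, y = (project_simplex(x + lr * A @ y),
                project_simplex(y + lr * B.T @ x))
    return x, y

# Toy usage: a prisoner's-dilemma-style game whose unique Nash equilibrium
# is mutual defection; the iterates approach that pure equilibrium.
A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs
B = np.array([[3.0, 5.0], [0.0, 1.0]])   # column player's payoffs
x, y = proximal_gradient_nash(A, B)
print(x, y)   # both strategies approach [0, 1]
```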