Data-driven adaptive traffic signal control via deep reinforcement learning


Abstract/Contents

Abstract
The adaptive traffic signal control (ATSC) system plays a significant role in relieving urban traffic congestion. The system can adjust the signal phases and timings of all traffic lights simultaneously according to real-time traffic sensor data, resulting in better overall traffic management and improved traffic conditions on the road. In recent years, deep reinforcement learning (DRL), a powerful paradigm in artificial intelligence (AI) for sequential decision-making, has drawn great attention from transportation researchers. The following three properties of DRL make it attractive and well suited for the next-generation ATSC system: (1) model-free: DRL reasons about optimal control strategies directly from data, without making additional assumptions about the underlying traffic distributions and traffic flows. Compared with traditional traffic optimization methods, DRL avoids cumbersome formulation and modeling of traffic dynamics; (2) self-learning: DRL learns signal control knowledge from traffic data with minimal human expertise; (3) simple data requirements: by using large nonlinear neural networks as function approximators, DRL has enough representational power to map directly from simple traffic measurements, e.g., queue length and waiting time, to signal control policies. This thesis focuses on building data-driven, adaptive controllers via deep reinforcement learning for large-scale traffic signal control systems. In particular, the thesis first proposes a hierarchical decentralized-to-centralized DRL framework for large-scale ATSC to better coordinate multiple signalized intersections in the traffic system. Second, the thesis introduces efficient DRL with efficient exploration for ATSC to greatly improve the sample complexity of DRL algorithms, making them more suitable for real-world control systems.
Furthermore, the thesis combines multi-agent systems with efficient DRL to solve large-scale ATSC problems with multiple intersections. Finally, the thesis presents several algorithmic extensions to handle complex topologies and heterogeneous intersections in real-world traffic networks. To gauge the performance of the presented DRL algorithms, various experiments were conducted on both small-scale and large-scale simulated traffic networks. The empirical results demonstrate that the proposed DRL algorithms outperform both rule-based control policies and commonly used off-the-shelf DRL algorithms by a significant margin. Moreover, the proposed efficient multi-agent reinforcement learning (MARL) algorithms achieve state-of-the-art performance with improved sample complexity for large-scale ATSC.
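The abstract's framing — learning a mapping from simple traffic measurements (e.g., queue lengths) to signal-phase decisions, with a penalty on congestion — can be sketched in miniature. The toy below is an assumption-laden illustration, not the thesis's actual algorithms or simulator: it uses tabular Q-learning on a hypothetical single intersection with two phases and Bernoulli arrivals, where the reward is the negative total queue length.

```python
import random

# Toy sketch: reinforcement learning for one signalized intersection with two
# phases (0 = north-south green, 1 = east-west green). All modeling choices
# here (arrival rate, discharge rate, state discretization) are illustrative.

class ToyIntersection:
    """Minimal queueing model: random arrivals; the green approach serves 2 cars."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.queues = [0, 0]  # [north-south, east-west] queue lengths

    def step(self, phase):
        for i in range(2):
            self.queues[i] += self.rng.random() < 0.4   # Bernoulli arrivals
        self.queues[phase] = max(0, self.queues[phase] - 2)  # green discharges
        reward = -sum(self.queues)                      # penalize total queue
        state = tuple(min(q, 5) for q in self.queues)   # cap for a small table
        return state, reward


def train(episodes=200, steps=100, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over discretized queue-length states."""
    rng = random.Random(seed)
    Q = {}
    for ep in range(episodes):
        env = ToyIntersection(seed=ep)
        state = (0, 0)
        for _ in range(steps):
            if rng.random() < eps:
                action = rng.randrange(2)
            else:
                action = max((0, 1), key=lambda a: Q.get((state, a), 0.0))
            nxt, r = env.step(action)
            best_next = max(Q.get((nxt, a), 0.0) for a in (0, 1))
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (r + gamma * best_next - q)
            state = nxt
    return Q


def greedy_policy(Q, state):
    """Pick the phase with the higher learned value (ties favor phase 0)."""
    return max((0, 1), key=lambda a: Q.get((state, a), 0.0))
```

In the ATSC setting the abstract describes, the lookup table would be replaced by a deep neural network over richer measurements (queue length, waiting time), and many such agents would be coordinated across intersections, which is what the thesis's hierarchical and multi-agent frameworks address.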

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2020; ©2020
Publication date 2020
Issuance monographic
Language English

Creators/Contributors

Author Tan, Tian, (Researcher of deep reinforcement learning)
Degree supervisor Leckie, Jim, 1939-
Thesis advisor Leckie, Jim, 1939-
Thesis advisor Lepech, Michael
Thesis advisor Wang, Jie, (Researcher in adaptive computational learning)
Degree committee member Lepech, Michael
Degree committee member Wang, Jie, (Researcher in adaptive computational learning)
Associated with Stanford University, Department of Civil & Environmental Engineering

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Tian Tan
Note Submitted to the Department of Civil & Environmental Engineering
Thesis Ph.D. Stanford University 2020
Location electronic resource

Access conditions

Copyright
© 2020 by Tian Tan
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).
