Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control


Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control
Author : Dimitri Bertsekas
Publisher : Athena Scientific
Total Pages : 229
Release : 2022-03-19
ISBN-10 : 1886529175
ISBN-13 : 9781886529175
Rating : 4/5 (175 Downloads)

Book Synopsis: Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control, by Dimitri Bertsekas

Download or read book Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control, written by Dimitri Bertsekas and published by Athena Scientific. The book was released on 2022-03-19 and has 229 pages. Available in PDF, EPUB and Kindle.

Book excerpt: The purpose of this book is to propose and develop a new conceptual framework for approximate Dynamic Programming (DP) and Reinforcement Learning (RL). This framework centers around two algorithms, which are designed largely independently of each other and operate in synergy through the powerful mechanism of Newton's method. We call these the off-line training and the on-line play algorithms; the names are borrowed from some of the major successes of RL involving games. Primary examples are the recent (2017) AlphaZero program (which plays chess) and the similarly structured, earlier (1990s) TD-Gammon program (which plays backgammon). In these game contexts, the off-line training algorithm is the method used to teach the program how to evaluate positions and to generate good moves at any given position, while the on-line play algorithm is the method used to play in real time against human or computer opponents.

Both AlphaZero and TD-Gammon were trained off-line extensively using neural networks and an approximate version of the fundamental DP algorithm of policy iteration. Yet the AlphaZero player that was obtained off-line is not used directly during on-line play (it is too inaccurate due to approximation errors that are inherent in off-line neural network training). Instead, a separate on-line player is used to select moves, based on multistep lookahead minimization and a terminal position evaluator that was trained using experience with the off-line player. The on-line player performs a form of policy improvement, which is not degraded by neural network approximations. As a result, it greatly improves the performance of the off-line player. Similarly, TD-Gammon performs on-line a policy improvement step using one-step or two-step lookahead minimization, which is not degraded by neural network approximations. To this end it uses an off-line neural network-trained terminal position evaluator, and importantly it also extends its on-line lookahead by rollout (simulation with the one-step lookahead player that is based on the position evaluator).

Significantly, the synergy between off-line training and on-line play also underlies Model Predictive Control (MPC), a major control system design methodology that has been extensively developed since the 1980s. This synergy can be understood in terms of abstract models of infinite horizon DP and simple geometrical constructions, and it helps to explain the all-important stability issues within the MPC context.

An additional benefit of policy improvement by approximation in value space, not observed in the context of games (which have stable rules and environment), is that it works well with changing problem parameters and on-line replanning, similar to indirect adaptive control. Here the Bellman equation is perturbed due to the parameter changes, but approximation in value space still operates as a Newton step. An essential requirement here is that a system model is estimated on-line through some identification method, and is used during the one-step or multistep lookahead minimization process.

In this monograph we aim to provide insights (often based on visualization) that explain the beneficial effects of on-line decision making on top of off-line training.
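The scheme just described, an off-line trained terminal cost approximation used inside an on-line lookahead with rollout, can be sketched in a few lines of Python. This is a minimal illustration only, not code from the book; the interfaces transition(x, u), controls(x), and J_tilde(x) are hypothetical placeholders standing in for a deterministic system model, a feasible-control enumerator, and the off-line trained cost approximation (e.g., a neural network).

```python
def greedy_control(x, J_tilde, transition, controls, gamma=1.0):
    """Base policy: one-step lookahead minimization against J_tilde."""
    def q_value(u):
        x_next, g = transition(x, u)   # deterministic model: next state, stage cost
        return g + gamma * J_tilde(x_next)
    return min(controls(x), key=q_value)

def rollout_cost(x, J_tilde, transition, controls, horizon, gamma=1.0):
    """Simulate the base policy for `horizon` steps, then fall back on J_tilde."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        u = greedy_control(x, J_tilde, transition, controls, gamma)
        x, g = transition(x, u)
        total += discount * g
        discount *= gamma
    return total + discount * J_tilde(x)

def online_play(x, J_tilde, transition, controls, rollout_horizon=5, gamma=1.0):
    """On-line player: one-step lookahead whose terminal cost is the rollout
    cost of the base policy, i.e., a form of on-line policy improvement."""
    def q_value(u):
        x_next, g = transition(x, u)
        return g + gamma * rollout_cost(x_next, J_tilde, transition, controls,
                                        rollout_horizon, gamma)
    return min(controls(x), key=q_value)
```

In the MPC spirit, online_play would be invoked at every sampling instant on the current state, with only the first control applied before re-solving at the next state.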
In the process, we will bring out the strong connections between the artificial intelligence view of RL and the control theory views of MPC and adaptive control. Moreover, we will show that in addition to MPC and adaptive control, our conceptual framework can be effectively integrated with other important methodologies such as multiagent systems and decentralized control, discrete and Bayesian optimization, and heuristic algorithms for discrete optimization. One of our principal aims is to show, through the algorithmic ideas of Newton's method and the unifying principles of abstract DP, that the AlphaZero/TD-Gammon methodology of approximation in value space and rollout applies very broadly to deterministic and stochastic optimal control problems. Newton's method here is used for the solution of Bellman's equation, an operator equation that applies universally within DP, with both discrete and continuous state and control spaces, and with both finite and infinite horizon problems.
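The Newton-step viewpoint referred to above can be stated compactly in standard infinite-horizon discounted-cost notation. The symbols below (stage cost g, system function f, disturbance w, discount factor alpha, and the Bellman operator T) are standard DP notation not defined in the excerpt itself; this is a schematic summary, not a quotation from the book.

```latex
% Bellman operator and Bellman's equation (schematic, standard notation):
(TJ)(x) \;=\; \min_{u \in U(x)} \mathbb{E}\Bigl\{ g(x,u,w) + \alpha\, J\bigl(f(x,u,w)\bigr) \Bigr\},
\qquad J^{*} \;=\; T J^{*}.

% Approximation in value space with one-step lookahead: replace J^{*} by an
% off-line approximation \tilde{J} and, at the current state x, select
\tilde{\mu}(x) \;\in\; \arg\min_{u \in U(x)} \mathbb{E}\Bigl\{ g(x,u,w) + \alpha\, \tilde{J}\bigl(f(x,u,w)\bigr) \Bigr\}.

% The cost function J_{\tilde{\mu}} of this on-line policy can be viewed as the
% result of a single Newton step for solving J = TJ, applied at \tilde{J}.
```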


Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control Related Books

Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control
Language: en
Pages: 229
Authors: Dimitri Bertsekas
Categories: Computers
Type: BOOK - Published: 2022-03-19 - Publisher: Athena Scientific

The purpose of this book is to propose and develop a new conceptual framework for approximate Dynamic Programming (DP) and Reinforcement Learning (RL). This framework centers around two algorithms, which are designed largely independently of each other and operate in synergy through the powerful mechanism of Newton's method.
A Course in Reinforcement Learning
Language: en
Pages: 421
Authors: Dimitri Bertsekas
Categories: Computers
Type: BOOK - Published: 2023-06-21 - Publisher: Athena Scientific

These lecture notes were prepared for use in the 2023 ASU research-oriented course on Reinforcement Learning (RL) that I have offered in each of the last five years.
Reinforcement Learning and Optimal Control
Language: en
Pages: 388
Authors: Dimitri Bertsekas
Categories: Computers
Type: BOOK - Published: 2019-07-01 - Publisher: Athena Scientific

This book considers large and challenging multistage decision problems, which can be solved in principle by dynamic programming (DP), but their exact solution is computationally intractable.
Predictive Control for Linear and Hybrid Systems
Language: en
Pages: 447
Authors: Francesco Borrelli
Categories: Mathematics
Type: BOOK - Published: 2017-06-22 - Publisher: Cambridge University Press

With a simple approach that includes real-time applications and algorithms, this book covers the theory of model predictive control (MPC).
Rollout, Policy Iteration, and Distributed Reinforcement Learning
Language: en
Pages: 498
Authors: Dimitri Bertsekas
Categories: Computers
Type: BOOK - Published: 2021-08-20 - Publisher: Athena Scientific

The purpose of this book is to develop in greater depth some of the methods from the author's recently published textbook Reinforcement Learning and Optimal Control.