Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games

Springer Nature · Ebook · 267 pages

About this ebook

Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games develops integral and inverse reinforcement learning (RL) techniques, motivated by applications to autonomous driving and microgrid systems, with both breadth and depth. Integral RL achieves model-free control without the system identification step and its inevitable estimation errors, and the novel inverse RL methods fill a gap in the literature, appealing to readers seeking data-driven, model-free solutions for inverse optimization and optimal control, imitation learning, and autonomous driving, among other areas.
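To illustrate why integral RL can avoid identifying the system, here is a minimal sketch (not taken from the book's text) for continuous-time linear dynamics \dot{x} = Ax + Bu with quadratic cost. Policy evaluation uses an integral Bellman equation over measured trajectory segments of length T, so the drift matrix A never appears explicitly; the notation Q, R, P, T below is generic and assumed for the example:

    V(x(t)) = \int_{t}^{t+T} \big( x^{\top} Q x + u^{\top} R u \big)\, d\tau \; + \; V\big(x(t+T)\big),
    \qquad V(x) = x^{\top} P x .

Solving this relation for P from measured state and input data, and then updating the policy as u = -R^{-1} B^{\top} P x, requires knowledge of the input matrix B but not of A; this is the sense in which the approach dispenses with full system identification and its estimation errors.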

Graduate students will find that this book offers a thorough introduction to integral and inverse RL for feedback control related to optimal regulation and tracking, disturbance rejection, and multiplayer and multiagent systems. For researchers, it provides a combination of theoretical analysis, rigorous algorithms, and a wide-ranging selection of examples. The book equips practitioners working in various domains – aircraft, robotics, power systems, and communication networks among them – with theoretical insights valuable in tackling the real-world challenges they face.

About the author

Bosen Lian obtained the B.S. degree from the North China University of Water Resources and Electric Power, Zhengzhou, China, in 2015, the M.S. degree from Northeastern University, Shenyang, China, in 2018, and the Ph.D. degree from the University of Texas at Arlington, Arlington, TX, USA, in 2021. He is currently an Assistant Professor in the Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA. Prior to that, he was an Adjunct Professor in the Department of Electrical Engineering, University of Texas at Arlington, and a Postdoctoral Research Associate at the University of Texas at Arlington Research Institute. His research interests include reinforcement learning, inverse reinforcement learning, distributed estimation, distributed control, and robotics.

Wenqian Xue received the B.Eng. degree from Qingdao University, Qingdao, China, in 2015, and the M.S. degree from Northeastern University, Shenyang, China, in 2018, where she is currently pursuing the Ph.D. degree. She was a Research Assistant (Visiting Scholar) with the University of Texas at Arlington from 2019 to 2021. Her current research interests include learning-based data-driven control, reinforcement learning and inverse reinforcement learning, game theory, and distributed control of multi-agent systems. She is a reviewer for Automatica, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, and other journals.

Frank L. Lewis obtained the Bachelor's degree in Physics/Electrical Engineering and the M.S.E.E. degree at Rice University, the M.S. in Aeronautical Engineering from the University of West Florida, and the Ph.D. at Georgia Tech. He is a Fellow of the National Academy of Inventors, IEEE, IFAC, AAAS, the European Union Academy of Sciences, and the U.K. Institute of Measurement & Control, a Professional Engineer in Texas, and a U.K. Chartered Engineer. He is UTA Charter Distinguished Scholar Professor, UTA Distinguished Teaching Professor, and Moncrief-O'Donnell Chair at the University of Texas at Arlington Research Institute. He is ranked number 19 in the world among scientists in Electronics and Electrical Engineering by Research.com, and number 5 in the world in the subfield of Industrial Engineering and Automation according to a 2021 Stanford University research study, with 80,000 Google Scholar citations and an h-index of 123. He works in feedback control, intelligent systems, reinforcement learning, cooperative control systems, and nonlinear systems. He is the author of 8 U.S. patents, numerous journal special issues, 445 journal papers, and 20 books, including the textbooks Optimal Control, Aircraft Control, Optimal Estimation, and Robot Manipulator Control. He received the Fulbright Research Award, the NSF Research Initiation Grant, the ASEE Terman Award, the International Neural Network Society Gabor Award, the U.K. Institute of Measurement & Control Honeywell Field Engineering Medal, the IEEE Computational Intelligence Society Neural Networks Pioneer Award, the AIAA Intelligent Systems Award, and the AACC Ragazzini Award. He has received over $12M in 100 research grants from NSF, ARO, ONR, AFOSR, DARPA, and U.S. industry contracts, and as Director of the UTA Research Institute SBIR Program he helped win the U.S. SBA Tibbetts Award in 1996.

Hamidreza Modares received the B.S. degree from the University of Tehran, Tehran, Iran, in 2004, the M.S. degree from the Shahrood University of Technology, Shahrood, Iran, in 2006, and the Ph.D. degree from the University of Texas at Arlington, Arlington, TX, USA, in 2015. He is currently an Assistant Professor in the Department of Mechanical Engineering at Michigan State University. Prior to joining Michigan State University, he was an Assistant Professor in the Department of Electrical Engineering, Missouri University of Science and Technology. His current research interests include control and security of cyber-physical systems, machine learning in control, distributed control of multi-agent systems, and robotics. He is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems.

Bahare Kiumarsi received the B.S. degree in electrical engineering from the Shahrood University of Technology, Iran, in 2009, the M.S. degree in electrical engineering from the Ferdowsi University of Mashhad, Iran, in 2013, and the Ph.D. degree in electrical engineering from the University of Texas at Arlington, Arlington, TX, USA, in 2017. In 2018, she was a Post-Doctoral Research Associate with the Coordinated Science Laboratory, University of Illinois at Urbana–Champaign, Urbana, IL, USA. She is currently an Assistant Professor with the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA. Her current research interests include machine learning in control, security of cyber-physical systems, game theory, and distributed control.
