Advanced Controls and Sensors Group



Technical Lead: F.L. Lewis, Ph.D.
        National Academy of Inventors
        Fellow IEEE, Fellow IFAC,
        Fellow U.K. Inst. Measurement & Control
        Fellow European Union Academy of Science
        Professional Engineer, Texas
        Chartered Engineer, U.K. Eng. Council
        University Distinguished Scholar Professor
        University Distinguished Teaching Professor

            Moncrief-Donnell Endowed Chair
                        2004 annual report
                        2005 annual report
                        2006 annual report
                        2007 annual report
                        2008 annual report
                        2009 annual report
                        2010 annual report
                        2011 annual report
                        2012 annual report
                        2013 annual report
                        2014 annual report
                        2015 annual report
                        2016 annual report

F.L. Lewis Professional Details:

             PhD Students
             Grants and Contracts


EE "Systems and Controls" Courses and Notes



Research Areas: 


Cooperative Control of Distributed Systems on Graphs

Cooperative Control of Renewable Energy Microgrids

Reinforcement Learning & Approximate Dynamic Programming

          See recent presentations below

            ADP for discrete time systems

            ADP for continuous-time systems
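
As a minimal illustration of the ADP topic above (a sketch only, not taken from the linked notes; the scalar plant parameters are made up): policy iteration for a discrete-time LQR problem alternates policy evaluation (a Lyapunov equation for the current cost) with greedy policy improvement, and converges to the Riccati solution without ever solving the Riccati equation directly.

```python
# Hewer-style policy iteration for a scalar discrete-time LQR problem:
# x_{k+1} = a*x_k + b*u_k, cost = sum over k of (q*x_k^2 + r*u_k^2).
def policy_iteration_lqr(a, b, q, r, K=0.0, iters=50):
    for _ in range(iters):
        f = a - b * K                       # closed-loop dynamics under gain K
        P = (q + r * K * K) / (1 - f * f)   # policy evaluation (scalar Lyapunov eq.)
        K = b * P * a / (r + b * b * P)     # policy improvement (greedy gain)
    return P, K

a, b, q, r = 0.8, 1.0, 1.0, 1.0   # a is stable, so K = 0 is an admissible start
P, K = policy_iteration_lqr(a, b, q, r)

# At convergence, P satisfies the scalar discrete-time algebraic Riccati equation.
riccati_residual = abs(P - (q + a * a * r * P / (r + b * b * P)))
print(P, K, riccati_residual)
```

The same two-step structure (evaluate, then improve) underlies the matrix and model-free versions; reinforcement learning replaces the explicit Lyapunov solve with evaluation along measured trajectories.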

Human-Robot Interactions 

Intelligent Nonlinear Control

Neural network control of robots and nonlinear systems

            Neural network control

            Hamilton Jacobi equation solution using neural networks

            Optimal control for nonlinear systems

            H-infinity (game theory) control

Discrete Event Supervisory Control

            For robotic assembly cells

            For wireless sensor networks

Intelligent Diagnostics and Prognostics
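
A minimal sketch of the cooperative-control-on-graphs area listed above (illustrative only; the graph, initial states, and step size are made up): each agent runs a local voting protocol using only its neighbors' states, and all states synchronize to a common consensus value.

```python
# Discrete-time local voting protocol on an undirected connected graph:
# each agent steps toward its neighbors, x_i <- x_i + eps * sum_j (x_j - x_i).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-agent ring graph
x = [1.0, 5.0, -2.0, 4.0]                            # initial agent states
eps = 0.25   # step size below 1/(max degree) guarantees convergence

for _ in range(200):
    u = [sum(x[j] - x[i] for j in adj[i]) for i in adj]  # distributed control
    x = [x[i] + eps * u[i] for i in range(len(x))]

print([round(v, 4) for v in x])  # all agents reach the average, 2.0
```

Because the graph is undirected, the protocol preserves the state average, so the agents agree on the mean of the initial conditions; on a directed graph with a spanning tree they still synchronize, but to a weighted consensus value.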


Software from sponsored research








Books:

F.L. Lewis, D. Vrabie, and V. Syrmos, Optimal Control, third edition, John Wiley and Sons, New York, 2012.

F.L. Lewis, L. Xie, and D. Popa, Optimal & Robust Estimation: With an Introduction to Stochastic Control Theory, second edition, CRC Press, Boca Raton, 2007.

B.L. Stevens, F.L. Lewis, and E.N. Johnson, Aircraft Control and Simulation: Dynamics, Control, and Autonomous Systems, third edition, John Wiley and Sons, New York, 2015. First edition 1992.

D. Vrabie, K. Vamvoudakis, and F.L. Lewis, Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles, IET Press, 2012.

F.L. Lewis and Derong Liu, editors, Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, John Wiley/IEEE Press, Computational Intelligence Series. 2012.

F.L. Lewis, H. Zhang, K. Hengster-Movric, A. Das, Cooperative Control of Multi-Agent Systems: Optimal and Adaptive Design Approaches, Springer-Verlag, 2014.

F.L. Lewis, S. Jagannathan, and A. Yesildirek, Neural Network Control of Robot Manipulators and Nonlinear Systems, Taylor and Francis, London, 1999.
download pdf file

F.L. Lewis, Applied Optimal Control and Estimation:  Digital Design and Implementation, Prentice-Hall, New Jersey, TI Series, Feb. 1992.

download pdf file

F.L. Lewis, D.M. Dawson, and C.T. Abdallah, Robot Manipulator Control: Theory and Practice, 2nd edition, Revised and Expanded, CRC Press, Boca Raton, 2006.
download pdf file

Y. Kim and F.L. Lewis, High-Level Feedback Control with Neural Networks, World Scientific, Singapore, 1998.

G. Vachtsevanos, F.L. Lewis, M. Roemer, A. Hess, B. Wu, Intelligent Fault Diagnosis and Prognosis for Engineering Systems, John Wiley, New York, 2006.



Recent Presentations:

      ISAS 2024 Intro Lewis

Workshop on Reinforcement Learning 2018

      Main Background Development for Integral Reinforcement Learning

      New Developments and Extensions in Integral Reinforcement Learning- Graphical Games, Off-policy Tracking

      Applications of IRL- Microgrids, UAV, Human-Robot Interaction

Reinforcement Learning for Continuous Systems Optimality and Games

Reinforcement Learning for Discrete-time Systems

RL for Data-driven Optimization and Supervisory Process Control


Networked Multi-agent Systems Control- Stability vs. Optimality, and Graphical Games

Output Regulation of Heterogeneous MAS- Reduced-order design and Geometry

Output Regulation of Heterogeneous MAS- Reinforcement Learning for Synchronization with Completely Unknown Dynamics

Cooperative Synchronization Control for AC Microgrids

Human-Robot Interaction

Adaptive, Robust, Neural Network Control for Robots and Nonlinear Systems

Condition-Based Maintenance and Prognostics for Health Management

Discrete-Event Control for Industrial Processes



Various invited talks 2015, Integral Reinforcement Learning for Real-time Optimal Control and Differential Multi-player Games

Keynote Speaker, Int. Symposium on Resilient Control Systems, Philadelphia, August 2015, Reinforcement Learning for Resilient Control in Cooperative and Adversarial Multi-agent Networks: CPS Applications in Microgrid and Human-Robot Interactions

Invited Talk, Carnegie Mellon Pacific Campus, NASA Ames, Cal, April 2015, Cooperative Control for Renewable Energy Microgrids

Opening Invited Speaker, Workshop on Robotics and Biotechnology, Hong Kong City University, 16 Jan. 2015, “Reinforcement learning for human-robot interaction”

Invited Talk, Northeastern University, Shenyang, China, Jan. 2015, Data-driven Optimization and Supervisory Control for Industrial Processes

Optimal Control, Workshop at Northeastern University, Shenyang, China, 5-6 November 2014.  Qian Ren and Project 111 Program.
download pdf file, opfb H-infinity paper

Data-driven Control and Optimization for Industrial Processes, Workshop at Northeastern University, Shenyang, China, May 2014.  Qian Ren and Project 111 Program.

Reinforcement Learning and ADP for Real-Time Optimal Control and Dynamic Games, Plenary Talk, Int. Joint Conference on Neural Networks, Dallas, August 2013

Data-driven Control and Optimization for Industrial Processes: Reinforcement Learning & Supervisory Control, Workshop at Northeastern University, Shenyang, China, July 2013.  Project 111 Program.

Optimal Distributed Cooperative Control of Multi-Agent Systems and Graphical Games, Plenary Talk, Int. Conf. Intelligent Control and Information Processing ICICIP, Beijing, June 2013.

Distributed Cooperative Control for Electric Power Microgrid Applications, Plenary Talk, IEEE CYBER, Nanjing, May 2013.

Reinforcement Learning Adaptive Structures for Real-Time Optimal Control and Graphical Games, Invited Talk, Chinese University of Hong Kong, May 2013.

Adaptive Tuning for Optimal Process Control and Multi-Process Games Using Reinforcement Learning, Singapore Institute of Manufacturing Technology SIMTech, May 2013.

Optimal Adaptive Control Using Reinforcement Learning, Opening Plenary Talk, IEEE Multi-Conference on Systems and Controls, Dubrovnik, Croatia, Oct. 2012.

Novel Adaptive Control Structures by Reinforcement Learning, Opening Plenary Talk, Int. Conf. on System Theory, Control and Computing, Sinaia, Romania, Oct. 2012.

Reinforcement Methods for Online Learning in Autonomous Robotic Systems, Plenary Talk, FIRA Robo World Congress, Bristol, UK, 20 August 2012.

Cooperative Control: Stability versus Global Optimality, Chinese Academy of Sciences, 2012

Cooperative Control: Optimal Design and Graphical Games, Chinese Academy of Sciences, 2012

Cooperative Control: Optimal Design, Observers, and Distributed Adaptive Control, Chinese Academy of Sciences, 2011

CDC Orlando 2011 workshop
Optimal Adaptive Control: Online Solutions for Optimal Feedback Control and Differential Games Using Reinforcement Learning
     Lewis notes- MDP and reinforcement learning
     Busoniu notes
     Jagannathan notes
     Vrabie notes
     Lewis notes- online synchronous policy iteration

Optimal Control and Online Game Solutions Using Approximate Dynamic Programming, Workshop, Symp. ADP/RL, Paris, April 2011.

UTA Workshop on Building a Successful Research Program and Mentoring PhD Students, 15 October 2010.

Online Optimal Adaptive Control: Real-Time learning of optimal control and zero-sum game solutions, Plenary Talk, Chinese Conf. Decision & Control, Xuzhou, May 2010.

"Distributed Adaptive Control for Synchronization of Unknown Nonlinear Networked Systems," Invited Talk, 9th Symposium on Frontier Problems in System and Control, Chinese Academy of Sciences, Beijing, May 2010.

Decision & Control for Sustainable Manufacturing and Green Engineering, A-Star Singapore Manufacturing Technology Institute SIMTech, May 2010.

Structural Health Monitoring for Aircraft Skin Systems, A-Star Data Storage Institute DSI, Singapore, August 2009.


INFORMATION FOR NEW STUDENTS APPLYING: Apply through the Graduate Adviser, UTA Dept. of Electrical Engineering.


Ph.D. Students:


       Jim Worsham, "Aircraft Autopilot Controller Design," PhD funded by Lockheed Martin, Sept. 2021-present.

       Yusuf Kartal, Autonomous Aerial Vehicles Distributed Control and Interactive Games, Department of Electrical Engineering, University of Texas at Arlington, in progress. Sept. 2019-present.

       Bosen Lian, Distributed Estimation and Inverse Reinforcement Learning for Multi-agent Systems, PhD degree, Department of Electrical Engineering, University of Texas at Arlington, in progress. Dec. 2021.


Recent Former Ph.D. Students:


 Research Supported by (Past and Present):

National Science Foundation

Office of Naval Research

Army Research Office, Army National Automotive Center, TARDEC/RDECOM

Air Force Office of Scientific Research

ONR, NASA, and ARO SBIR Contracts




UTA Dept. of Electrical Engineering

UTA Research Institute