Posted November 10, 2024
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Rushikesh Kamalapurkar, University of Florida
Operator Theoretic Methods for System Identification
Operator representations of dynamical systems on Banach spaces provide a wide array of modeling and analysis tools. In this talk, I will focus on dynamic mode decomposition (DMD). In particular, new results on provably convergent singular value decomposition (SVD) of total derivative operators corresponding to dynamical systems will be presented. In the SVD approach, dynamical systems are modeled as total derivative operators that operate on reproducing kernel Hilbert spaces (RKHSs). The resulting total derivative operators are shown to be compact, provided the domain and range RKHSs are selected carefully. Compactness is used to construct a novel sequence of finite-rank operators that converges, in norm, to the total derivative operator. The finite-rank operators are shown to admit SVDs that are easily computed given sample trajectories of the underlying dynamical system. Compactness is further exploited to show convergence of the singular values and the right and left singular functions of the finite-rank operators to those of the total derivative operator. Finally, the convergent SVDs are utilized to construct estimates of the vector field that models the system. The estimated vector fields are shown to be provably convergent, uniformly on compact sets. Extensions to systems with control and to partially unknown systems are also discussed. This talk is based in part on joint works [RK23], [RK24], and [RRKJ24] with J.A. Rosenfeld.
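For context, the sketch below implements classical SVD-based exact DMD from snapshot data, the finite-dimensional starting point for the operator-theoretic, RKHS-based extensions discussed in the talk; it is not the total derivative operator method itself, and the linear test system and truncation rank are purely illustrative.

```python
import numpy as np

def exact_dmd(X, Y, r):
    """Classical exact DMD: fit a linear operator A with Y ~ A X from
    snapshot pairs (columns of X and Y), truncated to rank r via the SVD."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)  # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W               # exact DMD modes
    return eigvals, Phi

# Illustrative data: snapshots of a known linear system x_{k+1} = A x_k.
rng = np.random.default_rng(0)
A_true = np.array([[0.95, 0.1], [0.0, 0.9]])
x = rng.standard_normal(2)
snaps = [x]
for _ in range(50):
    x = A_true @ x
    snaps.append(x)
snaps = np.array(snaps).T              # states as columns
X, Y = snaps[:, :-1], snaps[:, 1:]
eigvals, modes = exact_dmd(X, Y, r=2)
print("DMD eigenvalues:", np.sort(eigvals))   # ~ eigenvalues of A_true (0.9, 0.95)
```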
Posted December 6, 2024
Last modified January 2, 2025
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Harbir Antil, George Mason University
Optimization and Digital Twins
With recent advancements in computing resources and interdisciplinary collaborations, a new research field called Digital Twins (DTs) is starting to emerge. Data from sensors located on a physical system is fed into its DT; the DT in turn helps make decisions about the physical system. This cycle then continues for the lifetime of the physical system, a typical example being a bridge. In many cases, these problems can be cast as optimization problems with finite- or infinite-dimensional (partial differential equation) constraints. This talk will provide an introduction to this topic. Special attention will be given to: 1) optimization algorithms that are adaptive and can handle inexactness, e.g., trust regions and ALESQP; 2) optimization under uncertainty and tensor-train decomposition to overcome the curse of dimensionality; 3) reduced-order modeling for dynamic optimization using randomized compression. Additionally, the DT framework may require coupling multiphysics / systems / data with very different time scales. Keeping this in mind, a newly introduced notion of barely coupled problems will be discussed. Realistic examples of DTs to identify weaknesses in structures such as bridges, wind turbines, and electric motors, as well as in neuromorphic imaging, will be considered.
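As a toy illustration of the calibration loop inside a DT, the sketch below identifies a stiffness parameter of a simple oscillator model from noisy sensor data by solving a small inverse problem with SciPy; the model, the nominal "healthy" stiffness, and all constants are assumptions for illustration only, not taken from the talk.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Damped oscillator as a stand-in for a structural model:
#   m x'' + c x' + k x = 0, with unknown stiffness k to be identified.
m_mass, c_damp = 1.0, 0.2

def simulate(k, t_eval, x0=(1.0, 0.0)):
    rhs = lambda t, z: [z[1], -(c_damp * z[1] + k * z[0]) / m_mass]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), x0, t_eval=t_eval, rtol=1e-8)
    return sol.y[0]                           # displacement "sensor" signal

t = np.linspace(0.0, 10.0, 200)
k_true = 3.0                                  # degraded stiffness (nominal would be 4.0)
rng = np.random.default_rng(1)
y_meas = simulate(k_true, t) + 0.01 * rng.standard_normal(t.size)

# Calibration step of the twin: fit k to the sensor data.
res = least_squares(lambda k: simulate(k[0], t) - y_meas, x0=[4.0])
print("identified stiffness:", res.x[0])      # ~3.0, flagging degradation vs. 4.0
```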
Posted November 1, 2024
Last modified January 8, 2025
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Ali Zemouche, University of Lorraine, France
Advanced Robust Moving Horizon Estimation Schemes for Nonlinear Systems
This presentation deals with the robust stability analysis of moving horizon estimation (MHE) for a class of nonlinear systems. New mathematical tools are introduced, enabling the development of new design conditions to optimize the parameters of the MHE scheme's cost function. These conditions are closely tied to the size of the MHE window and the system's incremental exponential input/output-to-state stability (i-EIOSS) coefficients. To enhance the robustness of the MHE while minimizing the window size, advanced prediction techniques are proposed. Additionally, innovative LMI-based methods are presented for synthesizing the i-EIOSS coefficients and prediction gains. The effectiveness of the proposed prediction methods is validated through numerical examples highlighting their performance improvements.
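For readers new to MHE, the sketch below implements a minimal moving horizon estimator for a scalar nonlinear system, solved as a nonlinear least-squares problem over a sliding window; the weights play the role of the cost-function parameters whose design the talk addresses, and the system, horizon, and weights are illustrative assumptions rather than the talk's design.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative scalar system: x_{k+1} = f(x_k) + w_k, y_k = x_k + v_k.
f = lambda x: 0.9 * x + 0.2 * np.sin(x)
rng = np.random.default_rng(0)
T, N = 60, 8                     # simulation length, MHE window length
Qi, Ri, Pi = 10.0, 100.0, 1.0    # weights: process noise, measurement noise, prior

x_true = np.zeros(T)
x_true[0] = 2.0
for k in range(T - 1):
    x_true[k + 1] = f(x_true[k]) + 0.05 * rng.standard_normal()
y = x_true + 0.1 * rng.standard_normal(T)

def mhe_residuals(xw, y_win, x_prior):
    r = [np.sqrt(Pi) * (xw[0] - x_prior)]                        # arrival-cost term
    r += [np.sqrt(Qi) * (xw[i + 1] - f(xw[i])) for i in range(len(xw) - 1)]
    r += [np.sqrt(Ri) * (y_win[i] - xw[i]) for i in range(len(xw))]
    return np.array(r)

x_hat, x_prior = np.zeros(T), 0.0
for k in range(N, T):
    y_win = y[k - N:k + 1]
    sol = least_squares(mhe_residuals, x0=np.full(N + 1, x_prior),
                        args=(y_win, x_prior))
    x_hat[k] = sol.x[-1]          # current-time estimate
    x_prior = sol.x[1]            # prior for the next (shifted) window
print("RMS estimation error:", np.sqrt(np.mean((x_hat[N:] - x_true[N:]) ** 2)))
```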
Posted December 23, 2024
Last modified January 10, 2025
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Carsten Scherer, University of Stuttgart, Germany
IEEE Fellow
Robust Control and the Design of Controllers for Optimization
Recent years have witnessed a renewed interest in viewing optimization algorithms as feedback systems. This viewpoint turns, for example, the analysis of the convergence properties of a first-order algorithm into a stability analysis of a Lur'e system. In this talk we highlight why advanced methods in robust control play a key role in developing unprecedented tools for analyzing the convergence properties of first-order algorithms for solving strongly convex programs. In contrast to alternative approaches, we reveal that the proposed avenue permits not only the analysis but also the systematic design of optimization algorithms using convex semidefinite programming.
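A minimal sketch of this viewpoint, in the spirit of the IQC-based analysis of Lessard, Recht, and Packard: gradient descent on an m-strongly convex, L-smooth function is treated as a Lur'e system, and a small Lyapunov/IQC LMI certifies a linear convergence rate. To keep the sketch dependency-free, a grid search over the scalar multiplier stands in for an SDP solver; all constants are illustrative.

```python
import numpy as np

# Illustrative constants: f is m-strongly convex with L-Lipschitz gradient,
# and gradient descent uses step size alpha = 1/L, i.e. x+ = x - alpha * grad f(x).
m, L = 1.0, 10.0
alpha = 1.0 / L

# Sector IQC satisfied by u = grad f(y) (minimizer shifted to the origin):
#   [y; u]^T M [y; u] >= 0  with  M = [[-2 m L, m + L], [m + L, -2]].
M = np.array([[-2 * m * L, m + L], [m + L, -2.0]])

def rate_certified(rho, lam_grid=np.linspace(0.0, 0.2, 2001)):
    """Check the 2x2 Lyapunov/IQC LMI certifying |x_k - x*| <= C * rho^k.
    The Lyapunov weight is normalized to 1; a grid over the multiplier lam
    replaces an SDP solver call to keep the sketch self-contained."""
    base = np.array([[1.0 - rho**2, -alpha], [-alpha, alpha**2]])
    for lam in lam_grid:
        if np.linalg.eigvalsh(base + lam * M).max() <= 1e-9:
            return True
    return False

# Bisect on the contraction factor; theory predicts rho* = 1 - m/L = 0.9 here.
lo, hi = 0.0, 1.0
for _ in range(20):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if rate_certified(mid) else (mid, hi)
print("smallest certified rate:", round(hi, 4))
```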
Posted December 8, 2024
Last modified February 24, 2025
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
John Baras, University of Maryland
Fellow of AAAS, AMS, IEEE, and SIAM
Robust Machine Learning, Reinforcement Learning and Autonomy
Robustness is a fundamental concept in systems science and engineering. It is a critical consideration in inference and decision-making problems. It has recently surfaced again in the context of machine learning (ML), reinforcement learning (RL), and artificial intelligence (AI). We describe a novel and unifying theory of robustness for ML/RL/AI emanating from our much earlier fundamental results on robust output feedback control for general systems. We briefly summarize this theory and the universal solution it provides, consisting of two coupled HJB equations. These earlier results rigorously established the equivalence of three seemingly unrelated problems: the robust output feedback control problem, a partially observed differential game, and a partially observed risk-sensitive stochastic control problem. We first show that the “four block” view of the above results leads naturally to a similar formulation of the robust ML problem, and to a rigorous path to analyze robustness and attack resiliency in ML. Then we describe a recent risk-sensitive approach, using an exponential criterion in deep learning, that explains the convergence of stochastic gradients despite over-parametrization. Finally, we describe our most recent results on robust and risk-sensitive RL for control, using exponential rewards, that emerge from our earlier theory, with the important new extension that the models are now unknown. We show how all forms of regularized RL can be derived from our theory, including KL and entropy regularization, a relation to probabilistic graphical models, and distributional robustness. The deeper reason for this unification emerges: it is the fundamental tradeoff between performance and risk measures in decision making, via rigorous duality. We close with open problems and future research directions.
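As a small illustration of the regularized-RL connection mentioned above, the sketch below runs entropy-regularized (soft) value iteration on a toy finite MDP, where a log-sum-exp backup replaces the hard max; the MDP, temperature, and discount factor are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Toy MDP (illustrative): 3 states, 2 actions, random rewards and transitions.
rng = np.random.default_rng(0)
nS, nA, gamma, tau = 3, 2, 0.95, 0.5            # tau = entropy-regularization temperature
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s'] transition kernel
R = rng.uniform(0, 1, size=(nS, nA))            # rewards

# Entropy-regularized ("soft") value iteration:
#   Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
#   V(s)   = tau * log sum_a exp(Q(s,a) / tau)   (soft max; -> hard max as tau -> 0)
V = np.zeros(nS)
for _ in range(500):
    Q = R + gamma * P @ V
    V = tau * np.log(np.sum(np.exp(Q / tau), axis=1))

policy = np.exp((Q - V[:, None]) / tau)          # optimal softmax (Boltzmann) policy
print("soft values:", np.round(V, 3))
print("policy rows sum to 1:", np.allclose(policy.sum(axis=1), 1.0))
```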
Posted December 22, 2024
Last modified March 5, 2025
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Serdar Yuksel, Queen’s University, Canada
Robustness to Approximations and Learning in Stochastic Control via a Framework of Kernel Topologies
Stochastic kernels represent system models, control policies, and measurement channels, and thus offer a general mathematical framework for learning, robustness, and approximation analysis. To this end, we will first present and study several kernel topologies. These include the weak* (also called Borkar) topology, the Young topology, kernel mean embedding topologies, and strong convergence topologies. Convergence, continuity, and robustness properties of optimal cost for models and policies (viewed as kernels) will be presented in both discrete-time and continuous-time stochastic control. For models viewed as kernels, we study robustness to model perturbations, including finite approximations for discrete-time models and more general modeling errors, and we study the mismatch loss of optimal control policies designed for an incorrect model and applied to the true system, as the incorrect model approaches the true model under a variety of kernel convergence criteria. In particular, we show that the expected induced cost is robust under continuous weak convergence of transition kernels. Under stronger Wasserstein or total variation regularity, a modulus of continuity is also available. As applications of robustness under continuous weak convergence via data-driven model learning, (i) robustness to empirical model learning for discounted and average cost criteria is obtained with sample complexity bounds, and (ii) convergence and near optimality of a quantized Q-learning algorithm for MDPs with standard Borel spaces are established; the algorithm is shown to converge to an optimal solution of an approximate model under both discounted and average cost criteria. In the context of continuous-time models, we obtain counterparts in which we show continuity of the cost in the policy under the Young and Borkar topologies, as well as robustness of the optimal cost in models, including discrete-time approximations, for finite-horizon and infinite-horizon discounted/ergodic criteria. Discrete-time approximations under several criteria and information structures will then be obtained via a unified approach of policy and model convergence. This is joint work with Ali D. Kara, Somnath Pradhan, Naci Saldi, and Tamas Linder.
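As a small illustration of the quantized Q-learning idea mentioned above, the sketch below quantizes a scalar continuous state space into bins and runs tabular Q-learning on the quantized process with decreasing step sizes; the dynamics, cost, quantizer, and exploration scheme are illustrative assumptions, not the construction from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)
nbins, nA, gamma, eps = 20, 2, 0.9, 0.1
edges = np.linspace(-2.0, 2.0, nbins - 1)        # uniform quantizer on the state space

def step(x, a):
    """Illustrative controlled dynamics on the real line with additive noise."""
    u = -0.5 if a == 0 else 0.5
    x_next = np.clip(0.9 * x + u + 0.1 * rng.standard_normal(), -3.0, 3.0)
    cost = x_next ** 2 + 0.01 * u ** 2
    return x_next, cost

quantize = lambda x: int(np.digitize(x, edges))  # map continuous state to a bin index

Q = np.zeros((nbins, nA))
counts = np.zeros((nbins, nA))
x = 0.0
for _ in range(200_000):
    s = quantize(x)
    a = rng.integers(nA) if rng.random() < eps else int(np.argmin(Q[s]))
    x_next, cost = step(x, a)
    counts[s, a] += 1
    lr = 1.0 / counts[s, a]                      # decreasing step size
    Q[s, a] += lr * (cost + gamma * Q[quantize(x_next)].min() - Q[s, a])
    x = x_next

print("greedy action per bin:", np.argmin(Q, axis=1))
```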
Posted December 9, 2024
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Serkan Gugercin, Virginia Tech
TBA
Posted December 21, 2024
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Saber Jafarpour, University of Colorado
TBA
Posted November 7, 2024
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Irena Lasiecka, University of Memphis
AACC Bellman Control Heritage Awardee, AMS Fellow, SIAM Fellow, and SIAM Reid Prize Awardee
TBA
Posted January 10, 2025
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Carolyn Beck, University of Illinois Urbana-Champaign
IEEE Fellow
TBA
Posted January 16, 2025
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Bahman Gharesifard, Queen's University
TBA
Posted February 19, 2025
Control and Optimization Seminar
11:30 am – 12:20 pm, Zoom
Nina Amini, Laboratory of Signals and Systems, CentraleSupélec
TBA