Posted January 11, 2024

Last modified January 17, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Boris Mordukhovich, Wayne State University
AMS Fellow, SIAM Fellow

Optimal Control of Sweeping Processes with Applications

This talk is devoted to a novel class of optimal control problems governed by sweeping (or Moreau) processes that are described by discontinuous dissipative differential inclusions. Although such dynamical processes, strongly motivated by applications, first appeared in the 1970s, optimal control problems for them have only been formulated quite recently and were found to be complicated from the viewpoint of developing control theory. Their study and applications require advanced tools of variational analysis and generalized differentiation, which will be presented in this talk. Combining this machinery with the method of discrete approximations leads us to deriving new necessary optimality conditions and their applications to practical models in elastoplasticity, traffic equilibria, and robotics. This talk is based on joint work with Giovanni Colombo (University of Padova), Dao Nguyen (San Diego State University), and Trang Nguyen (Wayne State University).
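To make the dynamics concrete, here is a minimal sketch (a toy example of mine, not from the talk) of Moreau's catching-up scheme, the discretization underlying the discrete-approximation method mentioned above: at each time step the state is simply projected onto the moved set C(t).

```python
# Moreau's catching-up scheme for the sweeping process dx/dt in -N_{C(t)}(x):
# at each time step the state is projected onto the moved set,
#   x_{k+1} = proj_{C(t_{k+1})}(x_k).
# Here C(t) = [t - 1, t + 1] is an interval sweeping to the right.

def proj(x, lo, hi):
    """Euclidean projection of a scalar onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def catching_up(x0, T=5.0, n=1000):
    """Discretize [0, T] and iterate the projection step."""
    h = T / n
    x = x0
    for k in range(1, n + 1):
        t = k * h
        x = proj(x, t - 1.0, t + 1.0)
    return x

# A point starting at the origin is untouched until the left edge reaches it,
# then it is dragged along; at T = 5 it sits at the edge t - 1 = 4.
print(catching_up(0.0))   # ~4.0
```

The discontinuous, dissipative character of the inclusion is visible here: the trajectory is constant until the moving boundary hits it, then follows the boundary.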

Posted February 2, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Ali Kara, University of Michigan

Stochastic Control with Partial Information: Optimality, Stability, Approximations and Learning

Partially observed stochastic control is an appropriate model for many applications involving optimal decision making and control. In this talk, we will first give a general introduction and then present optimality, approximation, and learning-theoretic results. For such problems, the existence of optimal policies has in general been established by reducing the original partially observed stochastic control problem to a fully observed one with probability-measure-valued states. However, computing a near-optimal policy for this fully observed model is challenging. We present an alternative reduction tailored to an approximation analysis via filter stability and arrive at an approximate finite model. Toward this end, we will present the associated regularity, Feller continuity, and controlled filter stability conditions: filter stability refers to the correction of an incorrectly initialized filter for a partially observed dynamical system as measurements accumulate. We present explicit conditions for filter stability, which are then used to arrive at approximately optimal solutions. Finally, we establish the convergence of a learning algorithm for control policies using a finite history of past observations and control actions (by viewing the finite window as a "state") and establish near optimality of this approach. As a corollary, this analysis establishes near optimality of classical Q-learning for continuous-state-space stochastic control problems (by lifting them to partially observed models with approximating quantizers viewed as measurement kernels) under weak continuity conditions. Further implications and some open problems will also be discussed.
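The finite-window idea can be illustrated with a small tabular sketch. The toy partially observed chain and every parameter below are illustrative choices of mine, not from the talk: the point is only that treating a window of recent observations as the Q-learning "state" yields a sensible policy.

```python
import random

# Tabular Q-learning over a finite window of past observations, with the
# window itself used as the state. Toy model (mine): hidden state s in {0, 1}
# persists with probability 0.9, observations flip s with probability 0.1,
# and the reward is 1 exactly when the action matches s.

def step(s, a):
    r = 1.0 if a == s else 0.0
    s2 = s if random.random() < 0.9 else 1 - s      # hidden transition
    o2 = s2 if random.random() < 0.9 else 1 - s2    # noisy observation
    return s2, o2, r

def train(window=2, steps=20000, alpha=0.1, gamma=0.9, eps=0.1):
    random.seed(0)
    Q = {}
    s, o = 0, 0
    w = (o,) * window                  # finite observation window = "state"
    for _ in range(steps):
        qs = Q.setdefault(w, [0.0, 0.0])
        a = random.randrange(2) if random.random() < eps else qs.index(max(qs))
        s, o2, r = step(s, a)
        w2 = w[1:] + (o2,)
        q2 = Q.setdefault(w2, [0.0, 0.0])
        qs[a] += alpha * (r + gamma * max(q2) - qs[a])   # Q-learning update
        w = w2
    return Q

Q = train()
greedy = {w: q.index(max(q)) for w, q in Q.items()}
print(greedy[(1, 1)], greedy[(0, 0)])   # learned policy tracks the newest observation
```

Since the observation is correct with probability 0.9, the near-optimal policy is to play the most recent observation, and the windowed Q-learner recovers exactly that.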

Posted December 28, 2023

Last modified February 20, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Huyên Pham
Editor-in-Chief of the SIAM Journal on Control and Optimization, 2024–

A Schrödinger Bridge Approach to Generative Modeling for Time Series

We propose a novel generative model for time series based on the Schrödinger bridge (SB) approach. It consists of entropic interpolation, via optimal transport, between a reference probability measure on path space and a target measure consistent with the joint data distribution of the time series. The solution is characterized by a stochastic differential equation on a finite horizon with a path-dependent drift function, hence respecting the temporal dynamics of the time series distribution. We estimate the drift function from data samples by nonparametric methods, e.g., kernel regression, and simulation of the SB diffusion yields new synthetic data samples of the time series. The performance of our generative model is evaluated through a series of numerical experiments. First, we test it on autoregressive models, a GARCH model, and fractional Brownian motion, measuring the accuracy of our algorithm with marginal and temporal-dependency metrics as well as predictive scores. Next, we use the SB-generated synthetic samples for a deep hedging application on real data sets.
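Two generic ingredients of such a sampler can be sketched in a few lines. This is NOT the authors' algorithm; the synthetic model, kernel bandwidth, and step size are my own choices. It shows (1) Nadaraya-Watson kernel regression estimating a one-step drift b(x) ≈ E[X_{k+1} − X_k | X_k = x] from observed transitions, and (2) an Euler-Maruyama loop simulating the estimated diffusion to produce synthetic samples.

```python
import math, random

# (1) Kernel (Nadaraya-Watson) estimate of the one-step drift from data,
# (2) Euler-Maruyama simulation of the fitted diffusion. Illustrative only.

def nw_drift(pairs, h=0.3):
    """Kernel estimate of the mean one-step increment from (x, dx) pairs."""
    def b(x):
        wts = [math.exp(-((x - xi) / h) ** 2) for xi, _ in pairs]
        den = sum(wts)
        num = sum(wt * dxi for wt, (_, dxi) in zip(wts, pairs))
        return num / den if den > 1e-12 else 0.0
    return b

def euler_maruyama(b, x0, sigma, dt, n, rng):
    """Simulate n steps; b already returns the mean one-step increment."""
    x = x0
    for _ in range(n):
        x += b(x) + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# Synthetic training transitions from a mean-reverting process
# dX = -X dt + 0.2 dW, sampled at step dt.
rng = random.Random(1)
dt = 0.1
pairs, x = [], 1.0
for _ in range(2000):
    dx = -x * dt + 0.2 * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    pairs.append((x, dx))
    x += dx
b = nw_drift(pairs)
print(b(1.0) < 0.0)   # True: the estimated drift pulls back toward 0
synthetic = euler_maruyama(b, 1.0, 0.2, dt, 50, rng)
```

In the SB construction the drift is additionally path-dependent; the scalar, Markovian version above is only meant to show the estimation-then-simulation pipeline.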

Posted January 22, 2024

Last modified March 4, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (click here to join)
Dante Kalise, Imperial College

Feedback Control Synthesis for Interacting Particle Systems across Scales

This talk focuses on the computational synthesis of optimal feedback controllers for interacting particle systems operating at different scales. In the first part, we discuss the construction of control laws for large-scale microscopic dynamics by supervised learning methods, tackling the curse of dimensionality inherent in such systems. Moving forward, we integrate the microscopic feedback law into a Boltzmann-type equation, bridging controls at microscopic and mesoscopic scales, allowing for near-optimal control of high-dimensional densities. Finally, in the framework of mean field optimal control, we discuss the stabilization of nonlinear Fokker-Planck equations towards unstable steady states via model predictive control.

Posted February 12, 2024

Last modified March 4, 2024

Control and Optimization Seminar

10:30 am – 11:20 am (note the special earlier seminar time for this week only) Zoom (click here to join)
Antoine Girard, Laboratoire des Signaux et Systèmes
CNRS Bronze Medalist, IEEE Fellow, and George S. Axelby Outstanding Paper Awardee

Switched Systems with Omega-Regular Switching Sequences: Application to Switched Observer Design

In this talk, I will present recent results on discrete-time switched linear systems. We consider systems with constrained switching signals, where the constraint is given by an omega-regular language. Omega-regular languages allow us to specify fairness properties (e.g., all modes have to be activated an infinite number of times) that cannot be captured by the usual switching constraints given by dwell times or graph constraints. By combining automata-theoretic techniques and Lyapunov theory, we provide necessary and sufficient conditions for the stability of such switched systems. In the second part of the talk, I will present an application of our framework to observer design for switched systems that are unobservable under arbitrary switching. We establish a systematic and "almost universal" procedure to design observers for discrete-time switched linear systems. This is joint work with Georges Aazan, Luca Greco, and Paolo Mason.
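A scalar toy example (mine, not from the talk) shows how a language constraint on the switching sequence can rescue stability. Mode 0 has gain 1.2 (unstable alone), mode 1 has gain 0.5 (stable); under arbitrary switching the system can diverge (1.2**8 ≈ 4.3), but under the fairness-type constraint "the stable mode occurs at least once in every two steps" every admissible gain product contracts.

```python
from itertools import product

# Worst-case growth of a scalar switched system over language-constrained
# switching words. GAIN[m] is the one-step gain of mode m.

GAIN = {0: 1.2, 1: 0.5}

def admissible(word):
    """No two consecutive activations of the unstable mode 0."""
    return all(not (a == 0 and b == 0) for a, b in zip(word, word[1:]))

def worst_growth(n):
    """Largest gain product over all admissible switching words of length n."""
    best = 0.0
    for w in product((0, 1), repeat=n):
        if admissible(w):
            g = 1.0
            for mode in w:
                g *= GAIN[mode]
            best = max(best, g)
    return best

print(worst_growth(8))   # (1.2 * 0.5)**4 = 0.1296: contraction despite mode 0
```

The constraint "no two consecutive 0s" is recognized by a two-state automaton, which is exactly the kind of automaton one composes with the dynamics in the automata-theoretic Lyapunov analysis described above.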

Posted January 22, 2024

Last modified March 4, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (click here to join)
Boris Kramer, University of California San Diego

Scalable Computations for Nonlinear Balanced Truncation Model Reduction

Nonlinear balanced truncation is a model order reduction technique that reduces the dimension of nonlinear systems on nonlinear manifolds and preserves either open- or closed-loop observability and controllability aspects of the nonlinear system. Two computational challenges have so far prevented its deployment on large-scale systems: (a) the solution of the Hamilton-Jacobi-(Bellman) equations needed to characterize controllability and observability, and (b) efficient model reduction and reduced-order model (ROM) simulation on the resulting nonlinear balanced manifolds. We present a novel, unifying, and scalable approach to balanced truncation for large-scale control-affine nonlinear systems that considers a Taylor-series-based approach to solving a class of parametrized Hamilton-Jacobi-Bellman equations at the core of balancing. The specific tensor structure of the Taylor-series coefficients (which are themselves tensors) allows for scalability up to thousands of states. Moreover, we will present a nonlinear balance-and-reduce approach that finds a reduced nonlinear state transformation that balances the system properties. The talk will illustrate the strength and scalability of the algorithm on several semi-discretized nonlinear partial differential equations, including a nonlinear heat equation, vibrating beams, Burgers' equation, and the Kuramoto-Sivashinsky equation.
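For orientation, here is the linear special case that the nonlinear theory generalizes (a baseline sketch of mine, not the talk's tensor-based HJB method): for a Schur-stable discrete-time system x⁺ = A x + B u, the controllability Gramian solves the Lyapunov equation P = A P Aᵀ + B Bᵀ and equals the convergent series Σₖ Aᵏ B Bᵀ (Aᵀ)ᵏ; balancing then diagonalizes the Gramians jointly.

```python
# Controllability Gramian of x+ = A x + B u by truncating the series
# P = sum_k A^k B B^T (A^T)^k, then checking the Lyapunov equation.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def gramian(A, B, terms=200):
    n = len(A)
    P = [[0.0] * n for _ in range(n)]
    Ak = [[float(i == j) for j in range(n)] for i in range(n)]   # A^k, k = 0
    for _ in range(terms):
        T = matmul(Ak, B)                   # A^k B
        TT = matmul(T, transpose(T))        # A^k B B^T (A^T)^k
        P = [[P[i][j] + TT[i][j] for j in range(n)] for i in range(n)]
        Ak = matmul(Ak, A)
    return P

A = [[0.5, 0.1], [0.0, 0.8]]
B = [[1.0], [1.0]]
P = gramian(A, B)
# residual of the Lyapunov equation P - A P A^T - B B^T should be ~ 0
R = matmul(matmul(A, P), transpose(A))
res = max(abs(P[i][j] - R[i][j] - B[i][0] * B[j][0])
          for i in range(2) for j in range(2))
print(res < 1e-9)   # True
```

In the nonlinear setting the quadratic forms defined by the Gramians are replaced by energy functions solving HJB-type equations, which is where the Taylor-series tensor machinery above enters.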

Posted January 27, 2024

Last modified March 4, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (click here to join)
Sergey Dashkovskiy, Julius-Maximilians-Universität Würzburg

Stability Properties of Dynamical Systems Subjected to Impulsive Actions

We consider several approaches to studying stability and instability properties of infinite-dimensional impulsive systems. The approaches are of Lyapunov type and provide conditions under which an impulsive system is stable. In particular, we will cover the case when the discrete and continuous dynamics are not simultaneously stable. We will also handle the case when both the flow and the jumps are stable but the overall system is not. We will illustrate these approaches by means of several examples.
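The interplay between stable flow and destabilizing jumps can be seen in the simplest scalar case (example mine): the flow dx/dt = −a x is stable, each impulse x → c x with c > 1 is destabilizing, and the overall system is stable exactly when the per-period factor c·exp(−aT) is below 1, so stability hinges on the impulse period T.

```python
import math

# Scalar impulsive system: exponential decay between impulses, a
# multiplicative jump at each impulse time. Stability depends on the
# per-period factor c * exp(-a * T).

def simulate(a, c, T, periods, x0=1.0):
    x = x0
    for _ in range(periods):
        x *= math.exp(-a * T)   # stable continuous decay between impulses
        x *= c                  # destabilizing jump
    return abs(x)

# a = 1, c = 2: stable for period T = 1 (2/e < 1), unstable for T = 0.5.
print(simulate(1.0, 2.0, 1.0, 100) < 1e-6)   # True: decays
print(simulate(1.0, 2.0, 0.5, 100) > 1e6)    # True: grows
```

Lyapunov-type conditions of the kind discussed in the talk formalize exactly this budget between the decay supplied by the flow and the growth injected by the jumps, also in infinite dimensions where no such closed-form factor is available.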

Posted January 6, 2024

Last modified March 4, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (click here to join)
Madalena Chaves, Centre Inria d'Université Côte d'Azur

Coupling, Synchronization Dynamics, and Emergent Behavior in a Network of Biological Oscillators

Biological oscillators often involve a complex network of interactions, as in the case of circadian rhythms or the cell cycle. Mathematical modeling, and especially model reduction, helps to understand the main mechanisms behind oscillatory behavior. In this context, we first study a two-gene oscillator using piecewise linear approximations to improve the performance and robustness of the oscillatory dynamics. Next, motivated by the synchronization of biological rhythms in a group of cells in an organ such as the liver, we study a network of identical oscillators under diffusive coupling, interconnected according to different topologies. The piecewise linear formalism enables us to characterize the emergent dynamics of the network and show that a number of new steady states are generated in the network of oscillators. Finally, given two distinct oscillators mimicking the circadian clock and the cell cycle, we analyze their interconnection to study the capacity for mutual period regulation and control between the two reduced oscillators. We are interested in characterizing the range of coupling parameters for which the two systems play the roles of controller and follower.
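The effect of diffusive coupling can be sketched with toy one-dimensional units (my own example, not the paper's gene-network model): two identical units dx_i/dt = sin(x_i) + k(x_j − x_i). Since sin is 1-Lipschitz, the mismatch e = x1 − x2 satisfies d|e|/dt ≤ (1 − 2k)|e|, so any gain k > 1/2 forces the units to synchronize, while uncoupled units can settle on different equilibria.

```python
import math

# Two identical units under diffusive coupling of strength k; returns the
# final mismatch |x1 - x2| after Euler integration over T = steps * dt.

def simulate(k, x1=0.5, x2=-0.5, dt=0.01, steps=2000):
    for _ in range(steps):
        d1 = math.sin(x1) + k * (x2 - x1)
        d2 = math.sin(x2) + k * (x1 - x2)
        x1, x2 = x1 + dt * d1, x2 + dt * d2
    return abs(x1 - x2)

print(simulate(2.0) < 1e-3)   # True: strong coupling synchronizes the units
print(simulate(0.0) > 0.5)    # True: uncoupled units settle far apart
```

In the piecewise linear network setting of the talk, the analogue of the uncoupled outcome is the appearance of new steady states of the coupled network that no isolated oscillator possesses.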

Posted January 17, 2024

Last modified March 4, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (click here to join)
Tobias Breiten, Technical University of Berlin

On the Approximability of Koopman-Based Operator Lyapunov Equations

Computing the Lyapunov function of a system plays a crucial role in optimal feedback control, for example when policy iteration is used. This talk will focus on the Lyapunov function of a nonlinear, autonomous, finite-dimensional dynamical system, which will be rewritten as an infinite-dimensional linear system using the Koopman operator. Since this infinite-dimensional system has the structure of a weak-* continuous semigroup in a specially weighted Lp-space, one can establish a connection between the solution of an operator Lyapunov equation and the desired Lyapunov function. It will be shown that the solution to this operator equation exhibits rapid eigenvalue decay, which justifies finite-rank approximations by numerical methods. The usefulness for numerical computations will also be demonstrated with two short examples. This is joint work with Bernhard Höveler (TU Berlin).
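The lifting step can be seen on a classical finite-dimensional Koopman example (the system and Lyapunov function below are my own illustration, not from the talk): for dx1/dt = μx1, dx2/dt = λ(x2 − x1²), the observables z = (x1, x2, x1²) evolve linearly, and for μ = −1, λ = −2 the symmetric part of the lifted matrix is negative definite, so V(x) = |z|² = x1² + x2² + x1⁴ is a Lyapunov function for the nonlinear system.

```python
# Verify along a simulated nonlinear trajectory that the Koopman-derived
# candidate V(x) = x1^2 + x2^2 + x1^4 decreases monotonically.

MU, LAM = -1.0, -2.0

def V(x1, x2):
    return x1 ** 2 + x2 ** 2 + x1 ** 4

def lyapunov_along_trajectory(x1, x2, dt=1e-3, steps=3000):
    vals = [V(x1, x2)]
    for _ in range(steps):
        x1, x2 = x1 + dt * MU * x1, x2 + dt * LAM * (x2 - x1 ** 2)
        vals.append(V(x1, x2))
    return vals

vals = lyapunov_along_trajectory(1.0, 1.0)
print(all(b < a for a, b in zip(vals, vals[1:])))   # True: V strictly decreases
```

Here the lift is exact with three observables; in general the Koopman system is genuinely infinite-dimensional, and the talk's eigenvalue-decay result is what licenses truncating it to finite rank.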

Posted January 16, 2024

Last modified March 4, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (click here to join)
Jorge Poveda, University of California, San Diego
Recipient of the Donald P. Eckman Award, the NSF CAREER Award, and the AFOSR Young Investigator Program Award

Multi-Time Scale Hybrid Dynamical Systems for Model-Free Control and Optimization

Hybrid dynamical systems, which combine continuous-time and discrete-time dynamics, are prevalent in various engineering applications such as robotics, manufacturing systems, power grids, and transportation networks. Effectively analyzing and controlling these systems is crucial for developing autonomous and efficient engineering systems capable of real-time adaptation and self-optimization. This talk will delve into recent advancements in controlling and optimizing hybrid dynamical systems using multi-time scale techniques. These methods facilitate the systematic incorporation and analysis of both "exploration and exploitation" behaviors within complex control systems through singular perturbation and averaging theory, resulting in a range of provably stable and robust algorithms suitable for model-free control and optimization. Practical engineering system examples will be used to illustrate these theoretical tools.
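A canonical model-free scheme with exactly this two-time-scale "exploration and exploitation" structure is extremum seeking, where a fast sinusoidal probe explores the cost and a slow averaged descent exploits the demodulated measurement. The sketch below is a generic textbook-style loop with parameters of my own choosing, not an algorithm from the talk.

```python
import math

# Gradient-free extremum seeking: only measured values of J are used.
# The probe a*sin(w t) is fast; the averaged update on theta is slow, and
# averaging theory shows the mean motion is theta' = -k * J'(theta).

def extremum_seek(J, theta=0.0, a=0.5, w=50.0, k=0.05, dt=1e-3, T=100.0):
    for i in range(int(T / dt)):
        s = math.sin(w * i * dt)
        # demodulated measurement: averages to the gradient of J at theta
        theta -= dt * (2.0 * k / a) * J(theta + a * s) * s
    return theta

theta_star = extremum_seek(lambda th: (th - 2.0) ** 2)   # minimizer is 2
print(abs(theta_star - 2.0) < 0.1)   # True: converged near the minimizer
```

Singular perturbation and averaging arguments of the kind described above are what turn this heuristic separation of time scales into provable practical stability bounds.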

Posted April 29, 2024

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom (click here to join)
Giovanni Fusco, Università degli Studi di Padova

A Lie-Bracket-Based Notion of Stabilizing Feedback in Optimal Control

With reference to an optimal control problem where the state has to asymptotically approach a closed target while paying a non-negative integral cost, we propose a generalization of the classical dissipative relation that defines a control Lyapunov function by a weaker differential inequality. The latter involves both the cost and the iterated Lie brackets of the vector fields in the dynamics up to a certain degree $k\ge 1$, and we call any of its (suitably defined) solutions a degree-k minimum restraint function. We prove that the existence of a degree-k minimum restraint function allows us to build a Lie-bracket-based feedback which sample stabilizes the system to the target while regulating (i.e., uniformly bounding) the cost.
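The role of iterated Lie brackets in such feedbacks can be illustrated numerically (the vector fields are toy choices of mine, not from the paper): for f(x) = (1, 0) and g(x) = (0, x1) the bracket is [f, g](x) = (0, 1), a direction generated by no single field, yet concatenating the four flows flow(ε, f), flow(ε, g), flow(ε, −f), flow(ε, −g) moves the state by ε² along it.

```python
# Commutator of flows: forward f, forward g, backward f, backward g
# produces net motion ~ eps^2 [f, g](x), here exactly (0, eps^2).

def flow_f(x, t):
    return (x[0] + t, x[1])            # exact flow of f = (1, 0)

def flow_g(x, t):
    return (x[0], x[1] + t * x[0])     # exact flow of g = (0, x1)

def commutator(x, eps):
    for move, t in ((flow_f, eps), (flow_g, eps), (flow_f, -eps), (flow_g, -eps)):
        x = move(x, t)
    return x

eps = 0.1
x = commutator((0.0, 0.0), eps)
print(x)   # ~(0, eps**2): net motion along [f, g] = (0, 1)
```

Degree-k minimum restraint functions allow the dissipative inequality to use exactly such bracket directions, and the sample-stabilizing feedback realizes them through flow concatenations of this kind.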