Posted August 25, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Cristopher Hermosilla, Universidad Técnica Federico Santa María

Hamilton-Jacobi-Bellman Approach for Optimal Control Problems of Sweeping Processes

This talk is concerned with a state-constrained optimal control problem governed by Moreau's sweeping process with a controlled drift. The focus of this work is the Bellman approach for an infinite-horizon problem. In particular, we focus on the regularity of the value function and on the Hamilton-Jacobi-Bellman equation it satisfies. We discuss a uniqueness result and compare with standard state-constrained optimal control problems to highlight a regularizing effect that the sweeping process induces on the value function. This is joint work with Michele Palladino (University of L’Aquila, Italy) and Emilio Vilches (Universidad de O’Higgins, Chile).

Posted August 18, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Mario Sznaier, Northeastern University
IEEE Fellow, IEEE Control Systems Society Distinguished Member Awardee

Why Do We Need Control in Control Oriented Learning?

Despite recent advances in machine learning (ML), the goal of designing control systems capable of fully exploiting the potential of these methods remains elusive. Modern ML can leverage large amounts of data to learn powerful predictive models, but such models are not designed to operate in a closed-loop environment. Recent results on reinforcement learning offer a tantalizing view of the potential of a rapprochement between control and learning, but so far proofs of performance and safety are mostly restricted to limited cases. Thus, learning elements are often used as black boxes in the loop, with limited interpretability and less than completely understood properties. Further progress hinges on the development of a principled understanding of the limitations of control-oriented machine learning. This talk will present some initial results unveiling the fundamental limitations of some popular learning algorithms and architectures when used to control a dynamical system. For instance, we show that even though feedforward neural nets are universal approximators, they are unable to stabilize some simple systems. We also show that a recurrent neural net with differentiable activation functions that stabilizes a non-strongly-stabilizable system must itself be open-loop unstable, and we discuss the implications of this for training with noisy, finite data. Finally, we present a simple system for which any controller based on unconstrained optimization of the parameters of a given structure fails to render the closed-loop system input-to-state stable. The talk finishes by arguing that when the goal is to learn stabilizing controllers, the loss function should reflect closed-loop performance, which can be accomplished using gap-metric-motivated loss functions, and by presenting initial steps toward that goal.

Posted August 18, 2023

Last modified September 11, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Cristina Pignotti, Università degli Studi dell'Aquila

Consensus Results for Hegselmann-Krause Type Models with Time Delay

We study Hegselmann-Krause (HK) opinion formation models in the presence of time delay effects. The influence coefficients among the agents are nonnegative, as usual, but they can also degenerate. This includes, e.g., the case of on-off influence, in which the agents do not communicate over some time intervals. We give sufficient conditions ensuring that consensus is achieved for all initial configurations. Moreover, we analyze the continuity-type equation obtained as the mean-field limit of the particle model when the number of agents goes to infinity. Finally, we analyze a control problem for a delayed HK model with leadership and design a simple control strategy steering all agents to any fixed target opinion.
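
For readers unfamiliar with the model, here is a minimal undelayed discrete-time HK sketch in Python with an on-off communication schedule. The confidence radius R, the schedule, and the initial opinions are illustrative assumptions; the talk's delay effects and degenerate-coefficient analysis are not modeled.

```python
# Minimal discrete-time Hegselmann-Krause sketch (no delay), illustrating
# consensus under on-off influence: communication is switched off on some
# steps, yet opinions still contract to a common value.

def hk_step(x, R=1.0, active=True):
    """One HK update: each agent averages the opinions within distance R.
    If active is False, communication is off and opinions freeze."""
    if not active:
        return list(x)
    new_x = []
    for xi in x:
        neighbors = [xj for xj in x if abs(xj - xi) <= R]
        new_x.append(sum(neighbors) / len(neighbors))
    return new_x

def simulate(x0, steps=50, off_period=3):
    """Agents communicate only on steps not divisible by off_period."""
    x = list(x0)
    for t in range(steps):
        x = hk_step(x, active=(t % off_period != 0))
    return x

x = simulate([0.0, 0.4, 0.8, 1.2])
spread = max(x) - min(x)   # zero spread means consensus was reached
```

Despite the periodic communication blackouts, the opinion spread collapses, mirroring the on-off influence scenario described above.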

Posted September 12, 2023

Last modified October 11, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Melvin Leok, University of California, San Diego

Connections Between Geometric Mechanics, Information Geometry, Accelerated Optimization and Machine Learning

Geometric mechanics describes Lagrangian and Hamiltonian mechanics geometrically, and information geometry formulates statistical estimation, inference, and machine learning in terms of geometry. A divergence function is an asymmetric distance between two probability densities that induces differential geometric structures and yields efficient machine learning algorithms that minimize the duality gap. The connection between information geometry and geometric mechanics will yield a unified treatment of machine learning and structure-preserving discretizations. In particular, the divergence function of information geometry can be viewed as a discrete Lagrangian, which is a generating function of a symplectic map, as arises in discrete variational mechanics. This identification allows the methods of backward error analysis to be applied, and the symplectic map generated by a divergence function can be associated with the exact time-h flow map of a Hamiltonian system on the space of probability distributions. We will also discuss how time-adaptive Hamiltonian variational integrators can be used to discretize the Bregman Hamiltonian, whose flow generalizes the differential equation that describes the dynamics of the Nesterov accelerated gradient descent method.
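
For context, the method whose continuous-time limit the Bregman Hamiltonian flow generalizes is the classical Nesterov iteration. A minimal Python sketch on the test objective f(x) = x^2 (step size, momentum schedule, and iteration count are standard textbook choices, not the talk's variational integrators):

```python
# Classical Nesterov accelerated gradient descent on f(x) = x^2.

def grad(x):
    """Gradient of the test objective f(x) = x^2."""
    return 2.0 * x

def nesterov(x0, step=0.1, iters=100):
    """Nesterov iteration with the standard (k-1)/(k+2) momentum weight."""
    x_prev, x = x0, x0
    for k in range(1, iters + 1):
        momentum = (k - 1) / (k + 2)
        y = x + momentum * (x - x_prev)     # look-ahead point
        x_prev, x = x, y - step * grad(y)   # gradient step at y
    return x

x_star = nesterov(5.0)   # the minimizer of f is x = 0
```

The talk's time-adaptive variational integrators discretize the underlying Hamiltonian flow directly rather than using this fixed scheme.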

Posted August 22, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Eduardo Cerpa, Pontificia Universidad Católica de Chile
SIAM Activity Group on Control and Systems Theory Prize Recipient

Control and System Theory Methods in Neurostimulation

Electrical stimulation therapies are used to treat the symptoms of a variety of nervous system disorders. Recently, the use of high-frequency signals has received increased attention due to its varied effects on tissues and cells. In this talk, we will see how some methods from Control and System Theory can be useful to address relevant questions in this framework when the FitzHugh-Nagumo model of a neuron is considered. Here, the stimulation enters through the source term of an ODE, and the level of neuron activation is associated with the existence of action potentials, which are solutions with a particular profile. The first question concerns the effectiveness of a recent technique called interferential currents, which combines two signals of similar kilohertz frequencies intended to activate deeply positioned cells. The second question is how to avoid the onset of undesirable action potentials that arise when signals producing conduction block are turned on. We will show theoretical and computational results based on methods such as averaging, Lyapunov analysis, quasi-static steering, and others.
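
A minimal Python sketch of the FitzHugh-Nagumo dynamics mentioned above, contrasting a quiescent neuron with one driven through the source term. The parameters are textbook values and the constant input is an illustrative stand-in; the kilohertz interferential signals studied in the talk are not modeled here.

```python
# FitzHugh-Nagumo neuron with a constant source term I:
#   v' = v - v^3/3 - w + I,   w' = 0.08*(v + 0.7 - 0.8*w)
# With no input the neuron rests; with sufficient input it fires
# action potentials (v rises well above its resting value near -1.2).

def fhn_peak(I, v0=-1.2, w0=-0.6, dt=0.01, T=200.0):
    """Euler-integrate FHN and return the peak membrane potential."""
    v, w = v0, w0
    peak = v
    for _ in range(int(T / dt)):
        dv = v - v ** 3 / 3.0 - w + I
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        peak = max(peak, v)
    return peak

peak_active = fhn_peak(0.5)   # driven: fires action potentials
peak_rest = fhn_peak(0.0)     # undriven: stays near rest
```

The "particular profile" of an action potential shows up here as the large excursion of v during firing.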

Posted August 22, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Philip E. Paré, Purdue University

Modeling, Estimation, and Analysis of Epidemics over Networks

We present and analyze mathematical models for network-dependent spread. We use the analysis to validate an SIS (susceptible-infected-susceptible) model employing John Snow’s classical work on cholera epidemics in London in the 1850s. Given the demonstrated validity of the model, we discuss control strategies for mitigating spread, and formulate a tractable antidote administration problem that significantly reduces spread. Then we formulate a parameter estimation problem for an SIR (susceptible-infected-recovered) networked model, where costs are incurred by measuring different nodes' states and the goal is to minimize the total cost spent on collecting measurements or to optimize the parameter estimates while remaining within a measurement budget. We show that these problems are NP-hard to solve in general and propose approximation algorithms with performance guarantees. We conclude by discussing an ongoing project where we are developing online parameter estimation techniques for noisy data and time-varying epidemics.
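
A toy networked SIS simulation in Python may help fix ideas. The adjacency matrix, infection rate beta, and recovery rate delta are illustrative values chosen so that the epidemic threshold condition holds and the infection dies out; they are not parameters from the talk.

```python
# Discrete-time networked SIS sketch: x[i] is the infection probability
# of node i. Below the epidemic threshold (roughly beta * lambda_max(A)
# < delta) the all-healthy state is stable and infection dies out.

def sis_step(x, A, beta=0.2, delta=0.4, h=1.0):
    """x_i(t+1) = x_i + h*((1 - x_i)*beta*sum_j A_ij*x_j - delta*x_i)."""
    n = len(x)
    new_x = []
    for i in range(n):
        infection = beta * sum(A[i][j] * x[j] for j in range(n))
        new_x.append(x[i] + h * ((1.0 - x[i]) * infection - delta * x[i]))
    return new_x

# Line graph on 4 nodes; node 0 starts half-infected.
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
x = [0.5, 0.0, 0.0, 0.0]
for _ in range(200):
    x = sis_step(x, A)
peak = max(x)   # residual infection level after 200 steps
```

Antidote administration in the talk can be thought of as raising delta (or lowering beta) at selected nodes to push the network below this threshold.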

Posted January 18, 2023

Last modified October 30, 2023

Control and Optimization Seminar Questions or comments?

10:30 am 233 Lockett and Zoom (Click “Questions or Comments?” to request a Zoom link)
Maruthi Akella, University of Texas
Fellow of AIAA, IEEE, and AAS

Sub-Modularity Measures for Learning and Robust Perception in Aerospace Autonomy

Onboard learning and robust perception can be generally viewed to characterize autonomy as overarching system-level properties. The complex interplay between autonomy and onboard decision support systems introduces new vulnerabilities that are extremely hard to predict with most existing guidance and control tools. In this seminar, we review some recent advances in learning-oriented and information-aware path- planning, and sub-modularity metrics for non-myopic sensor scheduling for “plug-and- play” systems. The concept of “learning-oriented” path-planning is realized through certain new classes of exploration inducing distance metrics. These technical foundations will be highlighted through aerospace applications with active learning inside dynamic and uncertain environments.

Posted September 2, 2023

Last modified November 15, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Sean Meyn, University of Florida
Robert C. Pittman Eminent Scholar Chair, IEEE Fellow, IEEE CSS Distinguished Lecturer

Stochastic Approximation and Extremum Seeking Control

Stochastic approximation was introduced in the 1950s to solve root finding problems, of which optimization is a canonical application. It is argued in recent work that extremum seeking control (ESC), a particular approach to gradient-free optimization with an even longer history, can be cast as quasi-stochastic approximation (QSA). In this lecture, we will go through the basics of these (until now) disparate fields. Application of QSA theory to ESC leads to several significant conclusions, including that ESC is not globally stable, as examples show. Careful application of QSA theory leads to new algorithms that are stable without any loss of performance. Also, QSA theory immediately provides asymptotic and transient bounds, providing guidelines for algorithm design. In addition to surveying this general theory, the talk provides a tutorial on design principles through numerical studies.
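
A minimal perturbation-based extremum seeking sketch in Python on a static quadratic map may be useful for readers new to ESC. The gains, dither amplitude, and frequency are illustrative choices, no washout filter is included, and the talk's QSA analysis covers far more general (dynamic) settings.

```python
# Basic extremum seeking on J(theta) = (theta - 2)^2: probe with a
# sinusoidal dither, demodulate the measured cost to estimate the
# gradient, and descend the estimate. Averaging theory says the slow
# dynamics approximate theta' = -k * J'(theta).

import math

def extremum_seek(theta0, a=0.5, k=0.2, omega=10.0, dt=0.01, T=100.0):
    theta = theta0
    for i in range(int(T / dt)):
        t = i * dt
        s = math.sin(omega * t)
        J = (theta + a * s - 2.0) ** 2       # measured cost at dithered input
        theta -= k * (2.0 / a) * J * s * dt  # demodulated gradient step
    return theta

theta_hat = extremum_seek(1.0)   # the unknown optimum is at theta = 2
```

The iterate settles near the optimum up to a small dither-induced ripple, which is the kind of asymptotic bound the QSA theory in the talk makes precise.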

Posted September 29, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Hélène Frankowska, Sorbonne University

Differential Inclusions on Wasserstein Spaces

Optimal control in Wasserstein spaces addresses control of systems with large numbers of agents. It is well known that for optimal control of ODEs, differential inclusions theory provides useful tools to investigate existence of optimal controls, necessary optimality conditions, and Hamilton-Jacobi-Bellman equations. Recently, many models arising in the social sciences have used the framework of Wasserstein spaces, i.e. metric spaces of Borel probability measures endowed with the Wasserstein metric. This talk is devoted to a recent extension, given in [1], of the theory of differential inclusions to the setting of general Wasserstein spaces. In the second part of the talk, necessary and sufficient conditions for the existence of solutions to state-constrained continuity inclusions in Wasserstein spaces, whose right-hand sides may be discontinuous in time, are provided; see [2]. These latter results are based on a fine investigation of the infinitesimal behavior of the underlying reachable sets, which heuristically amounts to showing that, up to a negligible set, every admissible velocity can be realized as the metric derivative of a solution of the continuity inclusion, and vice versa. Building on these results, necessary and sufficient geometric conditions for the viability and invariance of stationary and time-dependent constraints, which involve a suitable notion of contingent cones in Wasserstein spaces, are established. Viability and invariance theorems in a more restrictive framework were already applied in [5], [6] to investigate stability of controlled continuity equations and uniqueness of solutions to HJB equations. The new tools provided here allow us to obtain similar results in general Wasserstein spaces.

References:
[1] BONNET B. and FRANKOWSKA H., Caratheodory Theory and a Priori Estimates for Continuity Inclusions in the Space of Probability Measures, preprint, https://arxiv.org/pdf/2302.00963.pdf, 2023.
[2] BONNET B. and FRANKOWSKA H., On the Viability and Invariance of Proper Sets under Continuity Inclusions in Wasserstein Spaces, SIAM Journal on Mathematical Analysis, to appear.
[3] BONNET B. and FRANKOWSKA H., Differential Inclusions in Wasserstein Spaces: The Cauchy-Lipschitz Framework, Journal of Differential Equations 271: 594-637, 2021.
[4] BONNET B. and FRANKOWSKA H., Mean-Field Optimal Control of Continuity Equations and Differential Inclusions, Proceedings of the 59th IEEE Conference on Decision and Control, Republic of Korea, December 8-11, 2020: 470-475, 2020.
[5] BONNET B. and FRANKOWSKA H., Viability and Exponentially Stable Trajectories for Differential Inclusions in Wasserstein Spaces, Proceedings of the 61st IEEE Conference on Decision and Control, Mexico, December 6-9, 2022: 5086-5091, 2022.
[6] BADREDDINE Z. and FRANKOWSKA H., Solutions to Hamilton-Jacobi Equation on a Wasserstein Space, Calculus of Variations and PDEs 81: 9, 2022.

Posted September 8, 2023

Last modified November 14, 2023

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Zoom (Click “Questions or Comments?” to request a Zoom link)
Meeko Oishi, University of New Mexico
NSF BRITE Fellow

Human-Centered Probabilistic Planning and Control

Although human interaction with autonomous systems is becoming ubiquitous, few tools exist for planning and control of autonomous systems that account for human uncertainty and decision making. We seek methods for probabilistic verification and control that can help ensure compatibility of autonomous systems with human decision making and human uncertainty. This requires the development of theory and computational tools that can accommodate arbitrary, non-Gaussian uncertainty for both probabilistic verification and control, potentially without high-confidence models. This talk will focus on our work in probabilistic verification of ReLU neural nets, data-driven stochastic optimal control, and stochastic reachability. Our approaches to probabilistic verification are based on Fourier transforms and chance-constrained optimization, and our approaches to data-driven stochastic planning and control are based on conditional distribution embeddings. Both of these approaches enable computation without gridding, sampling, or recursion. We also present recent work on data-driven tools for high-fidelity modeling and characterization of human-in-the-loop trajectories that accommodate dynamic processes with probabilistic human inputs.

Posted January 11, 2024

Last modified January 17, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Boris Mordukhovich, Wayne State University
AMS Fellow, SIAM Fellow

Optimal Control of Sweeping Processes with Applications

This talk is devoted to a novel class of optimal control problems governed by sweeping (or Moreau) processes that are described by discontinuous dissipative differential inclusions. Although such dynamical processes, strongly motivated by applications, first appeared in the 1970s, optimal control problems for them have only been formulated quite recently and were found to be complicated from the viewpoint of developing control theory. Their study and applications require advanced tools of variational analysis and generalized differentiation, which will be presented in this talk. Combining this machinery with the method of discrete approximations leads us to deriving new necessary optimality conditions and their applications to practical models in elastoplasticity, traffic equilibria, and robotics. This talk is based on joint work with Giovanni Colombo (University of Padova), Dao Nguyen (San Diego State University), and Trang Nguyen (Wayne State University).

Posted February 2, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Ali Kara, University of Michigan

Stochastic Control with Partial Information: Optimality, Stability, Approximations and Learning

Partially observed stochastic control is an appropriate model for many applications involving optimal decision making and control. In this talk, we will first present a general introduction and then study optimality, approximation, and learning-theoretic results. For such problems, existence of optimal policies has in general been established by reducing the original partially observed stochastic control problem to a fully observed one with probability-measure-valued states. However, computing a near-optimal policy for this fully observed model is challenging. We present an alternative reduction tailored to an approximation analysis via filter stability and arrive at an approximate finite model. Toward this end, we will present associated regularity, Feller continuity, and controlled filter stability conditions: filter stability refers to the correction of an incorrectly initialized filter for a partially observed dynamical system with increasing measurements. We present explicit conditions for filter stability, which are then utilized to arrive at approximately optimal solutions. Finally, we establish the convergence of a learning algorithm for control policies using a finite history of past observations and control actions (by viewing the finite window as a 'state') and establish near optimality of this approach. As a corollary, this analysis establishes near optimality of classical Q-learning for continuous-state-space stochastic control problems (by lifting them to partially observed models with approximating quantizers viewed as measurement kernels) under weak continuity conditions. Further implications and some open problems will also be discussed.
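
As background for the learning result, here is a tabular Q-learning sketch in Python on a tiny fully observed MDP. The MDP, exploration policy, and gains are all toy assumptions; in the talk's construction the "state" fed to Q-learning is instead a finite window of past observations and actions.

```python
# Tabular Q-learning on a 2-state, 2-action MDP: reward 1 for matching
# the action to the current state, and the next state equals the chosen
# action. The optimal policy is therefore "repeat the current state".

import random

random.seed(1)

def q_learn(steps=5000, alpha=0.3, gamma=0.9):
    Q = [[0.0, 0.0], [0.0, 0.0]]   # Q[state][action]
    s = 0
    for _ in range(steps):
        a = random.randint(0, 1)            # uniform exploration policy
        r = 1.0 if a == s else 0.0
        s_next = a                          # deterministic transition
        target = r + gamma * max(Q[s_next])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s_next
    return Q

Q = q_learn()
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in (0, 1)]
```

The same update rule, run on quantized or finite-window states, is the object whose near optimality the talk establishes.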

Posted December 28, 2023

Last modified February 20, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (Click “Questions or Comments?” to request a Zoom link)
Huyên Pham
Editor-in-Chief for SIAM Journal on Control and Optimization, 2024-

A Schrödinger Bridge Approach to Generative Modeling for Time Series

We propose a novel generative model for time series based on the Schrödinger bridge (SB) approach. It consists of the entropic interpolation, via optimal transport, between a reference probability measure on path space and a target measure consistent with the joint data distribution of the time series. The solution is characterized by a stochastic differential equation on a finite horizon with a path-dependent drift function, hence respecting the temporal dynamics of the time series distribution. We estimate the drift function from data samples by nonparametric methods, e.g. kernel regression, and simulation of the SB diffusion yields new synthetic data samples of the time series. The performance of our generative model is evaluated through a series of numerical experiments. First, we test it on autoregressive models, a GARCH model, and fractional Brownian motion, and measure the accuracy of our algorithm with marginal and temporal-dependency metrics and predictive scores. Next, we use our SB-generated synthetic samples for an application to deep hedging on real data sets.
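
To illustrate the kernel-regression ingredient, here is a toy Python sketch that estimates a drift from data by Nadaraya-Watson regression. The Ornstein-Uhlenbeck data source, bandwidth, and noise level are illustrative assumptions; the paper's drift is path-dependent rather than Markovian as here.

```python
# Toy drift estimation: collect pairs (x, dx/dt) from a diffusion, then
# estimate the drift at a query point by kernel-weighted averaging.

import math
import random

random.seed(0)

def make_data(n=5000, dt=0.1):
    """Pairs (x, dx/dt) from an Ornstein-Uhlenbeck path dX = -X dt + 0.5 dW,
    used here as stand-in time series data."""
    data, x = [], 0.0
    for _ in range(n):
        dx = -x * dt + 0.5 * math.sqrt(dt) * random.gauss(0.0, 1.0)
        data.append((x, dx / dt))
        x += dx
    return data

def drift_hat(z, data, h=0.3):
    """Nadaraya-Watson kernel-regression estimate of the drift at z."""
    w_sum = y_sum = 0.0
    for xi, yi in data:
        w = math.exp(-0.5 * ((z - xi) / h) ** 2)
        w_sum += w
        y_sum += w * yi
    return y_sum / w_sum

data = make_data()
est0 = drift_hat(0.0, data)   # the true drift at x = 0 is 0
```

New synthetic samples would then be produced by an Euler-Maruyama scheme driven by the estimated drift, mirroring the "simulation of the SB diffusion" step.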

Posted January 22, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Dante Kalise, Imperial College

Feedback Control Synthesis for Interacting Particle Systems across Scales

This talk focuses on the computational synthesis of optimal feedback controllers for interacting particle systems operating at different scales. In the first part, we discuss the construction of control laws for large-scale microscopic dynamics by supervised learning methods, tackling the curse of dimensionality inherent in such systems. Moving forward, we integrate the microscopic feedback law into a Boltzmann-type equation, bridging controls at microscopic and mesoscopic scales, allowing for near-optimal control of high-dimensional densities. Finally, in the framework of mean field optimal control, we discuss the stabilization of nonlinear Fokker-Planck equations towards unstable steady states via model predictive control.

Posted February 12, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am (note the special earlier seminar time for this week only) Zoom (click here to join)
Antoine Girard, Laboratoire des Signaux et Systèmes
CNRS Bronze Medalist, IEEE Fellow, and George S. Axelby Outstanding Paper Awardee

Switched Systems with Omega-Regular Switching Sequences: Application to Switched Observer Design

In this talk, I will present recent results on discrete-time switched linear systems. We consider systems with constrained switching signals where the constraint is given by an omega-regular language. Omega-regular languages allow us to specify fairness properties (e.g., all modes have to be activated an infinite number of times) that cannot be captured by usual switching constraints given by dwell-times or graph constraints. By combining automata theoretic techniques and Lyapunov theory, we provide necessary and sufficient conditions for the stability of such switched systems. In the second part of the talk, I will present an application of our framework to observer design of switched systems that are unobservable for arbitrary switching. We establish a systematic and "almost universal" procedure to design observers for discrete-time switched linear systems. This is joint work with Georges Aazan, Luca Greco and Paolo Mason.
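
A toy Python illustration of switching-dependent stability (the matrices are illustrative constructions, not from the talk): each mode alone has an unstable direction, yet a fair periodic switching sequence, in which both modes are activated infinitely often, drives the state to zero.

```python
# Discrete-time switched linear system x(t+1) = A_{sigma(t)} x(t).

def mat_vec(A, v):
    """2x2 matrix-vector product."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# Each mode expands one coordinate (factor 1.2) and contracts the other (0.4).
A1 = [[1.2, 0.0], [0.0, 0.4]]
A2 = [[0.4, 0.0], [0.0, 1.2]]

# Mode 1 alone is unstable: its expanding direction grows without bound.
y = [1.0, 1.0]
for _ in range(100):
    y = mat_vec(A1, y)

# A fair periodic sequence (both modes infinitely often) is stable:
# over any two steps each coordinate shrinks by 1.2 * 0.4 = 0.48.
x = [1.0, 1.0]
for t in range(100):
    x = mat_vec(A1 if t % 2 == 0 else A2, x)
norm = abs(x[0]) + abs(x[1])
```

The fairness property used here (every mode activated infinitely often) is exactly the kind of constraint an omega-regular language can encode but a dwell-time or graph constraint cannot.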

Posted January 22, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Boris Kramer, University of California San Diego

Scalable Computations for Nonlinear Balanced Truncation Model Reduction

Nonlinear balanced truncation is a model order reduction technique that reduces the dimension of nonlinear systems on nonlinear manifolds and preserves either open- or closed-loop observability and controllability aspects of the nonlinear system. Two computational challenges have so far prevented its deployment on large-scale systems: (a) the solution of the Hamilton-Jacobi-(Bellman) equations needed to characterize controllability and observability aspects, and (b) efficient model reduction and reduced-order model (ROM) simulation on the resulting nonlinear balanced manifolds. We present a novel, unifying, and scalable approach to balanced truncation for large-scale control-affine nonlinear systems, based on a Taylor-series approach to solving a class of parametrized Hamilton-Jacobi-Bellman equations that are at the core of balancing. The specific tensor structure of the Taylor-series coefficients (tensors themselves) allows for scalability up to thousands of states. Moreover, we will present a nonlinear balance-and-reduce approach that finds a reduced nonlinear state transformation that balances the system properties. The talk will illustrate the strength and scalability of the algorithm on several semi-discretized nonlinear partial differential equations, including a nonlinear heat equation, vibrating beams, Burgers' equation, and the Kuramoto-Sivashinsky equation.

Posted January 27, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Sergey Dashkovskiy, Julius-Maximilians-Universität Würzburg

Stability Properties of Dynamical Systems Subjected to Impulsive Actions

We consider several approaches to studying stability and instability properties of infinite-dimensional impulsive systems. The approaches are of Lyapunov type and provide conditions under which an impulsive system is stable. In particular, we will cover the case when the discrete and continuous dynamics are not simultaneously stable. We will also handle the case when both the flow and the jumps are stable but the overall system is not. We will illustrate these approaches by means of several examples.
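
A toy linear example in Python of the last phenomenon, in which a contracting flow and a spectrally stable jump map combine into an unstable impulsive system. The matrices are illustrative constructions, not from the talk: the flow is a damped quarter-turn rotation, and the jump matrix has eigenvalues 0.5 but is non-normal (norm greater than 1), so the rotation repeatedly feeds the state into the jump's amplifying direction.

```python
# Impulsive system: flow x' = [[-0.01, 1], [-1, -0.01]] x between impulse
# times, jump x+ = J x with J = [[0.5, 2], [0, 0.5]] at impulse times.

import math

def flow(x, decay=0.01, T=math.pi / 2, dt=0.001):
    """Euler-integrate the damped rotation for a quarter turn."""
    for _ in range(int(T / dt)):
        dx0 = -decay * x[0] + x[1]
        dx1 = -x[0] - decay * x[1]
        x = [x[0] + dt * dx0, x[1] + dt * dx1]
    return x

def jump(x):
    """x+ = J x; J is upper triangular with eigenvalues 0.5, 0.5."""
    return [0.5 * x[0] + 2.0 * x[1], 0.5 * x[1]]

x = [1.0, 0.0]
for _ in range(20):
    x = jump(flow(x))          # flow a quarter turn, then jump
norm = abs(x[0]) + abs(x[1])   # grows without bound despite stable pieces
```

Both pieces are individually stable (the flow contracts the norm, and iterating the jump alone decays like n * 0.5^n), which is exactly why combined Lyapunov conditions of the kind discussed in the talk are needed.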

Posted January 6, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Madalena Chaves, Centre Inria d'Université Côte d'Azur

Coupling, Synchronization Dynamics, and Emergent Behavior in a Network of Biological Oscillators

Biological oscillators often involve a complex network of interactions, such as in the case of circadian rhythms or the cell cycle. Mathematical modeling, and especially model reduction, helps to understand the main mechanisms behind oscillatory behavior. In this context, we first study a two-gene oscillator using piecewise linear approximations to improve the performance and robustness of the oscillatory dynamics. Next, motivated by the synchronization of biological rhythms in a group of cells in an organ such as the liver, we study a network of identical oscillators under diffusive coupling, interconnected according to different topologies. The piecewise linear formalism enables us to characterize the emergent dynamics of the network and show that a number of new steady states are generated in the network of oscillators. Finally, given two distinct oscillators mimicking the circadian clock and the cell cycle, we analyze their interconnection to study the capacity for mutual period regulation and control between the two reduced oscillators. We are interested in characterizing the coupling parameter range for which the two systems play the roles of "controller" and "follower".
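
As a minimal stand-in for diffusively coupled oscillators, here is a two-oscillator Kuramoto phase model in Python. The phase-only model and the coupling gain are illustrative assumptions; the talk's oscillators are piecewise linear gene-network models, not phase oscillators.

```python
# Two identical phase oscillators with diffusive (sinusoidal) coupling:
#   theta1' = omega + K*sin(theta2 - theta1)
#   theta2' = omega + K*sin(theta1 - theta2)
# The phase gap phi = theta1 - theta2 obeys phi' = -2K*sin(phi), so any
# initial gap with |phi| < pi contracts to zero: the pair synchronizes.

import math

def kuramoto_pair(phi0, omega=1.0, K=0.5, dt=0.01, T=50.0):
    """Euler-integrate the pair; return the final phase gap."""
    th1, th2 = phi0, 0.0
    for _ in range(int(T / dt)):
        d = math.sin(th2 - th1)
        th1 += dt * (omega + K * d)
        th2 += dt * (omega - K * d)
    return th1 - th2

phase_gap = kuramoto_pair(2.0)   # start 2 radians apart
```

Extending this to many oscillators on different coupling topologies is where the additional steady states described in the talk can appear.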

Posted January 17, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Tobias Breiten, Technical University of Berlin

On the Approximability of Koopman-Based Operator Lyapunov Equations

Computing the Lyapunov function of a system plays a crucial role in optimal feedback control, for example when policy iteration is used. This talk will focus on the Lyapunov function of a nonlinear autonomous finite-dimensional dynamical system, which will be rewritten as an infinite-dimensional linear system using the Koopman operator. Since this infinite-dimensional system has the structure of a weak-* continuous semigroup on a suitably weighted Lp-space, one can establish a connection between the solution of an operator Lyapunov equation and the desired Lyapunov function. It will be shown that the solution to this operator equation exhibits rapid eigenvalue decay, which justifies finite-rank approximations with numerical methods. The usefulness for numerical computations will also be demonstrated with two short examples. This is joint work with Bernhard Höveler (TU Berlin).

Posted January 16, 2024

Last modified March 4, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Jorge Poveda, University of California, San Diego
Donald P. Eckman, NSF CAREER, and AFOSR Young Investigator Program Awardee

Multi-Time Scale Hybrid Dynamical Systems for Model-Free Control and Optimization

Hybrid dynamical systems, which combine continuous-time and discrete-time dynamics, are prevalent in various engineering applications such as robotics, manufacturing systems, power grids, and transportation networks. Effectively analyzing and controlling these systems is crucial for developing autonomous and efficient engineering systems capable of real-time adaptation and self-optimization. This talk will delve into recent advancements in controlling and optimizing hybrid dynamical systems using multi-time scale techniques. These methods facilitate the systematic incorporation and analysis of both "exploration and exploitation" behaviors within complex control systems through singular perturbation and averaging theory, resulting in a range of provably stable and robust algorithms suitable for model-free control and optimization. Practical engineering system examples will be used to illustrate these theoretical tools.

Posted April 29, 2024

Control and Optimization Seminar Questions or comments?

11:30 am – 12:20 pm Zoom (click here to join)
Giovanni Fusco, Università degli Studi di Padova

A Lie-Bracket-Based Notion of Stabilizing Feedback in Optimal Control

With reference to an optimal control problem where the state has to asymptotically approach a closed target while paying a non-negative integral cost, we propose a generalization of the classical dissipative relation that defines a control Lyapunov function by a weaker differential inequality. The latter involves both the cost and the iterated Lie brackets of the vector fields in the dynamics up to a certain degree $k\ge 1$, and we call any of its (suitably defined) solutions a degree-k minimum restraint function. We prove that the existence of a degree-k minimum restraint function allows us to build a Lie-bracket-based feedback which sample stabilizes the system to the target while regulating (i.e., uniformly bounding) the cost.