Calendar


Friday, March 21, 2025

Posted December 9, 2024
Last modified March 14, 2025

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom

Serkan Gugercin, Virginia Tech
What to Interpolate for L2 Optimal Approximation: Reflections on the Past, Present, and Future

In this talk, we revisit the L2 optimal approximation problem through various formulations and applications, exploring its rich mathematical structure and diverse implications. We begin with the classical case where the optimal approximant is a rational function, highlighting how Hermite interpolation at specific reflected points emerges as the necessary condition for optimality. Building on this foundation, we consider extensions that introduce additional structure to rational approximations and relax certain restrictions, revealing new dimensions of the problem. Throughout, we demonstrate how Hermite interpolation at reflected points serves as a unifying theme across different domains and applications.

Friday, March 28, 2025

Posted March 21, 2025
Last modified March 25, 2025

Control and Optimization Seminar

9:30 am – 10:20 am (note the special earlier seminar time for this week only). Zoom

Denis Dochain, Université Catholique de Louvain IEEE Fellow, IFAC Fellow
Automatic Control and Biological Systems

This talk aims to give an overview of more than 40 years of research activities in the field of modelling and control of biological systems. It will cover different aspects of modelling, analysis, monitoring and control of bio-systems, and will be illustrated by a large variety of biological systems, from environmental systems to biomedical applications via food processes or plant growth.

Friday, April 11, 2025

Posted November 7, 2024
Last modified March 13, 2025

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom

Irena Lasiecka, University of Memphis AACC Bellman Control Heritage Awardee, AMS Fellow, SIAM Fellow, and SIAM Reid Prize Awardee
Mathematical Theory of Flow-Structure Interactions

Flow-structure interactions are ubiquitous in nature and in everyday life. A flow or fluid interacting with structural elements can induce oscillations, impacting stability or even safety. Problems such as the attenuation of turbulence, flutter in oscillating structures (e.g., the Tacoma Narrows Bridge) and in tall buildings, fluid flow in flexible pipes, nuclear-engineering flows about fuel elements, and heat-exchanger vanes are just a few prime examples of relevant applications at the frontier of interest in applied mathematics. In this lecture, we shall describe mathematical models of these phenomena. They are based on a 3D linearized Euler equation around unstable equilibria, coupled to nonlinear dynamic elasticity on a 2D manifold. Strong interface coupling between the two media is at the center of the analysis. This provides a rich mathematical structure, opening the door to several unresolved problems in the areas of nonlinear PDEs, dynamical systems, related harmonic analysis, and differential geometry. This talk provides a brief overview of recent developments in the area, with a presentation of some new methodology addressing the issues of control and stability of such structures. Part of this talk is based on recent work with D. Bonheure, F. Gazzola, and J. Webster (Annales de l'Institut Henri Poincaré, Analyse Non Linéaire, 2022), work with A. Balakrishna and J. Webster (M3AS, 2024), and work completed while the author was a member of the MSRI program "Mathematical Problems in Fluid Dynamics" at the University of California, Berkeley (sponsored by NSF DMS-1928930).

Friday, April 25, 2025

Posted January 10, 2025
Last modified March 26, 2025

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom

Carolyn Beck, University of Illinois Urbana-Champaign IEEE Fellow
Discrete State System Identification: An Overview and Error Bounds

Classic system identification methods focus on identifying continuous-valued dynamical systems from input-output data, where the main analysis of such approaches largely focuses on asymptotic convergence of the estimated models to the true models, i.e., consistency properties. More recent identification approaches have focused on sample complexity properties, i.e., how much data is needed to achieve an acceptable model approximation. In this talk I will give a brief overview of classical methods and then discuss more recent data-driven methods for modeling continuous-valued linear systems and discrete-valued dynamical systems evolving over networks. Examples of the latter systems include the spread of viruses and diseases over human contact networks, the propagation of ideas and misinformation over social networks, and the spread of financial default risk between banking and economic institutions. In many of these systems, data may be widely available, but approaches to identify relevant mathematical models, including underlying network topologies, are not widely established or agreed upon. We will discuss the problem of modeling discrete-valued, discrete-time dynamical systems evolving over networks, and outline analysis results under maximum likelihood identification approaches that guarantee consistency conditions and sample complexity bounds. Applications to the aforementioned examples will be further discussed as time allows.
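As a point of reference for the classical continuous-valued setting the talk starts from, least-squares identification of a linear system from state snapshots can be sketched in a few lines. This is an illustrative toy, not the speaker's method; the system matrix `A_true`, sample count, and noise level are invented for the example.

```python
import numpy as np

# Least-squares identification of x_{t+1} = A x_t + w_t from
# one-step snapshot pairs (X[i], Y[i]).
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])            # the "unknown" system

X = rng.standard_normal((500, 2))                        # sampled states
Y = X @ A_true.T + 1e-3 * rng.standard_normal((500, 2))  # noisy next states

# Solve min_A ||Y - X A^T||_F; lstsq returns A_hat^T.
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

Consistency, in the sense used in the abstract, means `A_hat` converges to `A_true` as the number of samples grows; sample-complexity bounds quantify how much data is needed for a given accuracy.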

Friday, May 2, 2025

Posted January 16, 2025
Last modified April 5, 2025

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom

Bahman Gharesifard, Queen's University
Structural Average Controllability of Ensembles

In ensemble control, the goal is to steer a parametrized collection of independent systems using a single control input. A key technical challenge arises from the fact that this control input must be designed without relying on the specific parameters of the individual systems. Broadly speaking, as the space of possible system parameters grows, so does the size and diversity of the ensemble, making it increasingly difficult to control all members simultaneously. In fact, an important result among the recent advances on this topic states that when the underlying parameterization spaces are multidimensional, real-analytic linear ensemble systems are not L^p-controllable for p ≥ 2. Therefore, one has to relax the notion of controllability and seek more flexible controllability characteristics. In this talk, I consider continuum ensembles of linear time-invariant control systems with single inputs, featuring a sparsity pattern, and study structural average controllability as a relaxation of structural ensemble controllability. I then provide a necessary and sufficient condition for a sparsity pattern to be structurally average controllable.

Friday, May 9, 2025

Posted February 19, 2025
Last modified April 24, 2025

Control and Optimization Seminar

11:30 am – 12:20 pm Zoom

Nina Amini, Laboratory of Signals and Systems, CentraleSupélec
Feedback Control of Open Quantum Systems

First, we provide an overview of control strategies for open quantum systems, that is, quantum systems interacting with an environment. This interaction leads to a loss of information to the environment, a phenomenon commonly referred to as decoherence. One of the principal challenges in controlling open quantum systems is compensating for decoherence. To address robustness issues, feedback control methods are considered. Secondly, we consider the feedback stabilization of open quantum systems under repeated indirect measurements, where the evolution is described by quantum trajectories. I will present our recent results concerning the asymptotic behavior, convergence speed, and stabilization of these trajectories.

Friday, August 29, 2025

Posted August 23, 2025
Last modified August 26, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Alex Olshevsky, Boston University AFOSR YIP and NSF CAREER Awardee
The Connection Between Reinforcement Learning and Gradient Descent

Temporal difference (TD) learning with linear function approximation is one of the earliest methods in reinforcement learning and the basis of many modern methods. We revisit the analysis of TD learning through a new lens and show that TD may be viewed as a modification of gradient descent. This leads not only to a better explanation of what TD does but also improved convergence times guarantees. We discuss applications of this result to more involved reinforcement learning methods, such as actor-critic and neural-network based methods.
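The classical algorithm the talk revisits, TD(0) with linear function approximation, can be sketched as below; its update already has the form of a semi-gradient step, which is the connection between TD and gradient descent that the talk makes precise. This is a generic textbook sketch, not the speaker's analysis; the toy Markov chain, features, and step size are invented for illustration.

```python
import numpy as np

def td0_linear(episodes, phi, alpha=0.1, gamma=0.9, dim=3):
    """TD(0) with linear function approximation V(s) = phi(s) @ w.

    The update w += alpha * delta * phi(s) looks like a gradient-descent
    step: delta is the TD error and phi(s) plays the role of the
    (semi-)gradient of the value estimate with respect to w.
    """
    w = np.zeros(dim)
    for episode in episodes:
        for (s, r, s_next, terminal) in episode:
            v_next = 0.0 if terminal else phi(s_next) @ w
            delta = r + gamma * v_next - phi(s) @ w   # TD error
            w += alpha * delta * phi(s)               # semi-gradient step
    return w

# Tiny deterministic chain 0 -> 1 -> 2 (terminal), reward 1 per step.
phi = lambda s: np.eye(3)[s]          # one-hot features => tabular TD
episode = [(0, 1.0, 1, False), (1, 1.0, 2, True)]
w = td0_linear([episode] * 500, phi)
# With gamma = 0.9 the true values are V(1) = 1 and V(0) = 1.9.
```

Because the bootstrapped target itself depends on `w`, this is not the gradient of any fixed loss, which is exactly why viewing TD as a *modification* of gradient descent requires a separate analysis.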

Friday, September 5, 2025

Posted August 11, 2025
Last modified September 2, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Gabriela Gonzalez, Louisiana State University Member, US National Academy of Sciences
Feedback Loops in the LIGO Gravitational Wave Detectors

The Laser Interferometer Gravitational-Wave Observatory (LIGO) operates two detectors, in Livingston, LA and Hanford, WA, to detect perturbations of spacetime produced by astrophysical events such as the collision of black holes. The detectors achieve astonishing sensitivity: laser beams traveling in vacuum detect differences between two 4 km long arms smaller than a thousandth of a proton diameter, in a frequency band between 10 Hz and 5 kHz. To achieve this sensitivity, a large number of feedback control systems are used to damp suspended mirrors, reduce the effect of ground motion, keep optical cavities resonant, and much more. I will briefly describe these systems and the challenges for current and future detectors.

Friday, October 3, 2025

Posted August 14, 2025
Last modified September 26, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Xin Zhang, New York University
Exciting Games and Monge-Ampère Equations

We consider a competition between d+1 players, and aim to identify the “most exciting game” of this kind. This is translated, mathematically, into a stochastic optimization problem over martingales that live on the d-dimensional sub-probability simplex and terminate on the vertices of the simplex, with a cost function related to a scaling limit of Shannon entropies. We uncover a surprising connection between this problem and the seemingly unrelated field of Monge-Ampère equations, and identify the optimal martingale via a detailed analysis of boundary asymptotics of a Monge-Ampère equation.

Friday, October 10, 2025

Posted August 1, 2025
Last modified October 3, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Felix Schwenninger, University of Twente, The Netherlands
Infinite-Dimensional Input-to-State Stability (ISS) -- Peculiarities of Sup-Norms

E. Sontag’s input-to-state stability (ISS), dating back to the late 1980s, is a cornerstone of modern mathematical control theory. While originally studied for finite-dimensional systems, the theory for infinite-dimensional systems, and in particular for models involving partial differential equations, has been developed over the past 15 years. Somewhat surprisingly, the linear case, which is trivial in finite dimensions, has offered challenges with respect to the mutual relations of several variants of ISS. In this talk we will focus in particular on “integral ISS” for linear and bilinear systems and discuss established results as well as more recent findings. The underlying reason for these subtleties is the nontrivial interplay between supremum norms, which arise naturally in ISS, and the (Banach space) geometry of the state spaces.

Friday, October 24, 2025

Posted September 5, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Naira Hovakimyan, University of Illinois Urbana-Champaign Fellow of AIAA, ASME, IEEE, and IFAC
Safe Learning in Autonomous Systems

Learning-based control paradigms have seen many success stories with autonomous systems and robots in recent years. However, as these robots prepare to enter the real world, operating safely in the presence of imperfect model knowledge and external disturbances will be vital to mission success. We introduce a class of distributionally robust adaptive control architectures that ensure robustness to distribution shifts and enable the development of certificates for validation and verification of learning-enabled systems. An overview of projects in our lab that build on this framework will illustrate a range of applications.

Friday, October 31, 2025

Posted October 7, 2025
Last modified October 9, 2025

Control and Optimization Seminar

9:30 am – 10:20 am (first of two seminars on 10/31). Zoom

Alexandre Mauroy, Université de Namur
Dual Koopman Operator Formulation in Reproducing Kernel Hilbert Spaces for State Estimation

The Koopman operator acts on observable functions defined over the state space of a dynamical system, thereby providing a linear global description of the system dynamics. A pointwise description of the system is recovered through a weak formulation, i.e. via the pointwise evaluation of observables at specific states. In this context, the use of reproducing kernel Hilbert spaces (RKHS) is of interest since the above evaluation can be represented as the duality pairing between the observables and bounded evaluation functionals. This representation emphasizes the relevance of a dual formulation for the Koopman operator, where a dual Koopman system governs the evolution of linear evaluation functionals. In this talk, we will leverage the dual formulation to build a Luenberger observer that estimates the (infinite-dimensional) state of the Koopman dual system, and equivalently the (finite-dimensional) state of the nonlinear dynamics. The method will be complemented with theoretical convergence results that support numerical Koopman operator-based estimation techniques known from the literature. Finally, we will extend the framework to a probabilistic approach by solving the problem of moments in the RKHS setting.


Posted October 8, 2025
Last modified October 28, 2025

Control and Optimization Seminar

10:30 am – 11:20 am (second of two seminars on 10/31). Zoom

Umesh Vaidya, Clemson University
Koopman Meets Hamilton and Jacobi: Data-Driven Control Beyond Linearity

In this talk, we present recent advances in operator-theoretic methods for controlling nonlinear dynamical systems. We begin by establishing a novel connection between the spectral properties of the Koopman operator and solutions of the Hamilton–Jacobi (HJ) equation. Since the HJ equation lies at the core of optimal control, robust control, dissipativity theory, input–output analysis, and reachability, this connection provides a new pathway for leveraging Koopman spectral representations to address control problems in a data-driven setting. In particular, we show how Koopman coordinates can shift the classical curse of dimensionality associated with solving the HJ equation into a curse of complexity that is more manageable through modern computational tools. In the second part of the talk, we discuss safe control synthesis using the Perron–Frobenius operator. A key contribution is the analytical construction of a navigation density function that enables safe motion planning in both static and dynamic environments. We further present a convex optimization formulation of safety-constrained optimal control in the dual (density) space, allowing safety constraints to be incorporated systematically. Finally, we demonstrate the application of this unified operator-theoretic framework to the control of autonomous ground vehicles operating in off-road environments.

Friday, November 7, 2025

Posted July 26, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Rami Katz, Università degli Studi di Trento, Italy
Oscillations in Strongly 2-Cooperative Systems and their Applications in Systems Biology

The emergence of sustained oscillations (via convergence to periodic orbits) in high-dimensional nonlinear dynamical systems is a non-trivial question with important applications in the control of biological systems, including the design of synthetic bio-molecular oscillators and the understanding of circadian rhythms governing hormone secretion, body temperature, and metabolic functions. In systems biology, the mechanism underlying such widespread oscillatory biological motifs is still not fully understood. From a mathematical perspective, the study of sustained oscillations comprises two parts: (i) showing that at least one periodic orbit exists and (ii) studying the stability of periodic orbits and/or characterizing the initial conditions that yield solutions converging to periodic trajectories. In this talk, we focus on a specific class of nonlinear dynamical systems that are strongly 2-cooperative. Using the theory of cones of rank k, the spectral theory of totally positive matrices, and Perron-Frobenius theory, we will show that strongly 2-cooperative systems admit an explicit set of initial conditions of positive measure such that every solution emanating from this set converges to a periodic orbit. We further demonstrate our results using the n-dimensional Goodwin oscillator and a 4-dimensional biological oscillator based on RNA-mediated regulation.

Friday, November 14, 2025

Posted August 1, 2025
Last modified November 3, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Thinh Doan, University of Texas at Austin AFOSR YIP and NSF CAREER Awardee
Multi-Time-Scale Stochastic Approximation as a Tool for Multi-Agent Learning and Distributed Optimization

Multi-time-scale stochastic approximation (SA) is a powerful generalization of the classic SA method for finding roots (or fixed points) of coupled nonlinear operators. It has attracted considerable attention due to its broad applications in multi-agent learning, control, and optimization. In this framework, multiple iterates are updated simultaneously but with different step sizes, whose ratios loosely define their time-scale separation. Empirical studies and theoretical insights have shown that such heterogeneous step sizes can lead to improved performance compared to single-time-scale (or classical) SA schemes. However, despite these advantages, existing results indicate that multi-time-scale SA typically achieves only a suboptimal convergence rate, slower than the optimal rate attainable by its single-time-scale counterpart. In this talk, I will present our recent work on characterizing the convergence complexity of multi-time-scale SA. We develop a novel variant of this method and establish new finite-sample guarantees that achieve the optimal O(1/k) convergence rate. Building upon these results, I will also discuss how these advances enable the design of efficient algorithms for key problems in multi-agent learning and distributed optimization over networks.

Friday, November 21, 2025

Posted July 13, 2025
Last modified November 4, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Dimitra Panagou, University of Michigan AFOSR YIP, NASA Early Career Faculty, and NSF CAREER Awardee
Safety-Critical Control via Control Barrier Functions: Theory and Applications

This seminar will focus on control barrier functions, as a tool for encoding and enforcing safety specifications, as well as their recent extensions (e.g., robust, adaptive, and predictive) to handle additive perturbations, parametric uncertainty and dynamic environments, with applications to (multi)-robot/vehicle motion planning and coordination. Time permitting, we will also cover how time constraints can be encoded as fixed-time control Lyapunov functions, and the trade-offs between safety and timed convergence.
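For readers unfamiliar with the basic mechanism, a control barrier function acts as a safety filter: it minimally modifies a nominal input so that a barrier inequality keeps the state in the safe set. A minimal sketch for a single integrator follows, where the CBF quadratic program has a closed-form solution; this is a hypothetical toy, and the robust, adaptive, and predictive extensions in the talk go well beyond it.

```python
# Safety filter for the single integrator x' = u with barrier
# h(x) = x (safe set: x >= 0). The CBF-QP
#     min_u (u - u_nom)^2   s.t.   h'(x) * u >= -alpha * h(x)
# reduces, since h'(x) = 1, to clipping u from below.
def cbf_filter(x, u_nom, alpha=1.0):
    # constraint: u >= -alpha * x, so project u_nom onto the feasible set
    return max(u_nom, -alpha * x)

# Simulate: the nominal controller pushes toward x = -1 (unsafe),
# but the filtered input keeps the state nonnegative.
x, dt = 1.0, 0.01
for _ in range(1000):
    u = cbf_filter(x, u_nom=-2.0)   # nominal input alone would violate safety
    x += dt * u
```

Far from the boundary the constraint is inactive and the nominal input passes through unchanged; near the boundary the filter intervenes just enough, which is the "minimally invasive" property that makes CBF filters attractive.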

Friday, December 5, 2025

Posted August 18, 2025
Last modified December 4, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Zequn Zheng, Louisiana State University
Generating Polynomial and Optimization-Based Algorithms for Tensor Decomposition

Tensors, or multidimensional arrays, are higher-order generalizations of matrices that naturally represent data with inherent multi-way structure. Tensor rank decomposition is a key tool for uncovering hidden patterns in such data. In this talk, we introduce a novel algorithm based on generating polynomials to compute tensor decompositions. We prove that under certain rank conditions, our method recovers the exact decomposition. For higher ranks beyond this threshold, we provide an optimization-based variant that effectively detects the tensor decomposition. Numerical experiments illustrate the robustness and efficiency of our approach.

Friday, December 12, 2025

Posted July 22, 2025
Last modified December 4, 2025

Control and Optimization Seminar

10:30 am – 11:20 am Zoom

Javad Velni, Clemson University
Optimal Supplemental Lighting in Controlled Environment Agriculture: Data-driven and Model-based Perspectives

This seminar presents one aspect of my lab’s research focused on developing optimal supplemental lighting control strategies using LED lamps in controlled environment agriculture. The work aims to minimize electricity costs associated with supplemental lighting by integrating model-based optimization techniques with advanced machine learning methods, such as deep neural networks and Markov chains, used to predict uncertain environmental variables. Several scenarios are explored, ranging from a baseline optimal lighting approach for a single crop to more complex settings involving large-scale greenhouses with multiple crops and spatial light distribution considerations. Experimental results from a research greenhouse, where an Internet of Agricultural Things (IoAT) system was developed to grow lettuce, are presented and discussed. The seminar concludes with a roadmap highlighting several emerging research directions inspired by these findings.

Friday, January 16, 2026

Posted January 4, 2026
Last modified January 8, 2026

Control and Optimization Seminar

9:30 am – 10:20 am Zoom

Alberto Bressan, Penn State Eberly Family Chair Professor
Dynamic Blocking Problems for a Model of Fire Confinement

A classical problem in the calculus of variations asks for a curve of given length that encloses a region of maximum area. In this talk I shall discuss the seemingly opposite problem of finding curves enclosing a region of minimum area. Problems of this kind arise naturally in the control of forest fires, where firefighters seek to construct a barrier minimizing the total area of the region burned by the fire. In this model, a key parameter is the speed at which the barrier is constructed. If the construction rate is too slow, the fire cannot be contained. After describing how the fire propagation can be modeled in terms of a PDE, the talk will focus on three main questions: (1) Can the fire be contained within a bounded region? (2) If so, is there an optimal strategy for constructing the barrier, minimizing the total value of the land destroyed by the fire? (3) How can we find optimal strategies? Problem (1) is still largely open. See https://sites.psu.edu/bressan/2-research/ for a cash prize that has been offered for its solution since 2011.

Friday, January 23, 2026

Posted December 1, 2025
Last modified January 9, 2026

Control and Optimization Seminar

9:30 am – 10:20 am Zoom

Jameson Graber, Baylor University NSF CAREER Awardee
Remarks on Potential Mean Field Games

Mean field games were introduced about 20 years ago to model the limit of N-player differential games as N goes to infinity. There are many applications to economics, finance, social sciences and biology. In many interesting cases the Nash equilibrium turns out to be a critical point for a functional, called the potential, in which case the game itself is called potential. In this case I will present several mathematical results on potential mean field games, which are directly connected to the theory of optimal control of PDE. For related work, see https://doi.org/10.1007/s40687-024-00494-3.

Friday, January 30, 2026

Posted November 22, 2025
Last modified January 6, 2026

Control and Optimization Seminar

9:30 am – 10:20 am Zoom

Henk van Waarde, University of Groningen IEEE L-CSS Outstanding Paper and SIAM SIAG/CST Prize Awardee
Data-Driven Stabilization using Prior Knowledge on Stabilizability and Controllability

Direct approaches to data-driven control design map raw data directly into control policies, thereby avoiding the intermediate step of system identification. Such direct methods are beneficial in situations where system modelling is computationally expensive or even impossible due to a lack of rich data. We begin the talk by reviewing existing methods for direct data-driven stabilization. Thereafter, we discuss the inclusion of prior knowledge that, in conjunction with the data, can be used to improve the sample efficiency of data-driven methods. In particular, we study prior knowledge of stabilizability and controllability of the underlying system. In the case of controllability, we prove that the conditions on the data required for stabilization are equivalent to those without the inclusion of prior knowledge. However, in the case of stabilizability as prior knowledge, we show that the conditions on the data are, in general, weaker. We close the talk by discussing experiment design methods. These methods construct suitable inputs for the unknown system, in such a way that the resulting data contain enough information for data-driven stabilization (taking into account the prior knowledge).

Friday, February 6, 2026

Posted February 1, 2026
Last modified February 2, 2026

Control and Optimization Seminar

9:30 am – 10:30 am Lockett 233 or Zoom

R. Tyrrell Rockafellar, University of Washington
Variational Analysis and Convexity in Optimal Control

Optimal control theory was considered by its originators to be a new subject that subsumed much of the classical calculus of variations as a special case. In reality, it was more a reformulation of existing theory with different goals and perspectives. Now both can be united in a broader setting of variational analysis, in which Lagrangian and Hamiltonian functions need not be differentiable or even continuous, but extended-real-valued, and convexity has a central role. The Control and Optimization Seminar for this talk will be held in person, with a Zoom option available for remote attendees.

Event contact: Gowri Priya Sunkara

Friday, February 13, 2026

Posted November 26, 2025
Last modified January 29, 2026

Control and Optimization Seminar

9:30 am – 10:20 am Zoom

Anthony Bloch, University of Michigan AMS, IEEE, and SIAM Fellow
Control, Stability and Learning on Dynamic Networks

In this talk we consider various aspects of dynamics, control, and learning on graphs. We discuss diffusively coupled network dynamical systems and the role of coupling in stabilizing and destabilizing such systems. We also discuss dynamic networks of this type, in particular Lyapunov-based methods for analyzing the stability of networks undergoing switching. We then analyze the problem of learning the dynamics of switched systems from data, including linear and polynomial systems and systems on graphs. Finally, we consider the control and dynamics of systems on hypergraphs, which have applications to biological networks.

Friday, February 20, 2026

Posted December 7, 2025
Last modified December 28, 2025

Control and Optimization Seminar

9:30 am – 10:20 am Zoom

Richard Vinter, Imperial College London IEEE Fellow
Control of Lumped-Distributed Control Systems

Lumped-distributed control systems are collections of interacting sub-systems, some of which have finite dimensional vector state spaces (comprising ‘lumped’ components) and some of which have infinite dimensional vector state spaces (comprising ‘distributed’ components). Lumped-distributed control systems are encountered, for example, in models of thermal or distributed mechanical devices under boundary control, when we take the control actuator dynamics or certain kinds of dynamic loading effects into account. This talk will focus on an important class of (possibly non-linear) lumped-distributed control systems, in which the control action directly affects only the lumped subsystems and the output is a function of the lumped state variables alone. We will give examples of such systems, including a temperature-controlled test bed for measuring semiconductor material properties under changing temperature conditions and robot arms with flexible links. A key observation is an exact representation of the mapping from control inputs to outputs, in terms of a finite dimensional control system with memory. (We call it the reduced system representation.) The reduced system representation can be seen as a time-domain analogue of frequency response descriptions involving the transfer function from input to output. In contrast to frequency response descriptions, the reduced system representation allows non-linear dynamics, hard constraints on controls and outputs, and non-zero initial data. We report recent case studies illustrating the computational advantages of the reduced system representation. We show that, for related output tracking problems, computation methods based on the new representation offer significantly improved tracking and reduction in computation time, as compared with traditional methods, based on the approximation of infinite dimensional state spaces by high dimensional linear subspaces.

Friday, February 27, 2026

Posted January 8, 2026

Control and Optimization Seminar

9:30 am – 10:20 am Zoom

Lars Gruene, University of Bayreuth SIAM Fellow
Can Neural Networks Solve High Dimensional Optimal Feedback Control Problems?

Deep reinforcement learning has established itself as a standard method for solving nonlinear optimal feedback control problems. In this method, the optimal value function (and, in some variants, also the optimal feedback law) is stored using a deep neural network. Hence, the applicability of this approach to high-dimensional problems relies crucially on the network's ability to store a high-dimensional function. It is known that for general high-dimensional functions, neural networks suffer from the same exponential growth in the number of coefficients as traditional grid-based methods, the so-called curse of dimensionality. In this talk, we use methods from distributed optimal control to describe optimal control problems in which this problem does not occur.

Friday, March 20, 2026

Posted December 1, 2025
Last modified March 5, 2026

Control and Optimization Seminar

9:30 am – 10:20 am Zoom

Khai Nguyen, North Carolina State University
On the Structure of Viscosity Solutions to Hamilton–Jacobi Equations

This talk presents regularity results for viscosity solutions to a class of Hamilton-Jacobi equations arising from optimal exit-time problems in nonlinear control systems under a weak controllability condition. A representation formula for proximal supergradients, based on transported normals, is derived, with applications to optimality conditions, the propagation of singularities, and the Hausdorff measure of the singular set.

Friday, March 27, 2026

Posted January 5, 2026
Last modified March 9, 2026

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (click here to join)

Jonathan How, Massachusetts Institute of Technology AIAA and IEEE Fellow
Resilient Multi-Agent Autonomy: Perception and Planning for Dynamic, Unknown Environments

Unmanned ground and aerial systems hold promise for critical applications, including search and rescue, environmental monitoring, and autonomous delivery. Real-world deployment in safety-critical settings, however, remains challenging due to GPS-denied operation, perceptual uncertainty, and the need for safe trajectory planning in dynamic unknown environments. This talk presents recent advances in planning, control, and perception that together enable robust, scalable, and efficient aerial autonomy. On the planning and control side, I first introduce DYNUS, which enables uncertainty-aware trajectory planning for safe, real-time flight in dynamic and unknown environments. Building on this foundation, MIGHTY performs fully coupled spatiotemporal optimization to generate agile and precise motion by jointly reasoning about path and timing. Together with prior work on Robust MADER, these methods enable fast, safe, multi-robot navigation under uncertainty. On the perception side, I introduce complementary mapping frameworks that support long-term autonomy and planning. GRAND SLAM combines 3D Gaussian splatting with semantic and geometric priors to produce unified scene representations suitable for photorealistic planning. A second example is ROMAN, which builds on ideas from our prior open set mapping work including SOS MATCH and VISTA. ROMAN compresses environments into sparse, object-centric maps that are orders of magnitude smaller than traditional representations, while still enabling accurate re-localization and loop closure under extreme viewpoint changes. I also discuss the interaction between perception and control, with a focus on safety filtering for systems that rely on learned perception models. Finally, I present results from simulation and hardware experiments and conclude with open challenges in building resilient autonomous aerial systems. Together, these advances move us closer to reliable multi-robot autonomy with meaningful real-world impact. 
[For the speaker's biographical sketch, click here.]


Posted January 2, 2026
Last modified March 11, 2026

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Joint Computational Mathematics and Control and Optimization Seminar to Be Held In Person in 233 Lockett Hall and on Zoom (click here to join)

Jia-Jie Zhu, KTH Royal Institute of Technology in Stockholm
Optimization in Probability Space: PDE Gradient Flows for Sampling and Inference

Many problems in machine learning and Bayesian statistics can be framed as optimization problems that minimize the relative entropy between two probability measures. In recent work, researchers have exploited the connection between the (Otto-)Wasserstein gradient flow of the Kullback-Leibler (KL) divergence and various sampling and inference algorithms, interacting particle systems, and generative models. In this talk, I will first contrast the Wasserstein flow with the Fisher-Rao flows of several entropy functionals, showcasing their distinct analytical properties under different relative-entropy driving energies, including the reverse and forward KL divergences. Building on recent advances in the mathematical foundations of Hellinger-Kantorovich (HK, a.k.a. Wasserstein-Fisher-Rao) gradient flows, I will then present an analysis of HK flows and its implications in examples of machine learning tasks.
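As a minimal illustration of the particle view of these flows: the unadjusted Langevin algorithm is the standard particle discretization of the Wasserstein gradient flow of the KL divergence toward a target density. The standard Gaussian target, step size, and particle count below are illustrative choices, not from the talk.

```python
# Unadjusted Langevin algorithm as a particle discretization of the
# Wasserstein gradient flow of KL(rho || pi). For pi = N(0, 1) the score
# is grad log pi(x) = -x, so each particle drifts toward the origin while
# diffusing: x <- x + h * grad log pi(x) + sqrt(2h) * noise.
import math
import random

random.seed(0)
step = 0.1
particles = [5.0] * 1000  # all particles start far from the target mean

for _ in range(500):
    particles = [
        x - step * x + math.sqrt(2 * step) * random.gauss(0.0, 1.0)
        for x in particles
    ]

# Empirical moments approach those of N(0, 1) (up to discretization bias).
mean = sum(particles) / len(particles)
var = sum((x - mean) ** 2 for x in particles) / len(particles)
```

The constant step size introduces a small bias in the stationary variance; letting the step size shrink recovers the continuous-time flow.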

Event contact: Susanne Brenner

Friday, April 10, 2026

Posted February 5, 2026
Last modified February 6, 2026

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (click here to join)

Wonjun Lee, Ohio State University
Linear Separability in Contrastive Learning via Neural Training Dynamics

The SimCLR method for contrastive learning of invariant visual representations has become widely used in supervised, semi-supervised, and unsupervised settings, owing to its ability to uncover patterns and structures in image data that are not directly present in the pixel representations. However, this success is still not well understood: neither the loss function nor invariance alone explains it. In this talk, I present a mathematical analysis that clarifies how the geometry of the learned latent distribution arises from SimCLR. Despite the nonconvex SimCLR loss and the presence of many undesirable local minimizers, I show that the training dynamics driven by gradient flow tend toward favorable representations. In particular, early training induces clustering in feature space. Under a structural assumption on the neural network, our main theorem proves that the learned features become linearly separable with respect to the ground-truth labels. To support these theoretical insights, I present numerical results that align with the theoretical predictions.

Friday, April 17, 2026

Posted December 27, 2025
Last modified February 25, 2026

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (click here to join)

Aris Daniilidis, Technische Universität Wien
Variational Stability of Alternating Projections

The alternating projections method is a classical approach to the convex feasibility problem. We shall first show that given two nonempty closed convex sets $A$ and $B$, the consecutive projections $x_{n+1} = P_B(P_A(x_n))$, $n \ge 1$, produce a self-contracted sequence, providing in particular an alternative way to establish convergence in the finite dimensional case [2]. In infinite dimensions, a regularity condition is required to ensure convergence of the above sequence $\{x_n\}_{n\ge 1}$ [4]. In [3], it was established that a regularity condition from [1] also ensures the variational stability of the above method. In this talk, we shall complete this result and show that variational stability is actually equivalent to the aforementioned regularity assumption.

REFERENCES:
[1] H. Bauschke, J. Borwein, On the convergence of von Neumann's alternating projection algorithm for two sets, Set-Valued Anal. 1 (1993), 185–212.
[2] A. Böhm, A. Daniilidis, Ubiquitous algorithms in convex optimization generate self-contracted sequences, J. Convex Anal. 29 (2022), 119–128.
[3] C. De Bernardi, E. Miglierina, A variational approach to the alternating projections method, J. Global Optim. 81 (2021), 323–350.
[4] H. Hundal, An alternating projection that does not converge in norm, Nonlinear Anal. 57 (2004), 35–61.
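As a minimal illustration of the iteration $x_{n+1} = P_B(P_A(x_n))$, the sketch below alternates projections between two convex sets in the plane; the unit disk and halfplane are illustrative choices, not from the talk.

```python
# Alternating projections between two convex sets in R^2:
#   A = closed unit disk centered at the origin,
#   B = halfplane {x : x[0] >= 0.5}.
# The iterates x_{n+1} = P_B(P_A(x_n)) converge to a point of A ∩ B.
import math

def proj_disk(x, r=1.0):
    """Euclidean projection onto the disk of radius r."""
    n = math.hypot(x[0], x[1])
    return x if n <= r else (r * x[0] / n, r * x[1] / n)

def proj_halfplane(x, c=0.5):
    """Euclidean projection onto {x : x[0] >= c}."""
    return (max(x[0], c), x[1])

x = (-2.0, 3.0)  # arbitrary starting point outside both sets
for _ in range(200):
    x = proj_halfplane(proj_disk(x))

# The limit lies in the intersection: inside the disk and in the halfplane.
in_disk = math.hypot(x[0], x[1]) <= 1.0 + 1e-9
in_halfplane = x[0] >= 0.5 - 1e-9
```

In this finite dimensional example convergence is classical; the talk's point is the finer self-contractedness of the iterate sequence and the regularity needed in infinite dimensions.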

Friday, April 24, 2026

Posted January 2, 2026

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (click here to join)

Behçet Açıkmeşe, University of Washington AIAA and IEEE Fellow
Optimization-Based Design and Control for Next-Generation Aerospace Systems

Next-generation aerospace systems (e.g., asteroid-mining robots, spacecraft swarms, hypersonic vehicles, and urban air mobility) demand autonomy that transcends current limits. These missions require spacecraft to operate safely, efficiently, and decisively in unpredictable environments, where every decision must balance performance, resource constraints, and risk. The core challenge lies in solving complex optimal control problems in real time, while (i) exploiting full system capabilities without violating safety limits, (ii) certifying algorithmic reliability for critical guidance, navigation, and control (GNC) systems, and (iii) co-designing hardware and software subsystems for optimal end-to-end performance. Our solution is optimization-based autonomy. By transforming GNC challenges into structured optimization problems, we achieve provably robust, computationally tractable solutions. This approach has already revolutionized aerospace, e.g., reusable rockets land autonomously via real-time trajectory planning, drones navigate dynamic obstacles, and spacecraft perform precision docking, all powered by algorithms that solve optimization problems with complex physics-based equations and inequalities in milliseconds. Emerging frontiers (such as on-orbit satellite servicing, multi-vehicle asteroid exploration, large-scale orbital spacecraft swarms, and global hypersonic transport) push these methods further. Yet barriers remain, e.g., handling non-convex constraints, ensuring solver resilience, large-scale optimization for decision making and co-design, and bridging the gap between theory and flight-ready systems. This talk explores how real-time optimization is rewriting the rules of autonomy, and how researchers can turn these innovations into practice, propelling aerospace engineering into an era where aerospace systems think, adapt, and perform at the edge of the possible.

Friday, May 1, 2026

Posted January 24, 2026

Control and Optimization Seminar Questions or comments?

10:30 am – 11:20 am Note the Special Seminar Time. Zoom (click here to join)

Michael Friedlander, University of British Columbia SIAM Fellow
Seeing Structure Through Duality

Duality is traditionally introduced as a source of bounds and shadow prices. In this talk I emphasize a second role: revealing structure that enables scalable computation. Starting from LP complementary slackness, I describe a generalization called polar alignment that identifies which "atoms" compose optimal solutions in structured inverse problems. The discussion passes through von Neumann's minimax theorem, Kantorovich's resolving multipliers, and Dantzig's simplex method to arrive at sublinear programs, where an adversary selects worst-case costs from a set. The resulting framework unifies sparse recovery, low-rank matrix completion, and signal demixing. Throughout, dual variables serve as certificates that decode compositional structure.
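As a minimal illustration of the LP complementary slackness that the talk takes as its starting point, the toy primal-dual pair below (an illustrative choice, not from the talk) verifies both strong duality and componentwise complementarity.

```python
# LP complementary slackness on a toy problem.
# Primal: minimize c.x  subject to  A x >= b, x >= 0.
# Dual:   maximize b.y  subject to  A^T y <= c, y >= 0.
# At optimality: y_i * (A x - b)_i = 0 and x_j * (c - A^T y)_j = 0.

c = [1.0, 1.0]
A = [[1.0, 0.0],
     [0.0, 1.0]]
b = [1.0, 2.0]

x = [1.0, 2.0]  # primal optimal: both constraints tight
y = [1.0, 1.0]  # dual optimal: both dual constraints tight

# Primal slacks (A x - b)_i and dual slacks (c - A^T y)_j.
primal_slack = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
dual_slack = [c[j] - sum(A[i][j] * y[i] for i in range(2)) for j in range(2)]

strong_duality = (
    sum(c[j] * x[j] for j in range(2)) == sum(b[i] * y[i] for i in range(2))
)
complementary = all(y[i] * primal_slack[i] == 0 for i in range((2))) and all(
    x[j] * dual_slack[j] == 0 for j in range(2)
)
```

Here every positive dual variable pairs with a tight primal constraint; the talk's polar alignment condition generalizes exactly this pairing to structured inverse problems.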

Friday, May 8, 2026

Posted January 5, 2026

Control and Optimization Seminar Questions or comments?

9:30 am – 10:20 am Zoom (click here to join)

Necmiye Ozay, University of Michigan IEEE Fellow, ONR Young Investigator, NASA Early Career Faculty, and NSF CAREER Awardee
TBA